Entries
Document | Title | Date |
20080209118 | MAGNETIC RANDOM ACCESS MEMORY AND MANUFACTURING METHOD THEREOF - A magnetic random access memory includes a semiconductor substrate in which a step portion having a side surface and a top face is formed, a gate electrode formed on the side surface of the step portion through a gate insulating film, a drain diffusion layer formed in the top face of the step portion, a source diffusion layer formed in the semiconductor substrate below the drain diffusion layer to be separated from the drain diffusion layer, a magnetoresistive effect element which is connected with the drain diffusion layer, and has a fixed layer, a recording layer and a non-magnetic layer, the magnetization directions of the fixed layer and the recording layer entering a parallel state or an antiparallel state in accordance with a direction of a current flowing through a space between the fixed layer and the recording layer, and a bit line connected with the magnetoresistive effect element. | 08-28-2008 |
20080209119 | METHODS AND SYSTEMS FOR GENERATING ERROR CORRECTION CODES - Methods and systems for generating ECC encode a data block to generate corresponding error correction codes. A first buffer sequentially stores a first section and a second section of the data block, wherein each of the first and second sections is composed of X data rows and Y data columns of the data block, and Y is greater than or equal to 2. A second buffer stores Y partial-parity columns. An encoder is used for encoding the first section read from the first buffer to generate the partial-parity columns, and then storing the partial-parity columns in the second buffer. The second section read from the first buffer and the partial-parity columns read from the second buffer are encoded to generate updated partial-parity columns. Next, the partial-parity columns in the second buffer are updated by storing the updated partial-parity columns. | 08-28-2008 |
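The two-pass partial-parity update described in the abstract above can be sketched as follows. This is a minimal illustration in which a plain XOR column parity stands in for the actual ECC code; all names and the section sizes are invented for the example:

```python
def encode_section(section, partial_parity=None):
    """XOR each data row into the running partial-parity columns.

    section: list of rows, each row a list of Y column values.
    partial_parity: columns from the previous pass, or None for the
    first section. (Hypothetical XOR parity, not the real ECC.)
    """
    y = len(section[0])
    parity = list(partial_parity) if partial_parity else [0] * y
    for row in section:
        for col in range(y):
            parity[col] ^= row[col]
    return parity

# First buffer holds the two sections; second buffer holds partial parity.
first_section = [[0x12, 0x34], [0x56, 0x78]]
second_section = [[0x9A, 0xBC], [0xDE, 0xF0]]

partial = encode_section(first_section)                 # pass 1
final_parity = encode_section(second_section, partial)  # pass 2 updates the buffer
```

The point of the two-pass structure is that the first buffer only ever needs to hold one section at a time, while the second (partial-parity) buffer carries state between passes.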
20080229007 | Enhancements to an XDR Memory Controller to Allow for Conversion to DDR2 - A memory control apparatus includes a data stream format converter and a physical layer converter. The data stream format converter is configured to convert an incoming data stream that has a data stream format corresponding to a first memory type into a format-converted data stream that has a data stream format corresponding to a second memory type. The second memory type is different from the first memory type. The physical layer converter is configured to convert the format-converted data stream into a physical-layer-converted data stream that has at least one physical parameter corresponding to the second memory type. The format-converted data stream has at least one physical parameter corresponding to the first memory type. | 09-18-2008 |
20080235444 | SYSTEM AND METHOD FOR PROVIDING SYNCHRONOUS DYNAMIC RANDOM ACCESS MEMORY (SDRAM) MODE REGISTER SHADOWING IN A MEMORY SYSTEM - A system and method for providing SDRAM mode register shadowing in a memory system. A system includes a memory interface device adapted for use in a memory system. The memory interface device includes an interface to one or more ranks of memory devices, and each memory device includes one or more types of mode registers. The memory interface device also includes an interface to a memory bus for receiving commands from a memory controller. The commands include a mode register set command specifying a new mode register setting for one or more ranks of memory devices and a mode register type. The memory interface device further includes a mode register shadow module to capture settings applied to the mode registers. The module includes a shadow register for each type of mode register and a shadow log for each type of mode register. The module also includes mode register shadow logic to detect a mode register set command, to store the new mode register setting in the shadow register corresponding to the specified mode register type, and to set one or more bits in the shadow log corresponding to the specified mode register type to indicate which of the ranks of memory devices have been programmed with the new mode register setting. | 09-25-2008 |
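The shadow-register-plus-shadow-log bookkeeping in the abstract above can be sketched roughly as below. The class name, dict layout, and rank-bitmap encoding are invented for illustration and are not taken from the patent:

```python
class ModeRegisterShadow:
    """One shadow register (last setting) and one shadow log
    (bitmap of programmed ranks) per mode register type."""

    def __init__(self, num_ranks):
        self.num_ranks = num_ranks
        self.shadow = {}  # mr_type -> last setting written
        self.log = {}     # mr_type -> bitmap of ranks programmed

    def mode_register_set(self, mr_type, setting, rank_mask):
        if self.shadow.get(mr_type) != setting:
            # New value: previously logged ranks no longer hold it.
            self.log[mr_type] = 0
        self.shadow[mr_type] = setting
        self.log[mr_type] = self.log.get(mr_type, 0) | rank_mask

    def ranks_programmed(self, mr_type):
        return [r for r in range(self.num_ranks)
                if self.log.get(mr_type, 0) & (1 << r)]

shadow = ModeRegisterShadow(num_ranks=4)
shadow.mode_register_set("MR1", setting=0x44, rank_mask=0b0011)
shadow.mode_register_set("MR1", setting=0x44, rank_mask=0b0100)
```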
20080244168 | METHOD AND APPARATUS FOR A PRIMARY OPERATING SYSTEM AND AN APPLIANCE OPERATING SYSTEM - One embodiment includes a personal computer device comprising at least one machine to execute a primary user operating system, a first physical memory to be used by the primary user operating system, at least one appliance operating system that is independent from the primary user operating system, a second physical memory to be sequestered from the primary user operating system and an access violation monitor to restrict access from the at least one appliance operating system to the second physical memory, wherein the access violation monitor is to run only when the at least one appliance operating system is invoked and at least one appliance operating system is to be invoked only after the primary user operating system has been suspended to a standby state. | 10-02-2008 |
20080250196 | Data Sequence Sample and Hold Method, Apparatus and Semiconductor Integrated Circuit - A sample-and-hold method, apparatus and semiconductor integrated circuit that limit the storage capacity of the storage media needed to a bare minimum and can independently manage a series of data contained in a predetermined interval before the arrival time of a trigger signal and a series of data contained in a predetermined interval after the arrival time of the trigger signal by separating them clearly. | 10-09-2008 |
20080256290 | METHOD AND SYSTEM OF RANDOMIZING MEMORY LOCATIONS - A memory system that disperses memory addresses of strings of data throughout a memory is provided. The memory system includes a memory, a central processing unit (CPU) and an address randomizer. The memory is configured to store strings of data. The CPU is configured to direct the storing and retrieving of the strings of data from the memory at select memory addresses. The address randomizer is coupled between the CPU and the memory. Moreover, the address randomizer is configured to disperse the strings of data throughout locations of the memory by changing the select memory addresses directed by the CPU. | 10-16-2008 |
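Any reversible transform between the CPU and the memory can implement the dispersal described above. A minimal sketch, assuming an XOR-plus-rotate scrambler as a stand-in for whatever the hardware address randomizer actually uses (the key and address width are invented):

```python
ADDR_BITS = 16
KEY = 0xB5A3  # hypothetical scrambling key

def randomize(addr):
    """Bijective address scrambler: XOR with a key, then rotate left 5 bits."""
    a = (addr ^ KEY) & 0xFFFF
    return ((a << 5) | (a >> (ADDR_BITS - 5))) & 0xFFFF

def derandomize(scrambled):
    """Inverse transform: rotate right 5 bits, then XOR with the key."""
    a = ((scrambled >> 5) | (scrambled << (ADDR_BITS - 5))) & 0xFFFF
    return a ^ KEY
```

Because the transform is a bijection, every CPU-issued address still maps to a unique physical location, so no table of remappings has to be stored.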
20080270683 | SYSTEMS AND METHODS FOR A DRAM CONCURRENT REFRESH ENGINE WITH PROCESSOR INTERFACE - Systems and methods for a DRAM concurrent refresh engine with a processor interface. In exemplary embodiments, memory cells require periodic refresh at least once within a specified refresh interval, and the words of an array are organized in banks selected for access by a bank-enable signal. Each bank has a word decoder accepting one of two word addresses, one for a normal access and the other for a refresh access, with the word address selected by two separate enable signals provided by on-macro refresh logic. The refresh logic selects one bank for refresh when no normal access occurs, or selects one bank for refresh concurrently with a normal access having no bank conflicts, while maintaining the refresh status and the timing of the refresh interval and ensuring that all memory cells are refreshed within the refresh interval. | 10-30-2008 |
20080282028 | DYNAMIC OPTIMIZATION OF DYNAMIC RANDOM ACCESS MEMORY (DRAM) CONTROLLER PAGE POLICY - Embodiments of the present invention address deficiencies of the art in respect to memory management and provide a method, system and computer program product for dynamic optimization of DRAM controller page policy. In one embodiment of the invention, a memory module can include multiple different memories, each including a memory controller coupled to a memory array of memory pages. Each of the memory pages in turn can include a corresponding locality tendency state. A memory bank can be coupled to a sense amplifier and configured to latch selected ones of the memory pages responsive to the memory controller. Finally, the module can include open page policy management logic coupled to the memory controller. The logic can include program code enabled to granularly change open page policy management of the memory bank responsive to identifying a locality tendency state for a page loaded in the memory bank. | 11-13-2008 |
20080282029 | STRUCTURE FOR DYNAMIC OPTIMIZATION OF DYNAMIC RANDOM ACCESS MEMORY (DRAM) CONTROLLER PAGE POLICY - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for dynamic optimization of DRAM controller page policy is provided. The design structure can include a memory module, which can include multiple different memories, each including a memory controller coupled to a memory array of memory pages. Each of the memory pages in turn can include a corresponding locality tendency state. A memory bank can be coupled to a sense amplifier and configured to latch selected ones of the memory pages responsive to the memory controller. Finally, the module can include open page policy management logic coupled to the memory controller. The logic can include program code enabled to granularly change open page policy management of the memory bank responsive to identifying a locality tendency state for a page loaded in the memory bank. | 11-13-2008 |
20080294840 | READ/WRITE CHANNEL CODING AND METHODS FOR USE THEREWITH - A write channel includes a pre-encoding module that encodes write data to produce pre-encoded data. An error correcting code (ECC) module generates ECC data based on the pre-encoded data. A post-encoding module encodes the ECC data to produce post-encoded data. A combining module combines the pre-encoded data and the post-encoded data for writing to the storage medium. | 11-27-2008 |
20080294841 | APPARATUS FOR IMPLEMENTING ENHANCED VERTICAL ECC STORAGE IN A DYNAMIC RANDOM ACCESS MEMORY - A method and apparatus are provided for implementing enhanced vertical ECC storage in a dynamic random access memory. A dynamic random access memory (DRAM) is split into a plurality of groups. Each group resides inside a DRAM row address strobe (RAS) page so that multiple locations inside a group can be accessed without incurring an additional RAS access penalty. Each group is logically split into a plurality of segments for storing data with at least one segment for storing ECC for the data segments. For a write operation, data are written in a data segment and then ECC for the data are written in an ECC segment. For a read operation, ECC are read from an ECC segment, then data are read from the data segment. | 11-27-2008 |
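A rough illustration of the grouped data/ECC layout above: the sketch maps a logical word address to a data word and its ECC word inside the same RAS page, so both can be accessed without a second RAS cycle. The geometry (7 data segments plus 1 ECC segment of 64 words each) is invented for the example:

```python
# Hypothetical geometry: each RAS page (group) holds 7 data segments
# plus 1 ECC segment, each segment SEG_WORDS words wide.
SEG_WORDS = 64
DATA_SEGS = 7  # the 8th segment holds ECC for the other 7

def map_address(logical_word):
    """Map a logical word address to (group, data word address,
    ecc word address), with data and ECC sharing one RAS page."""
    group, offset = divmod(logical_word, DATA_SEGS * SEG_WORDS)
    segment, word = divmod(offset, SEG_WORDS)
    page_base = group * (DATA_SEGS + 1) * SEG_WORDS
    data_addr = page_base + segment * SEG_WORDS + word
    # Simplistic: the ECC word at the same word slot covers this data word.
    ecc_addr = page_base + DATA_SEGS * SEG_WORDS + word
    return group, data_addr, ecc_addr
```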
20080313393 | Device for writing data into memory - A device for writing data into a memory and a method thereof. The memory comprises a plurality of memory arrays. Each of the memory arrays comprises a plurality of memory cells. The data are divided into a plurality of segments. The segments are written into first memory cells of the memory cells of the memory arrays in sequence. The segments start being written into the second memory cells of the memory cells of the memory arrays when the first memory cells of the memory cells of the memory arrays are full, and so forth, until the operation of writing the segments into the memory is completed. | 12-18-2008 |
20080313394 | MOTHERBOARD AND MEMORY DEVICE THEREOF - A memory device can be directly mounted on a motherboard supporting DDR3 SDRAM; the memory device then has the advantages of both the fly-by bus topology and the T-branch topology established by the Joint Electron Device Engineering Council (JEDEC). Thus, the system performance of a desktop computer in a unit interval can be enhanced. | 12-18-2008 |
20080320215 | Semiconductor memory device and method for operating semiconductor memory device - A semiconductor memory device includes a memory array section configured to serve as an information storage area and an interface section configured to interface between an external memory controller and the memory array section, the memory array section and the interface section being sealed in a package. The interface section includes a plurality of interface modules configured to correspond to a plurality of memory types on a one-to-one basis, and a clock generation section configured to generate a plurality of clock signals based on a system clock signal supplied by the external memory controller. The generated clock signals are used by the plurality of interface modules. The interface section further includes a mode interpretation section configured to interpret an input mode designation signal as indicative of one of the memory types in order to output a mode signal denoting the interpreted memory type. | 12-25-2008 |
20090006730 | DATA EYE MONITOR METHOD AND APPARATUS - An apparatus and method for providing a data eye monitor. The data eye monitor apparatus utilizes an inverter/latch string circuit and a set of latches to save the data eye for providing an infinite persistent data eye. In operation, incoming read data signals are adjusted in the first stage individually and latched to provide the read data to the requesting unit. The data is also simultaneously fed into a balanced XOR tree to combine the transitions of all incoming read data signals into a single signal. This signal is passed along a delay chain and tapped at constant intervals. The tap points are fed into latches, capturing the transitions at a delay element interval resolution. Using XORs, differences between adjacent taps and therefore transitions are detected. The eye is defined by segments that show no transitions over a series of samples. The eye size and position can be used to readjust the delay of incoming signals and/or to control environment parameters like voltage, clock speed and temperature. | 01-01-2009 |
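The eye-detection step above (the eye is the run of delay taps showing no transitions over a series of samples) can be sketched as follows; the input is a hypothetical per-tap transition count accumulated from the XOR comparisons of adjacent taps:

```python
def find_eye(transition_counts):
    """Given per-tap transition counts accumulated over many samples,
    return (eye_start, eye_size, eye_center) of the longest run of
    taps with zero transitions, i.e. the data eye."""
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for i, count in enumerate(transition_counts):
        if count == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    center = best_start + best_len // 2
    return best_start, best_len, center

# Taps 3..8 show no transitions over the sampled window.
taps = [5, 2, 1, 0, 0, 0, 0, 0, 0, 3, 7, 4]
```

The returned center is the kind of value that could be fed back to readjust the delay of the incoming signals, as the abstract describes.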
20090019219 | Compressing address communications between processors - In one embodiment, the present invention includes a method for determining if data of a memory request by a first agent is in a memory region represented by a region indicator of a region table of the first agent, and transmitting a compressed address for the memory request to other agents of a system if the memory region is represented by the region indicator, otherwise transmitting a full address. Other embodiments are described and claimed. | 01-15-2009 |
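A minimal sketch of the region-table lookup described above; the 4 KiB region size, table layout, and allocation policy (no eviction) are assumptions for illustration:

```python
REGION_BITS = 12  # hypothetical: regions are 4 KiB
region_table = {}  # region base -> small region indicator

def encode_address(addr):
    """Transmit (indicator, offset) if the region is already in the
    table; otherwise register the region and transmit the full address."""
    base = addr >> REGION_BITS
    if base in region_table:
        offset = addr & ((1 << REGION_BITS) - 1)
        return ("compressed", region_table[base], offset)
    region_table[base] = len(region_table)  # simplistic allocation
    return ("full", addr)
```

Repeated traffic within a hot region then needs only the short indicator plus an offset instead of the full address.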
20090024789 | MEMORY CIRCUIT SYSTEM AND METHOD - A memory circuit system and method are provided in the context of various embodiments. In one embodiment, an interface circuit remains in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for performing various functionality (e.g. power management, simulation/emulation, etc.). | 01-22-2009 |
20090024790 | MEMORY CIRCUIT SYSTEM AND METHOD - A memory circuit system and method are provided in the context of various embodiments. In one embodiment, an interface circuit remains in communication with a plurality of memory circuits and a system. The interface circuit is operable to interface the memory circuits and the system for performing various functionality (e.g. power management, simulation/emulation, etc.). | 01-22-2009 |
20090031077 | INTEGRATED CIRCUIT INCLUDING MULTIPLE MEMORY DEVICES - An integrated circuit includes a data bus and a first memory device coupled to the data bus. The first memory device is configured to provide a first signal in response to completing a power-up sequence of the first memory device. The integrated circuit includes a second memory device coupled to the data bus. The second memory device is configured to provide a second signal in response to completing a power-up sequence of the second memory device. The integrated circuit includes a controller configured to access the first memory device and the second memory device based on the first signal and the second signal. | 01-29-2009 |
20090031078 | Rank sparing system and method - A system, and a corresponding method, are used to implement rank sparing. The system includes a memory controller and one or more DIMM channels coupled to the memory controller, where each DIMM channel includes one or more DIMMs, and where each of the one or more DIMMs includes at least one rank of DRAM devices. The memory controller is loaded with programming to test the DIMMs to designate at least one specific rank of DRAM devices as a spare rank. | 01-29-2009 |
20090043953 | MEMORY CONTROL METHODS CAPABLE OF DYNAMICALLY ADJUSTING SAMPLING POINTS, AND RELATED CIRCUITS - A memory control method for adjusting sampling points utilized by a memory control circuit receiving a data signal and an original data strobe signal of a memory includes: utilizing at least one delay unit to provide a plurality of sampling points according to the original data strobe signal; sampling according to the data signal by utilizing the plurality of sampling points; and analyzing sampling results to dynamically determine a delay amount for delaying the original data strobe signal, whereby a sampling point corresponding to the delayed data strobe signal is kept centered at data carried by the data signal. | 02-12-2009 |
20090043954 | Information Recording/Playback Apparatus and Memory Control Method - This information recording/playback apparatus has a memory for storing data and which includes a plurality of storage cells configured from a capacitor for accumulating charge. When an issue interval time for issuing a read/write command to an arbitrary storage cell is shorter than a threshold time for retaining a charge amount for the arbitrary storage cell to read correct data, a dummy read command for simulatively reading data stored in storage cells other than the arbitrary storage cell is issued to storage cells other than the arbitrary storage cell, and dummy read processing is executed for replenishing charge in the capacitor configuring storage cells other than the arbitrary storage cell. | 02-12-2009 |
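The dummy-read policy above can be sketched as a small scheduler that, on each real access, issues dummy reads to the other cell groups whose charge-retention threshold is about to lapse. The threshold value, class name, and command tuples are invented for the example:

```python
THRESHOLD_NS = 64_000_000  # hypothetical retention threshold per cell group

class DummyReadScheduler:
    """On a real read/write to one group, issue dummy reads to the
    other groups not accessed within the retention threshold, so
    their capacitor charge is replenished."""

    def __init__(self, groups):
        self.last_access = {g: 0 for g in groups}

    def access(self, group, now):
        cmds = [("read", group)]
        for other, last in self.last_access.items():
            if other != group and now - last >= THRESHOLD_NS:
                cmds.append(("dummy_read", other))
                self.last_access[other] = now
        self.last_access[group] = now
        return cmds
```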
20090043955 | Configurable high-speed memory interface subsystem - A memory interface subsystem including a write logic and a read logic. The write logic may be configured to communicate data from a memory controller to a memory. The read logic may be configured to communicate data from the memory to the memory controller. The read logic may comprise a plurality of physical read datapaths. Each of the physical read datapaths may be configured to receive (i) a respective portion of read data signals from the memory, (ii) a respective read data strobe signal associated with the respective portion of the received read data signals, (iii) a gating signal, (iv) a base delay signal and (v) an offset delay signal. | 02-12-2009 |
20090063761 | Buffered Memory Module Supporting Two Independent Memory Channels - A memory system is provided that enhances the memory bandwidth available through a memory module. The memory system includes a memory controller and a memory module coupled to the memory controller. In the memory system, the memory controller is coupled to the memory module via at least two independent memory channels. In the memory system, the at least two independent memory channels are coupled to one or more memory hub devices of the memory module. | 03-05-2009 |
20090070524 | NON-SNOOP READ/WRITE OPERATIONS IN A SYSTEM SUPPORTING SNOOPING - Techniques that may utilize generic tracker structures to provide data coherency in a multi-node system that supports non-snoop read and write operations. The trackers may be organized as a two-dimensional queue structure that may be utilized to resolve conflicting read and/or write operations. Multiple queues having differing associated priorities may be utilized. | 03-12-2009 |
20090089493 | SEMICONDUCTOR MEMORY, OPERATING METHOD OF SEMICONDUCTOR MEMORY, AND SYSTEM - Operation control circuits start a first operation of any of memory cores in response to a first operation command, start a second operation of any of the memory cores in response to a second operation command, and terminate the first operation and continue the second operation in response to a termination command to terminate operations of the plurality of memory cores. For example, the semiconductor memory is mounted on a system together with a controller accessing the semiconductor memory. The termination of the operation in response to the termination command is judged in accordance with an operation state of the memory core. Accordingly, it is possible to terminate the operation of the memory core requiring the termination of operation without specifying the memory core from outside. | 04-02-2009 |
20090119451 | Redriven/Retimed Registered Dual Inline Memory Module - A memory module may include a plurality of dynamic random access memory (DRAM) chips, each of which may have one or more data input/output (D/Q) terminals. The memory module may include data redriving/retiming circuits connected to the D/Q terminals of the plurality of DRAM chips. The data redriving/retiming circuits may provide isolation between a system memory bus and the D/Q terminals of the DRAM chips. | 05-07-2009 |
20090132759 | Information processing apparatus and method for controlling information processing apparatus - Disclosed herein is an information processing apparatus including: a dynamic random access memory; a memory controller that manages accesses to the dynamic random access memory on a bank basis; a cache memory that is connected to the memory controller via a bus and which caches data stored in the dynamic random access memory; and an information processing block that performs a read access to the dynamic random access memory via the cache memory. The cache memory includes: a refill request generation section configured to generate a refill request for caching the data stored in the dynamic random access memory in response to a cache miss for the read access; and a read access section configured to, when the refill requests have been accumulated for a predetermined number of banks, perform a read access to the dynamic random access memory while combining the refill requests for the predetermined number of banks. | 05-21-2009 |
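The refill-accumulation step above can be sketched as a queue that holds one pending refill per bank and releases a combined access once a predetermined number of banks is reached. The batch size of 4 and all names are assumptions:

```python
BATCH_BANKS = 4  # hypothetical: combine refills once 4 banks are pending

class RefillQueue:
    """Accumulate one refill request per bank on each cache miss;
    issue one combined DRAM read once BATCH_BANKS distinct banks
    have pending refills."""

    def __init__(self):
        self.pending = {}  # bank -> miss address

    def cache_miss(self, bank, addr):
        self.pending.setdefault(bank, addr)
        if len(self.pending) >= BATCH_BANKS:
            batch = sorted(self.pending.items())
            self.pending.clear()
            return batch  # combined read access covering all banks
        return None
```

Combining the refills lets the bank-interleaved DRAM overlap the row activations instead of serializing one refill per miss.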
20090150602 | MEMORY POWER CONTROL - In a memory device to store information, the device includes a memory core to store information, a memory controller to control storage and retrieval of the information, and a regulator coupled to the memory controller and the memory core, wherein the regulator is operable to adjust an internal voltage to the memory core in response to commands from the memory controller. | 06-11-2009 |
20090172270 | DEVICE, SYSTEM, AND METHOD OF MEMORY ALLOCATION - Device, system, and method of memory allocation. For example, an apparatus includes: a Dual In-line Memory Module (DIMM) including a plurality of Dynamic Random Access Memory (DRAM) units to store data, wherein each DRAM unit includes a plurality of banks and each bank is divided into a plurality of sub-banks; and a memory management unit to allocate a set of interleaved sub-banks of said DIMM to a memory page of an Operating System, wherein a combined memory size of the set of interleaved sub-banks is equal to a size of the memory page of the Operating System. | 07-02-2009 |
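The page-to-sub-bank mapping above can be illustrated with a sketch in which a 4 KiB OS page is spread over eight interleaved 512-byte sub-banks, one per bank; these sizes are invented for the example:

```python
OS_PAGE = 4096
SUB_BANK = 512  # hypothetical sub-bank size in bytes
SUB_BANKS_PER_PAGE = OS_PAGE // SUB_BANK  # 8 interleaved sub-banks

def allocate_page(page_number):
    """Allocate one OS page as a set of interleaved sub-banks, one per
    bank, whose combined size equals the OS page size."""
    return [(bank, page_number) for bank in range(SUB_BANKS_PER_PAGE)]

def locate(page_number, page_offset):
    """Resolve a byte within the page to (bank, sub-bank row, offset)."""
    bank, off = divmod(page_offset, SUB_BANK)
    return bank, page_number, off
```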
20090172271 | SYSTEM AND METHOD FOR EXECUTING FULL AND PARTIAL WRITES TO DRAM IN A DIMM CONFIGURATION - In an embodiment of the invention, a host or other controller writing to multiple DRAMs in a DIMM configuration determines whether there is full write request to at least one of the multiple DRAM's and a partial write request to at least another one of the multiple DRAM's. If so, then the host parses data associated with the full write request into a first portion and a second portion. The host then outputs a first partial write command associated with the first portion and a second partial write command associated with the second portion to the DIMM. Other embodiments are described. | 07-02-2009 |
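The parsing step above (splitting one full write into two partial writes) can be sketched as follows; the command tuples and the address arithmetic are a hypothetical rendering, not the real DIMM protocol:

```python
def split_full_write(addr, data, boundary):
    """Parse a full write into two partial-write commands at `boundary`,
    so the DIMM sees two partial writes instead of one full write."""
    first, second = data[:boundary], data[boundary:]
    return [("partial_write", addr, first),
            ("partial_write", addr + boundary, second)]

cmds = split_full_write(0x1000, b"\xAA" * 8, boundary=4)
```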
20090187704 | METHOD AND SYSTEM FOR SECURE CODE ENCRYPTION FOR PC-SLAVE DEVICES - A PC-slave device may securely load and decrypt an execution code and/or data, which may be stored, encrypted, in a PC hard-drive. The PC-slave device may utilize a dedicated memory, which may be partitioned into an accessible region and a restricted region that may only be accessible by the PC-slave device. The encrypted execution code and/or data may be loaded into the accessible region of the dedicated memory; the PC-slave device may decrypt the execution code and/or data, internally, and store the decrypted execution code and/or data into the restricted region of the dedicated memory. The decrypted execution code and/or data may be validated, and may be utilized from the restricted region. The partitioning of the dedicated memory, into accessible and restricted regions, may be performed dynamically during secure code loading. The PC-slave device may comprise a dedicated secure processor that may perform and/or manage secure code loading. | 07-23-2009 |
20090204752 | MEMORY DEVICE AND REFRESH ADJUSTING METHOD - When a single error of data is detected by an ECC circuit, a cycle adjusting unit provided on a memory board shortens the refresh cycle. | 08-13-2009 |
20090210616 | MEMORY MODULES FOR TWO-DIMENSIONAL MAIN MEMORY - In one embodiment of the invention, a memory module is disclosed including a printed circuit board with an edge connector; an address controller coupled to the printed circuit board; and a plurality of memory slices. Each of the plurality of memory slices of the memory module includes one or more memory integrated circuits coupled to the printed circuit board, and a slave memory controller coupled to the printed circuit board and the one or more memory integrated circuits. The slave memory controller receives memory access requests for the memory module from the address controller. The slave memory controller selectively activates one or more of the one or more memory integrated circuits in the respective memory slice in response to the address received from the address controller to read data from or write data into selected memory locations in the memory integrated circuits. | 08-20-2009 |
20090216939 | Emulation of abstracted DIMMs using abstracted DRAMs - One embodiment of the present invention sets forth an abstracted memory subsystem comprising abstracted memories, which each may be configured to present memory-related characteristics onto a memory system interface. The characteristics can be presented on the memory system interface via logic signals or protocol exchanges, and the characteristics may include any one or more of, an address space, a protocol, a memory type, a power management rule, a number of pipeline stages, a number of banks, a mapping to physical banks, a number of ranks, a timing characteristic, an address decoding option, a bus turnaround time parameter, an additional signal assertion, a sub-rank, a number of planes, or other memory-related characteristics. Some embodiments include an intelligent register device and/or, an intelligent buffer device. One advantage of the disclosed subsystem is that memory performance may be optimized regardless of the specific protocols used by the underlying memory hardware devices. | 08-27-2009 |
20090235019 | SECURING SAFETY-CRITICAL VARIABLES - A system comprises a general-purpose memory, a lockable memory, a memory management unit, and a processor. The general-purpose memory includes data for a first set of addresses. The lockable memory includes data for a second set of addresses. The memory management unit selectively writes data to one of the general-purpose memory and the lockable memory and selectively locks the lockable memory by preventing writes to the lockable memory. The processor instructs the memory management unit to unlock the lockable memory before requesting a write to one of the second set of addresses. | 09-17-2009 |
20090240874 | FRAMEWORK FOR USER-LEVEL PACKET PROCESSING - A method of processing network packets can include allocating a first portion of a physical memory device to kernel-space control and allocating a second portion of the physical memory device to direct user-space process control. Network packets can be received from a computer network, and the received network packets can be written to the second portion of the physical memory without writing the received packets to the first portion of the physical memory. The network packets can be processed with a user-space application program that directly accesses the packets that have been written to the second portion of physical memory, and the processed packets can be sent over the computer network. | 09-24-2009 |
20090248969 | REGISTERED DIMM MEMORY SYSTEM - A Registered DIMM (RDIMM) system with reduced electrical loading on the data bus for increased memory capacity and operating frequency. In one embodiment, the data bus is buffered on the DIMM. In another embodiment, the data bus is selectively coupled to a group of memory chips via switches. | 10-01-2009 |
20090248970 | DUAL EDGE COMMAND - A technique to increase transfer rate of command and address signals via a given number of command and address pins in each of one or more integrated circuit memory devices during a clock cycle of a clock signal. In one example embodiment, the command and address signals are sent on both rising and falling edges of a clock cycle of a clock signal to increase the transfer rate and essentially reduce the number of required command and address pins in each integrated circuit memory device. | 10-01-2009 |
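The pin-halving idea above can be illustrated with a small sketch; the 8-pin width and the 16-bit command/address word are assumed values, not taken from the patent:

```python
CMD_PINS = 8  # hypothetical command/address pin count per device

def to_edges(cmd_addr_bits):
    """Split a 16-bit command/address word into two 8-bit halves, sent
    on the rising and falling edges of one clock cycle (halving the
    number of pins needed for the same per-cycle transfer)."""
    rising = (cmd_addr_bits >> CMD_PINS) & 0xFF
    falling = cmd_addr_bits & 0xFF
    return rising, falling

def from_edges(rising, falling):
    """Reassemble the word captured on the two clock edges."""
    return (rising << CMD_PINS) | falling
```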
20090248971 | System and Dynamic Random Access Memory Device Having a Receiver - A dynamic random access memory device (DRAM) receiver circuit includes an input to receive a data signal, and also includes decision circuitry to make a decision about the received data signal based on a presently sampled data signal and a coefficient value corresponding to at least one previously sampled data signal. | 10-01-2009 |
20090254697 | MEMORY WITH EMBEDDED ASSOCIATIVE SECTION FOR COMPUTATIONS - An integrated circuit device includes a semiconductor substrate and an array of random access memory (RAM) cells, which are arranged on the substrate in first columns and are configured to store data. A computational section in the device includes associative memory cells, which are arranged on the substrate in second columns, which are aligned with respective first columns of the RAM cells and are in communication with the respective first columns so as to receive the data from the array of the RAM cells and to perform an associative computation on the data. | 10-08-2009 |
20090254698 | MULTI PORT MEMORY DEVICE WITH SHARED MEMORY AREA USING LATCH TYPE MEMORY CELLS AND DRIVING METHOD - A multiport semiconductor memory device includes: first and second port units respectively coupled to first and second processors; first and second dedicated memory areas accessed by the first and second processors, respectively, and implemented using DRAM cells; a shared memory area commonly accessed by the first and second processors via the respective first and second port units and implemented using memory cells different from the DRAM cells implementing the first and second dedicated memory areas; and a port connection control unit controlling the data path configuration between the shared memory area and the first and second port units to enable data communication between the first and second processors through the shared memory area. | 10-08-2009 |
20090254699 | Synchronous dynamic random access memory interface and method - A memory interface allows access to SDRAM by receiving a column address for a data read or write of a burst of data units. Each data unit in the burst has an expected bit size. The interface generates n (n>1) column memory addresses from the received column address. The interface accesses the synchronous dynamic memory to read or write n bursts of data at the n column memory addresses. Preferably, the SDRAM is clocked at n times the rate of the interconnected memory accessing device, and the data units in the n bursts have one n-th of the expected bit size. | 10-08-2009 |
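The address-generation step above can be sketched in a few lines; the ratio n=2 and the contiguous mapping of the n addresses are assumptions for the example:

```python
N = 2  # hypothetical ratio: SDRAM clocked at n times the accessor rate

def column_addresses(col_addr):
    """Generate n column memory addresses from one received column
    address; each of the n bursts then carries 1/n of each expected
    data unit (simplistic contiguous mapping)."""
    return [col_addr * N + i for i in range(N)]
```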
20090254700 | DRAM CONTROLLER FOR GRAPHICS PROCESSING OPERABLE TO ENABLE/DISABLE BURST TRANSFER - An interface unit | 10-08-2009 |
20090259809 | MEMORY ACCESS APPARATUS AND DISPLAY USING THE SAME - A memory access apparatus and a display using the same are provided. The memory access apparatus includes a dynamic memory, a plurality of clients and a memory management unit. The dynamic memory is used to store a plurality of memory data. The clients access the dynamic memory and each client has a priority. The memory management unit executes an access action of the clients for the dynamic memory respectively according to the priorities thereof. Besides, the memory management unit has at least one buffer area built therein. The buffer area is used to temporarily store a plurality of buffer data generated while the access action is performed. | 10-15-2009 |
20090265509 | MEMORY SYSTEM AND METHOD HAVING VOLATILE AND NON-VOLATILE MEMORY DEVICES AT SAME HIERARCHICAL LEVEL - A processor-based system includes a processor coupled to core logic through a processor bus. The core logic includes a dynamic random access memory (“DRAM”) memory buffer controller. The DRAM memory buffer controller is coupled through a memory bus to a plurality of dynamic random access memory (“DRAM”) modules and a flash memory module, which are at the same hierarchical level from the processor. Each of the DRAM modules includes a memory buffer coupled to the memory bus and to a plurality of dynamic random access memory devices. The flash memory module includes a flash memory buffer coupled to the memory bus and to at least one flash memory device. The flash memory buffer includes a DRAM-to-flash memory converter operable to convert the DRAM memory requests to flash memory requests, which are then applied to the flash memory device. | 10-22-2009 |
20090300278 | Embedded Programmable Component for Memory Device Training - A system and method by which a memory device can adapt or retrain itself in response to changes in its inputs or operating environment. The memory device, such as a DRAM, includes in its interface an embedded programmable component. The programmable component can be, for example and without limitation, a microprocessor, a microcontroller, or a microsequencer. A programmable component is programmed to make changes to the operation of the interface of the memory device, in response to changes in the environment of the memory device. | 12-03-2009 |
20090307417 | INTEGRATED BUFFER DEVICE - An integrated buffer device. One embodiment provides a receiving unit and a logic unit to control the operation of the buffer device based on a setting signal. | 12-10-2009 |
20090307418 | Multi-channel hybrid density memory storage device and control method thereof - The present invention discloses a control method of a multi-channel hybrid density memory storage device for accessing user data. The storage device includes a plurality of low density memories (LDM) and high density memories (HDM). The method comprises: first, determining where the user data is to be transmitted; then, using one of two error correction circuits having different error correction capabilities to encode or decode the user data. | 12-10-2009 |
20090327596 | MEMORY CONTROLLER USING TIME-STAGGERED LOCKSTEP SUB-CHANNELS WITH BUFFERED MEMORY - Memory control techniques for dual channel lockstep configurations are disclosed. In accordance with one example embodiment, a memory controller issues two burst-length 4 DRAM commands to two double-data-rate (DDR) DRAM sub-channels behind a memory buffer (e.g., FB-DIMM or buffer-on-board). The two commands are in time-staggered lockstep. The time-stagger allows data coming back from the two back-side DDR sub-channels to flow naturally on the host channel without conflict. Multiple DIMMs can be used to obtain chip-fail ECC capabilities and to reclaim at least some of the lost performance imposed by the burst-length of 4 typically associated with dual channel lockstep memory controllers. The techniques can be implemented, for instance, with a buffered memory solution such as fully buffered DIMM (FB-DIMM) or buffer-on-board configurations. | 12-31-2009 |
20090327597 | DUAL INTERFACE MEMORY ARRANGEMENT AND METHOD - The present invention provides for a dual interface memory arrangement employing a checkered memory mapping formed from combined vertically and horizontally sliced memory mapping, and including 2D access means arranged for access to the mapped memory, wherein the access means is arranged such that the access overlaps memory mapped to both interfaces both horizontally and vertically, and which arrangement preferably provides two DTL channels for each interface, whereby a highly efficient unified memory arrangement can be achieved for all processing aspects such as CPU, audio, video and gfx processing. | 12-31-2009 |
20100005233 | STORAGE REGION ALLOCATION SYSTEM, STORAGE REGION ALLOCATION METHOD, AND CONTROL APPARATUS - There are provided a memory space allocation method and a memory space allocation device that aim at higher-speed accesses when a memory is shared by a plurality of circuits. In this memory, one piece of data is accessed by issuing addresses a plurality of times. Memory allocation is performed so that high-order addresses of memory spaces of an external memory | 01-07-2010 |
20100005234 | Enabling functional dependency in a multi-function device - In one embodiment, the present invention includes a method for reading configuration information from a multi-function device (MFD), building a dependency tree of a functional dependency of functions performed by the MFD based on the configuration information, which indicates that the MFD is capable of performing at least one function dependent upon another function, and loading software associated with the functions in order based at least in part on the indicated functional dependency. Other embodiments are described and claimed. | 01-07-2010 |
20100005235 | COMPUTER SYSTEM - A computer system includes a CPU and a system on chip (SoC) processor electronically connected with the CPU in the computer system. The CPU and the SoC processor do not work simultaneously. The CPU processes work and a service when the computer system is powered on. The SoC processor continues processing the work and the service that are unfinished after the computer system is shut down. | 01-07-2010 |
20100030955 | MASK KEY SELECTION BASED ON DEFINED SELECTION CRITERIA - An improved data system permits power efficient mask key write operations. A mask key selector implements criteria-based selection of mask keys for mask key write operations on blocks of data. In one embodiment, a first set of mask keys is compared to data bytes of a data block that will be written to memory. The comparison culls keys from the list of candidates that match unmasked data bytes, that is, values that will be written to memory as “changed” data. A mask key is selected from the resulting set of candidates so a memory write operation consumes less power (relative to selection of other keys), or so that the operation minimizes switching noise. The selected mask key is then substituted by a controller into masked data values, and a modified data block is transmitted to memory, with the memory detecting masked data by identifying mask keys in the modified data block. | 02-04-2010 |
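The selection step described above can be sketched in Python. The specifics here are assumptions for illustration: byte-wide keys, and Hamming weight standing in for the power criterion.

```python
def select_mask_key(candidates, data, mask):
    """data: list of byte values; mask: list of bools (True = masked).
    Discard candidate keys equal to any unmasked byte (they would be
    misread as mask markers), then pick the survivor with the lowest
    Hamming weight as a stand-in power-cost criterion."""
    unmasked = {b for b, m in zip(data, mask) if not m}
    survivors = [k for k in candidates if k not in unmasked]
    if not survivors:
        return None  # no safe key; a real controller would need a fallback
    return min(survivors, key=lambda k: bin(k).count("1"))

def substitute_key(data, mask, key):
    """Produce the modified data block: masked positions carry the key."""
    return [key if m else b for b, m in zip(data, mask)]
```

The memory side would then recognize masked positions simply by matching the agreed key in the modified block.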
20100037013 | MEMORY ACCESS METHOD - A memory access method intended for a memory that requires an interval of a predetermined number of clock cycles or longer between successive occurrences of access when the same bank is successively accessed, and that eliminates idle time between successive occurrences of access to allow for improved performance. Pieces of data are written into the 0th, first, second, and third banks, respectively. No idle time is caused between successive occurrences of access because different banks are successively accessed. Since the burst length of each of the pieces of data is eight, an interval of 16 cycles, which is longer than 15 cycles, is provided between a start of writing of the first data and a start of writing of the second data. Accordingly, no idle time is caused between completion of writing of the first data and the start of writing of the second data either. | 02-11-2010 |
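A toy model of the interleaving arithmetic above, assuming four banks, a 4-cycle write slot per burst, and a 15-cycle same-bank spacing requirement (numbers chosen to match the abstract; not taken from the patent claims): rotating writes across banks keeps each bank's revisit interval at 16 cycles with no inserted idle cycles.

```python
def schedule_writes(num_writes, num_banks=4, slot_cycles=4):
    """Assign each write a (start_cycle, bank) pair, rotating round-robin
    across banks with back-to-back slots (no idle cycles)."""
    return [(i * slot_cycles, i % num_banks) for i in range(num_writes)]

def min_same_bank_gap(schedule):
    """Smallest interval between successive start cycles on the same bank."""
    last, gaps = {}, []
    for start, bank in schedule:
        if bank in last:
            gaps.append(start - last[bank])
        last[bank] = start
    return min(gaps) if gaps else None
```

With these assumed numbers, the same bank is revisited every 4 banks × 4 cycles = 16 cycles, which satisfies the 15-cycle constraint without gaps.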
20100037014 | MEMORY DEVICE, MEMORY SYSTEM AND DUAL PORT MEMORY DEVICE WITH SELF-COPY FUNCTION - A memory device with a self-copy function includes a memory cell array having first and second banks, and a memory interface. The memory interface reads data from a memory area of the first bank corresponding to a source address contained in previously set self-copy information and writes the read data to a memory area of the second bank corresponding to a destination address contained in the self-copy information via a self-copy data path when a self-copy signal is activated by an external self-copy start request. | 02-11-2010 |
20100037015 | MEMORY CONTROL UNIT AND MEMORY CONTROL METHOD - An object of the invention is to provide a memory control unit and a memory control method capable of making the operation setting of SDRAM without intentionally stopping access to the SDRAM. | 02-11-2010 |
20100042778 | Memory System Such as a Dual-Inline Memory Module (DIMM) and Computer System Using the Memory System - A memory system | 02-18-2010 |
20100042779 | Implementing Vector Memory Operations - In one embodiment, the present invention includes an apparatus having a register file to store vector data, an address generator coupled to the register file to generate addresses for a vector memory operation, and a controller to generate an output slice from one or more slices each including multiple addresses, where the output slice includes addresses each corresponding to a separately addressable portion of a memory. Other embodiments are described and claimed. | 02-18-2010 |
20100049911 | Circuit and Method for Generating Data Input Buffer Control Signal - A data input buffer control signal generating device is capable of preventing unnecessary operation and current consumption of blocks and thus stabilizing an internal operation of DRAM by generating a control signal which controls an enabling timing of a data input buffer so as not to conflict with output data. The data input buffer control signal generating device includes a write-related control unit configured to generate a data input buffer reference signal generated on the basis of a write latency by a write command, a read-related control unit configured to replicate a delay through a data output path, delay an end command for a data output termination and generate a delayed end command, wherein the end command is generated by a read command, and an output unit configured to output a data input buffer control signal by combining the data input buffer reference signal and the output of the delayed end command. | 02-25-2010 |
20100057983 | METHOD AND APPARATUS FOR AN ACTIVE LOW POWER MODE OF A PORTABLE COMPUTING DEVICE - The present invention discloses a portable computing device | 03-04-2010 |
20100064099 | INPUT-OUTPUT MODULE, PROCESSING PLATFORM AND METHOD FOR EXTENDING A MEMORY INTERFACE FOR INPUT-OUTPUT OPERATIONS - Embodiments of an I/O module, processing platform, and method for extending a memory interface are generally described herein. In some embodiments, the I/O module may be configured to operate in a memory module socket, such as a DIMM socket, to provide increased I/O functionality in a host system. Some system management bus address lines and some unused system clock signal lines may be reconfigured as serial data lines for serial data communications between the I/O module and a PCIe switch of the host system. | 03-11-2010 |
20100064100 | SYSTEMS, METHODS, AND APPARATUSES FOR IN-BAND DATA MASK BIT TRANSMISSION - Embodiments of the invention are generally directed to systems, methods, and apparatuses for in-band data mask bit transmission. In some embodiments, one or more data mask bits are integrated into a partial write frame and are transferred to a memory device via the data bus. Since the data mask bits are transferred via the data bus, the system does not need (costly) data mask pin(s). In some embodiments, a mechanism is provided to enable a memory device (e.g., a DRAM) to check for valid data mask bits before completing the partial write to the DRAM array. | 03-11-2010 |
20100070696 | System and Method for Packaged Memory - In one embodiment, a memory system is disclosed. The memory system has at least one memory chip having an address and data interface coupled to an internal address and data bus, and a memory controller and interface chip also having an address and data interface coupled to the address and data interface of the at least one memory chip via an internal address and data bus. The at least one memory chip, the memory controller and interface chip and the internal address and data bus are disposed within a common chip package. The memory controller and interface chip has an external interface configured to be coupled to a standard memory bus via external contacts of the common chip package. | 03-18-2010 |
20100070697 | Memory Controller Circuit, Electronic Apparatus Controller Device and Multifunction Apparatus - A memory controller circuit configured to control an SDRAM is provided. The memory controller circuit includes a first unit configured to accept an access request provided by one of a plurality of masters for access to a page included in the SDRAM. The memory controller circuit includes a second unit configured to record an access request period of each of the masters. The memory controller circuit includes a third unit configured to set an open period of the page on the basis of the access request period recorded in the second unit in accordance with the master having provided the access request. The third unit is configured to open the page requested to be accessed for the open period having been set. | 03-18-2010 |
20100077139 | MULTI-PORT DRAM ARCHITECTURE - Embodiments of the invention provide a memory device that may be accessed by a plurality of controllers or processor cores via respective ports of the memory device. Each controller may be coupled to a respective port of the memory device via a data bus. Each port of the memory device may be associated with a predefined section of memory, thereby giving each controller access to a distinct section of memory without interference from other controllers. A common command/address bus may couple the plurality of controllers to the memory device. Each controller may assert an active signal on a memory access control bus to gain access to the command/address bus to initiate a memory access. | 03-25-2010 |
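The per-port partitioning described above can be sketched behaviorally: each port maps to a fixed, disjoint address window, so a controller on port p can only touch its own section. Port count and section size below are arbitrary illustration values.

```python
class MultiPortMemory:
    """Behavioral model: one address window per port, no cross-port access."""

    def __init__(self, num_ports=4, section_words=1024):
        self.section = section_words
        self.mem = [0] * (num_ports * section_words)

    def _translate(self, port, offset):
        # Reject accesses that would land outside the port's own section.
        if not 0 <= offset < self.section:
            raise ValueError("offset outside port's section")
        return port * self.section + offset

    def write(self, port, offset, value):
        self.mem[self._translate(port, offset)] = value

    def read(self, port, offset):
        return self.mem[self._translate(port, offset)]
```

A write through port 1 is invisible at the same offset on port 0, mirroring the "distinct section without interference" property.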
20100082894 | COMMUNICATION SYSTEM AND METHODS BETWEEN PROCESSORS - A system for communication between processors is provided. The system comprises a first processor, a second processor, an SRAM and a DMA unit. The DMA unit further comprises a detection unit to determine whether the SRAM is accessed by the second processor, wherein when the SRAM is not accessed by the second processor, the access control of the SRAM is transferred to the DMA unit, and data communication between the first processor and the second processor is transmitted by the DMA unit. | 04-01-2010 |
20100100670 | Out of Order Dram Sequencer - Memory access requests are successively received in a memory request queue of a memory controller. Any conflicts or potential delays between temporally proximate requests that would occur if the memory access requests were to be executed in the received order are detected, and the received order of the memory access requests is rearranged to avoid or minimize the conflicts or delays and to optimize the flow of data to and from the memory data bus. The memory access requests are executed in the reordered sequence, while the originally received order of the requests is tracked. After execution, data read from the memory device by the execution of the read-type memory access requests are transferred to the respective requestors in the order in which the read requests were originally received. | 04-22-2010 |
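The two ideas in the preceding abstract — reorder queued requests to avoid back-to-back same-bank conflicts, yet deliver read results in the originally received order — can be sketched as follows. The greedy scheduling policy is an assumption for illustration; a real sequencer would weigh many more timing constraints.

```python
def reorder_requests(requests):
    """requests: list of (req_id, bank) in received order. Greedily pick the
    next pending request whose bank differs from the previously issued one
    when possible, otherwise accept a same-bank conflict."""
    pending, issued, prev_bank = list(requests), [], None
    while pending:
        pick = next((r for r in pending if r[1] != prev_bank), pending[0])
        pending.remove(pick)
        issued.append(pick)
        prev_bank = pick[1]
    return issued

def return_in_order(executed):
    """executed: list of (req_id, data) in execution order. Requestors see
    read data sorted back to original request order (by req_id)."""
    return [data for _, data in sorted(executed)]
```

Four requests hitting banks 0, 0, 1, 1 in that order get interleaved as banks 0, 1, 0, 1, while the tracked request IDs restore the original delivery order afterward.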
20100106900 | Semiconductor memory device and method thereof - A semiconductor memory device and method thereof are provided. The example method may be directed to performing a memory operation in a semiconductor memory device, and may include receiving data and a data masking signal corresponding to at least a portion of the received data, where the received data is scheduled to be written into memory in response to a write command and the data masking signal is configured to block the at least a portion of the received data from being written into the memory, and configuring timing parameters differently for each of the received data and the data masking signal so as to execute the write command without writing the at least a portion of the received data into the memory. | 04-29-2010 |
20100138597 | Information Processing System, System Controller, and Memory Control Method - According to one embodiment, an extreme data rate DRAM is a DRAM that resets its data in response to a reset signal. When power is initially supplied to a system, a system controller outputs the reset signal to the extreme data rate DRAM in response to a reset signal input from a memory controller through a level shifter. When shutting down power of the system while suspending data stored in the extreme data rate DRAM, the system controller shuts down power of the memory controller while maintaining supply of power to the extreme data rate DRAM in response to the reset signal input from the memory controller through the level shifter. | 06-03-2010 |
20100138598 | MEMORY DEVICES WITH BUFFERED COMMAND ADDRESS BUS - Circuits and methods are provided that alleviate overloading of the command address bus and limit decreases in command address bus bandwidth to allow increased numbers of memory modules to be included in a computer system. A plurality of switches is coupled between the command address bus (which is coupled to the memory controller) and a respective plurality of memory modules. Each switch provides command address bus data only to its respective memory module. Preferably, only one switch does so at a time, limiting the loading seen by the memory controller. | 06-03-2010 |
20100146199 | Memory System Topologies Including A Buffer Device And An Integrated Circuit Memory Device - Systems, among other embodiments, include topologies (data and/or control/address information) between an integrated circuit buffer device (that may be coupled to a master, such as a memory controller) and a plurality of integrated circuit memory devices. For example, data may be provided between the plurality of integrated circuit memory devices and the integrated circuit buffer device using separate segmented (or point-to-point link) signal paths in response to control/address information provided from the integrated circuit buffer device to the plurality of integrated circuit memory devices using a single fly-by (or bus) signal path. An integrated circuit buffer device enables configurable effective memory organization of the plurality of integrated circuit memory devices. The memory organization represented by the integrated circuit buffer device to a memory controller may be different than the actual memory organization behind or coupled to the integrated circuit buffer device. The buffer device segments and merges the data transferred between the memory controller that expects a particular memory organization and actual memory organization. | 06-10-2010 |
20100146200 | NON-SNOOP READ/WRITE OPERATIONS IN A SYSTEM SUPPORTING SNOOPING - Techniques are disclosed that may utilize generic tracker structures to provide data coherency in a multi-node system that supports non-snoop read and write operations. The trackers may be organized as a two-dimensional queue structure that may be utilized to resolve conflicting read and/or write operations. Multiple queues having differing associated priorities may be utilized. | 06-10-2010 |
20100153636 | CONTROL SYSTEM AND METHOD FOR MEMORY ACCESS - A control system for memory access includes a system memory access command buffer, a memory access command parallel processor, a DRAM command controller and a read data buffer. The system memory access command buffer stores plural system memory access commands. The memory access command parallel processor is connected to the system memory access command buffer for fetching and decoding the system memory access commands into plural DRAM access commands, storing the DRAM access commands in DRAM bank command FIFOs, and performing priority setting according to a DRAM bank priority table. The DRAM command controller is connected to the memory access command parallel processor and a DRAM for receiving the DRAM access commands, and sending control commands to the DRAM. The read data buffer is connected to the DRAM command controller and the system bus for storing the read data and rearranging a sequence of the read data. | 06-17-2010 |
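The per-bank queueing stage above can be sketched with one FIFO per bank and a priority-table-driven dispatcher. The priority scheme (lower number dispatched first) is an assumption; the abstract only says priorities come from a bank priority table.

```python
from collections import deque

class BankCommandQueues:
    """Decoded DRAM commands queue per bank; dispatch drains the
    non-empty FIFO of the highest-priority bank first."""

    def __init__(self, num_banks, priority_table):
        self.fifos = [deque() for _ in range(num_banks)]
        self.priority = priority_table  # bank -> priority (lower = first)

    def push(self, bank, command):
        self.fifos[bank].append(command)

    def dispatch(self):
        """Pop one command from the highest-priority non-empty bank FIFO,
        or return None when everything is drained."""
        banks = [b for b, f in enumerate(self.fifos) if f]
        if not banks:
            return None
        best = min(banks, key=lambda b: self.priority[b])
        return best, self.fifos[best].popleft()
```

Within a single bank, commands still leave in FIFO order; the priority table only arbitrates between banks.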
20100153637 | Arbitration for memory device with commands - A plurality of masters arbitrate for access to a shared memory device, such as a SDRAM (synchronous dynamic random access memory), amongst themselves using software and arbitration interfaces. The masters generate additional commands upon arbitration, such as MRS and PALL commands, for prevention of collision of commands, refresh starvation, and/or a missing pre-charge operation in the shared memory device. | 06-17-2010 |
20100169562 | PROCESSING SYSTEM AND ELECTRONIC DEVICE WITH SAME - A processing system for use in an electronic device is disclosed. The processing system includes a memory unit, an application processor connected to the memory unit, and a baseband processor connected to the memory unit and the application processor. The memory unit is configured for storing information of the electronic device. The application processor is configured for handling applications of the electronic device. The baseband processor is configured for providing communication capabilities for the electronic device. The application processor includes a temperature detector configured for detecting the temperature of the application processor. When the sensed temperature of the application processor is higher than a predetermined temperature, the baseband processor is instructed by the application processor to share workload of the application processor. | 07-01-2010 |
20100174858 | EXTRA HIGH BANDWIDTH MEMORY DIE STACK - A system includes a central processing unit (CPU); a memory device in communication with the CPU, and a direct memory access (DMA) controller in communication with the CPU and the memory device. The memory device includes a plurality of vertically stacked chips and a plurality of input/output (I/O) ports. Each of the I/O ports is connected to at least one of the plurality of chips through a through silicon via. The DMA controller is configured to manage the transfer of data to and from the memory device. | 07-08-2010 |
20100185810 | IN-DRAM CYCLE-BASED LEVELIZATION - Systems and methods are provided for in-DRAM cycle-based levelization. In a multi-rank, multi-lane memory system, an in-DRAM cycle-based levelization mechanism couples to a memory device in a rank and individually controls additive write latency and/or additive read latency for the memory device. The in-DRAM levelization mechanism ensures that a distribution of relative total write or read latencies across the lanes in the rank is substantially similar to that in another rank. | 07-22-2010 |
20100185811 | Data processing system and method - A data processing system including a non-volatile memory and a processor controlling an operation of the non-volatile memory is provided. The processor transmits and receives a first type of data to and from an outside through a first path through which a first command and a first address, which are used to write/read the first data to/from the non-volatile memory, are transmitted. The processor also transmits and receives a second type of data to and from the outside through a second path different from the first path through which a second command and a second address, which are used to write/read the second data to/from the non-volatile memory, are transmitted. | 07-22-2010 |
20100199034 | METHOD AND APPARATUS FOR ADDRESS FIFO FOR HIGH BANDWIDTH COMMAND/ADDRESS BUSSES IN DIGITAL STORAGE SYSTEM - A method of buffering a data stream in an electronic device using a first-in first-out (FIFO) buffer system wherein a first read latch signal does not change a pointer location of a read pointer. A dynamic random access memory (DRAM) and system are also disclosed in accordance with the invention to include a FIFO buffer system to buffer memory addresses and commands within the DRAM until corresponding data is available. | 08-05-2010 |
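One reading of the pointer behavior claimed above — the first read latch presents the head entry without moving the read pointer, and a later signal actually consumes it — can be modeled as a peek-then-advance FIFO. This interpretation and the naming are assumptions for illustration.

```python
class PeekFifo:
    """Circular FIFO where latching the output does not advance the read
    pointer; a separate advance signal consumes the entry."""

    def __init__(self, depth):
        self.buf = [None] * depth
        self.wr = self.rd = self.count = 0
        self.depth = depth

    def push(self, item):
        if self.count == self.depth:
            raise OverflowError("FIFO full")
        self.buf[self.wr] = item
        self.wr = (self.wr + 1) % self.depth
        self.count += 1

    def latch(self):
        """First read latch: output the head entry, pointer unchanged."""
        return self.buf[self.rd] if self.count else None

    def advance(self):
        """Subsequent signal: actually move the read pointer."""
        if self.count:
            self.rd = (self.rd + 1) % self.depth
            self.count -= 1
```

This lets an address/command pair be held at the FIFO output until the corresponding data becomes available, as the abstract describes.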
20100211728 | APPARATUS AND METHOD FOR BUFFERING DATA BETWEEN MEMORY CONTROLLER AND DRAM - An apparatus is provided for buffering data between a memory controller and a DRAM. The apparatus includes a phase locked loop (PLL), a phase interpolator for aligning a phase of an output clock signal in response to a phase aligning control word, and a non-volatile storage location permanently storing the phase aligning control word. The phase aligning control word is determined through an initial training procedure of the device under predetermined training conditions of at least a supply voltage level and a temperature, and the predetermined training conditions are set so as to optimize the phase alignment of an edge of the output clock signal with respect to the buffered data signal. | 08-19-2010 |
20100217928 | Semiconductor Memory Asynchronous Pipeline - An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it. | 08-26-2010 |
20100223426 | Variable-width memory - Described is a memory system in which the memory core organization changes with device width. The number of physical memory banks accessed reduces with device width, resulting in reduced power usage for relatively narrow memory configurations. Increasing the number of logical memory banks for narrow memory widths reduces the likelihood of bank conflicts, and consequently improves speed performance. | 09-02-2010 |
20100228910 | Single-Port SRAM and Method of Accessing the Same - A system and method for resolving request collision in a single-port static random access memory (SRAM) are disclosed. A first SRAM part and a second SRAM part of the single-port SRAM are accessed in turn. When request collision occurs, data is temporarily stored in a first or second shadow bank associated with the first or the second SRAM part which is under access. The temporarily stored data is then transferred, at a later time, to an associated one of the first/second SRAM parts while the other one of the first/second SRAM parts is being accessed. | 09-09-2010 |
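The collision handling described above can be sketched behaviorally: when a write targets the SRAM part currently under access, the data is parked in that part's shadow bank and drained into the part on a later cycle. The naming and scheduling below are assumptions, not the claimed circuit.

```python
class TwoPartSram:
    """Behavioral model: two single-port halves, one shadow bank each.
    A colliding write is parked in the shadow bank; reads check the
    shadow bank first so parked data is never lost."""

    def __init__(self):
        self.parts = [dict(), dict()]   # the two single-port SRAM halves
        self.shadow = [dict(), dict()]  # one shadow bank per half

    def write(self, part, addr, value, busy_part):
        if part == busy_part:
            self.shadow[part][addr] = value  # collision: park the data
        else:
            self.parts[part][addr] = value

    def drain_shadow(self, part):
        """Transfer parked entries into the part once it is free."""
        self.parts[part].update(self.shadow[part])
        self.shadow[part].clear()

    def read(self, part, addr):
        # Shadow bank holds the freshest value until it is drained.
        return self.shadow[part].get(addr, self.parts[part].get(addr))
```

Draining would typically happen while the *other* part is being accessed, which is how the single-port cells emulate dual-port behavior.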
20100250841 | Memory controlling device - A memory controlling device includes: a request generating section; a row selecting information retaining section; a column selecting information retaining section; a memory bank information managing section; a command generating section; and a command aligning section. | 09-30-2010 |
20100274959 | METHODS FOR MAIN MEMORY WITH NON-VOLATILE TYPE MEMORY MODULES - A computing system is disclosed that includes a memory controller in a processor socket normally reserved for a processor. A plurality of non-volatile memory modules may be plugged into memory sockets normally reserved for DRAM memory modules. The non-volatile memory modules may be accessed using a data communication protocol to access the non-volatile memory modules. The memory controller controls read and write accesses to the non-volatile memory modules. The memory sockets are coupled to the processor socket by printed circuit board traces. The data communication protocol to access the non-volatile memory modules is communicated over the printed circuit board traces and through the sockets normally used to access DRAM type memory modules. | 10-28-2010 |
20100293325 | MEMORY DEVICES AND SYSTEMS INCLUDING MULTI-SPEED ACCESS OF MEMORY MODULES - A system, comprising: a plurality of modules, each module comprising a plurality of integrated circuits devices coupled to a module bus and a channel interface that communicates with a memory controller, at least a first module having a portion of its total module address space composed of first type memory cells having a first maximum access speed, and at least a second module having a portion of its total module address space composed of second type memory cells having a second maximum access speed slower than the first access speed. | 11-18-2010 |
20100306458 | Memory device having integral instruction buffer - A dynamic random access memory integrated circuit includes an interface to a serial interconnect, where the interface is configured to receive a plurality of memory access instructions over the serial interconnect, and a buffer configured to store the plurality of memory access instructions prior to execution of the buffered memory access instructions by the dynamic random access memory integrated circuit. The memory access instructions are received over at least one serial link that forms the serial interconnect, and the at least one serial link may be a shared bi-directional serial link or a uni-directional serial link. | 12-02-2010 |
20100306459 | Memory Controllers - Techniques pertaining to the design of memory controllers are disclosed. According to one aspect of the present invention, a memory controller reduces delays in a data strobe signal of a DDR memory relative to a clock signal of a memory controller thereof. In one embodiment, the memory controller employs four IO ports, two inverters, six edge triggers and a multiplexer. By feeding back an inverted clock signal and utilizing the rising and falling edges of the clock signal, the delays in a data strobe signal of a DDR memory relative to a clock signal of a memory controller are considerably reduced or minimized. | 12-02-2010 |
20100306460 | MEMORY CONTROLLER, SYSTEM, AND METHOD FOR ACCESSING SEMICONDUCTOR MEMORY - A memory controller includes a sorting determination circuit which activates a sorting signal when an access request address for wrapping access to at least one memory block of a semiconductor memory is different from a first leading address of the at least one memory block, an address conversion circuit which sets the first leading address to an access starting address when the sorting signal is activated, a first data sorting circuit which sorts, when the sorting signal is activated, data sequentially read from the semiconductor memory in accordance with the access starting address starting from data corresponding to the access request address and a first output circuit which outputs the sorted data to an external bus. | 12-02-2010 |
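The effect of the data sorting circuit above amounts to a rotation: the memory is read sequentially from the block's leading address, and the output is reordered so it begins at the requested wrap address. The block size and word granularity here are illustration assumptions.

```python
def sort_wrapped_data(block_data, block_start, request_addr):
    """block_data: words read sequentially starting at block_start.
    Rotate the sequence so the word at request_addr comes out first,
    wrapping around the end of the block."""
    offset = request_addr - block_start
    return block_data[offset:] + block_data[:offset]
```

For a 4-word block starting at 0x100, a wrap request at 0x102 yields the words in the order 0x102, 0x103, 0x100, 0x101.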
20100306461 | MEMORY SYSTEM AND METHOD HAVING VOLATILE AND NON-VOLATILE MEMORY DEVICES AT SAME HIERARCHICAL LEVEL - A processor-based system includes a processor coupled to core logic through a processor bus. The core logic includes a dynamic random access memory (“DRAM”) memory buffer controller. The DRAM memory buffer controller is coupled through a memory bus to a plurality of dynamic random access memory (“DRAM”) modules and a flash memory module, which are at the same hierarchical level from the processor. Each of the DRAM modules includes a memory buffer coupled to the memory bus and to a plurality of dynamic random access memory devices. The flash memory module includes a flash memory buffer coupled to the memory bus and to at least one flash memory device. The flash memory buffer includes a DRAM-to-flash memory converter operable to convert the DRAM memory requests to flash memory requests, which are then applied to the flash memory device. | 12-02-2010 |
20100312955 | MEMORY SYSTEM AND METHOD OF MANAGING THE SAME - A memory system to manage a memory using a virtual memory is provided. The memory system may use an asymmetric memory as a swap storage of a dynamic random access memory (DRAM). The asymmetric memory may be accessed on a byte basis, allowing a process to directly access a page swapped out to the asymmetric memory through direct mapping. | 12-09-2010 |
20100312956 | Load reduced memory module - A memory module includes a plurality of memory chips and a plurality of data register buffers mounted on the module substrate. At least two memory chips are allocated to each of the data register buffers. Each of the data register buffers includes M input/output terminals (M is a positive integer equal to or larger than 1) that are connected to the data connectors via a first data line and N input/output terminals (N is a positive integer equal to or larger than 2M) that are connected to corresponding memory chips via second and third data lines, so that the number of the second and third data lines is N/M times the number of the first data lines. According to the present invention, because the load capacities of the second and third data lines are reduced by a considerable amount, it is possible to realize a considerably high data transfer rate. | 12-09-2010 |
20100318732 | DATA PROCESSOR - The data processor enhances the bus throughput or data throughput of an external memory when there are frequent continuous reads with a smaller data size than the data bus width of the external memory. The data processor includes a memory control unit capable of controlling, in response to a clock, an external memory having plural banks that are independently controllable, plural buses connected to the memory control unit, and circuit modules capable of commanding memory accesses, which are provided in correspondence with each of the buses. The memory control unit contains bank caches each corresponding to one of the banks of the external memory. The data processor thereby enhances the bus throughput or data throughput of the external memory, since it temporarily stores the data read out from the external memory in the bank caches and uses the stored data without invalidating it when performing a continuous data read with a smaller data size than the data bus width of the external memory. | 12-16-2010 |
20100332743 | SYSTEM AND METHOD FOR WRITING CACHE DATA AND SYSTEM AND METHOD FOR READING CACHE DATA - A system and a method for writing cache data and a system and a method for reading cache data are disclosed. The system for writing the cache data includes: an on-chip memory device, configured to cache received write requests and write data associated with the write requests and sort the write requests; a request judging device, configured to extract the sorted write requests and the write data associated with the write requests according to write time sequence restriction information of an off-chip memory device; and an off-chip memory device controller, configured to write the write data extracted by the request judging device in the off-chip memory device. With a combination of the on-chip and off-chip memory devices, a large-capacity data storage space and high-speed read and write efficiency are achieved. | 12-30-2010 |
20110010494 | MEMORY CONTROL CIRCUIT AND MEMORY CONTROL METHOD - The memory control circuit has an access count setting circuit and a DRAM access control circuit. The access count setting circuit receives a minimum activation interval time for different rows in the same bank of the SDRAM, an operating speed, and the number of banks, and calculates an optimal number of readings or writings to each bank. The DRAM access control circuit generates a command sequence and an address for reading or writing an image signal to the SDRAM. | 01-13-2011 |
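One way to read the access-count calculation above: while a row in one bank observes its minimum activation interval (tRC), the controller can stay busy issuing accesses to the other banks. A hedged Python sketch of such a calculation — the formula and parameter names are assumptions, since the abstract does not give the exact arithmetic:

```python
import math

def accesses_per_bank(t_rc_ns, clock_mhz, num_banks, cycles_per_access=4):
    """Estimate how many reads or writes to issue to each bank so that,
    after cycling through all banks, the first bank's minimum activation
    interval (tRC) has already elapsed."""
    # Duration of one burst access, in nanoseconds.
    t_access_ns = cycles_per_access * 1000.0 / clock_mhz
    # Round up so the full rotation always covers tRC.
    return max(1, math.ceil(t_rc_ns / (num_banks * t_access_ns)))
```

For example, with tRC = 60 ns, a 200 MHz clock, 2 banks, and 2 cycles per access, one access takes 10 ns, a full rotation takes 20 ns, and 3 accesses per bank are needed to cover tRC.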
20110016268 | PHASE CHANGE MEMORY IN A DUAL INLINE MEMORY MODULE - Subject matter disclosed herein relates to management of a memory device. | 01-20-2011 |
20110016269 | SYSTEM AND METHOD OF INCREASING ADDRESSABLE MEMORY SPACE ON A MEMORY BOARD - A load-reducing memory module includes a plurality of memory components such as DRAMs. The memory components are organized into sets or ranks such that they can be accessed simultaneously for the full data bit-width of the memory module. A plurality of load reducing switching circuits is used to drive data bits from a memory controller to the plurality of memory components. The load reducing switching circuits are also used to multiplex the data lines from the memory components and drive the data bits to the memory controller. | 01-20-2011 |
20110016270 | RAPID STARTUP COMPUTER SYSTEM AND METHOD - A computer system includes a north bridge chipset, a south bridge chipset, a memory, and a rapid startup apparatus. The rapid startup apparatus includes a DRAM module to install application programs or operation system programs, a battery, a control chip to control data reading and writing for the DRAM module, a PCI-E interface, and a switch circuit. The application programs or the operation system programs are loaded into the memory via the PCI-E interface, the south bridge chipset, and the north bridge chipset in series. The switch circuit processes voltage of the battery or the PCI-E interface and supplies power to the DRAM module. | 01-20-2011 |
20110022791 | High speed memory systems and methods for designing hierarchical memory systems - A system and method for designing and constructing hierarchical memory systems is disclosed. A plurality of different algorithmic memory blocks are disclosed. Each algorithmic memory block includes a memory controller that implements a specific storage algorithm and a set of lower level memory components. Each of those lower level memory components may be constructed with another algorithmic memory block or with a fundamental memory block. By organizing algorithmic memory blocks in various different hierarchical organizations, many different complex memory systems that provide new features may be created. | 01-27-2011 |
20110035544 | MULTI-PATH ACCESSIBLE SEMICONDUCTOR MEMORY DEVICE HAVING MAILBOX AREAS AND MAILBOX ACCESS CONTROL METHOD THEREOF - A multipath accessible semiconductor memory device having a mailbox area and a mailbox access control method thereof are provided. The semiconductor memory device includes N number of ports, at least one shared memory area allocated in a memory cell array, and N number of mailbox areas for message communication. The at least one shared memory area is operationally connected to the N number of ports, and is accessible through a plurality of data input/output lines to form a data access path between the at least one shared memory area and one port, having an access right to the at least one memory area, among the N number of ports. The N number of mailbox areas are provided in one-to-one correspondence with the N number of ports and are accessible through the plurality of data input/output lines when an address of a predetermined area of the at least one shared memory area is applied to the semiconductor memory device. An efficient layout of mailboxes and an efficient message access path can be obtained. | 02-10-2011 |
20110055469 | Providing State Storage In A Processor For System Management Mode - In one embodiment, the present invention includes a processor that has an on-die storage such as a static random access memory to store an architectural state of one or more threads that are swapped out of architectural state storage of the processor on entry to a system management mode (SMM). In this way communication of this state information to a system management memory can be avoided, reducing latency associated with entry into SMM. Embodiments may also enable the processor to update a status of executing agents that are either in a long instruction flow or in a system management interrupt (SMI) blocked state, in order to provide an indication to agents inside the SMM. Other embodiments are described and claimed. | 03-03-2011 |
20110066796 | AUTONOMOUS SUBSYSTEM ARCHITECTURE - An autonomous sub-system receives a database downloaded from a host controller. A controller monitors bus traffic and/or allocated resources in the subsystem and re-allocates resources based on the monitored results to dynamically improve system performance. | 03-17-2011 |
20110066797 | MEMORY SYSTEM - A memory system according to the present invention includes a bus connected to process units, a first DRAM which has a first storage area and a second storage area and which is controlled in operation by a DRAM control signal, a second DRAM which has the same bit width as that of the first DRAM, which has a third storage area having the same address space as that of the first storage area and having a capacity equal to that of the first storage area, and which is controlled in operation by the DRAM control signal, and a controller which is provided with a read command and a logical address from the process units via the bus, which controls operation of the first DRAM and the second DRAM according to the read command and the logical address, and thereby outputs data read from the first DRAM or the second DRAM to the process units via the bus. | 03-17-2011 |
20110072205 | MEMORY DEVICE AND MEMORY SYSTEM COMPRISING SAME - A memory device comprises a memory cell array comprising a plurality of memory blocks each comprising a plurality of memory cells and a control setting circuit. The control setting circuit divides the memory blocks into at least first and second groups based on whether each of the memory blocks comprises at least one substandard memory cell, and sets individually control parameters of the first and second groups. The substandard memory cells are identified based on test results of the memory cells with respect to at least one of the control parameters. Each memory block in the first group comprises at least one substandard memory cell, and each memory block in the second group comprises no substandard memory cell. | 03-24-2011 |
20110078370 | MEMORY LINK INITIALIZATION - Link initialization techniques decouple the read training from the write training. Read training may be accomplished in a robust manner before write training is performed. These techniques may provide significantly improved link initialization times. A user-programmable register within a dynamic random access memory (DRAM) module may be utilized by the decoupled read training and write training processes. The decoupling may result in shorter and more robust training segments that may support faster training and/or increased link speeds. | 03-31-2011 |
20110082971 | INFORMATION HANDLING SYSTEM MEMORY MODULE OPTIMIZATION - A memory system includes a first memory module and a second memory module. A memory controller is coupled to the first and second memory modules and reads configuration information from the first and second memory modules using a memory channel. The controller also configures a switch coupled between the controller and one of the memory modules to communicate using either a chip select line or a memory address line. | 04-07-2011 |
20110087834 | Memory Package Utilizing At Least Two Types of Memories - A memory system and methods for memory management are presented. The memory system includes a volatile memory electrically connected to a high-density memory; a memory controller that expects data to be written or read to or from the memory system at a bandwidth and a latency associated with the volatile memory; a directory within the volatile memory that associates a volatile memory address with data stored in the high-density memory; and redundant storage in the high-density memory that stores a copy of the association between the volatile memory address and the data stored in the high-density memory. The methods for memory management allow writing to and reading from the memory system using a first memory read/write interface (e.g. DRAM interface, etc.), though data is stored in a device of a different memory type (e.g. FLASH, etc.). | 04-14-2011 |
20110093654 | Memory control - A data processing apparatus | 04-21-2011 |
20110113189 | MULTIPLE PROCESSOR SYSTEM AND METHOD INCLUDING MULTIPLE MEMORY HUB MODULES - A processor-based electronic system includes several memory modules arranged in first and second ranks. The memory modules in the first rank are directly accessed by any of several processors, and the memory modules in the second rank are accessed by the processors through the memory modules in the first rank. The data bandwidth between the processors and the memory modules in the second rank is varied by varying the number of memory modules in the first rank that are used to access the memory modules in the second rank. Each of the memory modules includes several memory devices coupled to a memory hub. The memory hub includes a memory controller coupled to each memory device, a link interface coupled to a respective processor or memory module, and a cross bar switch coupling any of the memory controllers to any of the link interfaces. | 05-12-2011 |
20110119439 | Spacing Periodic Commands to a Volatile Memory for Increased Performance and Decreased Collision - A periodic command spacing mechanism is provided for spacing periodic commands (e.g., refresh commands, ZQ calibration, etc.) to a volatile memory (e.g., SDRAM, DRAM, EDRAM, etc.) for increased performance and decreased collision. In one embodiment, periodic command requests are monitored and if a collision is detected between two or more of the requests, the colliding requests are spaced with respect to one another by a timer offset applied on a chip select basis. The periodic command spacing mechanism may be used in conjunction with command arbitration to make sure the periodic commands are executed without significantly impacting performance (e.g., Reads and Writes are allowed to flow). Preferably, the periodic command requests are initialized by generating an initial sequence of individual requests, each successive request in the initial sequence being generated spaced apart with respect to the previous request by a timer offset applied on a chip select basis. | 05-19-2011 |
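The initialization described in the last sentence of the entry above — an initial sequence of periodic command requests, each spaced from the previous one by a timer offset applied per chip select — can be sketched as follows (function and parameter names are illustrative assumptions):

```python
def init_periodic_requests(base_time, num_chip_selects, offset):
    """Generate the initial sequence of periodic command request times,
    with each chip select's request spaced by `offset` from the previous
    one so that refreshes, ZQ calibrations, etc. do not collide."""
    return [base_time + cs * offset for cs in range(num_chip_selects)]
```

With a base time of 100, three chip selects, and an offset of 8, the requests fire at times 100, 108, and 116 rather than colliding at 100.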
20110119440 | DYNAMIC PROGRAMMABLE INTELLIGENT SEARCH MEMORY - Memory architecture provides capabilities for high performance content search. The architecture creates an innovative memory derived using randomly accessible dynamic memory circuits that can be programmed with content search rules, which are used by the memory to evaluate presented content for matching with the programmed rules. When the content being searched matches any of the rules programmed in the dynamic Programmable Intelligent Search Memory (PRISM), action(s) associated with the matched rule(s) are taken. Content search rules comprise regular expressions which are converted to finite state automata and then programmed in dynamic PRISM for evaluating content with the search rules. | 05-19-2011 |
20110125961 | DRAM Control Method and the DRAM Controller Utilizing the Same - A Dynamic Random Access Memory (DRAM) controller for controlling read and write operations of a DRAM includes a storage unit and a control unit. The storage unit stores a first predetermined size of data including data written into the DRAM in response to a previous partial write request, and stores the corresponding store addresses of the first predetermined size of data in the DRAM. The control unit, in response to a read request, determines whether there exists any address in the store addresses equal to a read address of the read request, and reads data corresponding to the read address from the storage unit when an address in the store addresses is equal to the read address. | 05-26-2011 |
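The read path in the entry above is essentially a lookup in a small buffer of recent partial writes before falling back to the DRAM. A Python sketch under assumed names (the class, its methods, and the callback-based DRAM fallback are illustrative, not from the patent):

```python
class PartialWriteBuffer:
    """Sketch of the storage unit: holds data from recent partial
    writes keyed by their DRAM store addresses."""

    def __init__(self):
        self._entries = {}  # store address -> buffered write data

    def record_write(self, address, data):
        self._entries[address] = data

    def read(self, address, dram_read):
        # Serve the read from the buffer when the address matches a
        # buffered partial write; otherwise fall back to the DRAM.
        if address in self._entries:
            return self._entries[address]
        return dram_read(address)
```

A read that hits a buffered address returns the buffered data without touching the DRAM; a miss invokes the supplied DRAM read callback.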
20110131370 | DISABLING OUTBOUND DRIVERS FOR A LAST MEMORY BUFFER ON A MEMORY CHANNEL - Memory apparatus and methods utilizing multiple bit lanes may redirect one or more signals on the bit lanes. A memory agent may include a redrive circuit having a plurality of bit lanes, a memory device or interface, and a fail-over circuit coupled between the plurality of bit lanes and the memory device or interface. | 06-02-2011 |
20110145492 | POLYMORPHOUS SIGNAL INTERFACE BETWEEN PROCESSING UNITS - A single interconnect is provided between a first processor and a second processor, such that the first processor may access a common memory through the second processor while the second processor can be mostly powered off. The first processor accesses the memory through a memory controller using a standard dynamic random access memory (DRAM) bus protocol. Instead of the memory controller directly connecting to the memory, the access path is through the second processor to the memory. Additionally, a bidirectional communication protocol bus is mapped to the existing DRAM bus signals. When both the first processor and the second processor are active, the bus protocol between the processors switches from the DRAM protocol to the bidirectional communication protocol. This enables the necessary chip-to-chip transaction semantics without requiring the additional cost burden of a dedicated interface for the bidirectional communication protocol. | 06-16-2011 |
20110145493 | Independently Controlled Virtual Memory Devices In Memory Modules - Various embodiments of the present invention are directed to multi-core memory modules. In one embodiment, a memory module | 06-16-2011 |
20110153924 | CORE SNOOP HANDLING DURING PERFORMANCE STATE AND POWER STATE TRANSITIONS IN A DISTRIBUTED CACHING AGENT - A method and apparatus may provide for detecting a performance state transition in a processor core and bouncing a core snoop message on a shared interconnect ring in response to detecting the performance state transition. The core snoop message may be associated with the processor core, wherein a plurality of processor cores may be coupled to the shared interconnect ring via a distributed last level cache controller. | 06-23-2011 |
20110153925 | MEMORY CONTROLLER FUNCTIONALITIES TO SUPPORT DATA SWIZZLING - A memory controller that can determine a swizzling pattern between the memory controller and memory devices. The memory controller generates a swizzling map based on the determined swizzling pattern. The memory controller may internally swizzle data using the swizzling map before writing the data to memory so that the data appears in the correct order at the pins of the memory chip(s). On reads, the controller can internally de-swizzle the data before performing the error correction operations using the swizzling map. | 06-23-2011 |
20110153926 | Controlling Access To A Cache Memory Using Privilege Level Information - In one embodiment, a cache memory includes entries each to store a ring level identifier, which may indicate a privilege level of information stored in the entry. This identifier may be used in performing read accesses to the cache memory. As an example, a logic coupled to the cache memory may filter an access to one or more ways of a selected set of the cache memory based at least in part on a current privilege level of a processor and the ring level identifier of the one or more ways. Other embodiments are described and claimed. | 06-23-2011 |
20110161576 | MEMORY MODULE AND MEMORY SYSTEM COMPRISING MEMORY MODULE - A memory module comprises a plurality of semiconductor memory devices each having a termination circuit for a command/address bus. The semiconductor memory devices are formed in a substrate of the memory module, and they operate in response to a command/address signal, a data signal, and a termination resistance control signal. | 06-30-2011 |
20110161577 | DATA STORAGE SYSTEM, ELECTRONIC SYSTEM, AND TELECOMMUNICATIONS SYSTEM - A data storage system comprising a plurality of buffers configured to store data, a read pointer to indicate a particular one of the plurality of buffers from which data should be read, and a write pointer to indicate a particular one of the plurality of buffers to which data should be written is disclosed. The write pointer points at least one buffer ahead of the buffer to which the read pointer is pointing. An electronic system and a telecommunication system are further disclosed. | 06-30-2011 |
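The buffer arrangement in the entry above — a write pointer that stays at least one buffer ahead of the read pointer — can be sketched as a small ring buffer (the class shape is an assumption; overflow and underflow handling are omitted for brevity):

```python
class RingBuffer:
    """Sketch of the disclosed buffer arrangement: the write pointer
    starts, and stays, at least one buffer ahead of the read pointer."""

    def __init__(self, size):
        self._buf = [None] * size
        self._read = 0
        self._write = 1  # one buffer ahead of the read pointer

    def write(self, data):
        self._buf[self._write] = data
        self._write = (self._write + 1) % len(self._buf)

    def read(self):
        # Advance to the next buffer, then return its contents.
        self._read = (self._read + 1) % len(self._buf)
        return self._buf[self._read]
```

Data written in order comes back out in order, with the one-buffer gap guaranteeing the reader never overtakes the writer.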
20110167210 | SEMICONDUCTOR DEVICE AND SYSTEM COMPRISING MEMORIES ACCESSIBLE THROUGH DRAM INTERFACE AND SHARED MEMORY REGION - A semiconductor device comprises a nonvolatile memory device, a memory device that processes data according to a DRAM protocol, and an ASIC that converts data output from the memory device into a format compatible with a nonvolatile memory device or a hard disk and outputs the converted data to the nonvolatile memory device or the hard disk. | 07-07-2011 |
20110167211 | DRAM CONTROLLER FOR VIDEO SIGNAL PROCESSING OPERABLE TO ENABLE/DISABLE BURST TRANSFER - An interface unit | 07-07-2011 |
20110173385 | Methods And Apparatus For Demand-Based Memory Mirroring - A method includes determining an amount of memory space in a memory device available for memory mirroring. The method further includes presenting the available memory space to an operating system. The method further includes selecting at least a portion of the amount of memory space to be used for memory mirroring with the operating system. The method further includes adding a non-selected portion of the available memory to memory space available to the operating system during operation. An associated system and machine readable medium are also disclosed. | 07-14-2011 |
20110179220 | Memory Controller - A memory controller | 07-21-2011 |
20110179221 | MEMORY REGISTER ENCODING SYSTEMS AND METHODS - Apparatus, systems, and methods are disclosed that operate to encode register bits to generate encoded bits such that, for pairs of addresses, an encoded bit to be coupled to a first address in a memory device may be exchanged with an encoded bit to be coupled to a second address in the memory device. Apparatus, systems, and methods are disclosed that operate to invert encoded bits in logic circuits in the memory device if original bits were inverted. Additional apparatus, systems, and methods are disclosed. | 07-21-2011 |
20110191532 | PROTOCOL ENGINE FOR PROCESSING DATA IN A WIRELESS TRANSMIT/RECEIVE UNIT - A protocol engine (PE) for processing data within a protocol stack in a wireless transmit/receive unit (WTRU) is disclosed. The protocol stack executes decision and control operations. The data processing and re-formatting which was performed in a conventional protocol stack is removed from the protocol stack and performed by the PE. The protocol stack issues a control word for processing data and the PE processes the data based on the control word. Preferably, the WTRU includes a shared memory and a second memory. The shared memory is used as a data block place holder to transfer the data amongst processing entities. For transmit processing, the PE retrieves source data from the second memory and processes the data while moving the data to the shared memory based on the control word. For receive processing, the PE retrieves received data from the shared memory and processes it while moving the data to the second memory. | 08-04-2011 |
20110202713 | Semiconductor Memory Asynchronous Pipeline - An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it. | 08-18-2011 |
20110208906 | SEMICONDUCTOR MEMORY DEVICE WITH PLURAL MEMORY DIE AND CONTROLLER DIE - A semiconductor memory device including a plurality of memory die and a controller die. The controller die is connected to an internal control bus. The controller die is configured to provide to a selected one of the memory die an internal read command responsive to an external read command. The selected memory die is configured to provide read data to the controller in response to the internal read command; wherein latency between receipt by the controller die of the external read command and receipt of the read data from the selected memory die differs for at least two of the memory die when selected as the selected memory die. | 08-25-2011 |
20110208907 | Protected Cache Architecture And Secure Programming Paradigm To Protect Applications - Embodiments of the present invention provide a secure programming paradigm, and a protected cache that enable a processor to handle secret/private information while preventing, at the hardware level, malicious applications from accessing this information by circumventing the other protection mechanisms. A protected cache may be used as a building block to enhance the security of applications trying to create, manage and protect secure data. Other embodiments are described and claimed. | 08-25-2011 |
20110225354 | ELECTRONIC APPARATUS - An electronic apparatus includes a memory control circuit that controls a first memory and a second memory, the first memory is connected to the memory control circuit through a first data bus, the second memory is connected to the memory control circuit through the first data bus and a second data bus, and the sum of the bus widths of the first data bus and the second data bus is larger than the bus width of the first data bus by a factor of a. When the memory control circuit receives an access request for the second memory, the memory control circuit generates a command for accessing the second memory b times on the basis of an address of the access request and accesses the second memory. | 09-15-2011 |
20110231601 | PROVIDING HARDWARE RESOURCES HAVING DIFFERENT RELIABILITIES FOR USE BY AN APPLICATION - Power management functionality is described for implementing an application in an energy-efficient manner, without substantially degrading overall performance of the application. The functionality operates by identifying at least first data and second data associated with the application. The first data is considered to have a greater potential impact on performance of the application compared to the second data. The functionality then instructs a first set of hardware-level resources to handle the first data and a second set of hardware-level resources to handle the second data. The first set of hardware-level resources has a higher reliability compared to the second set of hardware-level resources. In one case, the first and second hardware-level resources comprise DRAM memory units. Here, the first set of hardware-level resources achieves greater reliability than the second set of hardware-level resources by being refreshed at a higher rate than the second set of hardware-level resources. | 09-22-2011 |
20110246712 | METHOD AND APPARATUS FOR INTERFACING WITH HETEROGENEOUS DUAL IN-LINE MEMORY MODULES - Described herein is a method and apparatus to interface a processor with a heterogeneous dual in-line memory module (DIMM). The method comprises determining an identity of a DIMM having data lanes; mapping the data lanes based on the determining of the identity of the DIMM; training input-output (I/O) transceivers in response to the mapping of the data lanes; and transferring data to and from the DIMM after training the I/O transceivers. | 10-06-2011 |
20110252191 | METHOD OF DYNAMICALLY SWITCHING PARTITIONS, MEMORY CARD CONTROLLER AND MEMORY CARD STORAGE SYSTEM - A method of dynamically switching partitions for a memory card having a plurality of physical blocks is provided. The method includes configuring logical blocks for mapping to at least a portion of the physical blocks and dividing the logical blocks into first and second partitions; coupling the memory card to a host system and setting CSD corresponding to the memory card as a first default value corresponding to the first partition, wherein the host system requests the CSD to obtain the first default value and accesses the first partition according to the first default value; and setting the CSD corresponding to the memory card as a second default value corresponding to the second partition in response to a switch command from the host system, wherein the host system re-requests the CSD to obtain the second default value and accesses the second partition according to the second default value. | 10-13-2011 |
20110252192 | EFFICIENT FLASH MEMORY-BASED OBJECT STORE - Approaches for an object store implemented, at least in part, on one or more solid state devices. The object store may store objects on a plurality of solid state devices. The object store may include a transaction model means for ensuring that the object store performs transactions in compliance with atomicity, concurrency, isolation, and durability (ACID) properties. The object store may include means for providing parallel flushing in a write cache maintained on each of the solid state devices. The object store may include means for maintaining one or more double-write buffers, for the object store, at a location other than the solid state devices. The object store may optionally comprise means for maintaining one or more circular transaction logs, for the object store, at a location other than the solid state devices. The object store may operate to minimize write operations performed on the solid state devices. | 10-13-2011 |
20110276751 | INTEGRATED MEMORY CONTROL APPARATUS AND METHOD THEREOF - An integrated memory control apparatus including a first interface decoder, a second interface decoder and an interface controller is provided. Wherein, the first interface decoder is coupled to a control chip through a first serial peripheral interface (SPI), the second interface decoder is coupled to a micro-processor unit through a general transmission interface, and the interface controller is coupled to a memory through a second SPI. When the interface controller receives the request signals from the control chip and the micro-processor unit, the control chip may correctly read data from the memory through the first and second SPI. On the other hand, the micro-processor unit may stop reading data from the memory through the general transmission interface. Therefore, the control chip and the micro-processor unit may share the same memory. | 11-10-2011 |
20110289268 | FACILITATING COMMUNICATION BETWEEN MEMORY DEVICES AND CPUS - According to one embodiment, an apparatus comprises one or more memory devices and one or more processors coupled to a circuit board. The memory devices are configured according to a second memory technology. The processors are configured to receive messages conforming to a first memory technology, translate the messages from the first memory technology to the second memory technology, and send the translated messages to the memory devices. | 11-24-2011 |
20110289269 | MEMORY SYSTEM AND METHOD HAVING POINT-TO-POINT LINK - A memory system includes a controller for generating a control signal and a primary memory for receiving the control signal from the controller. A secondary memory is coupled to the primary memory, the secondary memory being adapted to receive the control signal from the primary memory. The control signal defines a background operation to be performed by one of the primary and secondary memories and a foreground operation to be performed by the other of the primary and secondary memories. The primary memory and the secondary memory are connected by a point-to-point link. At least one of the links between the primary and secondary memories can be an at least partially serialized link. At least one of the primary and secondary memories can include an on-board internal cache memory. | 11-24-2011 |
20110296095 | DATA MOVEMENT ENGINE AND MEMORY CONTROL METHODS THEREOF - A data movement engine (DME) for an electronic device is disclosed. The DME has an address generating module and a direct memory access (DMA) module. When a memory of the electronic device is switched to a lower power consumption state, a refresh area of the memory is refreshed and a non-refresh area of the memory is not refreshed. The address generating module obtains at least one source address of data in the non-refresh area and generates at least one destination address for moving data from the non-refresh area to the refresh area, whereby a source-to-destination mapping table is generated. The DMA module performs a first data movement to move data from the non-refresh area to the refresh area according to the source-to-destination mapping table and independently of a microprocessor of the electronic device. | 12-01-2011 |
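The relocation this entry describes — pairing source addresses in the non-refreshed region with destinations in the refreshed region, then copying by DMA without the CPU — can be sketched as follows. All names (`build_mapping`, `dma_move`) and address values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: build a source-to-destination mapping table that
# relocates blocks from the non-refresh area into free space in the
# refresh area, then copy the data (modeled here with a plain dict).

def build_mapping(non_refresh_blocks, refresh_free_addrs):
    """Pair each source block in the non-refresh area with a free
    destination address in the refresh area."""
    if len(refresh_free_addrs) < len(non_refresh_blocks):
        raise ValueError("refresh area has too little free space")
    return {src: dst for src, dst in zip(non_refresh_blocks, refresh_free_addrs)}

def dma_move(memory, mapping):
    """Copy data according to the mapping table; in hardware this would
    run independently of the microprocessor."""
    for src, dst in mapping.items():
        memory[dst] = memory[src]

memory = {0x1000: "a", 0x1040: "b", 0x2000: None, 0x2040: None}
mapping = build_mapping([0x1000, 0x1040], [0x2000, 0x2040])
dma_move(memory, mapping)
```

After the move, the data survives even if refresh is withheld from the original region, since it now lives entirely inside the refreshed area.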
20110296096 | Method And Apparatus For Virtualized Microcode Sequencing - In one embodiment, the present invention includes a processor having multiple cores and an uncore. The uncore may include a microcode read only memory to store microcode to be executed in the cores (that themselves do not include such memory). The cores can include a microcode sequencer to sequence a plurality of micro-instructions (uops) of microcode that corresponds to a macro-instruction to be executed in an execution unit of the corresponding core. Other embodiments are described and claimed. | 12-01-2011 |
20110302366 | Memory expansion using rank aggregation - In one embodiment, a method includes receiving, from a memory controller, a request to access memory stored at memory modules, the request directed to one of a plurality of logical ranks; mapping, at a rank aggregator, the logical rank to one of a plurality of physical ranks at the memory modules; and forwarding the request to one of the memory modules according to the mapping. Two or more of the memory modules are combined to represent the number of logical ranks at the memory controller such that there is a one-to-one mapping between the logical ranks and the physical ranks. An apparatus for rank aggregation is also disclosed. | 12-08-2011 |
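The one-to-one logical-to-physical rank mapping in this entry can be sketched as follows; the geometry (two modules of four ranks each) and the function name are assumptions for illustration only.

```python
# Hypothetical sketch: two 4-rank modules presented to the controller
# as eight logical ranks, with a one-to-one mapping from logical rank
# to a (module, physical rank) pair.

RANKS_PER_MODULE = 4  # assumption
NUM_MODULES = 2       # assumption

def map_logical_rank(logical_rank):
    """Return the (module index, physical rank) serving a logical rank."""
    if not 0 <= logical_rank < RANKS_PER_MODULE * NUM_MODULES:
        raise ValueError("logical rank out of range")
    return divmod(logical_rank, RANKS_PER_MODULE)
```

For example, logical rank 5 lands on module 1, physical rank 1; every logical rank maps to a distinct physical rank, which is the one-to-one property the abstract requires.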
20110302367 | Write Buffer for Improved DRAM Write Access Patterns - The present invention relates to a method and respective system for operating a DRAM main memory. One buffer line is provided for multiple pages. When writing data to the buffer, it is decided to which buffer line the data is written based on its destination main memory address. A tuple consisting of the lower memory address and the data is stored. Data entered into the buffer line is sorted by page when the line is flushed to the main memory. Sorting the buffer entries results in fewer page openings and closings, since the data is re-arranged by memory address and therefore in logical order. By using one line for multiple pages, only a fraction of the memory of a common set-associative cache is needed, thus decreasing the amount of overhead significantly. | 12-08-2011 |
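The flush-time sorting described in this entry can be sketched as follows; the page size, write addresses, and function name are illustrative assumptions.

```python
# Hypothetical sketch: a buffer line holds (address, data) tuples for
# multiple DRAM pages; sorting on flush groups writes by page so each
# page is opened and closed only once.

PAGE_SIZE = 4096  # assumption

def flush_sorted(buffer_line):
    """Order buffered writes by address and count the distinct pages
    touched (i.e., the number of page openings after sorting)."""
    ordered = sorted(buffer_line, key=lambda t: t[0])
    page_openings = len({addr // PAGE_SIZE for addr, _ in ordered})
    return ordered, page_openings

writes = [(0x1008, "a"), (0x2010, "b"), (0x1000, "c")]
ordered, openings = flush_sorted(writes)
# Flushed in arrival order, these writes would open page 1, then page 2,
# then page 1 again (three openings); sorted, only two openings occur.
```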
20110307652 | HYBRID STORAGE SYSTEM WITH MID-PLANE - The present invention relates to semiconductor storage systems (SSDs). Specifically, the present invention relates to a hybrid storage system with a mid-plane. In a typical embodiment, a mid-plane is provided. Coupled to one side of the mid-plane is a system control board and a communications module having a set (at least one) of ports. Coupled to a second side of the mid-plane is (among other components) a first RAID controller, which itself is coupled to a double data rate semiconductor storage device (DDR SSD) module having a set of DDR SSD units. Also coupled to the second side of the mid-plane is a second RAID controller, which itself is coupled to a hard disk drive (HDD) module having a set of HDD/Flash SSD units. | 12-15-2011 |
20110307653 | CACHE COHERENCE PROTOCOL FOR PERSISTENT MEMORIES - Subject matter disclosed herein relates to cache coherence of a processor system that includes persistent memory. | 12-15-2011 |
20110307654 | WRITE OPERATIONS IN A FLASH MEMORY-BASED OBJECT STORE - Approaches for improving writing to solid state devices. An object cache or store, maintained on one or more flash storage devices, comprises two or more slabs. A slab is an allocated amount of memory for storing objects of a particular size. A request to write requested data to a slab is received. The size of the requested data is less than the maximum capacity of objects stored in the slab. After writing the requested data to the slab, unrequested data is written up to the maximum capacity of an object in the slab in the same write operation. Writing the unrequested data to the particular slab is performed for purposes of reducing the time required to write the requested data to the SSD. | 12-15-2011 |
20110314210 | LEVERAGING CHIP VARIABILITY - Embodiments are described that leverage variability of a chip. Different areas of a chip vary in terms of reliability under a same operating condition. The variability may be captured by measuring errors over different areas of the chip. A physical factor that affects or controls the likelihood of an error on the chip can be varied. For example, the voltage supplied to a chip may be provided at different levels. At each level of the physical factor, the chip is tested for errors within the regions. Some indication of the error statistics for the regions is stored and then used to adjust power used by the chip, to adjust reliability behavior of the chip, to allow applications to control how the chip is used, to compute a signature uniquely identifying the chip, etc. | 12-22-2011 |
20110314211 | RECOVER STORE DATA MERGING - Various embodiments of the present invention merge data in a cache memory. In one embodiment a set of store data is received from a processing core. A store merge command and a merge mask are also received from the processing core. A portion of the store data to perform a merging operation thereon is identified based on the store merge command. A sub-portion of the portion of the store data to be merged with a corresponding set of data from a cache memory is identified based on the merge mask. The sub-portion is merged with the corresponding set of data from the cache memory. | 12-22-2011 |
20110314212 | MANAGING IN-LINE STORE THROUGHPUT REDUCTION - Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred. | 12-22-2011 |
20110314213 | PROCESSOR SYSTEM USING SYNCHRONOUS DYNAMIC MEMORY - A processor system including: a processor having a processor core and a controller core; and a plurality of synchronous memory chips, wherein the processor and the plurality of synchronous memory chips are connected via an external bus; wherein the processor core and the controller core are connected via an internal bus; wherein the plurality of synchronous memory chips are operated according to a clock signal; wherein the controller core comprises a mode register, selected by an address signal from the processor core and written with information by a data signal from the processor core, to select the operation mode of the plurality of synchronous memory chips, and a control unit to prescribe the operating mode to the plurality of synchronous memory chips based on the information written in the mode register; wherein the controller core selectively outputs a mode setting signal, based on the information written in the mode register or an access address signal from the processor core, to the plurality of synchronous memory chips via the external bus; and wherein the clock signal is commonly supplied to the plurality of synchronous memory chips. | 12-22-2011 |
20110320694 | CACHED LATENCY REDUCTION UTILIZING EARLY ACCESS TO A SHARED PIPELINE - A method of performing operations in a shared cache coupled to a first requestor and a second requestor includes receiving at the shared cache a first request from the second requestor; assigning the request to a state machine; transmitting a first pipe pass request from the state machine to an arbiter; providing a first instruction from the first pipe pass request to a cache pipeline, the first instruction causing a first pipe pass; and providing a second pipe pass request to the arbiter before the first pipe pass is completed. | 12-29-2011 |
20110320695 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 12-29-2011 |
20110320696 | EDRAM REFRESH IN A HIGH PERFORMANCE CACHE ARCHITECTURE - A memory refresh requestor, a memory request interpreter, a cache memory, and a cache controller on a single chip. The cache controller configured to receive a memory access request, the memory access request for a memory address range in the cache memory, detect that the cache memory located at the memory address range is available, and send the memory access request to the memory request interpreter when the memory address range is available. The memory request interpreter configured to receive the memory access request from the cache controller, determine if the memory access request is a request to refresh the contents of the memory address range, and refresh data in the memory address range when the memory access request is a request to refresh memory. | 12-29-2011 |
20110320697 | DYNAMICALLY SUPPORTING VARIABLE CACHE ARRAY BUSY AND ACCESS TIMES - Various embodiments of the present invention manage access to a cache memory. In one or more embodiments a request for a targeted interleave within a cache memory is received. The request is associated with an operation of a given type. The target is determined to be available. The request is granted in response to determining that the target is available. A first interleave availability table associated with a first busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. A second interleave availability table associated with a second busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. | 12-29-2011 |
20110320698 | Multi-Channel Multi-Port Memory - A multi-channel multi-port memory is disclosed. In a particular embodiment, the multi-channel memory includes a plurality of channels responsive to a plurality of memory controllers. The multi-channel memory may also include a first multi-port multi-bank structure accessible to a first set of the plurality of channels and a second multi-port multi-bank structure accessible to a second set of the plurality of channels. | 12-29-2011 |
20120005419 | System Architecture For Integrated Hierarchical Query Processing For Key/Value Stores - A key/value store comprising a first tier storage device configured to store information about a plurality of keys for a plurality of values without the values, and a second tier storage device coupled to the first tier storage device and configured to store the values associated with the keys without the keys, wherein the first tier storage device has lower latency and higher throughput than the second tier storage device, and wherein the second tier storage device has higher capacity than the first tier storage device. Also disclosed is a method comprising receiving a key/value operation request at a first tier storage device, mapping a key in the key/value operation request to a locator stored in a second tier storage device if the key/value operation request is valid, and mapping the locator to a value in a third tier storage device if the key has a corresponding locator. | 01-05-2012 |
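The tiered key-to-locator-to-value lookup this entry describes can be sketched as follows; the class and method names are hypothetical, and two Python dicts merely stand in for the low-latency key tier and the high-capacity value tier.

```python
# Hypothetical sketch: a fast first tier maps keys to locators, and a
# high-capacity second tier maps locators to values, so neither tier
# stores both halves of a key/value pair.

class TieredStore:
    def __init__(self):
        self.key_tier = {}    # low-latency tier: key -> locator (no values)
        self.value_tier = {}  # high-capacity tier: locator -> value (no keys)
        self.next_loc = 0

    def put(self, key, value):
        loc = self.next_loc
        self.next_loc += 1
        self.key_tier[key] = loc
        self.value_tier[loc] = value

    def get(self, key):
        # Validate the key first; only map locator -> value if one exists.
        loc = self.key_tier.get(key)
        return None if loc is None else self.value_tier[loc]

store = TieredStore()
store.put("k", "v")
```

The design point is that an invalid key is rejected by the small fast tier alone, without ever touching the larger, slower value store.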
20120005420 | DYNAMICALLY SETTING BURST LENGTH OF DOUBLE DATA RATE MEMORY DEVICE BY APPLYING SIGNAL TO AT LEAST ONE EXTERNAL PIN DURING A READ OR WRITE TRANSACTION - One or more external control pins and/or addressing pins on a memory device are used to set one or both of a burst length and burst type of the memory device. | 01-05-2012 |
20120005421 | MEMORY CONTROLLER AND DATA PROCESSING SYSTEM - A memory controller and data processor have their operation mode switched from the page-on mode, which provides high-speed access to the same page, to the page-off mode in response to consecutive accesses to different pages, so that memory access is performed at high speed and with low power consumption. | 01-05-2012 |
20120011310 | SIMULATING A MEMORY STANDARD - An apparatus includes multiple first memory circuits, each first memory circuit being associated with a first memory standard, where the first memory standard defines a first set of control signals that each first memory circuit is operable to accept and defines a first version of a protocol. The apparatus also includes an interface circuit coupled to the first memory circuits, in which the interface circuit is operable to emulate at least one second memory circuit, each second memory circuit being associated with a second different memory standard. The second different memory standard defines a second set of control signals that the emulated second memory circuit is operable to accept and defines a second different version of a protocol. Both the first version of the protocol and the second different version of the protocol are associated either with DDR2 dynamic random access memory (DRAM) or with DDR3 DRAM. | 01-12-2012 |
20120017039 | CACHING USING VIRTUAL MEMORY - In a first embodiment of the present invention, a method for caching in a processor system having virtual memory is provided, the method comprising: monitoring slow memory in the processor system to determine frequently accessed pages; and, for a frequently accessed page in slow memory: copying the frequently accessed page from slow memory to a location in fast memory; and updating virtual address page tables to reflect the location of the frequently accessed page in fast memory. | 01-19-2012 |
20120030417 | RAID CONTROLLER HAVING MULTI PCI BUS SWITCHING - Embodiments of the present invention provide a RAID controller with multi PCI bus switching for a storage device of a PCI-Express (PCI-e) type that supports a low-speed data processing speed for a host. Specifically, embodiments of this invention provide a RAID controller having multiple (e.g., two or more) sets of RAID circuitry that are interconnected/coupled to one another via a PCI bus. Each set of RAID circuitry is coupled to one or more (i.e., a set of) semiconductor storage device (SSD) memory disk units. Among other things, the SSD memory disk units and/or HDD/Flash memory units adjust a synchronization of a data signal transmitted/received between the host and a memory disk during data communications through a PCI-Express interface and simultaneously support a high-speed data processing speed for the memory disk, thereby maximizing memory performance to enable high-speed processing in an existing interface environment. | 02-02-2012 |
20120030418 | MEMORY CONTROLLER - A method for configuring a memory controller includes determining whether a serial number of at least one memory module matches a stored serial number corresponding to the at least one memory module, and utilizing stored timing data to configure the memory controller when the serial number matches the stored serial number corresponding to the at least one memory module. | 02-02-2012 |
20120036315 | Morphing Memory Architecture - A memory circuit comprises a memory array including a plurality of memory cells, multiple word lines, and at least one bit line. Each of the memory cells is coupled to a unique pair of a bit line and a word line for selectively accessing the memory cells. The memory circuit further includes at least one control circuit coupled to the word lines and operative to selectively change an operation of the memory array between a first data storage mode and at least a second data storage mode as a function of at least one control signal supplied to the control circuit. In the first data storage mode, each of the memory cells is allocated to a corresponding stored logic bit, and in the second data storage mode, at least two memory cells are allocated to a corresponding stored logic bit. | 02-09-2012 |
20120036316 | EMBEDDED-DRAM PROCESSING APPARATUS AND METHODS - An embedded-DRAM processor architecture includes a DRAM array, a set of register files, a set of functional units, and a data assembly unit. The data assembly unit includes a set of row-address registers and is responsive to commands to activate and deactivate DRAM rows and to control the movement of data throughout the system. A pipelined data assembly approach allows the functional units to perform register-to-register operations, and allows the data assembly unit to perform all load/store operations using wide data busses. Data masking and switching hardware allows individual data words or groups of words to be transferred between the registers and memory. Other aspects of the disclosure include a memory and logic structure and an associated method to extract data blocks from memory to accelerate, for example, operations related to image compression and decompression. | 02-09-2012 |
20120042121 | Scatter-Gather Intelligent Memory Architecture For Unstructured Streaming Data On Multiprocessor Systems - A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion. | 02-16-2012 |
20120059983 | PREDICTOR-BASED MANAGEMENT OF DRAM ROW-BUFFERS - A method for managing memory includes storing a history of accesses to a memory page, and determining whether to keep the memory page open or to close the memory page based on the stored history. A memory system includes a plurality of memory cells arranged in rows and columns, a row buffer, and a memory controller configured to manage the row buffer at a per-page level using a history-based predictor. A non-transitory computer readable medium is also provided containing instructions therein, wherein the instructions include storing an access history of a memory page in a lookup table, and determining an optimal closing policy for the memory page based on the stored histories. The histories can include access numbers or access durations. | 03-08-2012 |
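The history-based open/close decision this entry describes can be sketched with a per-page saturating counter; the 2-bit counter policy, thresholds, and names are all assumptions — the patent only says the predictor uses stored access histories.

```python
# Hypothetical sketch: a 2-bit saturating counter per page predicts
# whether to keep the DRAM row buffer open after an access. Row hits
# push the counter up; row conflicts push it down.

from collections import defaultdict

class RowBufferPredictor:
    def __init__(self):
        # Counters start at 2: weakly biased toward "keep open".
        self.history = defaultdict(lambda: 2)

    def record(self, page, row_hit):
        """Update the page's history after an access."""
        c = self.history[page]
        self.history[page] = min(3, c + 1) if row_hit else max(0, c - 1)

    def keep_open(self, page):
        """Predict: keep the row open iff the counter is in its upper half."""
        return self.history[page] >= 2

p = RowBufferPredictor()
for hit in (False, False):  # two consecutive row conflicts on page_a
    p.record("page_a", hit)
# The predictor now advises closing page_a after each access.
```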
20120066444 | Resolution Enhancement of Video Stream Based on Spatial and Temporal Correlation - A method, computer program product, and system are provided for associating one or more memory buffers in a computing system with a plurality of memory channels. The method can include associating a first memory buffer to a first plurality of memory banks, where the first plurality of memory banks spans over a first set of one or more memory channels. Similarly, the method can include associating a second memory buffer to a second plurality of memory banks, where the second plurality of memory banks spans over a second set of one or more memory channels. The method can also include associating a first sequence identifier and a second sequence identifier with the first memory buffer and the second memory buffer, respectively. Further, the method can include accessing the first and second memory buffers based on the first and second sequence identifiers. | 03-15-2012 |
20120079179 | Processor and method thereof - A processor and an operating method are described. By diversifying an L1 memory being accessed, based on an execution mode of the processor, an operating performance of the processor may be enhanced. By disposing a local/stack section in a system dynamic random access memory (DRAM) located external to the processor, a size of a scratch pad memory may be reduced without deteriorating performance. While a core of the processor is operating in a very long instruction word (VLIW) mode, the core may access data in a cache memory; thus, a bottleneck may not occur at the scratch pad memory even though an external component accesses the scratch pad memory. | 03-29-2012 |
20120079180 | DRAM Controller and a method for command controlling - A memory controller and a command control method are disclosed. When there is a need to access an unactivated bank in an external DRAM, an ACT command and a low-rate access command are generated in parallel for the bank, and the parallel ACT and access commands are then serially output at a high rate to a bus of the external DRAM. | 03-29-2012 |
20120079181 | TRANSLATING MEMORY MODULES FOR MAIN MEMORY - A translating memory module is disclosed, including a printed circuit board, at least one memory integrated circuit coupled to the printed circuit board, and at least one support chip coupled to the printed circuit board between the edge connector and the at least one memory integrated circuit. The at least one support chip includes a bi-directional translator to translate between a first memory communication protocol for the at least one memory integrated circuit and a second memory communication protocol for a memory channel differing from the first memory communication protocol. The second memory communication protocol communicates data, address, and control signals over the memory channel bus to read and write data in the memory of the translating memory module. | 03-29-2012 |
20120084497 | Instruction Prefetching Using Cache Line History - An apparatus of an aspect includes a prefetch cache line address predictor to receive a cache line address and to predict a next cache line address to be prefetched. The next cache line address may indicate a cache line having at least 64-bytes of instructions. The prefetch cache line address predictor may have a cache line target history storage to store a cache line target history for each of multiple most recent corresponding cache lines. Each cache line target history may indicate whether the corresponding cache line had a sequential cache line target or a non-sequential cache line target. The cache line address predictor may also have a cache line target history predictor. The cache line target history predictor may predict whether the next cache line address is a sequential cache line address or a non-sequential cache line address, based on the cache line target history for the most recent cache lines. | 04-05-2012 |
20120084498 | TRACKING WRITTEN ADDRESSES OF A SHARED MEMORY OF A MULTI-CORE PROCESSOR - Described embodiments provide a method of controlling processing flow in a network processor having one or more processing modules. A given one of the processing modules loads a script into a compute engine. The script includes instructions for the compute engine. The given one of the processing modules loads a register file into the compute engine. The register file includes operands for the instructions of the loaded script. A tracking vector of the compute engine is initialized to a default value, and the compute engine executes the instructions of the loaded script based on the operands of the loaded register file. The compute engine updates corresponding portions of the register file with updated data corresponding to the executed script. The tracking vector tracks the updated portions of the register file. The compute engine provides the tracking vector and the updated register file to the given one of the processing modules. | 04-05-2012 |
20120089771 | Data Processing Apparatus - A data processing apparatus reduces the number of the buffer SRAMs to decrease chip area. The data processing apparatus includes an SDRAM address allocation register that holds information indicating which region of the SDRAM will be allocated to each of the IPs, and a buffer SRAM address allocation register that holds information indicating which region of the first and second buffer SRAMs will be allocated to each of the IPs. The bus I/F stores the data read from the SDRAM into the second buffer SRAM with reference to the SDRAM address allocation register and the buffer SRAM address allocation register. Therefore, it is not necessary to provide each of the IPs with a buffer SRAM, which allows integration into a small number of buffer SRAMs. | 04-12-2012 |
20120089772 | DEVICE, SYSTEM, AND METHOD OF MEMORY ALLOCATION - Device, system, and method of memory allocation. For example, an apparatus includes: a Dual In-line Memory Module (DIMM) including a plurality of Dynamic Random Access Memory (DRAM) units to store data, wherein each DRAM unit includes a plurality of banks and each bank is divided into a plurality of sub-banks; and a memory management unit to allocate a set of interleaved sub-banks of said DIMM to a memory page of an Operating System, wherein a combined memory size of the set of interleaved sub-banks is equal to a size of the memory page of the Operating System. | 04-12-2012 |
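The allocation constraint in this entry — a set of interleaved sub-banks whose combined size equals one OS page — can be sketched as follows; the sub-bank size, page size, interleave scheme, and names are assumptions chosen only to make the size equation concrete.

```python
# Hypothetical sketch: an OS page is backed by one sub-bank from each
# bank, interleaved so the combined sub-bank size equals the page size.

SUBBANK_SIZE = 1024                    # bytes per sub-bank (assumption)
PAGE_SIZE = 4096                       # OS page size (assumption)
NUM_BANKS = PAGE_SIZE // SUBBANK_SIZE  # banks spanned by one page

def subbanks_for_page(page_index):
    """Return the (bank, sub-bank) pairs backing one OS page, taking
    the same sub-bank slot from every bank."""
    return [(bank, page_index) for bank in range(NUM_BANKS)]

banks = subbanks_for_page(7)
```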
20120110255 | METHOD AND APPARATUS FOR SENDING DATA FROM MULTIPLE SOURCES OVER A COMMUNICATIONS BUS - In a memory system, multiple memory modules communicate over a bus. Each memory module may include a hub and at least one memory storage unit. The hub receives local data from the memory storage units, and downstream data from one or more other memory modules. The hub assembles data to be sent over the bus within a data block structure, which is divided into multiple lanes. An indication is made of where, within the data block structure, a breakpoint will occur in the data being placed on the bus by a first source (e.g., the local or downstream data). Based on the indication, data from a second source (e.g., the downstream or local data) is placed in the remainder of the data block, thus reducing gaps on the bus. Additional apparatus, systems, and methods are disclosed. | 05-03-2012 |
20120117318 | HETEROGENEOUS COMPUTING SYSTEM COMPRISING A SWITCH/NETWORK ADAPTER PORT INTERFACE UTILIZING LOAD-REDUCED DUAL IN-LINE MEMORY MODULES (LR-DIMMS) INCORPORATING ISOLATION MEMORY BUFFERS - A heterogeneous computing system comprising a switch/network adapter port interface utilizing load-reduced dual in-line memory modules (LR-DIMMs) incorporating isolation memory buffers. In a particular embodiment of the present invention the computer system comprises at least one dense logic device and a controller coupling it to a memory bus. A plurality of memory slots are coupled to the memory bus and an adaptor port is associated with some number of the plurality of memory slots, each of the adapter ports including associated memory resources. A direct execution logic element is coupled to at least one of the adapter ports. The memory resources are selectively accessible by the at least one dense logic device and the direct execution logic element. | 05-10-2012 |
20120124280 | Memory controller with emulative internal memory buffer - The present application discloses a memory controller for accessing an external memory device. The memory controller comprises a bus interface and an internal memory buffer capable of accessing the bus interface. The internal memory buffer operates as an on-chip storage. In various embodiments of the disclosure, the internal memory buffer operates during a testing of a chip containing the memory controller. For example, the internal memory buffer may emulate the external memory device in response to an input signal. Moreover, in various embodiments of the disclosure, the external memory device may be a dynamic random access memory (DRAM), while the internal memory buffer may be a static random access memory (SRAM). The memory controller may be adapted to automated test equipment (ATE). Moreover, the memory controller may be incorporated onto a system-on-a-chip (SOC) along with one or more agents. | 05-17-2012 |
20120124281 | APPARATUS AND METHOD FOR POWER MANAGEMENT OF MEMORY CIRCUITS BY A SYSTEM OR COMPONENT THEREOF - An apparatus and method are provided for communicating with a plurality of physical memory circuits. In use, at least one virtual memory circuit is simulated where at least one aspect (e.g. power-related aspect, etc.) of such virtual memory circuit(s) is different from at least one aspect of at least one of the physical memory circuits. Further, in various embodiments, such simulation may be carried out by a system (or component thereof), an interface circuit, etc. | 05-17-2012 |
20120137060 | Multi-stage TCAM search - A method to divide a database of TCAM rules includes selecting a rule of the database having multiple don't care values and selecting a bit of the rule having a don't care value, generating two distributor rules based on the selected rule, where the selected bit has a 1 value in one of the distributor rules and a 0 in the other of the distributor rules, associating rules of the database which match each of the distributor rules with the distributor rule they match thereby to create associated databases, and repeating the steps of selecting, generating and associating on the database and the associated databases until the average number of rules in each associated database is at or below a predefined amount. A search unit includes a distributor TCAM and a DRAM search unit having a DRAM storage unit and an associated DRAM search logic unit. The DRAM storage unit has a section for each associated database, where each section is pointed to by a different distributor rule. The DRAM search unit matches the input key to one of the rules in the section pointed to by the matched distributor rule. | 05-31-2012 |
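The distributor-rule split this entry describes can be sketched as follows. Rules are written as strings over {'0', '1', '*'} with '*' as don't-care; the encoding, the tiny example database, and the function names are assumptions for illustration.

```python
# Hypothetical sketch: splitting one don't-care bit of a TCAM rule
# yields two distributor rules, and each database rule is associated
# with the distributor rules it is compatible with.

def split_rule(rule: str, bit: int):
    """Produce the two distributor rules for one don't-care bit:
    the selected bit becomes '0' in one rule and '1' in the other."""
    assert rule[bit] == "*"
    return rule[:bit] + "0" + rule[bit + 1:], rule[:bit] + "1" + rule[bit + 1:]

def matches(rule: str, pattern: str) -> bool:
    """True if the rule and pattern agree on every bit where both
    are concrete (neither is don't-care)."""
    return all(r == "*" or p == "*" or r == p for r, p in zip(rule, pattern))

db = ["00*", "01*", "0**"]
d0, d1 = split_rule("0**", 1)  # split on the don't-care middle bit
assoc = {d: [r for r in db if matches(r, d)] for d in (d0, d1)}
```

Each associated database is smaller than the original, and splitting repeats until the average associated-database size falls below the chosen bound — here the DRAM section searched per lookup.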
20120137061 | PRE-CACHE SIMILARITY-BASED DELTA COMPRESSION FOR USE IN A DATA STORAGE SYSTEM - A data storage caching architecture supports using native local memory such as host-based RAM, and if available, Solid State Disk (SSD) memory for storing pre-cache delta-compression based delta, reference, and independent data by exploiting content locality, temporal locality, and spatial locality of data accesses to primary (e.g. disk-based) storage. The architecture makes excellent use of the physical properties of the different types of memory available (fast r/w RAM, low cost fast read SSD, etc) by applying algorithms to determine what types of data to store in each type of memory. Algorithms include similarity detection, delta compression, least popularly used cache management, conservative insertion and promotion cache replacement, and the like. | 05-31-2012 |
20120144104 | Partitioning of Memory Device for Multi-Client Computing System - A method, computer program product, and system are provided for accessing a memory device. For instance, the method can include partitioning one or more memory banks of the memory device into a first and a second set of memory banks. The method also can allocate a first plurality of memory cells within the first set of memory banks to a first memory operation of a first client device and a second plurality of memory cells within the second set of memory banks to a second memory operation of a second client device. This memory allocation can allow access to the first and second sets of memory banks when a first and a second memory operation are requested by the first and second client devices, respectively. Further, access to a data bus between the first client device, or the second client device, and the memory device can also be controlled based on whether the first memory address or the second memory address is accessed to execute the first or second memory operation. | 06-07-2012 |
20120159059 | MEMORY INTERFACE SIGNAL REDUCTION - In some embodiments a controller includes a memory activate pin, one or more combined memory command/address signal pins, and a selection circuit adapted to select in response to the memory activate pin as each of the one or more combined memory command/address signal pins either a memory command signal or a memory address signal. Other embodiments are described and claimed. | 06-21-2012 |
20120159060 | POWER ISOLATION FOR MEMORY BACKUP - Disclosed is a power isolation and backup system. When a power fail condition is detected, temporary storage is flushed to an SDRAM. After the flush, interfaces are halted, and power is removed from most of the chip except the SDRAM subsystem. The SDRAM subsystem copies data from an SDRAM to a flash memory. On the way, the data may be encrypted, and/or a data integrity signature calculated. To restore data, the SDRAM subsystem copies data from the flash memory to the SDRAM. On the way, the data being restored may be decrypted, and/or a data integrity signature checked. | 06-21-2012 |
20120159061 | Memory Module With Reduced Access Granularity - A memory module having reduced access granularity. The memory module includes a substrate having signal lines thereon that form a control path and first and second data paths, and further includes first and second memory devices coupled in common to the control path and coupled respectively to the first and second data paths. The first and second memory devices include control circuitry to receive respective first and second memory access commands via the control path and to effect concurrent data transfer on the first and second data paths in response to the first and second memory access commands. | 06-21-2012 |
20120166722 | APPARATUS AND METHOD FOR CONTROLLING THE ACCESS OPERATION BY A PLURALITY OF DATA PROCESSING DEVICES TO A MEMORY - In an apparatus for controlling the access operation by a plurality of data processing devices to a memory, each data processing device is assigned a respective address region which indicates the part of the addresses of the memory which the respective data processing device can access. A control device blocks an access operation by a data processing device to the memory if the access operation address is not located in the address region which is assigned to the respective data processing device. | 06-28-2012 |
20120173809 | Memory Device Having DRAM Cache and System Including the Memory Device - The present disclosure relates to a memory device and a system including the memory device. The memory device may include a non-volatile memory, a dynamic random access memory (DRAM) cache, a DRAM, and a control circuit. The control circuit may perform interfacing between the DRAM and a host, between the DRAM cache and the host, and between the non-volatile memory and the DRAM cache. The memory device may have a high operating speed and may be incorporated in a simple package, such as a multi-chip package. | 07-05-2012 |
20120173810 | Method and Apparatus for Indicating Mask Information - An apparatus for controlling a dynamic random access memory (DRAM), the apparatus comprising an interface to transmit, over a first plurality of wires, to the DRAM a first code to indicate that first data is to be written to the DRAM and a column address to indicate a column location of a memory core in the DRAM where the first data is to be written. The interface is further to transmit a second code to indicate whether mask information for the first data will be sent to the DRAM. If the second code indicates that mask information will be sent, a portion of the column address and a portion of the mask information are sent after the second code is sent. The interface is further to transmit to the DRAM, over a second plurality of wires separate from the first plurality of wires, the first data. | 07-05-2012 |
20120173811 | Method and Apparatus for Delaying Write Operations - An apparatus for controlling a dynamic random access memory (DRAM), the apparatus comprising an interface to transmit to the DRAM a first code to indicate that first data is to be written to the DRAM. The first code is to be sampled by the DRAM and held by the DRAM for a first period of time before it is issued inside the DRAM. The interface is further to transmit the first data that is to be sampled by the DRAM after a second period of time has elapsed from when the first code is sampled by the DRAM. The interface is further to transmit a second code, different from the first code, to indicate that second data is to be read from the DRAM. The second code is to be sampled by the DRAM on one or more edges of the external clock signal. | 07-05-2012 |
20120179866 | Memory Component Having Write Operation with Multiple Time Periods - A method for storing data in a memory chip that includes a memory core having dynamic random access memory cells, is performed by a memory controller chip. The method includes sending a write command to a first interface of the memory chip, wherein the write command specifies a write operation. After sending the write command, the memory controller chip waits for a first time period corresponding to a time period during which the write command is stored by the memory chip, and sends data associated with the write operation to a second interface of the memory chip, wherein the sending of the data occurs after a second time period transpires, the second time period following the first time period, such that sending the write command and sending the data are separated by a first predetermined delay time that includes both the first time period and the second time period. | 07-12-2012 |
20120191907 | SYSTEMS, METHODS, AND APPARATUSES FOR IN-BAND DATA MASK BIT TRANSMISSION - Embodiments of the invention are generally directed to systems, methods, and apparatuses for in-band data mask bit transmission. In some embodiments, one or more data mask bits are integrated into a partial write frame and are transferred to a memory device via the data bus. Since the data mask bits are transferred via the data bus, the system does not need (costly) data mask pin(s). In some embodiments, a mechanism is provided to enable a memory device (e.g., a DRAM) to check for valid data mask bits before completing the partial write to the DRAM array. | 07-26-2012 |
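The in-band mask idea above (mask bits travel inside the partial-write frame on the data bus instead of on dedicated mask pins) can be illustrated with a toy frame layout. The layout, names, and per-byte mask convention here are assumptions for illustration only:

```python
# Hypothetical sketch: pack per-byte data-mask bits into the write frame
# itself, so the receiver needs no separate mask pin.

def build_partial_write_frame(data_bytes, mask_bits):
    """Prepend a mask word (one bit per data byte) to the data payload."""
    assert len(mask_bits) == len(data_bytes)
    mask_word = 0
    for i, m in enumerate(mask_bits):
        mask_word |= (m & 1) << i
    return [mask_word] + list(data_bytes)

def apply_partial_write(array, addr, frame):
    """Receiver side: write only the bytes whose mask bit is clear."""
    mask_word, data = frame[0], frame[1:]
    for i, byte in enumerate(data):
        if not (mask_word >> i) & 1:      # bit clear -> byte is written
            array[addr + i] = byte

mem = [0] * 8
frame = build_partial_write_frame([0xAA, 0xBB, 0xCC, 0xDD], [0, 1, 0, 1])
apply_partial_write(mem, 0, frame)       # bytes 1 and 3 stay masked
```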
20120198143 | Memory Package Utilizing At Least Two Types of Memories - A memory package and methods for writing data to and reading data from the memory package are presented. The memory package includes a volatile memory and a high-density memory. Data is written to the memory package at a bandwidth and latency associated with the volatile memory. A directory map associates a volatile memory address with data in the high-density memory. A copy of the directory map is stored in the high-density memory. The methods allow writing to and reading from the memory package using a first memory read/write interface (e.g. DRAM interface, etc.), though data is stored in a device of a different memory type (e.g. FLASH, etc.). | 08-02-2012 |
20120198144 | DYNAMICALLY SETTING BURST LENGTH OF DOUBLE DATA RATE MEMORY DEVICE BY APPLYING SIGNAL TO AT LEAST ONE EXTERNAL PIN DURING A READ OR WRITE TRANSACTION - A microprocessor system having a microprocessor and a double data rate memory device having separate groups of external pins adapted to receive addressing, data, and control information and a memory controller adapted to set a burst type of the double data rate memory to interleaved or sequential by sending a signal through one of the external pins of the double data rate memory device, such that when a read command is sent by the controller, depending on the burst type set, the double data rate memory device returns interleaved or sequentially output data to the memory controller. | 08-02-2012 |
20120198145 | MEMORY ACCESS APPARATUS AND DISPLAY USING THE SAME - A memory access apparatus and a display using the same are provided. The memory access apparatus includes a dynamic memory, a plurality of clients and a memory management unit. The dynamic memory is used to store a plurality of memory data. The clients access the dynamic memory and each client has a priority. The memory management unit executes an access action of the clients for the dynamic memory respectively according to the priorities thereof. Besides, the memory management unit has at least one buffer area built therein. The buffer area is used to temporarily store a plurality of buffer data generated while the access action is performed. | 08-02-2012 |
20120203961 | HIGH SPEED INTERFACE FOR DYNAMIC RANDOM ACCESS MEMORY (DRAM) - An interface for a dynamic random access memory (DRAM) includes an interface element coupled to a DRAM chip using a first attachment structure, a first portion of the first attachment structure being used to form a wide bandwidth, low speed, parallel interface, a second portion of the first attachment structure, a routing element and a through silicon via (TSV) associated with the DRAM chip being used to form a narrow bandwidth, high speed, serial interface, the interface element configured to convert parallel information to serial information and configured to convert serial information to parallel information. | 08-09-2012 |
20120210055 | Controlling latency and power consumption in a memory - Memory circuitry, a data processing apparatus and a method of storing data are disclosed. The memory circuitry comprises: a memory for storing the data; and control circuitry for controlling power consumption of the memory by controlling a rate of access to the memory such that an average access delay between adjacent accesses is maintained at or above a predetermined value; wherein the control circuitry is configured to determine a priority of an access request to the memory and to maintain the average access delay at or above the predetermined value by delaying at least some accesses from access requests having a lower priority for longer than at least some accesses from access requests having a higher priority. | 08-16-2012 |
20120215975 | Dynamic Management of Random Access Memory - The invention proposes a method for managing random access memory in a computer system, the computer system comprising a processor, a first static random access memory, and a second dynamic random access memory, the method comprising the steps of: receiving at least one instruction to be executed by the processor; determining a priority level for the execution of the instruction by the processor; and loading the instruction into the first memory for execution by the processor if its priority level indicates that it is a high-priority instruction, or otherwise loading the instruction into the second memory for execution by the processor. | 08-23-2012 |
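The placement rule in the abstract above reduces to a simple dispatch: high-priority instructions are loaded into the fast static RAM, everything else into the larger dynamic RAM. A minimal sketch, with the threshold and data structures as illustrative assumptions:

```python
# Toy sketch of priority-based placement between SRAM and DRAM.
HIGH_PRIORITY = 1

def load_instruction(instr, priority, sram, dram):
    """Place an instruction into SRAM if it is high priority, else DRAM."""
    (sram if priority == HIGH_PRIORITY else dram).append(instr)

sram, dram = [], []
load_instruction("isr_handler", HIGH_PRIORITY, sram, dram)
load_instruction("background_task", 0, sram, dram)
```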
20120221785 | Polymorphic Stacked DRAM Memory Architecture - A 3D stacked processor device is described which includes a processor chip and a stacked polymorphic DRAM memory chip connected to the processor chip through a plurality of through-silicon-via structures, where the stacked DRAM memory chip includes a memory with an adjustable memory portion and an adjustable cache portion such that memory can operate simultaneously in both memory and cache modes. | 08-30-2012 |
20120226852 | CONTROL METHOD AND CONTROLLER FOR DRAM - A DRAM controller including a judging module, a determination module, and a transmission module is provided. The judging module judges an address content difference between a first command and a third command. The determination module determines a plurality of buffering address contents, associated with at least one second command, according to the address content difference. The transmission module then sequentially transmits the first command, the at least one second command, and the third command to the DRAM. | 09-06-2012 |
20120233393 | Scheduling Workloads Based On Cache Asymmetry - In one embodiment, a processor includes a first cache and a second cache, a first core associated with the first cache and a second core associated with the second cache. The caches are of asymmetric sizes, and a scheduler can intelligently schedule threads to the cores based at least in part on awareness of this asymmetry and resulting cache performance information obtained during a training phase of at least one of the threads. | 09-13-2012 |
20120233394 | MEMORY CONTROLLER AND A CONTROLLING METHOD ADAPTABLE TO DRAM - A memory controller and controlling method adaptable to a dynamic random access memory (DRAM) are disclosed. A DRAM controller is configured to manage the flow of data to and from the DRAM. A write buffer is controlled by the DRAM controller to temporarily store an entry of data to be written to the DRAM. The data to be written is stored in the write buffer if the write buffer is empty, and the stored data and succeeding data to be written are both written to the DRAM. | 09-13-2012 |
20120233395 | EMULATION OF ABSTRACTED DIMMS USING ABSTRACTED DRAMS - One embodiment of the present invention sets forth an abstracted memory subsystem comprising abstracted memories, which each may be configured to present memory related characteristics onto a memory system interface. The characteristics can be presented on the memory system interface via logic signals or protocol exchanges, and the characteristics may include any one or more of, an address space, a protocol, a memory type, a power management rule, a number of pipeline stages, a number of banks, a mapping to physical banks, a number of ranks, a timing characteristic, an address decoding option, a bus turnaround time parameter, an additional signal assertion, a sub-rank, a number of planes, or other memory-related characteristics. Some embodiments include an intelligent register device and/or, an intelligent buffer device. One advantage of the disclosed subsystem is that memory performance may be optimized regardless of the specific protocols used by the underlying memory hardware devices. | 09-13-2012 |
20120239873 | Memory access system and method for optimizing SDRAM bandwidth - A memory access system for optimizing SDRAM bandwidth includes a memory command processor, and an SDRAM interface and protocol controller. The memory command processor is connected to a memory bus arbiter and data switch circuit for receiving memory access commands outputted by the memory bus arbiter and data switch circuit and converting the memory access commands into reordered SDRAM commands. The SDRAM interface and protocol controller is connected to the memory command processor for receiving and executing the reordered SDRAM commands based on protocol and timing of SDRAM. The memory command processor decodes the memory access commands into general SDRAM commands or alternative SDRAM commands. The memory access commands decoded into alternative SDRAM commands are generated by a specific bus master. | 09-20-2012 |
20120239874 | METHOD AND SYSTEM FOR RESOLVING INTEROPERABILITY OF MULTIPLE TYPES OF DUAL IN-LINE MEMORY MODULES - Systems and methods are described for resolving certain interoperability issues among multiple types of memory modules in the same memory subsystem. The system provides a single data load DIMM for constructing a high density and high speed memory subsystem that supports the standard JEDEC RDIMM interface while presenting a single load to the memory controller. At least one memory module includes one or more DRAM, a bi-directional data buffer and an interface bridge with a conflict resolution block. The interface bridge translates the CAS latency (CL) programming value that a memory controller sends to program the DRAMs, modifies the latency value, and is used for resolving command conflicts between the DRAMs and the memory controller to insure proper operation of the memory subsystem. | 09-20-2012 |
20120246401 | IN-MEMORY PROCESSOR - A memory device includes at least two memory banks storing data and an internal processor. The at least two memory banks are accessible by a host processor. The internal processor receives a timeslot from the host processor and processes a portion of the data from an indicated one of the at least two banks of the memory array during the timeslot while the remaining banks are available to the host processor during the timeslot. A method of operating a memory device having banks storing data includes a host processor issuing per bank timeslots to an internal processor of a memory device, the internal processor operating on an indicated bank of the memory device during the timeslot and the host processor not accessing the indicated bank during the timeslot. | 09-27-2012 |
20120254527 | DYNAMIC RANDOM ACCESS MEMORY FOR A SEMICONDUCTOR STORAGE DEVICE-BASED SYSTEM - Embodiments of the present invention provide an approach for dynamic random access memory (DRAM)/SSD-based memory to improve memory usage. Specifically, embodiments of the present invention provide a field programmable gate array (FPGA) (SSD controller) that comprises a PCI-express interface for receiving and converting serial data to 64 bit data; a data/bit converter coupled to the interface for converting the 64 bit data to 128 bit data; and a memory controller coupled to the data converter for receiving and storing the 128 bit data in a set of DRAM units coupled to the memory controller. In general, the data converter comprises an input address buffer for receiving and buffering address information; an address matching component coupled to the input address buffer for analyzing the address information and determining a matching address based on the address information; an output address buffer coupled to the address matching component for buffering and outputting the matching address; an input data buffer for receiving and buffering 64 bit data; a data matching component coupled to the input data buffer for matching the 64 bit data with a corresponding address; and an output data buffer coupled to the data matching component for buffering and outputting the 128 bit data based on output of the data matching component. | 10-04-2012 |
20120254528 | MEMORY DEVICE AND MEMORY SYSTEM INCLUDING THE SAME - A memory device includes a first bank group, a second bank group, where the first and second bank groups are each configured to output multi-bit data in parallel in response to a read command, a data transferor configured to receive the multi-bit data outputted in parallel from the first bank group or the second bank group and output the multi-bit data at a time interval corresponding to an operation mode, first global data buses configured to transfer the multi-bit data outputted from the first bank group to the data transferor, second global data buses configured to transfer the multi-bit data outputted from the second bank group to the data transferor, and a parallel-to-serial converter configured to convert the multi-bit data outputted from the data transferor into serial data according to the operation mode. | 10-04-2012 |
20120254529 | MOTHERBOARD WITH DDR MEMORY DEVICES - A motherboard includes a central processing unit (CPU) with a reset signal output pin, a buffer circuit, and at least one memory device. The buffer circuit includes an input terminal connected to the reset signal output pin of the CPU and at least one output terminal. The input terminal and the at least one output terminal have the same voltage level. The at least one memory device has a reset signal receiving terminal connected to the at least one output terminal of the buffer circuit. | 10-04-2012 |
20120254530 | MICROPROCESSOR AND MEMORY ACCESS METHOD - A microprocessor according to the present invention includes an instruction execution unit that executes an instruction to output an access request to a memory according to a first protocol; a memory control unit that converts the access request according to the first protocol into an access request according to a second protocol, performing access control to an external memory; a selection unit that selects whether to access the external memory using the memory control unit; and an interface unit that externally outputs one of the access request according to the first protocol and the access request according to the second protocol based on the selection result of the selection unit. | 10-04-2012 |
20120260032 | SYSTEMS AND METHODS FOR USING MEMORY COMMANDS - Systems and methods for using memory commands are described. The systems include a memory controller. The memory controller receives a plurality of user transactions. The memory controller converts each user transaction into one or more row and column memory commands. The memory controller reorders the memory commands associated with the plurality of user transactions before sending the memory commands to a memory device. | 10-11-2012 |
20120265930 | CONTROLLING ON-DIE TERMINATION IN A DYNAMIC RANDOM ACCESS MEMORY DEVICE - An integrated circuit device transmits, to a dynamic random access memory device (DRAM), a write command indicating that write data is to be sampled by a data interface of the DRAM, and a plurality of commands that specify programming a plurality of control values into a plurality of corresponding registers in the DRAM. The plurality of control values include first and second control values that indicate respective first and second terminations that the DRAM is to apply to the data interface during a time interval that begins a predetermined amount of time after the DRAM receives the write command, the first termination to be applied during a first portion of the time interval while the data interface is sampling the write data and the second termination to be applied during a second portion of the time interval after the write data is sampled. | 10-18-2012 |
20120297131 | Scheduling-Policy-Aware DRAM Page Management Mechanism - Memory controller page management devices, systems, and methods are disclosed in which a memory controller is configured to access memory in response to a memory access request by applying a scheduler-aware page management policy to at least one memory page in the memory, based on row buffer status information for the pending memory access requests scheduled in a current cycle. | 11-22-2012 |
20120297132 | MOTHERBOARD OF COMPUTING DEVICE - A motherboard of a computing device includes a dual inline memory module (DIMM), a processor socket, a platform controller hub (PCH), a switch, and a switch controller. The DIMM is connected to the processor socket or the PCH through the switch controller. The switch is connected to the switch controller, and generates a signal when the switch is operated. The switch controller controls the DIMM to connect either to the processor socket or to the PCH according to the signal, so that a solid state disk (SSD) or a memory that is connected to the DIMM can be supported appropriately by the motherboard. | 11-22-2012 |
20120303883 | IMPLEMENTING STORAGE ADAPTER PERFORMANCE OPTIMIZATION WITH CACHE DATA/DIRECTORY MIRRORING - A method and controller for implementing storage adapter performance optimization with cache data and cache directory mirroring between dual adapters minimizing firmware operations, and a design structure on which the subject controller circuit resides are provided. One of the first controller or the second controller operates in a first initiator mode includes firmware to set up an initiator write operation building a data frame for transferring data and a respective cache line (CL) for each page index to the other controller operating in a second target mode. Respective initiator hardware engines transfers data, reading CLs from an initiator control store, and writing updated CLs to an initiator data store, and simultaneously sends data and updated CLs to the other controller. Respective target hardware engines write data and updated CLs to the target data store, eliminating firmware operations of the controller operating in the second target mode. | 11-29-2012 |
20120303884 | IMPLEMENTING ENHANCED UPDATES FOR INDIRECTION TABLES - A method and a storage system are provided for implementing indirection tables and providing enhanced updates of the indirection tables for persistent media or disk drives, such as shingled perpendicular magnetic recording (SMR) indirection tables. A plurality of memory pools are used to store indirection data. An exception pointer table provides a pointer to an exception list for an I-Track. The exception list includes predetermined-size exception entries sorted by an offset from a start of the I-Track. An insert exception entry is provided for a new host write and merged to an updated exception list using an offset of the insert exception entry. | 11-29-2012 |
20120303885 | MULTIPLE PROCESSOR SYSTEM AND METHOD INCLUDING MULTIPLE MEMORY HUB MODULES - A processor-based electronic system includes several memory modules arranged in first and second ranks. The memory modules in the first rank are directly accessed by any of several processors, and the memory modules in the second rank are accessed by the processors through the memory modules in the first rank. The data bandwidth between the processors and the memory modules in the second rank is varied by varying the number of memory modules in the first rank that are used to access the memory modules in the second rank. Each of the memory modules includes several memory devices coupled to a memory hub. The memory hub includes a memory controller coupled to each memory device, a link interface coupled to a respective processor or memory module, and a cross bar switch coupling any of the memory controllers to any of the link interfaces. | 11-29-2012 |
20120311248 | CACHE LINE LOCK FOR PROVIDING DYNAMIC SPARING - A system that includes a memory, a cache, a purge mechanism, and a memory interface mechanism. The memory includes a failing memory element at a failing memory location. The cache is configured for storing corrected contents of the failing memory element in a locked state, with the corrected contents stored in a first cache line. The purge mechanism is configured for selecting and removing cache lines that are not in the locked state from the cache to make room for new cache allocations. The memory interface mechanism is configured for receiving a request to access the failing memory location, determining that corrected contents of the failing memory location are stored in the first cache line in the cache, and accessing the first cache line in the cache. | 12-06-2012 |
20120311249 | MEMORY SYSTEM, MEMORY CONTROL METHOD, AND RECORDING MEDIUM STORING MEMORY CONTROL PROGRAM - A memory system includes a dual inline memory module (DIMM) connector to which a DIMM is connected, the DIMM being either a Joint Electron Device Engineering Council (JEDEC) standard DIMM in compliance with JEDEC standards or a customized DIMM not in compliance with JEDEC standards, and a memory controller to determine whether the connected DIMM is the JEDEC standard DIMM or the customized DIMM to generate a determination result, and to control access to the DIMM based on the determination result and SPD information obtained from an SPD of the connected DIMM. | 12-06-2012 |
20120311250 | ARCHITECTURE AND ACCESS METHOD OF HETEROGENEOUS MEMORIES - A heterogeneous memory architecture includes a first memory, a second memory and a memory controller. The first memory has a first memory space. The second memory has a second memory space larger than the first memory space. The memory controller is used for accessing the common address space of the first memory and the second memory in a 2X-bit bandwidth, and for disabling the first memory and accessing the non-common address space of the second memory, which is not shared with the first memory, in an X-bit bandwidth, X being a positive integer. | 12-06-2012 |
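The access rule in the abstract above can be sketched as an address-range check: addresses covered by both memories use the combined 2X-bit bandwidth, while addresses present only in the larger second memory disable the first memory and use X bits. The sizes and the value of X below are illustrative assumptions:

```python
# Toy sketch of the common/non-common address-space bandwidth rule.
X = 32
FIRST_SIZE, SECOND_SIZE = 1 << 20, 1 << 22   # first memory is the smaller one

def access_width(addr):
    """Return the bus width in bits used for a given address."""
    if addr < FIRST_SIZE:          # common space: both memories active
        return 2 * X
    if addr < SECOND_SIZE:         # non-common space: second memory only
        return X
    raise ValueError("address out of range")
```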
20120311251 | Coordinating Memory Operations Using Memory-Device Generated Reference Signals - A memory system includes a memory controller coupled to multiple memory devices. Each memory device includes an oscillator that generates an internal reference signal that oscillates at a frequency that is a function of physical device structures within the memory device. The frequencies of the internal reference signals are thus device specific. Each memory device develops a shared reference signal from its internal reference signal and communicates the shared reference signal to the common memory controller. The memory controller uses the shared reference signals to recover device-specific frequency information from each memory device, and then communicates with each memory device at a frequency compatible with the corresponding internal reference signal. | 12-06-2012 |
20120317351 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - There is provided with an information processing apparatus comprising a DRAM, a memory controller configured to access the DRAM, and a bus master configured to send, to the memory controller, an access request to the DRAM, the bus master comprises a transmission unit configured to transmit, to the memory controller, using a signal indicating a type of burst access which is requested of the memory controller by the bus master, an instruction to designate that an auto-precharge operation is not to be performed after accessing the first address, and an instruction to designate that an auto-precharge operation is to be performed after accessing the first address. | 12-13-2012 |
20120331219 | EXTENDED-HEIGHT DIMM - An extended-height DIMM for use in a memory system having slots designed to receive DIMMs that comply with a JEDEC standard that specifies a maximum height for the DIMM and a maximum number of devices allowed to reside on the DIMM. The DIMM comprises a PCB having an edge connector designed to mate with a memory system slot and a height which is greater than the maximum height specified in the applicable standard, a plurality of memory devices which exceeds the maximum number of devices specified in the applicable standard, and a memory buffer which operates as an interface between a host controller's data and command/address busses and the memory devices. This arrangement enables the extended-height DIMM to provide greater memory capacity than would a DIMM which complies with the maximum height and maximum number of devices limits. | 12-27-2012 |
20130007356 | Assigning A Classification To A Dual In-line Memory Module (DIMM) - Methods, apparatuses, and computer program products for assigning a classification to a dual in-line memory module (DIMM) are provided. Embodiments include determining, by a modifier, a classification of a DIMM; and providing a visual indication of the determined classification of the DIMM, including modifying, by the modifier, a top edge of a printed circuit board of the DIMM. | 01-03-2013 |
20130036263 | SOLID STATE STORAGE DEVICE USING VOLATILE MEMORY - A solid state storage device using volatile memory comprises a first transmission interface, a memory controller, a memory module and a backup memory module. The memory module is comprised of a plurality of volatile memories. The backup memory module is comprised of a plurality of non-volatile memories. The volatile memories and the non-volatile memories are electrically coupled with the memory controller via memory connecting sockets. Before power failure, the memory controller controls the memory module to back up its internal data to the backup memory module. In addition, the memory controller controls the backup memory module to restore the backup data back to the memory module when required. | 02-07-2013 |
20130036264 | MULTI-RANK MEMORY MODULE THAT EMULATES A MEMORY MODULE HAVING A DIFFERENT NUMBER OF RANKS - A transparent four rank memory module has a front side and a back side. The front side has a third memory rank stacked on a first memory rank. The back side has a fourth memory rank stacked on a second memory rank. An emulator coupled to the memory module activates and controls one individual memory rank from either the first memory rank, the second memory rank, the third memory rank, or the fourth memory rank based on the signals received from a memory controller. | 02-07-2013 |
20130042059 | PAGE MERGING FOR BUFFER EFFICIENCY IN HYBRID MEMORY SYSTEMS - In a first embodiment of the present invention, a method for managing memory in a hybrid memory system is provided, wherein the hybrid memory system has a first memory and a second memory, wherein the first memory is smaller than the second memory and the first and second memories are of different types, the method comprising: identifying two or more pages in the first memory that are compatible with each other based at least in part on a prediction of when individual blocks within each of the two or more pages will be accessed; merging the two or more compatible pages, producing a merged page; and storing the merged page in the first memory. | 02-14-2013 |
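The page-compatibility idea above can be sketched in a few lines. Here each page is modeled as a list of predicted next-access times per block; the `horizon` window, the disjoint-hot-slots compatibility test, and the min-based merge policy are all illustrative assumptions, not the patented method.

```python
def compatible(page_a, page_b, horizon=100):
    """Assumed test: two pages are 'compatible' when the block slots
    each is predicted to touch before `horizon` are disjoint, so one
    merged page in the small first memory can serve both."""
    hot_a = {i for i, t in enumerate(page_a) if t < horizon}
    hot_b = {i for i, t in enumerate(page_b) if t < horizon}
    return not (hot_a & hot_b)

def merge_pages(page_a, page_b):
    """Merge by keeping, per block slot, whichever page's block is
    predicted to be accessed sooner."""
    return [min(a, b) for a, b in zip(page_a, page_b)]
```

With this model, two pages whose soon-to-be-used blocks fall in different slots interleave into one merged page without collision.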
20130046923 | MEMORY SYSTEM AND METHOD FOR PASSING CONFIGURATION COMMANDS - A memory system is provided. In the system, there are first and second sets of dynamic random access memories (DRAMs) and a system register. Each DRAM has at least a first and a second addressable mode register, where the binary address of the second mode register is the inverted binary address of the first mode register. The system register has an input configured to be coupled to a controller, an output coupled to the first set of DRAMs via first address lines and an inverted output coupled to the second set of DRAMs via second address lines. The system register is configured to receive mode register set commands including address bits and configuration bits at the input and to output the mode register set commands non-inverted via the output to the first set of DRAMs and in inverted form via the inverted output to the second set of DRAMs. | 02-21-2013 |
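The complementary-address scheme above can be modeled compactly. The register widths and the convention that primary mode-register addresses occupy the low half of the address space are assumptions made for illustration; the essential point is that one controller command, inverted for the second DRAM set, still lands as a valid configuration.

```python
ADDR_BITS, DATA_BITS = 3, 16  # register widths are assumptions for illustration
AMASK, DMASK = (1 << ADDR_BITS) - 1, (1 << DATA_BITS) - 1

class Dram:
    """Each DRAM exposes mode registers at primary addresses plus, at each
    bit-inverted address, an alias that re-inverts the data it latches."""
    def __init__(self):
        self.regs = {}

    def mrs(self, addr, data):
        if addr <= AMASK >> 1:               # primary address (assumed low half)
            self.regs[addr] = data
        else:                                # inverted alias: undo both inversions
            self.regs[~addr & AMASK] = ~data & DMASK

def system_register(addr, data, first_set, second_set):
    """Forward one MRS command non-inverted to the first DRAM set and
    fully inverted (address and configuration bits) to the second set."""
    for d in first_set:
        d.mrs(addr, data)
    for d in second_set:
        d.mrs(~addr & AMASK, ~data & DMASK)
```

After a single `system_register` call, both sets hold the same effective mode-register contents even though their address lines are wired in opposite polarity.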
20130046924 | Mechanisms To Accelerate Transactions Using Buffered Stores - In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed. | 02-21-2013 |
20130046925 | Mechanisms To Accelerate Transactions Using Buffered Stores - In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed. | 02-21-2013 |
20130054883 | METHOD AND SYSTEM FOR SHARED HIGH SPEED CACHE IN SAS SWITCHES - A data storage system includes at least one host device configured to initiate a data request, at least one target device configured to store data, and a serial attached SCSI (SAS) switch coupled between the at least one host device and the at least one target device. The SAS switch includes a cache memory and control programming configured to determine whether data of the data request is at least one of data stored in the cache memory of the SAS switch or data to be written in the cache memory of the SAS switch. The cache memory of the SAS switch is a shared cache that is shared across each of the at least one host device and the at least one target device. | 02-28-2013 |
20130054884 | MEMORY CONTROLLER AND A DYNAMIC RANDOM ACCESS MEMORY INTERFACE - A memory controller and a dynamic random access memory (DRAM) interface are disclosed. The memory controller implements signals for the DRAM interface. The DRAM interface includes a differential clock signal, an uncalibrated parallel command bus, and a high-speed, serial address bus. The command bus may be used to initiate communication with the memory device upon power-up and to initiate calibration of the address bus. | 02-28-2013 |
20130054885 | MULTIPORT MEMORY ELEMENT AND SEMICONDUCTOR DEVICE AND SYSTEM INCLUDING THE SAME - Provided is a multiport memory element and a semiconductor device including the same. The multiport memory element includes: a first port; a second port different from the first port; a first memory region accessible by a first processor which is coupled to the first port; a second memory region accessible by a second processor which is coupled to the second port; and a common memory region accessible by both the first processor and the second processor, and including a plurality of banks, wherein while the first processor accesses a first bank among the plurality of banks, the second processor accesses a second bank among the plurality of banks. | 02-28-2013 |
20130060996 | System and Method for Controller Independent Faulty Memory Replacement - In accordance with the present disclosure, a system and method for controller independent faulty memory replacement is described. The system includes a system memory component with a system memory component architecture. The system also includes a memory buffer coupled to the system memory component. The memory buffer may include at least one spare memory location corresponding to a faulty memory location of the system memory component. Additionally, the system memory component architecture may receive a read command directed to an address of the system memory component containing the faulty memory location and output, in response to the read command, data corresponding to the address from both the system memory component and the at least one spare memory location. | 03-07-2013 |
20130060997 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 03-07-2013 |
20130067156 | DOUBLE DATA RATE CONTROLLER HAVING SHARED ADDRESS AND SEPARATE DATA ERROR CORRECTION - In general, embodiments of the present invention provide a double data rate (DDR) controller having a shared address and separate data error correction for DDR3 direct memory access (DMA). In a typical embodiment, the architecture described herein comprises a field programmable gate array (FPGA) having a single memory controller coupled to a data multiplexer (MUX). Groups/sets of memory having individual dual inline memory modules (DIMMs) are coupled to the memory controller and the data MUX. Data flows between the DIMMs and the data multiplexer, while address and control information flows between the DIMMs and the memory controller. | 03-14-2013 |
20130067157 | SEMICONDUCTOR STORAGE DEVICE HAVING MULTIPLE HOST INTERFACE UNITS FOR INCREASED BANDWIDTH - Embodiments of the present invention provide a semiconductor storage device (SSD)-based storage system. Specifically, in a typical embodiment, the system comprises a SSD memory disk unit having (among other components) a plurality of host interface units coupled to a host interface controller. The plurality of host interface units communicate with a plurality of physical interface units of a device driver (e.g., on a one-to-one or one-to-multiple basis). The device driver also comprises a logical interface coupled to the plurality of physical interface units. Among other things, this allows the system to connect to multiple hosts. In addition, this design provides increased bandwidth. | 03-14-2013 |
20130080693 | HYBRID MEMORY DEVICE, COMPUTER SYSTEM INCLUDING THE SAME, AND METHOD OF READING AND WRITING DATA IN THE HYBRID MEMORY DEVICE - A hybrid memory device includes a DRAM and a non-volatile memory. When a program is executed for the first time by a central processing unit (CPU), and data is copied to the DRAM from an external memory device, the data is also copied to the non-volatile memory. The non-volatile memory is configured to directly output data stored therein to an exterior without passing through the DRAM. | 03-28-2013 |
20130086315 | DIRECT MEMORY ACCESS WITHOUT MAIN MEMORY IN A SEMICONDUCTOR STORAGE DEVICE-BASED SYSTEM - In general, embodiments of the present invention provide an approach for direct memory access (DMA) without main memory for a semiconductor storage device (SSD)-based system. Specifically, in a typical embodiment, an input/output hub (IOH) is provided with an inter-DMA engine. The IOH is coupled to a central processing unit (CPU), a set of double data rate (DDR) SSD memory disk units, and a graphics card. The graphics card can comprise a cache memory unit or other type of memory unit. Among other things, this embodiment provides one or more of the following features: interchangeability of hardware; resource allocation for DMA in the CPU utilizes inter-DMA resources; direct data transfer to the graphics card/processor; and/or no need to depend on a main memory component needed in previous approaches. | 04-04-2013 |
20130097371 | DISABLING OUTBOUND DRIVERS FOR A LAST MEMORY BUFFER ON A MEMORY CHANNEL - Memory apparatus and methods utilizing multiple bit lanes may redirect one or more signals on the bit lanes. A memory agent may include a redrive circuit having a plurality of bit lanes, a memory device or interface, and a fail-over circuit coupled between the plurality of bit lanes and the memory device or interface. | 04-18-2013 |
20130103896 | MEMORY MODULE WITH MEMORY STACK AND INTERFACE WITH ENHANCED CAPABILITIES - A memory module, which includes at least one memory stack, comprises a plurality of DRAM integrated circuits and an interface circuit. The interface circuit interfaces the memory stack to a host system so as to operate the memory stack as a single DRAM integrated circuit. In other embodiments, a memory module includes at least one memory stack and a buffer integrated circuit. The buffer integrated circuit, coupled to a host system, interfaces the memory stack to the host system so as to operate the memory stack as at least two DRAM integrated circuits. In yet other embodiments, the buffer circuit interfaces the memory stack to the host system for transforming one or more physical parameters between the DRAM integrated circuits and the host system. | 04-25-2013 |
20130103897 | SYSTEM AND METHOD FOR TRANSLATING AN ADDRESS ASSOCIATED WITH A COMMAND COMMUNICATED BETWEEN A SYSTEM AND MEMORY CIRCUITS - A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to translate an address associated with a command communicated between the system and the memory circuits. | 04-25-2013 |
20130103898 | DRIVER FOR DDR2/3 MEMORY INTERFACES - An apparatus is described that includes a combined drive and termination circuit programmable to interface to DDR2 and DDR3 memory modules. In an exemplary embodiment the apparatus includes a combined output/termination driver, an input driver and a calibration subsystem. The combined output/termination driver includes a number of pull-up circuits and a number of pull-down circuits. One of the pull-up circuits presents a fixed output impedance. The rest of the pull-up circuits have an impedance programmable between two desired impedance values. One of the pull-down circuits presents a fixed output impedance. The rest of the pull-down circuits have an impedance programmable between two desired impedance values. The necessary number of pull-up circuits and pull-down circuits is activated in order to provide a desired driving and termination circuit, so as to interface to specific impedance values as defined by the DDR2 and DDR3 interface protocols. | 04-25-2013 |
20130103899 | SYSTEM ON CHIP WITH RECONFIGURABLE SRAM - A system on chip includes electrical components and a first memory including memory blocks. A method of operating the system on chip includes generating an assignment of the memory blocks to the electrical components. The generating includes, initially, during a development phase of the system on chip, generating the assignment so that selected memory blocks of the memory blocks are assigned to first selected electrical components of the electrical components as emulated read-only memory. The generating includes, subsequently, during an operational phase of the system on chip, modifying the assignment so that one or more of the selected memory blocks are re-assigned to second selected electrical components of the electrical components as cache memory. The method also includes, according to the assignment, dynamically creating electrical connectivity between the memory blocks and the electrical components. | 04-25-2013 |
20130111120 | Enabling A Non-Core Domain To Control Memory Bandwidth | 05-02-2013 |
20130111121 | Dynamically Controlling Cache Size To Maximize Energy Efficiency | 05-02-2013 |
20130111122 | Method and apparatus for network table lookups | 05-02-2013 |
20130111123 | A MEMORY SYSTEM THAT UTILIZES A WIDE INPUT/OUTPUT (I/O) INTERFACE TO INTERFACE MEMORY STORAGE WITH AN INTERPOSER AND THAT UTILIZES A SERDES INTERFACE TO INTERFACE A MEMORY CONTROLLER WITH AN INTEGRATED CIRCUIT, AND A METHOD | 05-02-2013 |
20130132660 | DATA READ/WRITE SYSTEM - The present invention provides a data read/write system. The data read/write system includes a memory controller and a memory module. The memory controller includes a first control circuit, a data output circuit, and a data receiving circuit. The memory module includes a memory buffer and at least two memory chips. The memory buffer includes a second control circuit, a write circuit, and a read circuit. The advantage of the present invention is that, when data is read or written into the memory chip, especially a DDR4 X4 memory chip, low power consumption of interface data transmission can be achieved through a data bus inversion control line DBI. | 05-23-2013 |
20130138877 | METHOD AND APPARATUS FOR DISTRIBUTED DIRECT MEMORY ACCESS FOR SYSTEMS ON CHIP - A distributed direct memory access (DMA) method, apparatus, and system is provided within a system on chip (SOC). DMA controller units are distributed to various functional modules desiring direct memory access. The functional modules interface to a systems bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are utilized to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory. | 05-30-2013 |
20130151766 | CONVERGENCE OF MEMORY AND STORAGE INPUT/OUTPUT IN DIGITAL SYSTEMS - Embodiments of the present invention relate to CPU and/or digital memory architecture. Specifically, embodiments of the present invention relate to various approaches for adapting current designs to provide connection of a storage unit to a CPU via a memory unit through the use of controllers. This allows for system data to flow from the CPU to the memory unit to the storage unit. Such a configuration is enabled by the use of an extended memory access scheme that comprises a plurality of row address strobes (RAS) and a column address strobe (CAS) (and, optionally, one or more data bit line DQs). | 06-13-2013 |
20130151767 | MEMORY CONTROLLER-INDEPENDENT MEMORY MIRRORING - A method of memory controller-independent memory mirroring includes providing a mirroring association between a first memory segment and a second memory segment that is independent of a memory controller. A memory buffer receives data from the memory controller that is directed to a first memory location in the first memory segment. The memory buffer writes the data, independent of the memory controller, to both the first memory segment and the second memory segment according to the mirroring association. The memory buffer receives a plurality of read commands from the memory controller that are directed to the first memory location in the first memory segment and, in response, reads data from an alternating one of the first memory segment and the second memory segment and stores both first data from the first memory segment and second data from the second memory segment. | 06-13-2013 |
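The buffer-side mirroring described above can be sketched as a small model: the memory controller issues ordinary single-segment reads and writes, while the buffer silently duplicates writes and alternates reads between the two segments. The class and field names are illustrative assumptions.

```python
class MirroringBuffer:
    """Sketch of controller-independent mirroring: the buffer, not the
    memory controller, maintains the mirroring association."""

    def __init__(self, size):
        self.primary = [0] * size   # first memory segment
        self.mirror = [0] * size    # second memory segment
        self.read_toggle = False

    def write(self, addr, value):
        # One write from the controller lands in both segments.
        self.primary[addr] = value
        self.mirror[addr] = value

    def read(self, addr):
        # Successive reads of the same address alternate between the
        # segments, so the buffer sees both copies without any help
        # from the memory controller.
        self.read_toggle = not self.read_toggle
        segment = self.mirror if self.read_toggle else self.primary
        return segment[addr]
```

Because both copies pass through the buffer on repeated reads, the buffer could compare them for consistency, again without the controller being aware that mirroring exists.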
20130159615 | DDR RECEIVER ENABLE CYCLE TRAINING - A method is provided for sampling a data strobe signal of a memory cycle and determining a receiver enable phase based upon the data strobe signal. The method also includes performing a memory write cycle and a subsequent read cycle and training a read data strobe cycle at a one-quarter memory clock periodic offset. The method also includes determining a correct receiver enable delay in response to a successful read data strobe training cycle. Computer readable storage media are also provided. An apparatus is provided that includes a communication interface portion that is coupled to a memory portion and to a processing device. The apparatus also includes a first circuit portion, coupled to the communication interface portion. The first circuit portion monitors memory cycles on the communication interface portion, determines a receiver enable cycle phase and trains a receiver enable cycle without using a receiver enable seed. | 06-20-2013 |
20130159616 | SELF TERMINATED DYNAMIC RANDOM ACCESS MEMORY - A method for operating a memory system and a memory buffer device. The method includes receiving, at a buffer device, an external clock signal from a clock device of a CPU of a host computer, and receiving, at a command port of the buffer device, an ODT signal from the CPU. The buffer device provides the self-termination information internally to the common data bus by automatically detecting the read or write command on the common command bus and adjusts the termination resistor array with a pre-determined value and timing so that information can be read from or written to a data line of only one of the plurality of DIMM devices coupled together through a common data bus interface. All DIMM devices other than the DIMM device being read can be maintained in a termination state to prevent any signal from traversing to the common data bus interface. | 06-20-2013 |
20130166834 | SUB PAGE AND PAGE MEMORY MANAGEMENT APPARATUS AND METHOD - A method and apparatus for managing a virtual address to physical address translation utilize subpage-level fault detection and access. The method and apparatus may also use an additional subpage and page store in non-volatile store (NVS). The method and apparatus determine whether a page fault or a subpage fault has occurred to effect an address translation. If a subpage fault has occurred, the subpage corresponding to the fault is loaded from the NVS to a DRAM or any other suitable volatile memory historically referred to as main memory. If a page fault has occurred, the method and apparatus detect it without operating system assistance, using a hardware page fault detection system that loads the page corresponding to the fault from the NVS to the DRAM. | 06-27-2013 |
20130166835 | ARITHMETIC PROCESSING SYSTEM AND METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An arithmetic processing system includes the following elements. Plural storage media, which are physically independent, having storage regions are provided. Plural processors execute processing by using the storage regions of the plural storage media. An allocating unit allocates the storage regions of the plural storage media to the plural processors. A determining unit determines whether a total value of storage amounts necessary for the plural processors to execute processing is equal to or smaller than a value obtained by subtracting a storage capacity of one of the storage media from a total capacity of the plural storage media. A reallocating unit reallocates the allocated storage regions to the plural processors when the above-described determination result is positive. A discontinuing unit discontinues an operation performed by a storage medium which does not contain any of the storage regions reallocated to the plural processors as a result of reallocating the storage regions. | 06-27-2013 |
20130166836 | CONFIGURABLE MEMORY CONTROLLER/MEMORY MODULE COMMUNICATION SYSTEM - A memory system includes a first memory module and a second memory module. A memory controller is coupled to the first and second memory modules and reads configuration information from the first and second memory modules using a memory channel. The controller also configures a switch coupled between the controller and one of the memory modules to communicate using either a chip select line or a memory address line. | 06-27-2013 |
20130179632 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR OPTIMIZATION OF HOST SEQUENTIAL READS OR WRITES BASED ON VOLUME OF DATA TRANSFER - A method for optimization of host sequential reads based on volume of data includes, at a mass data storage device, pre-fetching a first volume of predicted data associated with an identified read data stream from a data store into a buffer memory different from the data store. A request for data from the read data stream is received from a host. In response, the requested data is provided to the host from the buffer memory. While providing the requested data to the host from the buffer memory, it is determined whether a threshold volume of data has been provided to the host from the buffer memory. If so, a second volume of predicted data associated with the identified read data stream is pre-fetched from the data store and into the buffer memory. If not, additional predicted data is not pre-fetched from the data store. | 07-11-2013 |
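The threshold-driven read-ahead above can be sketched as follows. The chunk size and threshold values, and the list-of-blocks model of the data store, are assumptions chosen for illustration; the point is that the next volume of predicted data is fetched only after the host has consumed a set volume from the buffer.

```python
class PrefetchBuffer:
    """Minimal sketch of volume-threshold sequential pre-fetching."""

    def __init__(self, store, chunk=4, threshold=3):
        self.store = store          # backing data store, a list of blocks
        self.chunk = chunk          # blocks pre-fetched per volume
        self.threshold = threshold  # blocks served before the next pre-fetch
        self.buf = []
        self.next_block = 0
        self.served = 0
        self._prefetch()            # first volume of predicted data

    def _prefetch(self):
        end = min(self.next_block + self.chunk, len(self.store))
        self.buf.extend(self.store[self.next_block:end])
        self.next_block = end

    def read(self):
        data = self.buf.pop(0)      # serve the host from the buffer
        self.served += 1
        if self.served >= self.threshold:
            self.served = 0         # threshold volume reached:
            self._prefetch()        # pre-fetch the next volume
        return data
```

With `threshold` no larger than `chunk`, the buffer never underruns on a purely sequential stream, since a fresh volume always arrives before the previous one is exhausted.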
20130179633 | SCATTER-GATHER INTELLIGENT MEMORY ARCHITECTURE FOR UNSTRUCTURED STREAMING DATA ON MULTIPROCESSOR SYSTEMS - A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion. | 07-11-2013 |
20130185492 | Memory Watch - A method can include receiving memory configuration information that specifies a memory configuration; receiving memory usage information for the memory configuration; analyzing the received memory usage information for a period of time; and, responsive to the analyzing, controlling notification circuitry configured to display a graphical user interface that presents information for physically altering a specified memory configuration. Various other apparatuses, systems, methods, etc., are also disclosed. | 07-18-2013 |
20130185493 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 07-18-2013 |
20130185494 | POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted. | 07-18-2013 |
20130185495 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. | 07-18-2013 |
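The stride-consolidation step above, copying valid tracks from source strides into a target stride, can be sketched directly. The representation of a stride as a fixed-size list with `None` marking free tracks is an assumption for illustration.

```python
def consolidate(strides, target_idx, source_idxs):
    """Copy valid tracks from the source strides into free tracks of the
    target stride, freeing the sources as their tracks move. Assumes
    the target has enough free tracks to hold all valid source tracks."""
    target = strides[target_idx]
    free = [i for i, track in enumerate(target) if track is None]
    for s in source_idxs:
        for i, track in enumerate(strides[s]):
            if track is not None:
                target[free.pop(0)] = track   # place valid track in target
                strides[s][i] = None          # free it in the source
    return strides
```

After consolidation the source strides contain no valid tracks, so they become wholly free strides available for new first-cache demotions.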
20130185496 | Vector Processing System - A vector processing system provides high performance vector processing using a System-On-a-Chip (SOC) implementation technique. One or more scalar processors (or cores) operate in conjunction with a vector processor, and the processors collectively share access to a plurality of memory interfaces coupled to Dynamic Random Access read/write Memories (DRAMs). In typical embodiments the vector processor operates as a slave to the scalar processors, executing computationally intensive Single Instruction Multiple Data (SIMD) codes in response to commands received from the scalar processors. The vector processor implements a vector processing Instruction Set Architecture (ISA) including machine state, instruction set, exception model, and memory model. | 07-18-2013 |
20130185497 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 07-18-2013 |
20130191584 | DETERMINISTIC HIGH INTEGRITY MULTI-PROCESSOR SYSTEM ON A CHIP - Systems integrated into a single electronic chip are provided for. The systems include a primary shared bus, a secondary shared bus and an embedded dynamic random access memory (eDRAM) including a first port and a second port. The systems also include a primary processor in operable communication with the eDRAM via the first port; and a secondary processor in operable communication with the eDRAM via the secondary bus and the second port, wherein the primary and secondary processors are operating in synchronization. | 07-25-2013 |
20130191585 | SIMULATING A MEMORY STANDARD - An apparatus includes multiple first memory circuits, each first memory circuit being associated with a first memory standard, where the first memory standard defines a first set of control signals that each first memory circuit is operable to accept. | 07-25-2013 |
20130191586 | METHOD FOR OPERATING MEMORY CONTROLLER AND SYSTEM INCLUDING THE SAME - Methods of operating a memory controller include requesting data from each of a plurality of separate memory devices in response to an in-order multi-memory read request and then reading the requested data from the plurality of separate memory devices. The data read from the plurality of separate memory devices is then transmitted to a system bus along with at least one indication signal that identifies a relationship between an ordering of the requested data according to memory device and an ordering of the transmitted data according to memory device. | 07-25-2013 |
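The indication signal above can be modeled minimally: data may arrive from the memory devices in an order different from the in-order request, and each transfer to the system bus carries a tag identifying which requested device it belongs to. Representing the tag as the device's position in the request order is an assumption for illustration.

```python
def transmit_with_indication(request_order, arrivals):
    """Pair each datum read from a memory device with an indication
    signal: the position of that device in the original in-order
    multi-memory read request.

    request_order -- device IDs in the order they were requested
    arrivals      -- (device ID, data) pairs in actual arrival order
    """
    return [(data, request_order.index(device)) for device, data in arrivals]
```

The receiver on the system bus can then reassociate each transfer with its requested slot regardless of the order in which the separate memory devices responded.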
20130191587 | MEMORY CONTROL DEVICE, CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS - A memory control device includes a first memory, a second memory, a third memory having a longer delay time from start-up until an actual data access, and a control unit. The second memory stores at least a part of data from each data string among multiple data strings with a given number of data as a unit. The third memory stores all of the data within the plurality of data strings. If a cache miss occurs in the first memory, the control unit conducts hit determination of a cache in the second memory, and starts an access to the third memory. If the result of the hit determination is a cache hit, the control unit reads the part of the data falling under the cache hit from the second memory as leading data, reads the remaining data of the data string to which that part belongs from the third memory, and returns it as subsequent data following the leading data. | 07-25-2013 |
20130205080 | APPARATUS AND METHOD FOR CONTROLLING REFRESHING OF DATA IN A DRAM - An apparatus comprises a dynamic random-access memory (DRAM) for storing data. Refresh control circuitry is provided to control the DRAM to periodically perform a refresh cycle for refreshing the data stored in each memory location of the DRAM. A refresh address sequence generator generates a refresh address sequence of addresses identifying the order in which memory locations of the DRAM are refreshed during the refresh cycle. To deter differential power analysis attacks on secure data stored in the DRAM, the refresh address sequence is generated with the addresses of at least a portion of the memory locations in a random order which varies from refresh cycle to refresh cycle. | 08-08-2013 |
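The randomized refresh ordering above can be sketched as a per-cycle permutation of row addresses: every row is still refreshed exactly once per cycle, but the order differs from cycle to cycle, obscuring the power signature of any particular row. Using a software `random.shuffle` is a stand-in assumption for whatever hardware sequence generator the abstract envisions.

```python
import random

def refresh_cycle(dram, rng):
    """Perform one refresh cycle over a DRAM (modeled as a list of rows),
    visiting every row exactly once in a freshly randomized order.
    Returns the order used, for inspection."""
    order = list(range(len(dram)))
    rng.shuffle(order)            # new random permutation each cycle
    for row in order:
        dram[row] = dram[row]     # stand-in for the actual row refresh
    return order
```

Because each cycle draws a fresh permutation, an attacker correlating power traces across refresh cycles cannot rely on a fixed row-to-time mapping.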
20130212328 | High Throughput Interleaver/De-Interleaver - Systems and methods for performing high-speed multi-channel forward error correction using external DDR SDRAM are provided. According to one exemplary aspect, an interleaver/deinterleaver performs both read and write accesses to the DDR SDRAM that are burst-oriented, hiding active and precharge cycles in order to achieve high data rate operations. The interleaver/deinterleaver accesses data in the DDR SDRAM as read blocks and write blocks. Each block includes two data sequences. Each data sequence further includes a predetermined number of data words to be interleaved/deinterleaved. The PRECHARGE and ACTIVE commands for one data sequence are issued while a preceding data sequence is being processed. Data in one read/write data sequence has the same row address within the same bank of the DDR SDRAM. | 08-15-2013 |
20130212329 | ELECTRONIC APPARATUS AND METHOD FOR MEMORY CONTROL - An electronic apparatus having plural memories of different performances such as bus widths facilitates achievement of its potential as a system. The electronic apparatus has a first memory and a memory controller configured to control the first memory. Upon a second memory being detected, the memory controller compares bus widths of the first memory and the second memory with each other. Upon the bus width of the second memory being broader than the bus width of the first memory, the memory controller makes a setting such that access to the second memory precedes access to the first memory. | 08-15-2013 |
20130219115 | DELAY CIRCUIT, DELAY CONTROLLER, MEMORY CONTROLLER, AND INFORMATION TERMINAL - A delay circuit of the present disclosure includes a first delay unit and a second delay unit which are connected in series and delay an input signal to generate a delayed signal. The first delay unit includes a first signaling pathway, and changes, based on a first delay control value, a first amount of delay to be provided to the input signal by switching signaling pathways for transmitting the input signal that are within the first pathway. The second delay unit includes a second signaling pathway, and changes, based on a second delay control value, a second amount of delay to be provided to the input signal without switching the second signaling pathway for transmitting the input signal. | 08-22-2013 |
20130227210 | MEMORY, MEMORY CONTROLLERS, AND METHODS FOR DYNAMICALLY SWITCHING A DATA MASKING/DATA BUS INVERSION INPUT - Examples are described herein of dynamic switching of data masking and data bus inversion functionality of a memory input. Both dynamic switching and a static setting for the memory input may be supported in some examples described herein. Use of a command indicating a functionality of the memory input is described. | 08-29-2013 |
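The entry above concerns dynamically switching one memory input between data-masking and data bus inversion (DBI) duty. As background, here is a minimal sketch of the DBI encoding decision itself, one of the two functions the shared pin can carry (treating logic-1 as the costly signaling level is an assumption; which level costs power depends on the I/O standard):

```python
def dbi_encode(byte):
    """Data bus inversion for an 8-bit lane: if more than four bits
    would be driven to the costly level, transmit the complement and
    assert the DBI flag so the receiver can undo the inversion."""
    ones = bin(byte).count("1")
    if ones > 4:
        return (~byte) & 0xFF, True   # inverted data, DBI asserted
    return byte, False                # data as-is, DBI deasserted
```

In data-mask mode the same pin would instead indicate which write bytes to ignore, which is why the abstract describes a command that tells the memory which interpretation is currently in force.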
20130227211 | APPARATUS AND METHOD FOR DATA DECODING - A data decoding apparatus is provided, which includes at least one processor block, at least one hardware block, and a memory processing unit to control the at least one processor block or the at least one hardware block to access a memory and to read or write data with minimum delay. | 08-29-2013 |
20130238848 | MECHANISM FOR ENABLING FULL DATA BUS UTILIZATION WITHOUT INCREASING DATA GRANULARITY - A memory is disclosed comprising a first memory portion, a second memory portion, and an interface, wherein the memory portions are electrically isolated from each other and the interface is capable of receiving a row command and a column command in the time it takes to cycle the memory once. By interleaving access requests (comprising row commands and column commands) to the different portions of the memory, and by properly timing these access requests, it is possible to achieve full data bus utilization in the memory without increasing data granularity. | 09-12-2013 |
20130238849 | LOAD REDUCTION DUAL IN-LINE MEMORY MODULE (LRDIMM) AND METHOD FOR PROGRAMMING THE SAME - A load reduction dual in-line memory module (LRDIMM) is similar to a registered dual inline memory module (RDIMM), in which control signals are synchronously buffered, but the LRDIMM includes a load reduction buffer (LRB) in the data path as well. To make an LRDIMM appear compatible with RDIMMs on a system memory bus, the serial presence detect (SPD) of the LRDIMM is programmed with modified latency support and minimum delay values. When the dynamic random access memory (DRAM) devices of the LRDIMM are subsequently set up by the host at boot time based on the parameters provided by the SPD, selected latency values are modified on the fly in an enhanced register phase-locked loop (RPLL) device. This compensates for the delay introduced by the LRB without violating DRAM constraints, and provides memory bus timing for an LRDIMM that is indistinguishable from that of an RDIMM. | 09-12-2013 |
20130254473 | IMPLEMENTING MEMORY INTERFACE WITH CONFIGURABLE BANDWIDTH - A method and system are provided for implementing enhanced memory performance management with configurable bandwidth versus power usage in a chip stack of memory chips. A chip stack of memory chips is connected in a predefined density to allow a predefined high bandwidth connection between each chip in the stack, such as with through silicon via (TSV) interconnections. Large-bandwidth data transfers are enabled from the memory chip stack by trading off increased power usage for memory performance on a temporary basis. | 09-26-2013 |
20130262757 | MEMORY MODULE HAVING A WRITE-TIMING CALIBRATION MODE - In a memory module populated by memory components having a write-timing calibration mode, control information that specifies a write operation is received via an address/control signal path and write data corresponding to the write operation is received via a data signal path. Each memory component receives multiple delayed versions of a timing signal used to indicate that the write data is valid, and outputs signals corresponding to the multiple delayed versions of the timing signal to enable determination, in a memory controller, of a delay interval between outputting the control information on the address/control signal path and outputting the write data on the data signal path. | 10-03-2013 |
20130268727 | MEMORY SYSTEM FOR ACCESS CONCENTRATION DECREASE MANAGEMENT AND ACCESS CONCENTRATION DECREASE METHOD - A spatial disturbance that occurs when accesses are concentrated on a specific memory area in a volatile semiconductor memory such as DRAM is properly handled by a memory controller. The memory controller includes a concentration access detection part that generates a concentration access detection signal when an address accessing a specific memory area among the memory areas of the volatile semiconductor memory is received repeatedly. When the concentration access detection signal is generated, a controller in the memory controller eases or prevents corruption of the data held by memory cells of the specific memory area and/or memory cells of memory areas adjacent to the specific memory area. | 10-10-2013 |
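The detection part described above is essentially a per-area access counter with a threshold. A minimal sketch, assuming a simple counter-and-threshold policy (the window size, threshold value, and class names are illustrative, not from the patent):

```python
from collections import Counter

class ConcentrationDetector:
    """Counts accesses per memory area within a window and raises the
    concentration access detection signal when one area's count
    reaches a threshold, so the controller can refresh or otherwise
    protect the neighboring rows."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.counts = Counter()

    def access(self, area):
        self.counts[area] += 1
        # True models the concentration access detection signal.
        return self.counts[area] >= self.threshold

    def reset_window(self):
        """Called at the end of each observation window."""
        self.counts.clear()

det = ConcentrationDetector(threshold=3)
signals = [det.access(7) for _ in range(3)]  # third access trips it
```

Real implementations (later standardized as target row refresh in DDR4) keep such counters in hardware and trigger extra refreshes of adjacent rows when the signal fires.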
20130268728 | APPARATUS AND METHOD FOR IMPLEMENTING A MULTI-LEVEL MEMORY HIERARCHY HAVING DIFFERENT OPERATING MODES - A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” In one embodiment, the “near memory” is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space. | 10-10-2013 |
20130275662 | DATA PROCESSING CIRCUIT WITH ARBITRATION BETWEEN A PLURALITY OF QUEUES - Requests from a plurality of different agents ( | 10-17-2013 |
20130275663 | ATOMIC-OPERATION COALESCING TECHNIQUE IN MULTI-CHIP SYSTEMS - A cache-coherence protocol distributes atomic operations among multiple processors (or processor cores) that share a memory space. When an atomic operation that includes an instruction to modify data stored in the shared memory space is directed to a first processor that does not have control over the address(es) associated with the data, the first processor sends a request, including the instruction to modify the data, to a second processor. Then, the second processor, which already has control of the address(es), modifies the data. Moreover, the first processor can immediately proceed to another instruction rather than waiting for the address(es) to become available. | 10-17-2013 |
20130275664 | SCALABLE SCHEDULERS FOR MEMORY CONTROLLERS - Methods and apparatus to improve throughput and efficiency in memory devices are described. In one embodiment, a memory controller may include scheduler logic to issue read or write requests to a memory device in an optimal fashion, e.g., to maximize bandwidth and/or reduce latency. Other embodiments are also disclosed and claimed. | 10-17-2013 |
20130282971 | COMPUTING SYSTEM AND DATA TRANSMISSION METHOD - A computing system includes a central processing unit (CPU), a first direct memory access (DMA) controller, a first bus, a memory module inserted in a first dual inline memory module (DIMM) slot, and a storage device installed on a second DIMM slot. The storage device includes a storage module, a second bus, a memory unit, an interface control unit, and a second DMA controller. The CPU sets up the first and the second DMA controllers to perform data transmission. The first DMA controller controls a first data transmission between the memory module and the memory unit through the first and the second buses, and the second DMA controller controls a second data transmission between the memory unit and the storage module, with the interface control unit, through the second bus. The disclosure further provides a data transmission method of the computing system. | 10-24-2013 |
20130282972 | PROGRAMMABLE MEMORY CONTROLLER - One embodiment includes a programmable memory controller. The programmable memory controller includes a request processor that comprises a first domain-specific instruction set architecture (ISA) for accelerating common requests. A transaction processor comprises a second domain-specific ISA for accelerating transaction processing tasks. A dedicated command logic module inspects each memory command to a memory device and stalls particular commands for meeting timing constraints for application specific control of the memory device. | 10-24-2013 |
20130297864 | TIME-MULTIPLEXED COMMUNICATION PROTOCOL FOR TRANSMITTING A COMMAND AND ADDRESS BETWEEN A MEMORY CONTROLLER AND MULTI-PORT MEMORY - One embodiment sets forth a technique for time-multiplexed communication for transmitting command and address information between a controller and a multi-port memory device over a single connection. Command and address information for each port of the multi-port memory device is time-multiplexed within the controller to produce a single stream of commands and addresses for different memory requests. The single stream of commands and addresses is transmitted by the controller to the multi-port memory device where the single stream is demultiplexed to generate separate streams of commands and addresses for each port of the multi-port memory device. | 11-07-2013 |
20130297865 | TIME-MULTIPLEXED COMMUNICATION PROTOCOL FOR TRANSMITTING A COMMAND AND ADDRESS BETWEEN A MEMORY CONTROLLER AND MULTI-PORT MEMORY - One embodiment sets forth a technique for time-multiplexed communication for transmitting command and address information between a controller and a multi-port memory device over a single connection. Command and address information for each port of the multi-port memory device is time-multiplexed within the controller to produce a single stream of commands and addresses for different memory requests. The single stream of commands and addresses is transmitted by the controller to the multi-port memory device where the single stream is demultiplexed to generate separate streams of commands and addresses for each port of the multi-port memory device. | 11-07-2013 |
20130304981 | Computer System and Method of Memory Management - Computer systems and methods for memory management in a computer system are provided. A computer system includes an integrated circuit, where the integrated circuit includes a processing unit and a memory controller coupled to the processing unit. The memory controller includes a first interface and a second interface configured to couple the memory controller with a first memory and a second memory, respectively. The second interface is separate from the first interface. The computer system includes the first memory of a first memory type coupled to the memory controller through the first interface. The computer system further includes the second memory coupled to the memory controller through the second interface, where the second memory is of a second memory type that has a different power consumption characteristic than that of the first memory type. | 11-14-2013 |
20130318291 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR GENERATING TEST PACKETS IN A NETWORK TEST DEVICE USING VALUE LIST CACHING - Methods, systems, and computer readable media for generating test packets in a network device using value list caching are disclosed. In one method, value lists are stored in dynamic random access memory of a network test device. Each value list includes values for user defined fields (UDFs) to be inserted in test packets. Portions of each value list are read into per-port caches. The UDF values are drained from the per-port caches using per-port stream engines to generate and send streams of test packets to one or more devices under test. The per-port caches are refilled with portions of the value lists from the DRAM at a rate sufficient to maintain the sending of the streams of test packets to the one or more devices under test. | 11-28-2013 |
20130318292 | CACHE MEMORY STAGED REOPEN - An apparatus is described. The apparatus includes a cache memory having two or more memory blocks and a central processing unit (CPU), coupled to the cache memory, to open a first memory block within the cache memory upon exiting from a low power state. | 11-28-2013 |
20130326132 | MEMORY SYSTEM AND METHOD HAVING UNIDIRECTIONAL DATA BUSES - A memory system and method includes a unidirectional downstream bus coupling write data from a memory controller to several memory devices, and a unidirectional upstream bus coupling read data from the memory devices to the memory controller. The memory devices each include a write buffer for storing the write data until the respective memory device is no longer busy processing read memory requests. The downstream bus may also be used for coupling memory commands and/or row and column addresses from the memory controller to the memory devices. | 12-05-2013 |
20130332667 | INFORMATION PROCESSOR - An information processor includes an information processing sub-system having information processing circuits and a memory sub-system performing data communication with the information processing sub-system, wherein the memory sub-system has a first memory, a second memory, a third memory having reading and writing latencies longer than those of the first memory and the second memory, and a memory controller for controlling data transfer among the first memory, the second memory and the third memory. Graph data is stored in the third memory; the memory controller analyzes data blocks forming part of the graph data and, based on the result of the analysis, repeatedly performs a preloading operation that transfers the data blocks required next for execution of the processing from the third memory to the first memory or the second memory. | 12-12-2013 |
20130332668 | METHODS AND APPARATUSES FOR ADDRESSING MEMORY CACHES - A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information. | 12-12-2013 |
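The entry above partitions each physical address into three distinct portions and indexes different structures with different portions. The bit-slicing itself can be sketched as follows (the particular bit widths, and which portion plays which role, are illustrative assumptions):

```python
def split_address(addr, index_bits=6, offset_bits=6):
    """Split a physical address into (tag, set index, block offset).

    Here the high bits index the tables, the middle bits index the
    cache lines, and the low bits select a byte within a line,
    mirroring the three distinct portions the abstract describes.
    """
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

tag, index, offset = split_address(0x12345)
# The three portions reassemble into the original address.
assert (tag << 12) | (index << 6) | offset == 0x12345
```

The patent's tables then record, per tag, which set indices hold matching data, which lets a lookup avoid probing the cache array for addresses known to be absent.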
20130339592 | APPROACH TO VIRTUAL BANK MANAGEMENT IN DRAM CONTROLLERS - Banks within a dynamic random access memory (DRAM) are managed with virtual bank managers. A DRAM controller receives a new memory access request to DRAM including a plurality of banks. If the request accesses a location in DRAM where no virtual bank manager includes parameters for the corresponding DRAM page, then a virtual bank manager is allocated to the physical bank associated with the DRAM page. The bank manager is initialized to include parameters needed by the DRAM controller to access the DRAM page. The memory access request is then processed using the parameters associated with the virtual bank manager. One advantage of the disclosed technique is that the banks of a DRAM module are controlled with fewer bank managers than in previous DRAM controller designs. As a result, less surface area on the DRAM controller circuit is dedicated to bank managers. | 12-19-2013 |
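The virtual-bank-manager scheme above shares a small pool of manager slots among many physical banks, allocating one on demand when a request touches a page no slot currently tracks. A minimal sketch of that allocation flow (the class names, the per-page parameters, and the FIFO stand-in for a real replacement policy are all assumptions for illustration):

```python
class DramController:
    """Shares a fixed pool of virtual bank managers among many
    physical banks, instead of dedicating one manager per bank."""

    def __init__(self, num_managers):
        self.num_managers = num_managers
        self.managers = {}  # (bank, page) -> open-page parameters

    def access(self, bank, page):
        key = (bank, page)
        if key not in self.managers:
            if len(self.managers) >= self.num_managers:
                # Pool exhausted: reuse the oldest slot (simple FIFO
                # stand-in for a real replacement policy).
                self.managers.pop(next(iter(self.managers)))
            # Initialize the slot with the parameters the controller
            # needs to access this DRAM page.
            self.managers[key] = {"row_open": True}
        return self.managers[key]
```

Because only a few pages are typically open at once, a handful of slots suffices, which is the surface-area saving the abstract claims.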
20130339593 | REDUCING PENALTIES FOR CACHE ACCESSING OPERATIONS - A computer program product for reducing penalties for cache accessing operations is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes respectively associating platform registers with cache arrays, loading control information and data of a store operation to be executed with respect to one or more of the cache arrays into the one or more of the platform registers respectively associated with the one or more of the cache arrays, and, based on the one or more of the cache arrays becoming available, committing the data from the one or more of the platform registers using the control information from the same platform registers to the one or more of the cache arrays. | 12-19-2013 |
20130339594 | HOST BUS ADAPTERS WITH SHARED MEMORY AND BATTERY BACKUP - The present disclosure includes methods and systems that share memory located on one PCIe based HBA across other PCIe based HBAs in the system. In addition, the backup battery is effectively shared across multiple PCIe based HBAs in the system. This approach saves significant costs by avoiding the need to have a separate DRAM with its own dedicated battery backup on each HBA board in the system. This also allows the redundant memory and backup batteries to be removed while still retaining the same functionality through the common DDR3 memory chip and battery backup shared across multiple HBAs in the system. The component cost for batteries and memory, management module, board space, and the board manufacturing cost are all reduced as a result. | 12-19-2013 |
20130339595 | IDENTIFYING AND PRIORITIZING CRITICAL INSTRUCTIONS WITHIN PROCESSOR CIRCUITRY - In one embodiment, the present invention includes a method for identifying a memory request corresponding to a load instruction as a critical transaction if an instruction pointer of the load instruction is present in a critical instruction table associated with a processor core, sending the memory request to a system agent of the processor with a critical indicator to identify the memory request as a critical transaction, and prioritizing the memory request ahead of other pending transactions responsive to the critical indicator. Other embodiments are described and claimed. | 12-19-2013 |
20130346682 | System And Method for Supporting Fast and Deterministic Execution and Simulation in Multi-Core Environments - The exemplary embodiments described herein relate to supporting fast and deterministic execution and simulation in multi-core environments. Specifically, the exemplary embodiments relate to systems and methods for implementing determinism in a memory system of a multithreaded computer. An exemplary system comprises a plurality of processors within a multi-processor environment, a cache memory within the processor and including metadata, and a hardware check unit performing one of a load check and a store check on the metadata to detect a respective one of a load metadata mismatch and a store metadata mismatch, and invoking a runtime software routine to order memory references upon a detection of one of the load metadata mismatch and the store metadata mismatch. | 12-26-2013 |
20130346683 | Cache Sector Dirty Bits - A cache subsystem apparatus and method of operating therefor is disclosed. In one embodiment, a cache subsystem includes a cache memory divided into a plurality of sectors each having a corresponding plurality of cache lines. Each of the plurality of sectors is associated with a sector dirty bit that, when set, indicates at least one of its corresponding plurality of cache lines is storing modified data of any other location in a memory hierarchy including the cache memory. The cache subsystem further includes a cache controller configured to, responsive to initiation of a power down procedure, determine only in sectors having a corresponding sector dirty bit set which of the corresponding plurality of cache lines is storing modified data. | 12-26-2013 |
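The sector dirty bit above lets the power-down flush skip whole sectors whose bit is clear, instead of scanning every line. A minimal sketch of that filtering (sector and line counts, and the flush interface, are illustrative assumptions):

```python
class SectoredCache:
    """Cache divided into sectors; each sector has one dirty bit that
    is set whenever any of its lines is modified. On power-down only
    sectors with the bit set are scanned for modified lines."""

    def __init__(self, sectors, lines_per_sector):
        self.dirty_bit = [False] * sectors
        self.line_dirty = [[False] * lines_per_sector
                           for _ in range(sectors)]

    def write(self, sector, line):
        self.line_dirty[sector][line] = True
        self.dirty_bit[sector] = True

    def lines_to_flush(self):
        flush = []
        for s, bit in enumerate(self.dirty_bit):
            if not bit:
                continue  # clean sector: skipped without scanning
            for ln, dirty in enumerate(self.line_dirty[s]):
                if dirty:
                    flush.append((s, ln))
        return flush
```

With typical write locality, most sectors stay clean, so the flush touches only a small fraction of the line-state array.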
20130346684 | METHOD, APPARATUS AND SYSTEM FOR A PER-DRAM ADDRESSABILITY MODE - Techniques and mechanisms for programming an operation mode of a dynamic random access memory (DRAM) device. In an embodiment, a memory controller stores a value in a mode register of a DRAM device, the value specifying whether a per-DRAM addressability (PDA) mode of the DRAM device is enabled. An external contact of the DRAM device is coupled to the memory controller device via a signal line of a data bus. In another embodiment, the memory controller sends a signal to the external contact while the PDA mode of the DRAM device is enabled, the signal to specify whether one or more features of the DRAM device are programmable. | 12-26-2013 |
20130346685 | Memory Component with Pattern Register Circuitry to Provide Data Patterns for Calibration - A memory component includes a memory core comprising dynamic random access memory (DRAM) storage cells and a first circuit to receive external commands. The external commands include a read command that specifies transmitting data accessed from the memory core. The memory component also includes a second circuit to transmit data onto an external bus in response to a read command and pattern register circuitry operable during calibration to provide at least a first data pattern and a second data pattern. During the calibration, a selected one of the first data pattern and the second data pattern is transmitted by the second circuit onto the external bus in response to a read command received during the calibration. Further, at least one of the first and second data patterns is written to the pattern register circuitry in response to a write command received during the calibration. | 12-26-2013 |
20130346686 | MEMORY ACCESS ALIGNMENT IN A DOUBLE DATA RATE ('DDR') SYSTEM - Memory access alignment in a double data rate (‘DDR’) system, including: executing, by a memory controller, one or more write operations to a predetermined address of a DDR memory module, including sending to the DDR memory module a predetermined amount of data of a predetermined pattern along with a data strobe signal; executing, by the memory controller, a plurality of read operations from the predetermined address of the DDR memory module, including capturing data transmitted from the DDR memory module; and determining, by the memory controller, a read adjust value and a write adjust value in dependence upon the data captured in response to the read operations. | 12-26-2013 |
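The calibration above writes a known pattern, reads it back repeatedly, and derives adjust values from which captures match. A common way to turn per-tap pass/fail results into one adjust value is to center in the longest passing window; the abstract does not specify the heuristic, so the following is a hedged sketch of that standard centering approach:

```python
def pick_adjust(tap_results):
    """Given an ordered list of booleans (did the read-back at this
    delay tap match the known pattern?), return the tap at the center
    of the longest passing window."""
    best_start, best_len = 0, 0
    start = None
    # Append a sentinel failure so the final window is closed.
    for i, ok in enumerate(list(tap_results) + [False]):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len == 0:
        raise ValueError("no passing window found")
    return best_start + best_len // 2

# Taps 3..8 pass; centering lands on tap 6.
window = [False] * 3 + [True] * 6 + [False] * 3
```

Separate sweeps of the read-capture delay and the write-strobe delay would yield the read adjust and write adjust values respectively.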
20140006698 | Hybrid Cache State And Filter Tracking Of Memory Operations During A Transaction | 01-02-2014 |
20140006699 | FLEXIBLE COMMAND ADDRESSING FOR MEMORY | 01-02-2014 |
20140006700 | CONFIGURATION FOR POWER REDUCTION IN DRAM | 01-02-2014 |
20140006701 | Hardware and Operating System Support for Persistent Memory On A Memory Bus | 01-02-2014 |
20140013044 | COMPUTER SYSTEM HAVING FUNCTION OF DETECTING WORKING STATE OF MEMORY BANK - A memory bank of a computer system includes a detection unit for detecting working state of a storage chip and a register chip of the memory bank. The detection unit detects whether the storage chip and the register chip work normally and outputs detection signals to a motherboard of the computer system according to the detection of the storage chip and the register chip. The motherboard performs predetermined operations according to the detection signals, thus indicating the working state of the storage chip and the register chip. | 01-09-2014 |
20140013045 | NON-VOLATILE RAM DISK - A method and system are disclosed. In one embodiment the method includes allocating several memory locations within a phase change memory and switch (PCMS) memory to be utilized as a Random Access Memory (RAM) Disk. The RAM Disk is created for use by a software application running in a computer system. The method also includes mapping at least a portion of the allocated amount of PCMS memory to the software application address space. Finally, the method also grants the software application direct access to at least a portion of the allocated amount of the PCMS memory. | 01-09-2014 |
20140019677 | STORING DATA IN PERSISTENT HYBRID MEMORY - Storing data in persistent hybrid memory includes promoting a memory block from non-volatile memory to a cache based on a usage of said memory block according to a promotion policy, tracking modifications to the memory block while in the cache, and writing the memory block back into the non-volatile memory after the memory block is modified in the cache based on a writing policy that keeps a number of the memory blocks that are modified at or below a number threshold while maintaining the memory block in the cache. | 01-16-2014 |
20140019678 | DISK SUBSYSTEM AND METHOD FOR CONTROLLING MEMORY ACCESS - In a prior art disk subsystem formed by duplicating a shared memory (SM) in a DRAM (first area) and an SRAM (second area) faster than the DRAM, the data stored in the SRAM cannot be switched collectively while maintaining access to the SM, so access performance is degraded. According to the present invention, when there is a change in the setting of data stored in the second area (SRAM), data corresponding to the changed setting is copied from the first area (DRAM) of the slave-side SM to the second area (SRAM), and the setting of the data in the second area (SRAM) is changed. After the change, the slave-side SM is switched to become the master-side SM. | 01-16-2014 |
20140025879 | DYNAMIC RANDOM ACCESS MEMORY APPLIED TO AN EMBEDDED DISPLAY PORT - A dynamic random access memory applied to an embedded display port includes a memory core unit, a peripheral circuit unit, and an input/output unit. The memory core unit is used for operating in a first predetermined voltage. The peripheral circuit unit is electrically connected to the memory core unit for operating in a second predetermined voltage, where the second predetermined voltage is lower than 1.1V. The input/output unit is electrically connected to the memory core unit and the peripheral circuit unit for operating in a third predetermined voltage, where the third predetermined voltage is lower than 1.1V. | 01-23-2014 |
20140025880 | SEMICONDUCTOR MEMORY CELL ARRAY HAVING FAST ARRAY AREA AND SEMICONDUCTOR MEMORY INCLUDING THE SAME - A semiconductor memory cell array is provided which includes a first memory cell array area including first group memory cells arranged in a chip in a matrix of rows and columns and having a first operating speed; and a second memory cell array area including second group memory cells arranged in the chip in a matrix of rows and columns and having a second operating speed different from the first operating speed. The first and second memory cell array areas are accessed by addressing of a DRAM controller. | 01-23-2014 |
20140032828 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR COPYING DATA BETWEEN MEMORY LOCATIONS - A system, method, and computer program product are provided for copying data between memory locations. In use, a memory copy instruction is implemented. Additionally, data is copied from a first memory location to a second memory location, utilizing the memory copy instruction. | 01-30-2014 |
20140032829 | Energy Conservation in a Multicore Chip - Technologies are described herein for conserving energy in a multicore chip via selectively refreshing memory directory entries. Some described examples may refresh a dynamic random access memory (DRAM) that stores a cache coherence directory of a multicore chip. More particularly, a directory entry may be accessed in the cache coherence directory stored in the DRAM. Some further examples may identify a cache coherence state of a block associated with the directory entry. In some examples, refresh of the directory entry stored in the DRAM may be selectively disabled based on the identified cache coherence state of the block such that energy associated with the multicore chip is conserved. | 01-30-2014 |
20140032830 | Memory Component with Pattern Register Circuitry to Provide Data Patterns for Calibration - A memory component includes a memory core comprising dynamic random access memory (DRAM) storage cells and a first circuit to receive external commands. The external commands include a read command that specifies transmitting data accessed from the memory core. The memory component also includes a second circuit to transmit data onto an external bus in response to a read command and pattern register circuitry operable during calibration to provide at least a first data pattern and a second data pattern. During the calibration, a selected one of the first data pattern and the second data pattern is transmitted by the second circuit onto the external bus in response to a read command received during the calibration. | 01-30-2014 |
20140040541 | METHOD OF MANAGING DYNAMIC MEMORY REALLOCATION AND DEVICE PERFORMING THE METHOD - A method of managing dynamic memory reallocation includes receiving an input address including a block bit part, a tag part, and an index part and communicating the index part to a tag memory array, receiving a tag group communicated by the tag memory array based on the index part, analyzing the tag group based on the block bit part and the tag part and changing the block bit part and the tag part based on a result of the analysis, and outputting an output address including a changed block bit part, a changed tag part, and the index part. | 02-06-2014 |
20140040542 | SCATTER-GATHER INTELLIGENT MEMORY ARCHITECTURE FOR UNSTRUCTURED STREAMING DATA ON MULTIPROCESSOR SYSTEMS - A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion. | 02-06-2014 |
20140040543 | Providing State Storage in a Processor for System Management Mode - In one embodiment, the present invention includes a processor that has an on-die storage such as a static random access memory to store an architectural state of one or more threads that are swapped out of architectural state storage of the processor on entry to a system management mode (SMM). In this way communication of this state information to a system management memory can be avoided, reducing latency associated with entry into SMM. Embodiments may also enable the processor to update a status of executing agents that are either in a long instruction flow or in a system management interrupt (SMI) blocked state, in order to provide an indication to agents inside the SMM. Other embodiments are described and claimed. | 02-06-2014 |
20140047174 | SECURE DATA PROTECTION WITH IMPROVED READ-ONLY MEMORY LOCKING DURING SYSTEM PRE-BOOT - Generally, this disclosure provides methods and systems for secure data protection with improved read-only memory locking during system pre-boot including protection of Advanced Configuration and Power Interface (ACPI) tables. The methods may include selecting a region of system memory to be protected, the selection occurring in response to a system reset state and performed by a trusted control block (TCB) comprising a trusted basic input/output system (BIOS); programming an address decoder circuit to configure the selected region as read-write; moving data to be secured to the selected region; programming the address decoder circuit to configure the selected region as read-only; and locking the read-only configuration in the address decoder circuit. | 02-13-2014 |
20140047175 | IMPLEMENTING EFFICIENT CACHE TAG LOOKUP IN VERY LARGE CACHE SYSTEMS - A method and circuit for implementing a cache directory and efficient cache tag lookup in very large cache systems, and a design structure on which the subject circuit resides, are provided. A tag cache includes a fast partial large (LX) cache directory maintained separately on chip, apart from a main LX cache directory (LXDIR) stored off chip in dynamic random access memory (DRAM) with large cache data (LXDATA). The tag cache stores the most frequently accessed LXDIR tags. The tag cache contains predefined information enabling access to LXDATA directly on a tag cache hit with a matching address and data present in the LX cache. Only on tag cache misses is the LXDIR accessed to reach LXDATA. | 02-13-2014 |
20140052905 | Cache Coherent Handshake Protocol for In-Order and Out-of-Order Networks - Disclosed herein is a processing network element (NE) comprising at least one receiver configured to receive a plurality of memory request messages from a plurality of memory nodes, wherein each memory request designates a source node, a destination node, and a memory location, and a plurality of response messages to the memory requests from the plurality of memory nodes, wherein each response message designates a source node, a destination node, and a memory location, at least one transmitter configured to transmit the memory requests and memory responses to the plurality of memory nodes, and a controller coupled to the receiver and the transmitter and configured to enforce ordering such that memory requests and memory responses designating the same memory location and the same source node/destination node pair are transmitted by the transmitter in the same order received by the receiver. | 02-20-2014 |
20140052906 | MEMORY CONTROLLER RESPONSIVE TO LATENCY-SENSITIVE APPLICATIONS AND MIXED-GRANULARITY ACCESS REQUESTS - A multi-channel memory controller | 02-20-2014 |
20140059285 | APPARATUS AND METHOD FOR DATA MOVEMENT - The present disclosure relates to an apparatus and method capable of carrying out data movement in a memory of a terminal. The apparatus includes a processor configured to transmit a command for data movement and address information for data movement in a memory to the memory, and the memory configured to perform the data movement in units of word line in the memory by using the address information, in response to reception of the command for moving the data. | 02-27-2014 |
20140059286 | MEMORY ACCESS DEVICE FOR MEMORY SHARING AMONG PLURALITY OF PROCESSORS, AND ACCESS METHOD FOR SAME - Provided is a memory access device for a mechanism that shares main memory among a plurality of CPUs. The present invention includes a plurality of CPUs using the memory as main memory, another function block using the memory as a buffer, a CPU interface which controls access transfers from the plurality of CPUs to the memory, and a DRAM controller which arbitrates the access transfers to the memory. The CPU interface holds access requests from the plurality of CPUs; receives and stores the address, data transfer mode, and data size of each access; and notifies the DRAM controller of the access requests. Upon receiving grant signals for the access requests, the CPU interface sends the corresponding information to the DRAM controller. The DRAM controller, on the basis of its access arbitration, determines the CPUs whose transfers have been granted and sends the grant signals to the CPU interface. | 02-27-2014 |
20140068167 | RESULTS GENERATION FOR STATE MACHINE ENGINES - A state machine engine includes a storage element, such as a (e.g., match) results memory. The storage element is configured to receive a result of an analysis of data. The storage element is also configured to store the result in a particular portion of the storage element based on a characteristic of the result. The storage element is additionally configured to store a result indicator corresponding to the result. Other state machine engines and methods are also disclosed. | 03-06-2014 |
20140068168 | TILE BASED INTERLEAVING AND DE-INTERLEAVING FOR DIGITAL SIGNAL PROCESSING - Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses. | 03-06-2014 |
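The two-stage de-interleaving described in the entry above hinges on a non-linear address sequence that undoes row-column interleaving. The Python sketch below is an illustrative model only: the function names are mine, and the patent's two memory-transfer stages are collapsed into a single address permutation.

```python
def rowcol_deinterleave_addrs(rows, cols):
    """Read-address sequence that undoes row-column interleaving: the
    interleaver wrote the block row by row and read it column by column,
    so element (r, c) of the original block sits at interleaved index
    c * rows + r."""
    return [c * rows + r for r in range(rows) for c in range(cols)]

def deinterleave(interleaved, rows, cols):
    """Apply the non-linear read sequence to restore the original order."""
    return [interleaved[a] for a in rowcol_deinterleave_addrs(rows, cols)]
```

For a 2x3 block, the interleaved stream [0, 3, 1, 4, 2, 5] comes back as [0, 1, 2, 3, 4, 5]; in hardware the same permutation is split across two stages so that the DRAM itself is always accessed in efficient linear bursts.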
20140068169 | Independent Threading Of Memory Devices Disposed On Memory Modules - A memory module includes a substrate having signal lines thereon that form a control path and a plurality of data paths. A plurality of memory devices are mounted on the substrate. Each memory device is coupled to the control path and to a distinct data path. The memory module includes control circuitry to enable each memory device to process a distinct respective memory access command in a succession of memory access commands and to output data on the distinct data path in response to the processed memory access command. | 03-06-2014 |
20140068170 | MEMORY ADDRESS GENERATION FOR DIGITAL SIGNAL PROCESSING - Memory address generation for digital signal processing is described. In one example, a digital signal processing system-on-chip utilises an on-chip memory space that is shared between functional blocks of the system. An on-chip DMA controller comprises an address generator that can generate sequences of read and write memory addresses for data items being transferred between the on-chip memory and a paged memory device, or internally within the system. The address generator is configurable and can generate non-linear sequences for the read and/or write addresses. This enables aspects of interleaving/deinterleaving operations to be performed as part of a data transfer between internal or paged memory. As a result, a dedicated memory for interleaving operations is not required. In further examples, the address generator can be configured to generate read and/or write addresses that take into account limitations of particular memory devices when performing interleaving, such as DRAM. | 03-06-2014 |
20140075106 | METHODS OF COMMUNICATING TO DIFFERENT TYPES OF MEMORY MODULES IN A MEMORY CHANNEL - A computer system is disclosed including a printed circuit board (PCB) including a plurality of traces, at least one processor mounted to the PCB to couple to some of the plurality of traces, a heterogeneous memory channel including a plurality of sockets coupled to a memory channel bus of the PCB, and a memory controller coupled between the at least one processor and the heterogeneous memory channel. The plurality of sockets are configured to receive a plurality of different types of memory modules. The memory controller may be a programmable heterogeneous memory controller that flexibly adapts to the memory channel bus to control access to each of the different types of memory modules in the heterogeneous memory channel. | 03-13-2014 |
20140075107 | INTERFACE FOR STORAGE DEVICE ACCESS OVER MEMORY BUS - A nonvolatile storage or memory device is accessed over a memory bus. The memory bus has an electrical interface typically used for volatile memory devices. A controller coupled to the bus sends synchronous data access commands to the nonvolatile memory device, and reads the response from the device bus based on an expected timing of a reply from the nonvolatile memory device. The controller determines the expected timing based on when the command was sent, and characteristics of the nonvolatile memory device. The controller may not need all the electrical signal lines available on the memory bus, and could issue data access commands to different groups of nonvolatile memory devices over different groups of electrical signal lines. The memory bus may be available and configured for either use with a memory controller and volatile memory devices, or a storage controller and nonvolatile memory devices. | 03-13-2014 |
20140089572 | DISTRIBUTED PAGE-TABLE LOOKUPS IN A SHARED-MEMORY SYSTEM - The disclosed embodiments provide a system that performs distributed page-table lookups in a shared-memory multiprocessor system with two or more nodes, where each of these nodes includes a directory controller that manages a distinct portion of the system's address space. During operation, a first node receives a request for a page-table entry that is located at a physical address that is managed by the first node. The first node accesses its directory controller to retrieve the page-table entry, and then uses the page-table entry to calculate the physical address for a subsequent page-table entry. The first node determines the home node (e.g., the managing node) for this calculated physical address, and sends a request for the subsequent page-table entry to that home node. | 03-27-2014 |
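The distributed walk described in the entry above alternates between computing the next entry's physical address and locating the node that manages it. A minimal Python sketch, assuming 8-byte page-table entries, 4 KiB frames, and an even striping of 1 GiB regions across nodes — all illustrative choices, not taken from the patent:

```python
def home_node(phys_addr, num_nodes, region_bits=30):
    """Map a physical address to the node whose directory controller
    manages it; striping 1 GiB regions evenly across nodes is an
    assumed policy for illustration."""
    return (phys_addr >> region_bits) % num_nodes

def forward_walk_step(pte, index, num_nodes):
    """Compute the physical address of the next-level page-table entry
    from the current entry's frame base and a virtual-address index
    (8-byte entries, 4 KiB frames assumed), then pick the home node
    the request should be forwarded to."""
    next_addr = (pte & ~0xFFF) | (index << 3)
    return next_addr, home_node(next_addr, num_nodes)
```

At each level the requesting node sends the lookup to the returned home node instead of fetching the entry across the interconnect itself.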
20140089573 | METHOD FOR ACCESSING MEMORY DEVICES PRIOR TO BUS TRAINING - Embodiments of the invention describe apparatuses, systems and methods for enabling memory device access prior to bus training, thereby enabling firmware image storage in non-flash nonvolatile memory, such as DDR DRAM. The increasing size of firmware images, such as BIOS, MRC, and ME firmware, makes current non-volatile storage solutions, such as SPI flash memory, impractical; executing BIOS code in flash is slow, and having a separate non-volatile memory device increases device costs. Furthermore, solutions such as Cache-as-RAM, which are utilized for running the pre-memory BIOS code, are limited by the cache size that is not scalable to the increasing complexity of BIOS code. | 03-27-2014 |
20140089574 | SEMICONDUCTOR MEMORY DEVICE STORING MEMORY CHARACTERISTIC INFORMATION, MEMORY MODULE AND MEMORY SYSTEM HAVING THE SAME, AND OPERATING METHOD OF THE SAME - A semiconductor memory device storing memory characteristic information, a memory module including the semiconductor memory device, a memory system, and an operating method of the semiconductor memory device. The semiconductor memory device may include a cell array including a plurality of areas; a command decoder configured to decode a command and generate an internal command; and an information storage unit configured to store characteristic information of at least one of the plurality of areas. When a first command and a first row address accompanying the first command are received, characteristic information of the area corresponding to the first row address is provided to an external device. | 03-27-2014 |
20140089575 | Semiconductor Memory Asynchronous Pipeline - An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it. | 03-27-2014 |
20140095779 | PROCESSING MEMORY ACCESS INSTRUCTIONS THAT HAVE DUPLICATE MEMORY INDICES - A method of an aspect includes receiving an instruction indicating a first source packed memory indices, a second source packed data operation mask, and a destination storage location. Memory indices of the packed memory indices are compared with one another. One or more sets of duplicate memory indices are identified. Data corresponding to each set of duplicate memory indices is loaded only once. The loaded data corresponding to each set of duplicate memory indices is replicated for each of the duplicate memory indices in the set. A packed data result is stored in the destination storage location in response to the instruction. The packed data result includes data elements from memory locations that are indicated by corresponding memory indices of the packed memory indices when not blocked by corresponding elements of the packed data operation mask. | 04-03-2014 |
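The load-once-and-replicate behavior in the entry above can be modeled in a few lines of Python. The sketch below is a software analogy of the described gather, not the hardware data path; the names and the zero default for masked lanes are my assumptions.

```python
def gather_dedup(memory, indices, mask):
    """Gather masked elements, loading each distinct index only once and
    replicating the loaded value for duplicate indices. Returns the
    gathered elements and the number of loads actually performed."""
    loaded = {}      # one load per distinct index
    result = []
    for idx, m in zip(indices, mask):
        if not m:
            result.append(0)             # blocked lane: assumed default
            continue
        if idx not in loaded:
            loaded[idx] = memory[idx]    # single load for this index
        result.append(loaded[idx])       # replicate for duplicates
    return result, len(loaded)
```

With indices [1, 2, 1, 3] only three loads are issued, while all four result lanes are filled.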
20140095780 | DISTRIBUTED ROW HAMMER TRACKING - A memory controller issues a targeted refresh command in response to detection by a distributed detector. A memory device includes detection logic that monitors for a row hammer event, which is a threshold number of accesses to a row within a time threshold that can cause data corruption to a physically adjacent row (a “victim” row). The memory device sends an indication of the row hammer event to the memory controller. In response to the row hammer event indication, the memory controller sends one or more commands to the memory device to cause the memory device to perform a targeted refresh that will refresh the victim row. | 04-03-2014 |
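A detector of the kind described above needs only per-row activation counts within a time window. The Python sketch below is illustrative: the class name, threshold, window, and the ±1 victim-row policy are assumptions for demonstration, not the patent's parameters or implementation.

```python
class RowHammerDetector:
    """Flags a row hammer event when a row is activated more than
    `threshold` times within `window` time units, and reports the
    physically adjacent rows as potential victims."""
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.window = window
        self.hits = {}   # row -> activation timestamps inside the window

    def activate(self, row, now):
        # drop activations that have aged out of the window
        times = [t for t in self.hits.get(row, []) if now - t < self.window]
        times.append(now)
        self.hits[row] = times
        if len(times) > self.threshold:
            # signal the controller to target-refresh the victim rows
            return [row - 1, row + 1]
        return None
```

On a hit, a real controller would respond with a targeted refresh command for the reported victim rows.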
20140095781 | PHASE CHANGE MEMORY IN A DUAL INLINE MEMORY MODULE - Subject matter disclosed herein relates to management of a memory device. | 04-03-2014 |
20140101380 | MANAGING BANKS IN A MEMORY SYSTEM - Systems and methods are provided that facilitate memory storage in a memory device. The system contains a memory controller and a memory array communicatively coupled to the memory controller. The memory controller sends commands to the memory array and the memory array writes or retrieves data contained therein based upon the command. The memory controller can monitor multiple banks and manage bank activations. Accordingly, memory access overhead can be reduced and memory devices can be more efficient. | 04-10-2014 |
20140101381 | MANAGING BANKS IN A MEMORY SYSTEM - Systems and methods are provided that facilitate memory storage in a multi-bank memory device. The system contains a memory controller and a memory array communicatively coupled to the memory controller. The memory controller sends commands to the memory array and the memory array updates or retrieves data contained therein based upon the command. If the memory controller detects a pattern of memory requests, the memory controller can issue a preemptive activation request to the memory array. Accordingly, memory access overhead is reduced. | 04-10-2014 |
20140101382 | DATA BUFFER WITH A STROBE-BASED PRIMARY INTERFACE AND A STROBE-LESS SECONDARY INTERFACE - A data buffer with a strobe-based primary interface and a strobe-less secondary interface used on a memory module is described. One memory module includes an address buffer, the data buffer and multiple dynamic random-access memory (DRAM) devices. The address buffer provides a timing reference to the data buffer and to the DRAM devices for one or more transactions between the data buffer and the DRAM devices via the strobe-less secondary interface. | 04-10-2014 |
20140108716 | DYNAMIC RANDOM ACCESS MEMORY FOR STORING RANDOMIZED DATA AND METHOD OF OPERATING THE SAME - A dynamic random access memory (DRAM) includes a memory cell array, a data input/output circuit, and a data randomizer configured to randomize data to be stored in the memory cell array. The data randomizer includes an encoder configured to generate write data by encoding input data received from the data input/output circuit using a randomization code and to output the write data to the memory cell array. The data randomizer further includes a decoder configured to generate output data by decoding read data received from the memory cell array using the randomization code and to output the output data to the data input/output circuit. | 04-17-2014 |
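A data randomizer of this kind typically XORs the data with a pseudorandom sequence, which makes the decoder identical to the encoder. The sketch below uses a 7-bit LFSR with polynomial x^7 + x^6 + 1; the width, polynomial, and seed are illustrative assumptions, not taken from the patent.

```python
def prbs7(seed, n):
    """Generate n bits from a 7-bit LFSR with taps at bits 6 and 5."""
    state, out = seed & 0x7F, []
    for _ in range(n):
        out.append(state & 1)
        feedback = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | feedback) & 0x7F
    return out

def randomize(bits, seed=0x5A):
    """XOR data bits with the PRBS; applying the same routine twice
    restores the original data, so it serves as encoder and decoder."""
    return [b ^ p for b, p in zip(bits, prbs7(seed, len(bits)))]
```

The round trip `randomize(randomize(data))` returns the original data, matching the encoder/decoder symmetry described in the abstract.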
20140115244 | APPARATUS, SYSTEM AND METHOD FOR PROVIDING A PERSISTENT LEVEL-TWO CACHE - Aspects of the present disclosure disclose systems and methods for providing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. | 04-24-2014 |
20140115245 | APPARATUS SYSTEM AND METHOD FOR PROVIDING RAW DATA IN A LEVEL-TWO CACHE - Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. Any data stored in the level-two cache may be stored in a particular version or format of data known as “raw” data, in contrast to storing the data in a “cooked” version, as is typically stored in a level-one cache. | 04-24-2014 |
20140115246 | APPARATUS, SYSTEM AND METHOD FOR MANAGING EMPTY BLOCKS IN A CACHE - Aspects of the present disclosure disclose systems and methods for recognizing multiple and distinct references within a cache that identify or otherwise provide access to empty blocks of data. Multiple references identifying empty blocks of data are associated with a single block of empty data permanently stored in the cache. Subsequently, each time an empty block of data is added to the cache, a reference corresponding to the empty block is mapped to a generic empty block of data stored in the cache. When a reference is removed or deleted from the cache, only the reference is deleted; the single generic block of empty data continues to reside in the cache. | 04-24-2014 |
20140122789 | MEMORY CONTROL APPARATUS AND MEMORY CONTROL METHOD - In a memory control apparatus that issues commands for a bank corresponding to a transfer request, transfer requests for the corresponding bank are stored. The column address of the earliest stored transfer request is compared with the column addresses of a plurality of subsequent transfer requests. Based on the comparison result, it is determined whether to issue a command with precharge or a command without precharge for the earliest stored transfer request. The determined command is then issued. | 05-01-2014 |
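The decision in the entry above amounts to: keep the page open (no precharge) when a queued request will hit the same page, otherwise close it with auto-precharge. A minimal Python sketch, with the address-to-page extraction supplied by the caller; the function name and the "RD"/"RD_AP" labels are illustrative, not standard DRAM mnemonics.

```python
def choose_command(first_req, pending, page_of):
    """Issue the no-precharge variant when any pending request targets
    the same page as the earliest stored request; otherwise use the
    auto-precharge variant."""
    first_page = page_of(first_req)
    if any(page_of(req) == first_page for req in pending):
        return "RD"      # another hit is queued: leave the page open
    return "RD_AP"       # no reuse expected: read with auto-precharge
```

Avoiding a needless precharge/activate pair when subsequent requests hit the same page is what saves the access overhead.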
20140129765 | METHOD TO IMPROVE DATA RELIABILITY IN DRAM SSD USING ASYNCHRONOUS LOGGING AND INCREMENTAL BACKUP - Data back-up and recovery methods for DRAM SSDs and other high performance disks are provided. During operation, write events to the DRAM SSD are asynchronously backed up onto a back-up HDD storage disk from an in-memory buffer. Should a DRAM SSD failure occur, the system can continue to operate, albeit at a lower performance, using the back-up HDD storage disk. Should the main power fail, data remaining in the in-memory buffer is flushed to the back-up HDD storage disk, and write events that did not make it to the in-memory buffer due to insufficient space are incrementally backed up from the DRAM SSD to the secondary storage. Once main power returns, data from the back-up storage disk and the secondary storage are transferred to the DRAM SSD. | 05-08-2014 |
20140129766 | INTELLIGENT DUAL DATA RATE (DDR) MEMORY CONTROLLER - Various embodiments include systems, methods, and devices configured to reduce the amount of information communicated via system buses/fabrics when transferring data to and from one or more memories. A system master component may send a source address and a destination address to a direct memory access controller inside of, or adjacent to, a memory controller. The direct memory access controller and/or the memory controller may determine whether the source and destination addresses are inside relevant portions of the memory. When both the source and destination are inside the relevant portion of the memory, the memory controller may perform a memory-to-memory data transfer without accessing the system bus. | 05-08-2014 |
20140129767 | APPARATUS AND METHOD FOR IMPLEMENTING A MULTI-LEVEL MEMORY HIERARCHY - A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” | 05-08-2014 |
20140136773 | PROCESSOR MEMORY OPTIMIZATION VIA PAGE ACCESS COUNTING - To utilize the most efficient memory available to a mobile processor, page access counters may be used to record utilization associated with multiple different memory types. In one embodiment, an operating system routine may analyze the page access counters to determine low utilization pages and high utilization pages to dynamically assign between the multiple different memory types, which may include a more efficient memory type having greater capacity, greater throughput, lower latency, or lower power consumption than a less efficient memory type. As such, in response to detecting a high utilization page in the less efficient memory or a low utilization page in the more efficient memory, contents associated therewith may be copied to the more efficient memory and the less efficient memory, respectively, and virtual-to-physical address mappings may be changed to reflect the reassignment. | 05-15-2014 |
20140143487 | SYSTEM AND METHOD FOR MANAGING TRANSACTIONS - A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate a cache line of the cache that stored the most updated version of the first address without sending the most updated version of the content stored at the first address from the cache to a memory module that differs from the cache if a length of the data unit equals a length of the cache line. | 05-22-2014 |
20140149651 | Providing Extended Cache Replacement State Information - In an embodiment, a processor includes a decode logic to receive and decode a first memory access instruction to store data in a cache memory with a replacement state indicator of a first level, and to send the decoded first memory access instruction to a control logic. In turn, the control logic is to store the data in a first way of a first set of the cache memory and to store the replacement state indicator of the first level in a metadata field of the first way responsive to the decoded first memory access instruction. Other embodiments are described and claimed. | 05-29-2014 |
20140149652 | MEMORY SYSTEM AND METHOD OF MAPPING ADDRESS USING THE SAME - In one example embodiment, a memory system includes a memory module and a memory controller. The memory module is configured to generate density information of the memory module based on the number of bad pages of the memory module, the bad pages being pages that have a fault. The memory controller is configured to map a continuous physical address to a dynamic random access memory (DRAM) address of the memory module based on the density information received from the memory module. | 05-29-2014 |
20140164689 | SYSTEM AND METHOD FOR MANAGING PERFORMANCE OF A COMPUTING DEVICE HAVING DISSIMILAR MEMORY TYPES - Systems and methods are provided for managing performance of a computing device having dissimilar memory types. An exemplary embodiment comprises a method for interleaving dissimilar memory devices. The method involves determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio. Memory address requests are distributed from one or more processing units to the dissimilar memory devices according to the interleave bandwidth ratio. | 06-12-2014 |
20140164690 | SYSTEM AND METHOD FOR ALLOCATING MEMORY TO DISSIMILAR MEMORY DEVICES USING QUALITY OF SERVICE - Systems and methods are provided for allocating memory to dissimilar memory devices. An exemplary embodiment includes a method for allocating memory to dissimilar memory devices. An interleave bandwidth ratio is determined, which comprises a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio to define two or more memory zones having different performance levels. Memory address requests are allocated to the memory zones based on a quality of service (QoS). | 06-12-2014 |
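Both entries above rest on the interleave bandwidth ratio. With a 2:1 ratio, two of every three consecutive pages land in the faster device, so aggregate traffic is split in proportion to what each device can sustain. The Python sketch below is an illustrative round-robin realization; the function name and page-granular policy are assumptions, not the patent's mechanism.

```python
def interleave_assign(page_index, bw_fast, bw_slow):
    """Assign sequential pages to two dissimilar memories in proportion
    to their bandwidths. With bw_fast:bw_slow = 2:1 the pattern is
    fast, fast, slow, fast, fast, slow, ..."""
    period = bw_fast + bw_slow
    return "fast" if page_index % period < bw_fast else "slow"
```

A QoS-aware allocator, as in the second entry, would additionally steer latency-critical requests toward the zone built from the faster device.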
20140164691 | MEMORY ARCHITECTURE FOR DISPLAY DEVICE AND CONTROL METHOD THEREOF - A memory architecture for a display device and a control method thereof are provided. The memory architecture includes a display data memory and a memory controller. The display data memory includes N sub-memories and N×M arbiters, wherein N is a positive integer and M is a positive integer equal to or greater than 2. Each sub-memory includes M memory blocks divided by address. Each set of M arbiters is coupled to the M memory blocks of one sub-memory. The memory controller, coupled to the N×M arbiters, generates N×M sets of request signals and output address signals according to a set of an input request signal and an input address signal, and transmits them to the N×M arbiters to sequentially control the N×M arbiters. | 06-12-2014 |
20140181385 | FLEXIBLE UTILIZATION OF BLOCK STORAGE IN A COMPUTING SYSTEM - Embodiments of the present invention disclose a method, computer program product, and system for utilizing a block storage device as Dynamic Random-Access Memory (DRAM) space, wherein a computer includes at least one DRAM module and at least one block storage device interfaced to the computer using a double data rate (DDR) interface. During boot up, the computer configures DRAM and block storage devices of the computer for utilization as DRAM or block storage. Then the computer determines that more DRAM space is required. Responsive to determining that more DRAM space is required, the computer transforms a block storage device into DRAM space. Once the computer determines that the transformed block storage device is no longer needed as DRAM space, the computer transforms it back to block storage space. | 06-26-2014 |
20140181386 | METHOD AND APPARATUS FOR POWER REDUCTION FOR DATA MOVEMENT - A method of and device for transferring data is provided. The method includes determining a difference between a data segment that was transferred last relative to each of one or more data segments available to be transferred next. In some embodiments, for so long as no data segment available to be sent has been waiting too long, the data segment chosen to be sent next is the data segment having the smallest difference relative to the data segment transferred last. The chosen data segment is then transmitted as the next data segment transferred. | 06-26-2014 |
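The selection rule described above can be sketched as: send the candidate segment whose bit pattern differs least from the last transferred segment, unless some candidate has waited too long. All names, the Hamming-distance metric, and the shape of the starvation guard below are illustrative assumptions.

```python
def pick_next_segment(last, candidates, max_wait, waits):
    """Pick the next data segment to transfer. A candidate that has
    waited at least `max_wait` rounds is sent immediately; otherwise
    the candidate with the fewest bits differing from the last
    transferred segment wins, minimizing bus toggling."""
    for seg in candidates:
        if waits.get(seg, 0) >= max_wait:
            return seg                       # starvation guard
    return min(candidates, key=lambda s: bin(last ^ s).count("1"))
```

Fewer differing bits means fewer signal transitions on the bus, which is where the power saving comes from.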
20140181387 | HYBRID CACHE - Data caching methods and systems are provided. A method is provided for a hybrid cache system that dynamically changes modes of one or more cache rows of a cache between an un-split mode having a first tag field and a first data field, and a split mode having a second tag field, a second data field smaller than the first data field, and a mapped page field, to improve the cache access efficiency of a workflow being executed in a processor. A hybrid cache system is provided in which the cache is configured to operate one or more cache rows in an un-split mode or in a split mode. The system is configured to dynamically change modes of the cache rows from the un-split mode to the split mode to improve the cache access efficiency of a workflow being executed by the processor. | 06-26-2014 |
20140181388 | Method And Apparatus To Implement Lazy Flush In A Virtually Tagged Cache Memory - A processor includes a processor core including an execution unit to execute instructions, and a cache memory. The cache memory includes a controller to update each of a plurality of stale indicators in response to a lazy flush instruction. Each stale indicator is associated with respective data, and each updated stale indicator is to indicate that the respective data is stale. The cache memory also includes a plurality of cache lines. Each cache line is to store corresponding data and a foreground tag that includes a respective virtual address associated with the corresponding data, and that includes the associated stale indicator. Other embodiments are described and claimed. | 06-26-2014 |
20140181389 | INSTALLATION CACHE - Data caching methods and systems are provided. The data cache method loads data into an installation cache and a cache (simultaneously or serially) and returns data from the installation cache when the data has not completely loaded into the cache. The data cache system includes a processor, a memory coupled to the processor, a cache coupled to the processor and the memory and an installation cache coupled to the processor and the memory. The system is configured to load data from the memory into the installation cache and the cache (simultaneously or serially) and return data from the installation cache to the processor when the data has not completely loaded into the cache. | 06-26-2014 |
20140181390 | METHOD, APPARATUS AND SYSTEM FOR EXCHANGING COMMUNICATIONS VIA A COMMAND/ADDRESS BUS - Techniques and mechanisms for exchanging information from a memory controller to a memory device via a command/address bus. In an embodiment, the memory device samples a first portion of a command during a first sample period and samples a second portion of the command during a second sample period, the first portion and second portion exchanged via the command/address bus. The first sample period and the second sample period are concurrent with, respectively, a first transition of a clock signal and a second transition of the clock signal. In another embodiment, a mode of the memory device determines a relationship between the first transition and the second transition. | 06-26-2014 |
20140181391 | HARDWARE CHIP SELECT TRAINING FOR MEMORY USING WRITE LEVELING MECHANISM - A method of training chip select for a memory module. The method includes programming a memory controller into a mode wherein a command signal is active for a programmable time period. The method then programs a programmable delay line of the chip select with a delay value and performs initialization of the memory module. The memory module is then placed in a write leveling mode wherein placing the memory module in the write leveling mode toggles a state of the chip select. A write leveling procedure is then performed and a response thereto is determined from the memory module. A determination is made whether the memory module is in a pass state or an error state based on the response. | 06-26-2014 |
20140181392 | HARDWARE CHIP SELECT TRAINING FOR MEMORY USING READ COMMANDS - A method of training chip select for a memory module. The method includes programming a memory controller into a mode wherein a command signal is active for a programmable time period. The method then programs a programmable delay line of the chip select with a delay value and performs initialization of the memory module. A read command is then sent to the memory module to toggle a state of the chip select. A number of data strobe signals sent by the memory module in response to the read command are counted. A determination is made whether the memory module is in a pass state or an error state based on a result of the counting. | 06-26-2014 |
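The pass/error decision in this training method can be sketched as a strobe-count check swept across delay values. Function names and the `issue_read` callback are illustrative assumptions, not from the patent:

```python
def chip_select_pass(strobe_count, expected_strobes):
    """A delay setting passes only if the module returned the expected
    number of data strobes in response to the read command (e.g. one
    strobe edge per beat of the read burst)."""
    return strobe_count == expected_strobes

def train_chip_select(issue_read, delay_values, expected_strobes):
    """Sweep the programmable delay line and keep the delay values that
    land in the pass state. `issue_read(delay)` is a hypothetical
    callback that programs the delay, sends a read to the module, and
    returns the number of strobes counted."""
    return [d for d in delay_values
            if chip_select_pass(issue_read(d), expected_strobes)]
```

A real controller would then pick a final setting from the middle of the passing window.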
20140181393 | Memory Systems and Methods for Dynamically Phase Adjusting a Write Strobe and Data to Account for Receive-Clock Drift - A memory system includes a memory controller that writes data to and reads data from a memory device. A write data strobe accompanying the write data indicates to the memory device when the write data is valid, whereas a read strobe accompanying data from the memory device indicates to the memory controller when the read data is valid. The memory controller adaptively controls the phase of the write data strobe to compensate for timing drift at the memory device. The memory controller uses read signals as a measure of the drift. | 06-26-2014 |
20140189224 | TRAINING FOR MAPPING SWIZZLED DATA TO COMMAND/ADDRESS SIGNALS - Data pin mapping and delay training techniques. Valid values are detected on a command/address (CA) bus at a memory device. A first part of the pattern (high phase) is transmitted via a first subset of data pins on the memory device in response to detecting values on the CA bus; a second part of the pattern (low phase) is transmitted via a second subset of data pins on the memory device in response to detecting values on the CA bus. Signals are sampled at the memory controller from the data pins while the CA pattern is being transmitted to obtain a first sample (high phase) and a second sample (low phase) by analyzing the first and the second subsets of sampled data pins. This analysis, combined with knowledge of the pattern transmitted on the CA bus, identifies the unknown data-pin mapping. Varying the transmitted CA patterns and the resulting feedback sampled on the memory controller data signals allows delay training of the CA/CTRL/CLK signals with and without prior knowledge of the data-pin mapping. | 07-03-2014 |
20140189225 | Independent Control Of Processor Core Retention States - In an embodiment, a processor includes a first processor core, a second processor core, a first voltage regulator to provide a first voltage to the first processor core with a first active value when the first processor core is active, and a second voltage regulator to provide a second voltage to the second processor core with a second active value when the second processor core is active. Responsive to a request to place the first processor core in a first low power state with an associated first low power voltage value, the first voltage regulator is to reduce the first voltage to a second low power voltage value that is less than the first low power voltage value, independent of the second voltage regulator. First data stored in a first register of the first processor core is retained at the second low power value. Other embodiments are described and claimed. | 07-03-2014 |
20140189226 | MEMORY DEVICE AND MEMORY SYSTEM HAVING THE SAME - A memory device includes a memory cell array, a multi-purpose register (MPR) and a control unit. The memory cell array includes a plurality of memory blocks. The multi-purpose register (MPR) stores physical address information for each of the plurality of memory blocks. The control unit outputs the physical address information stored in the multi-purpose register in response to an MPR read command received from a memory controller. | 07-03-2014 |
20140189227 | MEMORY DEVICE AND A MEMORY MODULE HAVING THE SAME - A memory device is provided. The memory device includes a plurality of memory chips, and a buffer chip connected to the plurality of memory chips. The plurality of memory chips and the buffer chip are disposed in a stack. A first input/output (IO) port of the buffer chip is connected in series to an external device, and a second IO port of the buffer chip is connected in parallel to IO ports of each of the plurality of memory chips. | 07-03-2014 |
20140195728 | DATA SAMPLING ALIGNMENT METHOD FOR MEMORY INTERFACE - The present disclosure relates to an interface comprising a memory controller and a memory unit coupled to the memory controller and configured to communicate with the memory controller through a first signal and a second signal. The interface further comprises a determination unit comprising judgment logic configured to send a control signal configured to align the first signal with the second signal. The memory controller further comprises a digitally-controlled delay line (DCDL) coupled to the determination unit and configured to receive the control signal, wherein the determination unit instructs the DCDL to adjust a delay of the first signal to align the first signal with the second signal. The memory controller further comprises a value register configured to store a signal delay value, contained within the control signal, corresponding to alignment between the first signal and the second signal. Other devices and methods are disclosed. | 07-10-2014 |
20140195729 | Memory Having Improved Reliability for Certain Data Types - A method minimizes soft error rates within caches by configuring a cache with certain sections corresponding to bitcell topologies that are more resistant to soft errors and then using these sections to store modified data. | 07-10-2014 |
20140195730 | ROBUST AND SECURE MEMORY SUBSYSTEM - The present disclosure is generally directed to a more robust memory subsystem having an improved architecture for managing a memory space. In one embodiment, a method is provided that includes receiving a memory access request from a memory controller and attempting to access the requested data from a first level of memory maintained on the memory device that contains the map cache. The method is further configured to perform a lookup in the map cache to determine whether the requested address is resident in the first level of memory. If the requested data is not resident in the first level of memory, the method causes a re-map address to be calculated that identifies a location of the requested data in a lower level of memory. Conversely, if the requested data is resident in the first level of memory, the method provides the memory controller with access to the requested data. | 07-10-2014 |
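The lookup-then-remap flow in this abstract reduces to a few lines. A sketch under illustrative assumptions (the map cache modeled as a dict, the re-map calculation as a caller-supplied function; neither is from the patent):

```python
def access(address, map_cache, remap):
    """If the requested address is resident in the first level of memory
    (tracked by the map cache), grant access to the cached data;
    otherwise compute a re-map address locating the data in a lower
    level of memory."""
    if address in map_cache:
        return ("first_level", map_cache[address])
    return ("lower_level", remap(address))
```

The memory controller itself never sees the re-mapping; it simply receives data or a redirected access.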
20140201435 | HETEROGENEOUS MEMORY SYSTEMS, AND RELATED METHODS AND COMPUTER-READABLE MEDIA FOR SUPPORTING HETEROGENEOUS MEMORY ACCESS REQUESTS IN PROCESSOR-BASED SYSTEMS - Heterogeneous memory systems, and related methods and computer-readable media for supporting heterogeneous memory access requests in processor-based systems are disclosed. A heterogeneous memory system is comprised of a plurality of homogeneous memories that can be accessed for a given memory access request. Each homogeneous memory has particular power and performance characteristics. In this regard, a memory access request can be advantageously routed to one of the homogeneous memories in the heterogeneous memory system based on the memory access request, and power and/or performance considerations. The heterogeneous memory access request policies may be predefined or determined dynamically based on key operational parameters, such as read/write type, frequency of page hits, and memory traffic, as non-limiting examples. In this manner, memory access request times can be optimized to be reduced without the need to make tradeoffs associated with only having one memory type available for storage. | 07-17-2014 |
20140201436 | DRAM Memory Interface - A DRAM memory interface is proposed. | 07-17-2014 |
20140208015 | MEMORY CONTROL SYSTEM AND POWER CONTROL METHOD - A memory control system includes: a plurality of I/O circuits; and a power control circuit that performs, when a predetermined condition for usage states of memories is satisfied, and an unused memory is present among the memories, a power consumption reduction process for causing a target I/O circuit to consume less power than an other one of the I/O circuits, the target I/O circuit being an I/O circuit among the I/O circuits that is connected to the unused memory. | 07-24-2014 |
20140215140 | Data Mask Encoding in Data Bit Inversion Scheme - Devices, circuits, and methods for data mask and data bit inversion encoding and decoding for a memory circuit. According to these methods and circuits, the number of data lines/pins required to encode data mask information and data bit inversion information can be reduced. In an embodiment the data mask and data inversion functions for a portion of data, such as a data word, can be merged onto a common pin/data line. In other embodiments, a data mask instruction can be conveyed through a transmitted data word itself without using any extra pins. According to these embodiments, the pin overhead can be reduced from two pins per byte to one pin per byte. | 07-31-2014 |
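One way the merged DM/DBI pin described above could work is sketched below. The reserved-word convention (an all-ones data word together with the asserted pin signals a mask) and the function names are illustrative assumptions, not taken from the patent; the DBI rule itself (invert when more than half the bits are 1) is the standard minimum-ones scheme:

```python
def encode_byte(data, mask):
    """Merge data-mask (DM) and data-bus-inversion (DBI) onto one shared
    pin per byte. A masked byte is sent as a reserved word (0xFF here)
    with the pin asserted; otherwise the pin carries the DBI decision."""
    if mask:
        return 0xFF, 1                 # reserved word + pin => mask
    if bin(data).count("1") > 4:       # DBI: invert to reduce 1s on the bus
        return data ^ 0xFF, 1
    return data, 0

def decode_byte(wire, pin):
    """Inverse mapping on the receiver side: returns (data, masked)."""
    if pin == 1 and wire == 0xFF:
        return None, True
    if pin == 1:
        return wire ^ 0xFF, False
    return wire, False
```

Note the encoding is unambiguous: a genuine data byte of 0xFF inverts to 0x00 on the wire, so the reserved 0xFF-with-pin combination can only mean "masked."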
20140223091 | SYSTEM AND METHOD FOR MANAGEMENT OF UNIQUE ALPHA-NUMERIC ORDER MESSAGE IDENTIFIERS WITHIN DDR MEMORY SPACE - An embedded hardware-based risk system is provided that has an apparatus and method for the management of unique alpha-numeric order message identifiers within DDR memory space restrictions. The apparatus provides a new design for assigning orders (CLOrID) to memory, and a corresponding method, specifically intended not to impact latency until memory is over 90% full. | 08-07-2014 |
20140237174 | Highly Efficient Design of Storage Array Utilizing Multiple Cache Lines for Use in First and Second Cache Spaces and Memory Subsystems - A method of operating a cache memory includes the step of storing a set of data in a first space in a cache memory, the set of data being associated with a set of tags. A subset of the set of data is stored in a second space in the cache memory, the subset associated with a tag of a subset of the set of tags. The tag portion of an address is compared with the tag associated with the subset of data in the second space in the cache memory, and said subset of data is read when the tag portion of the address and the tag associated with the subset of data match. The tag portion of the address is compared with the set of tags associated with the set of data in the first space in cache memory, and the set of data in the first space is read when the tag portion of the address matches one of the set of tags associated with the set of data in the first space and the tag portion of the address and the tag associated with the subset of data in the second space do not match. | 08-21-2014 |
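The two-stage lookup described above (small second space checked first, full first space only on a second-space miss) can be sketched as follows. The data structures are illustrative assumptions: the second space as a single (tag, data) pair and the first space as a tag-indexed dict.

```python
def cache_read(tag, second_space, first_space):
    """The address tag is compared against the single tag of the small
    second space; on a match that subset of data is read. Otherwise the
    full tag set of the first space is searched, and the first-space
    data is read only when the second-space compare missed."""
    sub_tag, sub_data = second_space
    if tag == sub_tag:
        return sub_data
    if tag in first_space:
        return first_space[tag]
    return None  # miss in both spaces
```

The efficiency claim rests on the second-space compare being a single, cheap tag match that short-circuits the wider first-space search.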
20140237175 | PARALLEL PROCESSING COMPUTER SYSTEMS WITH REDUCED POWER CONSUMPTION AND METHODS FOR PROVIDING THE SAME - A parallel processing computing system includes an ordered set of m memory banks and a processor core. The ordered set of m memory banks includes a first and a last memory bank, wherein m is an integer greater than 1. The processor core implements n virtual processors, a pipeline having p ordered stages, including a memory operation stage, and a virtual processor selector function. | 08-21-2014 |
20140237176 | SYSTEM AND METHOD FOR UNLOCKING ADDITIONAL FUNCTIONS OF A MODULE - A system for interfacing with a co-processor or input/output device is disclosed. According to one embodiment, the system performs a maze unlock sequence by operating a memory device in a maze unlock mode. The maze unlock sequence involves writing a first data pattern of a plurality of data patterns to a memory address of the memory device, reading a first set of data from the memory address, and storing the first set of data in a validated data array. The maze unlock sequence further involves writing a second data pattern of the plurality of data patterns to the memory address, reading a second set of data from the memory address, and storing the second set of data in the validated data array. A difference vector array is generated from the validated data array and an address map of the memory device is identified based on the difference vector array. | 08-21-2014 |
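The write/read/diff loop of the maze unlock sequence can be sketched directly. The XOR-of-consecutive-readbacks definition of the difference vector and the dict-like device model are assumptions for illustration; the patent does not define the difference operation:

```python
def maze_unlock(device, patterns, address):
    """Write each data pattern to the same memory address, collect the
    readback into a validated-data array, then derive a difference-vector
    array (here, XOR between consecutive validated entries) from which an
    address map of the device could be identified."""
    validated = []
    for p in patterns:
        device[address] = p                # write the pattern
        validated.append(device[address])  # read it back, store it
    # Difference vector between consecutive validated entries.
    return [a ^ b for a, b in zip(validated, validated[1:])]
```

Against an ideal memory the readback equals the written pattern; a device with swizzled or remapped address/data lines would return transformed values, and the difference vectors expose that transformation.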
20140244922 | MULTI-PURPOSE REGISTER PROGRAMMING VIA PER DRAM ADDRESSABILITY MODE - Embodiments of an apparatus, system and method for using Per DRAM Addressability (PDA) to program Multi-Purpose Registers (MPRs) of a dynamic random access memory (DRAM) device are described herein. Embodiments of the invention allow unique 32 bit patterns to be stored for each DRAM device on a rank, thereby enabling data bus training to be done in parallel. Furthermore, embodiments of the invention provide 32 bits of storage per DRAM device on a rank for the system BIOS for storing codes such as MR values, or for any other purpose (e.g., temporary scratch storage to be used by BIOS processes). | 08-28-2014 |
20140244923 | MEMORY CONTROLLER WITH CLOCK-TO-STROBE SKEW COMPENSATION - A clock signal is transmitted to first and second integrated circuit (IC) components via a clock signal line, the clock signal having a first arrival time at the first IC component and a second, later arrival time at the second IC component. A write command is transmitted to the first and second IC components to be sampled by those components at respective times corresponding to transitions of the clock signal, and write data is transmitted to the first and second IC components in association with the write command. First and second strobe signals are transmitted to the first and second IC components, respectively, to time reception of the first and second write data in those components. The first and second strobe signals are selected from a plurality of phase-offset timing signals to compensate for respective timing skews between the clock signal and the first and second strobe signals. | 08-28-2014 |
20140244924 | LOAD REDUCTION DUAL IN-LINE MEMORY MODULE (LRDIMM) AND METHOD FOR PROGRAMMING THE SAME - A method is disclosed for providing memory bus timing of a load reduction dual inline memory module (LRDIMM). The method includes: determining a latency value of a dynamic random access memory (DRAM) of the LRDIMM; determining a modified latency value of the DRAM that accounts for a delay caused by a load reduction buffer (LRB) that is deployed between the DRAM and a memory bus; storing the modified latency value in a serial presence detector (SPD) of the LRDIMM; and providing memory bus timing for the LRDIMM based on the modified latency value, wherein the memory bus timing is compatible with a registered dual inline memory module (RDIMM). | 08-28-2014 |
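The latency adjustment at the heart of this method is a single addition, sketched here with illustrative names: the value written to the SPD is the DRAM latency plus the buffer delay, so the host times the bus exactly as it would for an RDIMM.

```python
def modified_latency(dram_latency_cycles, lrb_delay_cycles):
    """The SPD of the LRDIMM advertises the DRAM latency increased by
    the delay of the load reduction buffer (LRB) sitting between the
    DRAM and the memory bus."""
    return dram_latency_cycles + lrb_delay_cycles
```

For example, a CL=11 DRAM behind a one-cycle buffer would be advertised as CL=12, and an RDIMM-aware host needs no LRDIMM-specific timing support.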
20140258605 | MEMORY IMBALANCE PREDICTION BASED CACHE MANAGEMENT - Embodiments of methods, apparatuses, and storage media for memory imbalance prediction-based cache memory management are disclosed herein. In one instance, the apparatus may include a memory controller associated with a memory having a plurality of storage units. The memory controller may include logic configured to determine whether the memory enters into an imbalance state based at least in part on a difference in numbers of pending access requests to different storage units, and cause an adjustment of replacement management of a cache memory, based at least in part on a result of the determination. Other embodiments may be described and/or claimed. | 09-11-2014 |
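The imbalance test described above can be sketched as a spread check over per-unit queue depths. Defining the spread as max minus min, and the threshold value itself, are illustrative choices; the patent only requires the decision to be based at least in part on a difference in pending-request counts:

```python
def is_imbalanced(pending_counts, threshold):
    """The memory is judged to be in an imbalance state when the spread
    between the most- and least-loaded storage units' pending access
    request counts exceeds a threshold; the result would then drive an
    adjustment of the cache replacement policy."""
    return max(pending_counts) - min(pending_counts) > threshold
```

A controller might, for instance, bias the cache toward retaining lines that map to the overloaded unit while the imbalance persists.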
20140258606 | STORAGE CONTROL DEVICE, STORAGE DEVICE, INFORMATION PROCESSING SYSTEM, AND STORAGE CONTROL METHOD - A storage control device includes: a partial unit buffer configured to hold at least one data assigned to a partial unit, in which the partial unit is one of a plurality of partial units that are each a division of a write unit for a memory; and a request generation section configured to generate, upon indication of a busy state in the memory for any of the partial units, a write request for the write unit of the memory when the holding of the data assigned to that partial unit is possible in the partial unit buffer. | 09-11-2014 |
20140258607 | SEMICONDUCTOR MEMORY DEVICE AND METHOD OF OPERATING THE SAME - A semiconductor memory device and a method of operating the same are provided. The semiconductor memory device includes a buffer that inputs a first signal and outputs a first delay signal, a command decoder that outputs a second signal, a mask pulse signal generator that inputs the first delay signal and the second signal and generates a mask pulse signal, and a signal reshaper that inputs the first delay signal, the second signal and the mask pulse signal and reshapes the first delay signal or the second signal. | 09-11-2014 |
20140281190 | COHERENCE PROCESSING WITH ERROR CHECKING - An apparatus for processing and tracking the progress of coherency transactions in a computing system is disclosed. The apparatus may include a finite-state machine, a processor, and a scoreboard circuit. The finite-state machine may be configured to track the progress of a transaction as well as detect errors during the processing of the transaction. The processor may be configured to transmit coherence requests dependent upon the transaction. The scoreboard circuit may be configured to track the requests and associate responses. | 09-18-2014 |
20140281191 | ADDRESS MAPPING INCLUDING GENERIC BITS - Embodiments relate to address mapping including generic bits. An aspect includes receiving an address including generic bits from a memory control unit (MCU) by a buffer module in a main memory. Another aspect includes mapping the generic bits to an address format corresponding to a type of dynamic random access memory (DRAM) in a memory subsystem associated with the buffer module by the buffer module. Yet another aspect includes accessing a physical location in the DRAM in the memory subsystem by the buffer module based on the mapped generic bits. | 09-18-2014 |
20140281192 | TAGGING IN MEMORY CONTROL UNIT (MCU) - Embodiments relate to tagging in a MCU. An aspect includes assigning a command tag to a command by a tag allocation logic of the MCU. Another aspect includes sending the command and the command tag on a plurality of channels that are in communication with the MCU. Another aspect includes receiving a response tag comprising one of a data tag and a done tag corresponding to the command tag from each of the plurality of channels. Another aspect includes, based on receiving a data tag from each of the plurality of channels, determining that read data corresponding to the command is available. | 09-18-2014 |
20140281193 | SYSTEM AND METHOD FOR ACCESSING MEMORY - A close-proximity memory arrangement that maintains a point-to-point association between DQs, or data lines, and DRAM modules employs a clockless state machine on the DRAM side of the memory controller-DRAM interface, such that a single FIFO on the memory controller side synchronizes or orders the DRAM fetch results. Addition of a row address (ROW-ADD) and column address (COL-ADD) strobe reduces latency and power demands. Close-proximity point-to-point DRAM interfaces render the DRAM-side FIFO redundant in interfaces such as direct stacked 3D DRAMs on top of the logic die hosting the memory controller. The close-proximity point-to-point arrangement eliminates the DRAM internal FIFO and latency scheme, leaving just the memory controller's internal clock domain crossing FIFOs. | 09-18-2014 |
20140281194 | DYNAMICALLY-SIZEABLE GRANULE STORAGE - A data storage system includes data storage and random access memory. A sorting module is communicatively coupled to the random access memory and sorts data blocks of write data received in the random access memory of the data storage. A storage controller is communicatively coupled to the random access memory and the data storage and being configured to write the sorted data blocks into one or more individually-sorted granules in a granule storage area of the data storage, wherein each granule is dynamically constrained to a subset of logical block addresses. A method and processor-implemented process provide for sorting data blocks of write data received in random access memory of data storage. The method and processor-implemented process write the sorted data blocks into one or more individually-sorted granules in a granule storage area of the data storage, wherein each granule is dynamically constrained to a subset of logical block addresses. | 09-18-2014 |
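The sort-then-granulate write path described above can be sketched in a few lines. The fixed granule size and the list-of-slices representation are illustrative assumptions; the patent only requires each granule to be individually sorted and dynamically constrained to a subset of logical block addresses:

```python
def write_granules(blocks, granule_size):
    """Sort incoming (lba, data) blocks in RAM, then split them into
    individually-sorted granules; because the input is sorted first,
    each granule covers a contiguous subset of logical block
    addresses."""
    blocks = sorted(blocks)                   # sort by LBA in RAM
    return [blocks[i:i + granule_size]        # each slice is one granule
            for i in range(0, len(blocks), granule_size)]
```

Constraining each granule to an LBA subset keeps later reads and compactions local to a small, sorted region of the granule storage area.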
20140281195 | METHOD AND A SYSTEM TO VERIFY SHARED MEMORY INTEGRITY - A method, a system and a computer program product including instructions for verification of the integrity of a shared memory using in line coding is provided. It involves an active step wherein multiple bus masters write a corresponding data to a shared memory. After that it also includes a verification step where data entered in the shared memory by multiple bus masters is verified. | 09-18-2014 |
20140281196 | PROCESSORS, METHODS, AND SYSTEMS TO RELAX SYNCHRONIZATION OF ACCESSES TO SHARED MEMORY - A processor of an aspect includes a plurality of logical processors. A first logical processor of the plurality is to execute software that includes a memory access synchronization instruction that is to synchronize accesses to a memory. The processor also includes memory access synchronization relaxation logic that is to prevent the memory access synchronization instruction from synchronizing accesses to the memory when the processor is in a relaxed memory access synchronization mode. | 09-18-2014 |
20140281197 | PROVIDING SNOOP FILTERING ASSOCIATED WITH A DATA BUFFER - In one embodiment, a conflict detection logic is configured to receive a plurality of memory requests from an arbiter of a coherent fabric of a system on a chip (SoC). The conflict detection logic includes snoop filter logic to downgrade a first snooped memory request for a first address to an unsnooped memory request when an indicator associated with the first address indicates that the coherent fabric has control of the first address. Other embodiments are described and claimed. | 09-18-2014 |
20140281198 | DATA INTERFACE CIRCUIT FOR CAPTURING RECEIVED DATA BITS INCLUDING CONTINUOUS CALIBRATION - Circuits and methods for implementing a continuously adaptive timing calibration training function in an integrated circuit interface are disclosed. A mission data path is established where a data bit is sampled by a strobe. A similar reference data path is established for calibration purposes only. At an initialization time both paths are calibrated and a delta value between them is established. During operation of the mission path, the calibration path continuously performs calibration operations to determine if its optimal delay has changed by more than a threshold value. If so, the new delay setting for the reference path is used to change the delay setting for the mission path after adjustment by the delta value. Circuits and methods are also disclosed for performing multiple parallel calibrations for the reference path to speed up the training process. | 09-18-2014 |
20140281199 | OPTICAL INTERCONNECT IN HIGH-SPEED MEMORY SYSTEMS - An optical link for achieving electrical isolation between a controller and a memory device is disclosed. The optical link increases the noise immunity of electrical interconnections, and allows the memory device to be placed a greater distance from the processor than is conventional without power-consuming I/O buffers. | 09-18-2014 |
20140281200 | MEMORY DEVICES AND SYSTEMS INCLUDING MULTI-SPEED ACCESS OF MEMORY MODULES - A system, comprising: a plurality of modules, each module comprising a plurality of integrated circuits devices coupled to a module bus and a channel interface that communicates with a memory controller, at least a first module having a portion of its total module address space composed of first type memory cells having a first maximum access speed, and at least a second module having a portion of its total module address space composed of second type memory cells having a second maximum access speed slower than the first access speed. | 09-18-2014 |
20140289460 | SYSTEMS AND METHODS INVOLVING DATA BUS INVERSION MEMORY CIRCUITRY, CONFIGURATION AND/OR OPERATION INCLUDING DATA SIGNALS GROUPED INTO 10 BITS AND/OR OTHER FEATURES - Systems, methods and fabrication processes relating to dynamic random access memory (DRAM) devices involving data signals grouped into 10 bits are disclosed. According to one illustrative implementation a DRAM device may comprise a memory core, circuitry that receives a data bus inversion (DBI) bit associated with a data signal as input directly, without transmission through DBI logic associated with an input buffer, circuitry that stores the DBI bit into the memory core, reads the DBI bit from the memory core, and provides the DBI bit as output. In further implementations, DRAM devices herein may store and process the DBI bit on an internal data bus as a regular data bit. | 09-25-2014 |
20140297938 | NON-VOLATILE RANDOM ACCESS MEMORY (NVRAM) AS A REPLACEMENT FOR TRADITIONAL MASS STORAGE - A non-volatile random access memory (NVRAM) is used in a computer system to perform multiple roles in a platform storage hierarchy, specifically, to replace traditional mass storage that is accessible via I/O. The computer system includes a processor to execute software and a memory coupled to the processor. At least a portion of the memory comprises a non-volatile random access memory (NVRAM) that is byte-rewritable and byte-erasable by the processor. The system further comprises a memory controller coupled to the NVRAM to perform a memory access operation to access the NVRAM in response to a request from the software for access to a mass storage. | 10-02-2014 |
20140297939 | Memory Controllers, Systems, and Methods Supporting Multiple Request Modes - A memory system includes a memory controller with a plurality N of memory-controller blocks, each of which conveys independent transaction requests over external request ports. The request ports are coupled, via point-to-point connections, to from one to N memory devices, each of which includes N independently addressable memory blocks. All of the external request ports are connected to respective external request ports on the memory device or devices used in a given configuration. The number of request ports per memory device and the data width of each memory device changes with the number of memory devices such that the ratio of the request-access granularity to the data granularity remains constant irrespective of the number of memory devices. | 10-02-2014 |
20140304464 | METHODS AND SYSTEMS FOR PERFORMING DEDUPLICATION IN A DATA STORAGE SYSTEM - A dedupe cache solution is provided that uses an in-line signature generation algorithm on the front-end of the data storage system and an off-line dedupe algorithm on the back-end of the data storage system. The in-line signature generation algorithm is performed as data is moved from the system memory device of the host system into the DRAM device of the storage controller. Because the signature generation algorithm is an in-line process, it has very little if any detrimental impact on write latency and is scalable to storage environments that have high IOPS. The back-end deduplication algorithm looks at data that the front-end process has indicated may be a duplicate and performs deduplication as needed. Because the deduplication algorithm is performed off-line on the back-end, it also does not contribute any additional write latency. | 10-09-2014 |
20140304465 | DRAM AND ACCESS AND OPERATING METHOD THEREOF - An access method for a DRAM is provided. A row address is partitioned into a first portion and a second portion. The first portion of the row address is provided via an address bus and a first active command is provided via a command bus to the DRAM. The second portion of the row address is provided via the address bus and a second active command is provided via the command bus to the DRAM, after the first active command is provided. The address bus is formed by a plurality of address lines, and a quantity of the address lines is smaller than the number of bits of the row address. | 10-09-2014 |
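The two-step activate described above amounts to splitting a wide row address across a narrow address bus. A sketch assuming a low-bits-first split (the patent does not fix which portion is sent first):

```python
def split_row_address(row, addr_lines):
    """Partition a row address wider than the address bus into the two
    portions sent with the first and second active commands."""
    mask = (1 << addr_lines) - 1
    first = row & mask                    # sent with the first active command
    second = (row >> addr_lines) & mask   # sent with the second
    return first, second

def join_row_address(first, second, addr_lines):
    """DRAM-side recombination of the two portions into the full row
    address once the second active command arrives."""
    return (second << addr_lines) | first
```

For example, a 13-bit row address can be carried on an 8-line bus in two transfers, trading one extra command slot for fewer address pins.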
20140304466 | DRAM AND ACCESS AND OPERATING METHOD THEREOF - An operating method for a DRAM is provided. A first address is obtained via an address bus and a first command is obtained via a command bus from a controller. A second address is obtained via the address bus and a second command is obtained via the command bus from the controller after the first command is obtained. The first address and the second address are combined to obtain a valid address, wherein the valid address is a row address when each of the first command and the second command is an active command. In addition, the valid address is a column address when the second command is an access command. | 10-09-2014 |
20140310451 | BLOCK STORAGE USING A HYBRID MEMORY DEVICE - Techniques for block storage using a hybrid memory device are described. In at least some embodiments, a hybrid memory device includes a volatile memory portion, such as dynamic random access memory (DRAM). The hybrid memory device further includes a non-volatile memory portion, such as flash memory. In at least some embodiments, the hybrid memory device can be embodied as a non-volatile dual in-line memory module, or NVDIMM. Techniques discussed herein employ various functionalities to enable the hybrid memory device to be exposed to various entities as an available block storage device. | 10-16-2014 |
20140310452 | SEMICONDUCTOR DEVICE AND PROCESSOR SYSTEM INCLUDING THE SAME - Provided is a semiconductor device including: a plurality of processing circuits; an arbitration circuit that arbitrates a plurality of data transfer requests issued by the plurality of processing circuits; a mask control circuit that loads the plurality of data transfer requests arbitrated by the arbitration circuit, and sequentially outputs the plurality of data transfer requests after a lapse of a mask period; and a memory controller that accesses a memory based on the plurality of data transfer requests sequentially output from the mask control circuit, and switches a mode of the memory to a power saving mode when no data transfer request is output from the mask control circuit for a predetermined period. | 10-16-2014 |
20140317343 | CONFIGURATION OF DATA STROBES - Disclosed embodiments may include a circuit having a plurality of data terminals, no more than two pairs of differential data strobe terminals associated with the plurality of data terminals, and digital logic circuitry. The digital logic circuitry may be coupled to the data terminals and configured to use the no more than two pairs of differential data strobe terminals concurrently with the plurality of data terminals to transfer data. Other embodiments may be disclosed. | 10-23-2014 |
20140317344 | SEMICONDUCTOR DEVICE - A semiconductor device may include a storage unit configured to store a number of times a first command has been provided to a memory cell array, a control unit configured to generate a second command operable to activate at least one word line in the memory cell array based on a comparison of the number stored at the storage unit with a threshold value, when the first command is received, and a selection unit configured to select one of the first command and the second command based on a result of the comparison and transmit the selected command to the memory cell array. | 10-23-2014 |
20140325135 | TERMINATION IMPEDANCE APPARATUS WITH CALIBRATION CIRCUIT AND METHOD THEREFOR - A termination impedance apparatus includes a variable pull-up resistor, a variable pull-down resistor, and a small-signal calibration circuit. The variable pull-up resistor is coupled between a first power supply voltage terminal and an output terminal. The variable pull-down resistor is coupled between the output terminal and a second power supply voltage terminal. The small-signal calibration circuit is for calibrating the variable pull-up resistor and the variable pull-down resistor to achieve a desired small-signal impedance. | 10-30-2014 |
20140325136 | CONFIGURATION FOR POWER REDUCTION IN DRAM - Disclosed embodiments may include an apparatus having a segment wordline enable coupled to logic to selectively disable ones of a number of segment wordline drivers. The logic may partition a page of the apparatus to reduce power consumed through activation of the disabled ones of the number of segment wordlines. Other embodiments may be disclosed. | 10-30-2014 |
20140331006 | SEMICONDUCTOR MEMORY DEVICES - A semiconductor memory device includes a memory cell array, a data inversion/mask interface and a write circuit. The data inversion/mask interface receives a data block including a plurality of unit data, each of the plurality of unit data having a first data size, and the data inversion/mask interface selectively enables each data mask signal associated with each of the plurality of unit data based on a number of first data bits in a second data size of each unit data. The second data size is smaller than the first data size of the unit data. The write circuit receives the data block and performs a masked write operation that selectively writes each of the plurality of unit data in the memory cell array in response to the data mask signal. | 11-06-2014 |
20140337569 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR LOW LATENCY SCHEDULING AND LAUNCH OF MEMORY DEFINED TASKS - A system, method, and computer program product for low-latency scheduling and launch of memory defined tasks. The method includes the steps of receiving a task metadata data structure to be stored in a memory associated with a processor, transmitting the task metadata data structure to a scheduling unit of the processor, storing the task metadata data structure in a cache unit included in the scheduling unit, and copying the task metadata data structure from the cache unit to the memory. | 11-13-2014 |
20140337570 | MEMORY SYSTEM AND METHOD USING STACKED MEMORY DEVICE DICE, AND SYSTEM USING THE MEMORY SYSTEM - A memory system and method use stacked memory device dice coupled to each other and to a logic die. The logic die may include a timing correction system that is operable to control the timing at which the logic die receives signals, such as read data signals, from each of the memory device dice. The timing correction controls the timing of the read data or other signals by adjusting the timing of respective strobe signals, such as read strobe signals, that are applied to each of the memory device dice. Each memory device die may transmit read data to the logic die at a time determined by when it receives the respective strobe signal. The timing of each of the strobe signals is adjusted so that the read data or other signals from all of the memory device dice are received at the same time. | 11-13-2014 |
20140344511 | METHOD FOR STORING DATA - A method for storing data is disclosed, the method including collecting, by a CPU module, a source data and an RTC (Real Time Clock) value and storing the source data and the RTC value in a common RAM (Random Access Memory) of a data log module, converting, by an MPU (Micro Processing Unit) of the data log module, a type of the source data, and adding the RTC value and an index value to the converted value to generate a data row, and compressing, by a compression unit of the data log module, the generated data row, and storing the generated compressed data row in a memory card. | 11-20-2014 |
20140344512 | Data Processing Apparatus and Memory Apparatus - A data processing apparatus includes bus masters and a memory controller. Each bus master includes a data buffer, and issues a memory command to specify access to the memory and generates first priority information depending on a free space of the data buffer, wherein the first priority information is associated with the memory command and indicates a priority of the memory command. The memory controller determines a processing order of memory commands which are issued by the plurality of bus masters based on the first priority information corresponding to the memory commands, and executes the respective memory commands transferred from the plurality of bus masters in the determined processing order. | 11-20-2014 |
20140351501 | MESSAGE STORAGE IN MEMORY BLOCKS USING CODEWORDS - A codeword is generated from a message. One or more anchor values are appended to the codeword at predetermined anchor positions. Before the codeword is stored in a memory block, the locations and values of stuck cells in the memory block are determined. Based on the values and positions of the stuck cells, the values of the codeword are remapped so that values of the codeword that are the same as the values of the stuck cells are placed at the positions of the stuck cells. The remapped codeword is stored in the memory block. When the message is later read, the original codeword can be recovered from the remapped codeword based on the locations of the anchor values in the remapped codeword. | 11-27-2014 |
20140351502 | METHOD AND APPARATUS FOR SENDING DATA FROM MULTIPLE SOURCES OVER A COMMUNICATIONS BUS - In a memory system, multiple memory modules communicate over a bus. Each memory module may include a hub and at least one memory storage unit. The hub receives local data from the memory storage units, and downstream data from one or more other memory modules. The hub assembles data to be sent over the bus within a data block structure, which is divided into multiple lanes. An indication is made of where, within the data block structure, a breakpoint will occur in the data being placed on the bus by a first source (e.g., the local or downstream data). Based on the indication, data from a second source (e.g., the downstream or local data) is placed in the remainder of the data block, thus reducing gaps on the bus. Additional apparatus, systems, and methods are disclosed. | 11-27-2014 |
20140351503 | MULTI-SERIAL INTERFACE STACKED-DIE MEMORY ARCHITECTURE - Systems and methods disclosed herein substantially concurrently transfer a plurality of streams of commands, addresses, and/or data across a corresponding plurality of serialized communication link interfaces (SCLIs) between one or more originating devices or destination devices such as a processor and a switch. At the switch, one or more commands, addresses, or data corresponding to each stream can be transferred to a corresponding destination memory vault controller (MVC) associated with a corresponding memory vault. The destination MVC can perform write operations, read operations, and/or memory vault housekeeping operations independently from concurrent operations associated with other MVCs coupled to a corresponding plurality of memory vaults. | 11-27-2014 |
20140359207 | Systems and Methods for DQS Gating - Systems and methods for timing read operations with a memory device are provided. A timing signal from the memory device is received at a gating circuit. The timing signal is passed through as a filtered timing signal during a gating window. The gating circuit is configured to open the gating window based on a control signal. The gating circuit is further configured to close the gating window based on a first edge of the timing signal. The first edge is determined based on a counter that is triggered to begin counting by the control signal. At a timing control circuit, the control signal is generated based on i) a count signal from the counter, and ii) a second edge of the timing signal that precedes the first edge in time. | 12-04-2014 |
20140372691 | COUNTER POLICY IMPLEMENTATION - According to an example, a counter policy implementation apparatus may include a policy determination module to receive a counter address for a local counter and to map the counter address to a specific policy of a plurality of policies, and a policy application module to receive a posted value and a double data rate (DDR) value associated with the local counter. The policy application module may include a comparator to compare the posted value or the DDR value with a maximum value associated with the local counter specified in the mapped policy, and an action block to perform an action specified by the mapped policy based on the comparison. | 12-18-2014 |
20140379976 | MEMORY CONTROLLER AND ASSOCIATED SIGNAL GENERATING METHOD - A memory controller and an associated signal generating method are provided. A generating sequence of commands is properly arranged to enlarge latching intervals of an address signal and a bank signal for stable access of a DDR memory module. | 12-25-2014 |
20140379977 | DYNAMIC/STATIC RANDOM ACCESS MEMORY (D/SRAM) - Dynamic/static random access memory (D/SRAM) cell, block shift static random access memory (BS-SRAM) and method using the same employ dynamic storage mode and dynamic storage mode switching to shift data. The D/SRAM cell includes a static random access memory (SRAM) cell having a pair of cross-coupled elements to store data, and a dynamic/static (D/S) mode selector to selectably switch the D/SRAM cell between the dynamic storage mode and a static storage mode. The BS-SRAM includes a plurality of D/SRAM cells arranged in an array and a controller to shift data from an adjacent D/SRAM cell in a second row of the array to a D/SRAM cell in a first row. The method includes switching the mode of, coupling data from an adjacent memory cell to, and storing the coupled data in, a selected D/SRAM cell. | 12-25-2014 |
20150019802 | MONOLITHIC THREE DIMENSIONAL (3D) RANDOM ACCESS MEMORY (RAM) ARRAY ARCHITECTURE WITH BITCELL AND LOGIC PARTITIONING - A monolithic three dimensional (3D) memory cell array architecture with bitcell and logic partitioning is disclosed. A 3D integrated circuit (IC) (3DIC) is proposed which folds or otherwise stacks elements of the memory cells into different tiers within the 3DIC. Each tier of the 3DIC has memory cells as well as access logic including global block control logic therein. By positioning the access logic and global block control logic in each tier with the memory cells, the lengths of the bit and word lines for each memory cell are shortened, allowing for reduced supply voltages as well as generally reducing the overall footprint of the memory device. | 01-15-2015 |
20150019803 | PARTITIONED MEMORY WITH SHARED MEMORY RESOURCES AND CONFIGURABLE FUNCTIONS - A memory device that includes an input interface that receives instructions and input data on a first plurality of serial links. The memory device includes a memory block having a plurality of banks, wherein each of the banks has a plurality of memory cells, and wherein the memory block has multiple ports. An output interface provides data on a second plurality of serial links. A cache, coupled to the IO interface and to the plurality of banks, stores write data designated for a given memory cell location when the given memory cell location is currently being accessed, thereby avoiding a collision. The memory device includes one or more memory access controllers (MACs) coupled to the memory block and one or more arithmetic logic units (ALUs) coupled to the MACs. The ALUs perform one or more operations on data prior to the data being transmitted out of the IC via the IO, such as read/modify/write or statistics or traffic management functions, thereby reducing congestion on the serial links and offloading appropriate operations from the host to the memory device. | 01-15-2015 |
20150019804 | MAPPING OF RANDOM DEFECTS IN A MEMORY DEVICE - A memory device includes a memory array with random defective memory cells. The memory array is organized into rows and columns with a row and column identifying a memory location of a memory cell of the memory array. The memory device includes a row address device and a column address device and is operative to use a grouping of either the row or the column addresses to manage the random defective memory cells by mapping the memory location of a defective memory cell to an alternate memory location. | 01-15-2015 |
20150026397 | METHOD AND SYSTEM FOR PROVIDING MEMORY MODULE INTERCOMMUNICATION - Exemplary embodiments include a memory module including a plurality of connectors, at least one memory, at least one transmitter and at least one receiver. The connectors are configured to fit with a form factor of a memory socket on a server board. The memory is coupled with the connectors. The transmitter(s) are coupled with the memory. The transmitter(s) are configured to send a first plurality of signals from the memory module such that the first plurality of signals bypass the connectors. The receiver(s) are coupled with the memory. The receiver(s) are configured to receive a second plurality of signals to the memory module such that the second plurality of signals bypass the plurality of connectors. | 01-22-2015 |
20150026398 | MOBILE DEVICE AND A METHOD OF CONTROLLING THE MOBILE DEVICE - A mobile device including: a storage device; a system-on-chip (SOC) including a central processing unit (CPU) and a memory interface configured to access the storage device in response to a request of the CPU; and a working memory including an input/output (I/O) scheduler and a device driver, the I/O scheduler configured to detect real time processing requests and store the real time processing requests in a sync queue, and detect non-real time processing requests and store the non-real time processing requests in an async queue, the device driver configured to adjust the performance of the mobile device based on the number of requests in the sync queue. | 01-22-2015 |
20150032950 | SIGNAL CONTROL CIRCUIT, INFORMATION PROCESSING APPARATUS, AND DUTY RATIO CALCULATION METHOD - A signal control circuit includes: a delay acquisition circuit configured to obtain a first delay amount to be added to an input signal for aligning timing of rise of the input signal with timing of fall or rise of a reference signal and a second delay amount to be added to the input signal for aligning timing of fall of the input signal with timing of the fall or the rise of the reference signal; and a ratio calculation circuit configured to calculate a duty ratio of the input signal based on a difference between the first delay amount and the second delay amount. | 01-29-2015 |
20150032951 | METHODS, APPARATUS, AND SYSTEMS FOR SECURE DEMAND PAGING AND OTHER PAGING OPERATIONS FOR PROCESSOR DEVICES - A secure demand paging system ( | 01-29-2015 |
20150039821 | COMMUNICATION APPARATUS AND DATA PROCESSING METHOD - A communication apparatus comprises a general-purpose memory, and a high-speed memory that allows higher-speed access than the general-purpose memory. Protocol processing is executed to packetize transmission data using a general-purpose buffer allocated to the general-purpose memory and/or a high-speed buffer allocated to the high-speed memory as network buffers. | 02-05-2015 |
20150046641 | MEMORY INTERFACE HAVING MEMORY CONTROLLER AND PHYSICAL INTERFACE - A memory interface is provided which is capable of performing calibration of a physical interface by realizing a handshake of Update Interface signals. The physical interface connects memory and a memory controller which controls the memory to each other and converts data between the memory and the memory controller. A data conversion unit is disposed between the memory controller and the physical interface, for adjusting output timing of signals output from the memory controller to the physical interface and adjusting output timing of signals output from the physical interface to the memory controller. An update process unit is disposed between the memory controller and the physical interface, for controlling executing timing of calibration for adjusting drive performance of the physical interface. | 02-12-2015 |
20150046642 | MEMORY COMMAND SCHEDULER AND MEMORY COMMAND SCHEDULING METHOD - A memory command scheduler is provided. The memory command scheduler includes a scheduler queue receiving first and second requests for a memory access from external devices and storing the first and second requests therein; and a controller generating a command of the second request after a preset number of clock cycles from a current clock cycle and transferring the generated command to a memory, if generation of a command of the first request is possible in the current clock cycle and generation of the command of the second request is possible after the preset number of clock cycles from the current clock cycle. | 02-12-2015 |
20150058548 | HIERARCHICAL STORAGE FOR LSM-BASED NoSQL STORES - Logically arranged hierarchy or tiered storage may comprise a layer of storage being a faster access storage (e.g. solid state drive (SSD)) and another (e.g., next) layer being a traditional disk (e.g. HDD). In one embodiment, compaction occurs within the higher layer, e.g., until there is no more room and then during the compaction sequence the data may be moved down to the lower layer. In another embodiment, compaction and migration to a lower layer may occur within the higher layer, e.g., based on one or more policies, even if the higher layer is not full. In one embodiment, the data between layers are maintained as disjoint. In one embodiment, the more recent versions are always in the higher layer and the older versions are always in the lower layer. | 02-26-2015 |
20150067247 | METHOD AND SYSTEM FOR MIGRATING DATA BETWEEN STORAGE DEVICES OF A STORAGE ARRAY - Described herein are methods, systems and machine-readable media for migrating data between storage devices of a storage array. A metric is used to measure the optimality of candidate data migrations, the metric taking into account capacity balance and proper data striping. Candidate migrations are evaluated against the metric. The candidate migration that ranks as the best migration according to the metric may be carried out. This process of evaluating candidate migrations and carrying out the best candidate migration may be iterated until data is properly distributed among the storage devices of the storage array. | 03-05-2015 |
20150067248 | DRAM CONTROLLER HAVING DRAM BAD PAGE MANAGEMENT FUNCTION AND BAD PAGE MANAGEMENT METHOD THEREOF - A bad page management system is provided to guarantee a yield of a volatile semiconductor memory device such as a DRAM. A bad page list exists in a DRAM. A page remapper in a memory controller performs a page remapping operation in parallel with a normal operation of a scheduling unit to perform a latency overhead hidden function. A chip size of the DRAM is reduced or minimized. A DRAM controller performs a latency overhead hidden function to control a DRAM. | 03-05-2015 |
20150074346 | MEMORY CONTROLLER, MEMORY MODULE AND MEMORY SYSTEM - A memory module, comprising: a first pin, arranged to receive a first signal; a second pin, arranged to receive a second signal; a first conducting path, having a first end coupled to the first pin; at least one memory chip, coupled to the first conducting path for receiving the first signal; a predetermined resistor, having a first terminal coupled to a second end of the first conducting path; and a second conducting path, having a first end coupled to the second pin for conducting the second signal to a second terminal of the predetermined resistor; wherein the first signal and the second signal are synchronous and configured to be a differential signal, for enabling a selected memory chip from the at least one memory chip to be accessed. | 03-12-2015 |
20150089126 | Data Compression In Processor Caches - In an embodiment, a processor includes a cache data array including a plurality of physical ways, each physical way to store a baseline way and a victim way; a cache tag array including a plurality of tag groups, each tag group associated with a particular physical way and including a first tag associated with the baseline way stored in the particular physical way, and a second tag associated with the victim way stored in the particular physical way; and cache control logic to: select a first baseline way based on a replacement policy, select a first victim way based on an available capacity of a first physical way including the first victim way, and move a first data element from the first baseline way to the first victim way. Other embodiments are described and claimed. | 03-26-2015 |
20150095563 | MEMORY MANAGEMENT - Apparatus, systems, and methods to manage memory operations are described. In one embodiment, an electronic device comprises a processor and a memory control logic to retrieve a global sequence number from a memory device, receive a read request for data stored in a logical block address in the memory device, retrieve a media sequence number from the logical block address in the memory device, and return a null response in lieu of the data stored in the logical block address when the media sequence number is older than the global sequence number. Other embodiments are also disclosed and claimed. | 04-02-2015 |
20150095564 | APPARATUS AND METHOD FOR SELECTING MEMORY OUTSIDE A MEMORY ARRAY - An apparatus includes a memory module, which includes a memory array. The memory array includes rows of memory and columns of memory. The apparatus also includes at least one row of memory not in the memory array and a register. The register includes an address space and a row/column indicator. The apparatus also includes row selection logic to select the at least one row to be activated if the address from an address bus equals the register value and if the row/column indicator indicates row. | 04-02-2015 |
20150106560 | METHODS AND SYSTEMS FOR MAPPING A PERIPHERAL FUNCTION ONTO A LEGACY MEMORY INTERFACE - A memory system includes a CPU that communicates commands and addresses to a main-memory module. The module includes a buffer circuit that relays commands and data between the CPU and the main memory. The memory module additionally includes an embedded processor that shares access to main memory in support of peripheral functionality, such as graphics processing, for improved overall system performance. The buffer circuit facilitates the communication of instructions and data between the CPU and the peripheral processor in a manner that minimizes or eliminates the need to modify the CPU, and consequently reduces practical barriers to the adoption of main-memory modules with integrated processing power. | 04-16-2015 |
20150120996 | TRACING MECHANISM FOR RECORDING SHARED MEMORY INTERLEAVINGS ON MULTI-CORE PROCESSORS - A memory race recorder (MRR) is provided. The MRR includes a multi-core processor having a relaxed memory consistency model, an extension to the multi-core processor, the extension to store chunks, the chunk having a chunk size (CS) and an instruction count (IC), and a plurality of cores to execute instructions. The plurality of cores executes load/store instructions to/from a store buffer (STB) and a simulated memory to store the value when the value is not in the STB. The oldest value in the STB is transferred to the simulated memory when the IC is equal to zero and the CS is greater than zero. The MRR logs a trace entry comprising the CS, the IC, and a global timestamp, the global timestamp proving a total order across all logged chunks. | 04-30-2015 |
20150120997 | SEMICONDUCTOR DEVICE INCLUDING REPEATER CIRCUIT FOR MAIN DATA LINE - A semiconductor memory disclosed in this disclosure includes first and second memory cell arrays, a first main data line that transfers the read data read from the first memory cell array, a second main data line that transfers the read data read from the second memory cell array, a main amplifier coupled to the second main data line, and a repeater circuit coupled to the first main data line and the second main data line. | 04-30-2015 |
20150134895 | SEMICONDUCTOR MEMORY DEVICE AND MEMORY SYSTEM INCLUDING THE SAME - A semiconductor memory device may include a cell array comprising a plurality of memory cells, each memory cell connected to a word line and a bit line, the cell array divided into a plurality of blocks, each block including a plurality of word lines, the plurality of blocks including at least a first defective block; a nonvolatile storage circuit configured to store address information of the first defective block, and to output the address information to an external device; and a fuse circuit configured to cut off an activation of word lines of the first defective block. | 05-14-2015 |
20150134896 | MECHANISMS TO ACCELERATE TRANSACTIONS USING BUFFERED STORES - In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed. | 05-14-2015 |
20150149714 | CONSTRAINING PREFETCH REQUESTS TO A PROCESSOR SOCKET - In an embodiment, a processor includes at least one core having one or more execution units, a first cache memory and a first cache control logic. The first cache control logic may be configured to generate a first prefetch request to prefetch first data, where this request is to be aborted if the first data is not present in a second cache memory coupled to the first cache memory. Other embodiments are described and claimed. | 05-28-2015 |
20150149715 | NONVOLATILE RANDOM ACCESS MEMORY USE - For nonvolatile random access memory (NVRAM) use, a query module identifies persistent data on a NVRAM in response to waking the NVRAM. A management module makes the persistent data available for use. | 05-28-2015 |
20150149716 | WRITE AND READ COLLISION AVOIDANCE IN SINGLE PORT MEMORY DEVICES - A method of avoiding a write collision between two independent write operations in single port memory devices is described. A first data object from a first write operation is divided into a first even sub-data object and a first odd sub-data object. A second data object from a second write operation is divided into a second even sub-data object and a second odd sub-data object. The first even sub-data object is stored to a first single port memory device and the second odd sub-data object to a second single port memory device when the first write operation and the second write operation occur at the same time. The second even sub-data object is stored to the first single port memory device and the first odd sub-data object to the second single port memory device when the first write operation and the second write operation occur at the same time. | 05-28-2015 |
20150293709 | FINE-GRAINED BANDWIDTH PROVISIONING IN A MEMORY CONTROLLER - Systems and methods for applying a fine-grained QoS logic are provided. The system may include a memory controller, the memory controller configured to receive memory access requests from a plurality of masters via a bus fabric. The memory controller determines the priority class of each of the plurality of masters, and further determines the amount of memory data bus bandwidth consumed by each master on the memory data bus. Based on the priority class assigned to each of the masters and the amount of memory data bus bandwidth consumed by each master, the memory controller applies a fine-grained QoS logic to compute a schedule for the memory requests. Based on this schedule, the memory controller converts the memory requests to memory commands, sends the memory commands to a memory device via a memory command bus, and receives a response from the memory device via a memory data bus. | 10-15-2015 |
20150301752 | Method of interleaving, de-interleaving, and corresponding interleaver and de-interleaver - A method of interleaving comprising: generating a combined data by combining a plurality of columns of input data to be inputted to a plurality of adjacent sub-interleavers into a column, wherein data within same rows among the plurality of columns of input data have same delay time; writing the combined data row by row into an off-chip memory; delaying the combined data, by the off-chip memory; and splitting data outputted by the off-chip memory into the plurality of columns such that each split column includes data corresponding to one of the plurality of adjacent sub-interleavers. | 10-22-2015 |
20150301977 | Distributed Termination for Flyby Memory Buses - Methods and systems that perform distributed termination for shared signal buses on memory modules. Distributed termination improves signal quality and results in higher overall memory performance. Distributed termination enables depopulation of devices on branches without significant performance degradation. Distributed termination enables new signal topologies that may enable higher performance. | 10-22-2015 |
20150302903 | SYSTEM AND METHOD FOR DEEP COALESCING MEMORY MANAGEMENT IN A PORTABLE COMPUTING DEVICE - Various embodiments of methods and systems for deep coalescing memory management (“DCMM”) in a portable computing device (“PCD”) are disclosed. Because multiple active multimedia (“MM”) clients running on the PCD may generate a random stream of mixed read and write requests associated with data stored at non-contiguous addresses in a double data rate (“DDR”) memory component, DCMM solutions triage the requests into dedicated deep coalescing (“DC”) cache buffers, sequentially ordering the requests and/or the DC buffers based on associated addresses for the data in the DDR, to optimize read and write transactions from and to the DDR memory component in blocks of contiguous data addresses. | 10-22-2015 |
20150302904 | ACCESSING MEMORY - A disclosed example method involves performing simultaneous data accesses on at least first and second independently selectable logical sub-ranks to access first data via a wide internal data bus in a memory device. The memory device includes a translation buffer chip, memory chips in independently selectable logical sub-ranks, a narrow external data bus to connect the translation buffer chip to a memory controller, and the wide internal data bus between the translation buffer chip and the memory chips. A data access is performed on only the first independently selectable logical sub-rank to access second data via the wide internal data bus. The example method also involves locating a first portion of the first data, a second portion of the first data, and the second data on the narrow external data bus during separate data transfers. | 10-22-2015 |
20150302905 | METHODS FOR CALIBRATING A READ DATA PATH FOR A MEMORY INTERFACE - A method for calibrating a read data path for a DDR memory interface circuit from time to time in conjunction with functional operation of a memory circuit is described. The method includes the steps of issuing a sequence of read commands so that a delayed dqs signal toggles continuously. Next, delaying a core clock signal originating within the DDR memory interface circuit to produce a capture clock signal. The capture clock signal is delayed from the core clock by a capture clock delay value. Next, determining an optimum capture clock delay value. The output of the read data path is clocked by the core clock. The timing for the read data path with respect to data propagation is responsive to at least the capture clock. | 10-22-2015 |
20150309743 | SEMICONDUCTOR MEMORY DEVICES AND MEMORY SYSTEMS INCLUDING THE SAME - A semiconductor memory device includes a control logic and a memory cell array in which a plurality of memory cells are arranged. The memory cell array includes a plurality of bank arrays, and each of the plurality of bank arrays includes a plurality of sub-arrays. The control logic controls an access to the memory cell array based on a command and an address signal. The control logic dynamically sets a keep-away zone that includes a plurality of memory cell rows which are deactivated based on a first word-line when the first word-line is enabled. The first word-line is coupled to a first memory cell row of a first sub-array of the plurality of sub-arrays. Therefore, increased timing parameters may be compensated, and parallelism may be increased. | 10-29-2015 |
20150309923 | STORAGE CONTROL APPARATUS AND STORAGE SYSTEM - A storage control apparatus includes a processor. The processor is configured to detect a dependency relationship between a first data access and a second data access made after passage of a delay time from the first data access. The first data access is made to a first storage area in a first storage device. The second data access is made to a second storage area in the first storage device. The processor is configured to transfer, when a current data access is made to the first storage area in a state in which the dependency relationship is detected, data in the second storage area to a second storage device before the delay time passes. The second storage device has a higher access speed than the first storage device. | 10-29-2015 |
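The detect-then-transfer behavior above can be sketched as a small model (hypothetical names; plain dictionaries stand in for the two storage devices): if area B is accessed within a delay window after area A, the pair is recorded as a dependency, and a later access to A promotes B's data to the faster device ahead of time.

```python
class DependencyPrefetcher:
    """Detect an A->B access dependency; promote B to the fast device
    as soon as A is accessed again."""
    def __init__(self, slow, fast, delay=10):
        self.slow, self.fast = slow, fast    # dicts: storage area -> data
        self.delay = delay                   # dependency window (time units)
        self.last = None                     # (area, time) of prior access
        self.deps = {}                       # first area -> dependent area

    def access(self, area, t):
        if self.last and 0 < t - self.last[1] <= self.delay:
            self.deps[self.last[0]] = area   # B followed A within the window
        if area in self.deps:                # A touched again: move B early
            dep = self.deps[area]
            if dep in self.slow:
                self.fast[dep] = self.slow.pop(dep)
        self.last = (area, t)
```

Here the promotion happens on the access to A itself, i.e. before the learned delay to B has elapsed, which is the point of the scheme.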
20150310898 | SYSTEM AND METHOD FOR PROVIDING A CONFIGURABLE TIMING CONTROL FOR A MEMORY SYSTEM - A system and method for providing a configurable timing control of a memory system is disclosed. In one embodiment, the system has a first interface to receive a DIMM clock and configuration information, a second interface to a first data bus, and a third interface to a second data bus. The system further has a plurality of flip-flops, a multiplexor coupled to the plurality of flip-flops, a first control block for controlling to hold an input data within the plurality of flip-flops, and a second control block for controlling a timing of an output data from the plurality of flip-flops via the multiplexor with a programmable delay. The input data is received via the second interface. The programmable delay is received via the first interface. The output data is sent out with the timing delay via the third interface. | 10-29-2015 |
20150310902 | Static Power Reduction in Caches Using Deterministic Naps - The dNap architecture is able to accurately transition cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered, while others are put in drowsy mode. As a result, leakage power is significantly reduced with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff. | 10-29-2015 |
20150317086 | INFORMATION PROCESSOR - An information processor includes an information processing sub-system having information processing circuits and a memory sub-system performing data communication with the information processing sub-system. The memory sub-system has a first memory, a second memory, a third memory having reading and writing latencies longer than those of the first memory and the second memory, and a memory controller for controlling data transfer among the first memory, the second memory and the third memory. Graph data is stored in the third memory; the memory controller analyzes data blocks serving as part of the graph data and, on the basis of the result of the analysis, repeatedly performs a preloading operation to transfer the data blocks required next for processing from the third memory to the first memory or the second memory. | 11-05-2015 |
20150323982 | FRAME BUFFER POWER MANAGEMENT - For frame buffer power management, a frame buffer includes a write circuit and a read circuit, and drives a display. A power management module terminates power to the frame buffer in response to a power reduction policy being satisfied. | 11-12-2015 |
20150324129 | RESULTS GENERATION FOR STATE MACHINE ENGINES - A state machine engine includes a storage element, such as a (e.g., match) results memory. The storage element is configured to receive a result of an analysis of data. The storage element is also configured to store the result in a particular portion of the storage element based on a characteristic of the result. The storage element is additionally configured to store a result indicator corresponding to the result. Other state machine engines and methods are also disclosed. | 11-12-2015 |
20150332743 | SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes a first page buffer block and a second page buffer block corresponding to a first memory bank and a second memory bank, respectively, an input/output control circuit suitable for transferring input data to data lines, a first column decoder and a second column decoder suitable for latching the input data transferred through the data lines to the first page buffer block and the second page buffer block, respectively, based on a column address transferred through address lines that are shared by the first and second column decoders, and a control signal generation circuit suitable for generating a plurality of page buffer selection signals to control the first and second column decoders to selectively perform data latch operations on the first and second page buffer blocks. | 11-19-2015 |
20150340092 | DATA GENERATING DEVICE AND DATA GENERATING METHOD - A data generating device includes: a memory cell array including a plurality of memory cells; a read circuit operative to obtain a plurality of resistance value information pieces from the plurality of memory cells; and a data generator circuit operative to set a condition on the basis of the plurality of resistance value information pieces, and generating data by allocating, on the basis of the condition, the plurality of resistance value information pieces into a plurality of sets which respectively correspond to a plurality of values constituting the data. Each of the plurality of memory cells has a characteristic where, when in a variable state, a resistance value thereof reversibly changes between a plurality of variable resistance value ranges in accordance with an electric stress applied. | 11-26-2015 |
20150356013 | SYSTEM AND METHOD FOR MANAGING TRANSACTIONS - A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate a cache line of the cache that stored the most updated version of the first address without sending the most updated version of the content stored at the first address from the cache to a memory module that differs from the cache if a length of the data unit equals a length of the cache line. | 12-10-2015 |
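The invalidate-without-writeback rule above can be modeled minimally (dictionaries stand in for the cache and the memory module; the line size is illustrative): when the incoming data unit covers a whole cache line, the cached copy is simply dropped, since the new write supersedes it; a partial write still flushes the dirty line first.

```python
CACHE_LINE = 64  # bytes per cache line (illustrative)

def handle_coherent_write(cache, memory, addr, data):
    """Coherent write of `data` at `addr`; cache/memory map addr -> line."""
    line = bytearray(memory.get(addr, bytes(CACHE_LINE)))
    if addr in cache:
        if len(data) == CACHE_LINE:
            del cache[addr]                    # full line: invalidate, no flush
        else:
            line = bytearray(cache.pop(addr))  # partial: flush the dirty copy
    line[:len(data)] = data
    memory[addr] = bytes(line)
```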
20150357011 | Techniques for Accessing a Dynamic Random Access Memory Array - Examples are disclosed for accessing a dynamic random access memory (DRAM) array. In some examples, sub-arrays of a DRAM bank may be capable of opening multiple pages responsive to a same column address strobe. In other examples, sub-arrays of a DRAM bank may be arranged such that input/output (IO) bits may be routed in a serialized manner over an IO wire. For these other examples, the IO wire may pass through a DRAM die including the DRAM bank and/or may couple to a memory channel or bus outside of the DRAM die. Other examples are described and claimed. | 12-10-2015 |
20150357012 | DIGITAL SIGNAL PROCESSOR AND DATA INPUTTING/OUTPUTTING METHOD - The digital signal processor includes a DRAM including multiple memory cells configured to store data in a parasitic capacitor and a core logic configured to perform an operation of recording, reading, or updating data in the DRAM on the basis of a predetermined digital signal processing architecture. The core logic: records input data in a memory cell of the DRAM; reads the recorded input data before a retention time passes; and externally outputs the data or stores the data in another memory cell of the DRAM. | 12-10-2015 |
20150363107 | MEMORY MODULE AND SYSTEM SUPPORTING PARALLEL AND SERIAL ACCESS MODES - A memory module can be programmed to deliver relatively wide, low-latency data in a first access mode, or to sacrifice some latency in return for a narrower data width, a narrower command width, or both, in a second access mode. The narrow, higher-latency mode requires fewer connections and traces. A controller can therefore support more modules, and thus increased system capacity. Programmable modules thus allow computer manufacturers to strike a desired balance between memory latency, capacity, and cost. | 12-17-2015 |
20150370699 | DRAM AND ACCESS AND OPERATING METHOD THEREOF - An access method for a dynamic random access memory (DRAM) is provided. The method includes partitioning a row address into a first portion and a second portion; providing the first portion of the row address via an address bus and a first active command via a command bus to the memory; and providing the second portion of the row address via the address bus and a second active command via the command bus to the memory after the first active command is provided. The address bus is formed by a plurality of address lines, and a quantity of the address lines is smaller than the number of bits of the row address. A corresponding electronic device is also provided. | 12-24-2015 |
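The two-part row addressing above can be sketched as follows (bus width, row width, and command names are illustrative): the row address is split across two active commands on an address bus narrower than the row address, and the memory side reassembles it.

```python
ADDR_LINES = 8   # address-bus width, smaller than the 14-bit row address

def split_row(row):
    """Controller side: emit two ACTIVE commands carrying the row halves."""
    lo = row & ((1 << ADDR_LINES) - 1)   # first ACTIVE: low-order bits
    hi = row >> ADDR_LINES               # second ACTIVE: remaining bits
    return [('ACTIVE', lo), ('ACTIVE', hi)]

def join_row(cmds):
    """Memory side: reassemble the full row address from both transfers."""
    (_, lo), (_, hi) = cmds
    return (hi << ADDR_LINES) | lo
```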
20150370731 | MEMORY SYSTEM AND METHOD FOR OPERATING THE SAME - A memory system includes a common data bus, a common control bus, memory devices suitable for sharing the common data bus and the common control bus, wherein the memory devices each have different latencies for recognizing control signals of the common control bus, and a controller suitable for controlling the memory devices through the common data bus and the common control bus. | 12-24-2015 |
20150371688 | MEMORY DEVICES HAVING SPECIAL MODE ACCESS - Memory devices are provided that include special operating modes accessible upon receipt of a particular message from a host. One device includes a memory array, a special mode enable register, and a controller. When the controller receives a register write command to write first data into the special mode enable register and the memory device does so, the memory device operates in a first mode. When the controller receives a register write command to write second data into the special mode enable register and the memory device does so, the memory device operates in a second mode. | 12-24-2015 |
20150371689 | ADAPTIVE GRANULARITY ROW- BUFFER CACHE - According to an example, a method for adaptive-granularity row buffer (AG-RB) caching may include determining whether to cache data to a RB cache, and adjusting, by a processor or a memory side logic, an amount of the data to cache to the RB cache for different memory accesses, such as dynamic random-access memory (DRAM) accesses. According to another example, an AG-RB cache apparatus may include a 3D stacked DRAM including a plurality of DRAM dies including one or more DRAM banks, and a logic die including a RB cache. The AG-RB cache apparatus may further include a processor die including a memory controller including a predictor module to determine whether to cache data to the RB cache, and to adjust an amount of the data to cache to the RB cache for different DRAM accesses. | 12-24-2015 |
20150371708 | SRAM CELLS - There is provided a memory unit that comprises a plurality of memory cell groups, each memory cell group comprising a plurality of memory cells that are each operatively connected to a first local bit line and a second local bit line by respective first and second access transistors, and each memory cell being associated with a word line configured to control the first and second access transistors of the memory cell. The first and second local bit lines of each memory cell group being operatively connected to respective first and second column bit lines by respective first and second group access switches, the first group access switch being configured to be controlled by the second column bit line, and the second group access switch being configured to be controlled by the first column bit line. | 12-24-2015 |
20150380088 | RESISTIVE MEMORY WRITE OPERATION WITH MERGED RESET - In a memory device where writing a memory cell to a first bit state takes longer than writing to a second bit state, selectively executing the write operation can amortize the performance cost of writing the bit state that takes longer to write. Write logic dequeues multiple cachelines from a write buffer and sets all bits of all cachelines to the first bit state in a single write operation. The write logic then executes a separate write operation on each cacheline to selectively write memory cells of each respective cacheline to the second bit state. | 12-31-2015 |
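The merged-reset sequence above can be pictured with a toy model (a dict of integers stands in for the resistive array; the first loop models the single merged RESET across all dequeued lines, even though it is written as per-address stores):

```python
LINE_BITS = 8  # bits per (toy) cacheline

def merged_reset_write(write_buffer, array):
    """Drain several cachelines: one merged RESET, then per-line SETs."""
    lines = list(write_buffer)                # dequeue multiple cachelines
    write_buffer.clear()
    for addr, _ in lines:                     # models ONE merged operation:
        array[addr] = (1 << LINE_BITS) - 1    # every bit -> slow first state
    for addr, data in lines:                  # separate per-line operations:
        array[addr] = data                    # selected bits -> second state
```

The cost of the slow first-state write is thus paid once for the whole batch rather than once per cacheline.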
20160012869 | MEMORY CONTROLLER WITH STAGGERED REQUEST SIGNAL OUTPUT | 01-14-2016 |
20160034204 | Data Processing Method, Apparatus, and System - A data processing method, including dividing a to-be-processed data block (Block) into multiple data subblocks, where a quantity of the multiple data subblocks is less than or equal to a quantity of banks (Banks) of a memory, and performing an access operation on a Bank corresponding to each data subblock of the to-be-processed Block, where different data subblocks of the Block correspond to different Banks of the memory. In an embodiment of the present disclosure, a processor maps different data subblocks of a to-be-processed Block to different Banks, so that the quantity of inter-page access operations on a same Block may be reduced, thereby improving memory access efficiency when two contiguous memory access operations access different pages of a same bank. | 02-04-2016 |
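The subblock-to-bank mapping described above reduces same-bank page conflicts; a minimal sketch (sizes and the dict-based bank map are illustrative) is:

```python
def map_to_banks(block, num_banks):
    """Split `block` into subblocks and assign each subblock its own bank,
    never creating more subblocks than there are banks."""
    n = min(len(block), num_banks)
    size = -(-len(block) // n)           # ceiling division
    return {bank: block[bank * size:(bank + 1) * size] for bank in range(n)}
```

Successive accesses to successive subblocks then land on different banks, so they can overlap instead of serializing on one bank's row buffer.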
20160034406 | MEMORY CONTROLLER AND METHOD FOR CONTROLLING A MEMORY DEVICE TO PROCESS ACCESS REQUESTS ISSUED BY AT LEAST ONE MASTER DEVICE - A memory controller and method are provided for controlling a memory device to process access requests issued by at least one master device, the memory device having a plurality of access regions. The memory controller has a pending access requests storage that buffers access requests that have been issued by a master device prior to those access requests being processed by the memory device. Access control circuitry then issues control commands to the plurality of access regions in order to control the memory device to process access requests retrieved from the pending access requests storage. A query structure is also provided that is configured to maintain, for each access region, information about the buffered access requests in the pending access requests storage, and the access control circuitry references the query structure when determining the control commands to be issued to the plurality of access regions. Such an approach enables significant performance and energy savings to be realized in control of the memory device, without requiring the contents of the pending access requests storage to be directly monitored by the access control circuitry. | 02-04-2016 |
20160041781 | DATA BUFFER WITH STROBE-BASED PRIMARY INTERFACE AND A STROBE-LESS SECONDARY INTERFACE - A data buffer with a strobe-based primary interface and a strobe-less secondary interface used on a memory module is described. One memory module includes an address buffer, the data buffer and multiple dynamic random-access memory (DRAM) devices. The address buffer provides a timing reference to the data buffer and to the DRAM devices for one or more transactions between the data buffer and the DRAM devices via the strobe-less secondary interface. | 02-11-2016 |
20160042769 | SEMICONDUCTOR PACKAGE ON PACKAGE MEMORY CHANNELS WITH ARBITRATION FOR SHARED CALIBRATION RESOURCES - A package on package (PoP) apparatus includes a shared ZQ calibration path and a shared ZQ calibration resistor for calibrating multiple channels of DRAM on a memory package of the PoP apparatus. Arbitration circuitry on a processor package of the PoP apparatus is coupled to separate memory controllers for the multiple memory channels. The arbitration circuitry is configured to indicate availability of the shared ZQ calibration resistor. The memory controllers are configured to communicate with the arbitration circuitry before performing a ZQ calibration and to delay the ZQ calibration when the arbitration circuitry indicates the ZQ calibration resistor is busy. | 02-11-2016 |
20160048451 | ENERGY-EFFICIENT DYNAMIC DRAM CACHE SIZING - Techniques described herein generally include methods and systems related to improving energy efficiency in a chip multiprocessor by reducing the energy consumption of a DRAM cache for such a multi-chip processor. Methods of varying refresh interval may be used to improve the energy efficiency of such a DRAM cache. Specifically, a per-set refresh interval based on retention time of memory blocks in the set may be determined, and, starting from the leakiest memory block, memory blocks stored in the DRAM cache that are associated with data also stored in a lower level of cache are not refreshed. | 02-18-2016 |
20160049181 | VIRTUAL MEMORY MAPPING FOR IMPROVED DRAM PAGE LOCALITY - Embodiments are described for methods and systems for mapping virtual memory pages to physical memory pages by analyzing a sequence of memory-bound accesses to the virtual memory pages, determining a degree of contiguity between the accessed virtual memory pages, and mapping sets of the accessed virtual memory pages to respective single physical memory pages. Embodiments are also described for a method for increasing locality of memory accesses to DRAM in virtual memory systems by analyzing a pattern of virtual memory accesses to identify contiguity of accessed virtual memory pages, predicting contiguity of the accessed virtual memory pages based on the pattern, and mapping the identified and predicted contiguous virtual memory pages to respective single physical memory pages. | 02-18-2016 |
20160054948 | DRAM AND ACCESS AND OPERATING METHOD THEREOF - An operating method for a memory. The method includes obtaining a first address via an address bus and a first command via a command bus from a controller, obtaining a second address via the address bus and a second command via the command bus from the controller after the first command is obtained, and combining the first address and the second address to obtain a valid address. The valid address is a row address when each of the first command and the second command is an active command, and the valid address is a column address when the second command is an access command. | 02-25-2016 |
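The command-dependent interpretation of the combined address can be modeled as follows (command mnemonics and the bus width are illustrative): two active commands yield a row address, while an access command as the second transfer yields a column address.

```python
ADDR_LINES = 8  # width of the shared address bus (illustrative)

def combine(first, second):
    """Interpret two (command, address) transfers from the controller."""
    (cmd1, a1), (cmd2, a2) = first, second
    addr = (a1 << ADDR_LINES) | a2            # valid address = both halves
    if cmd1 == 'ACT' and cmd2 == 'ACT':
        return ('row', addr)                  # two actives -> row address
    if cmd2 in ('RD', 'WR'):
        return ('col', addr)                  # access command -> column
    raise ValueError('unrecognized command pair')
```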
20160062659 | VIRTUAL MEMORY MODULE - A memory controller of a mass memory device determines that a memory operation involving the mass memory device has been initiated, in response dynamically checks for available processing resources of a host device that is operatively coupled to the mass memory device, and thereafter puts at least one of the available processing resources into use for performing the memory operation. In various non-limiting examples, the available processing resources may be a core engine of a multi-core central processing unit (CPU), a digital signal processor (DSP), or a graphics processor. It may also be dynamically checked whether memory resources of the host are available, and those can similarly be put into use (e.g., write data to a DRAM of the host, process the data in the DRAM with the host DSP, then write the processed data to the mass memory device). | 03-03-2016 |
20160062913 | SEMICONDUCTOR DEVICE, SEMICONDUCTOR SYSTEM AND SYSTEM ON CHIP - At least one example embodiment discloses a semiconductor device including a direct memory access (DMA) system configured to directly access a memory to write first data to an address of the memory, wherein the DMA system includes an initializer configured to set a data transfer parameter for writing the first data to the memory during a flushing period of second data from a cache to the address by a processor, a creator configured to create the first data based on the set data transfer parameter, and a transferer configured to write the first data to the address of the memory after the flushing period based on the data transfer parameter. | 03-03-2016 |
20160064048 | ASYNCHRONOUS/SYNCHRONOUS INTERFACE - The present disclosure includes methods, and circuits, for operating a memory device. One method embodiment for operating a memory device includes controlling data transfer through a memory interface in an asynchronous mode by writing data to the memory device at least partially in response to a write enable signal on a first interface contact, and reading data from the memory device at least partially in response to a read enable signal on a second interface contact. The method further includes controlling data transfer in a synchronous mode by transferring data at least partially in response to a clock signal on the first interface contact, and providing a bidirectional data strobe signal on an interface contact not utilized in the asynchronous mode. | 03-03-2016 |
20160077751 | COALESCING MEMORY ACCESS REQUESTS - A computing system can include a processor and a memory. The computing system can also include a memory controller to interface between the processor and the memory. The memory controller coalesces requests to access a memory row to form a single request to access the memory row. | 03-17-2016 |
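The row-level coalescing above — folding pending requests that hit the same memory row into a single request per row — can be sketched as (row size and the list-of-tuples result are illustrative):

```python
def coalesce_by_row(addresses, row_size=1024):
    """Fold access requests that target the same row into one request
    per row, preserving the order in which rows were first requested."""
    merged = {}
    for addr in addresses:
        merged.setdefault(addr // row_size, []).append(addr)
    return list(merged.items())          # one combined request per row
```

Each open row is then visited once, amortizing the activate/precharge cost over all accesses to that row.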
20160085447 | Solid State Drives and Computing Systems Including the Same - Solid state drives may include a controller, a mapping table and a buffer memory. The controller provides a logical address of associated data through a first input-output unit at a first speed and provides the associated data through a second input-output unit at a second speed. The controller may be connected to the first input-output unit and the second input-output unit. The mapping table may be connected to the controller through the first input-output unit. The buffer memory may be connected to the controller through the second input-output unit. The first input-output unit may be physically separated from the second input-output unit. The first speed may be different from the second speed. | 03-24-2016 |
20160093344 | METHOD, APPARATUS AND SYSTEM TO MANAGE IMPLICIT PRE-CHARGE COMMAND SIGNALING - Techniques and mechanisms for exchanging information between a memory controller and a memory device. In an embodiment, a memory controller receives information indicating for a memory device a threshold number of pending consolidated activation commands to access that memory device. The threshold number indicated by the information is less than a theoretical maximum number of pending consolidated activation commands, the theoretical maximum number defined based on timing parameters of the memory device. In another embodiment, the memory controller limits communication of consolidated activation commands to the memory device based on the information indicating the threshold number. | 03-31-2016 |
20160093345 | DYNAMIC RANDOM ACCESS MEMORY TIMING ADJUSTMENTS - A method includes detecting, at a controller, a rate-of-change between first data traffic to be sent to a dynamic random access memory (DRAM) at a first time and second data traffic to be sent to the DRAM at a second time. The method also includes adjusting a data rate of the second data traffic in response to a determination that the rate-of-change satisfies a threshold. | 03-31-2016 |
20160098194 | MECHANISM FOR ENABLING FULL DATA BUS UTILIZATION WITHOUT INCREASING DATA GRANULARITY - A memory is disclosed comprising a first memory portion, a second memory portion, and an interface, wherein the memory portions are electrically isolated from each other and the interface is capable of receiving a row command and a column command in the time it takes to cycle the memory once. By interleaving access requests (comprising row commands and column commands) to the different portions of the memory, and by properly timing these access requests, it is possible to achieve full data bus utilization in the memory without increasing data granularity. | 04-07-2016 |
20160103620 | SYMBOL LOCK METHOD AND A MEMORY SYSTEM USING THE SAME - A memory system includes a transmitter and a receiver. The transmitter is configured to transmit a data signal corresponding to a first symbol lock pattern and a data burst via an interface. The data burst includes a first data and a subsequent data. The receiver is configured to receive the data signal, to detect the first symbol lock pattern based on the received data signal, and to find the first data of the data burst according to the detected first symbol lock pattern. | 04-14-2016 |
20160110132 | Dynamic Adjustment Of Speed of Memory - A technique, as well as select implementations thereof, pertaining to dynamic adjustment of speed of memory is described. The technique may involve obtaining information indicative of memory transactions associated with a memory device from an external memory interface coupled to the memory device. The technique may also involve determining a runtime bandwidth of the memory device according to the memory transactions. The technique may further involve comparing the runtime bandwidth of the memory device to at least one threshold bandwidth. The technique may additionally involve adjusting the speed of the memory device according to a result of the comparing. | 04-21-2016 |
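The compare-and-adjust loop above can be sketched with made-up speed steps and thresholds (all values are illustrative, not from the publication):

```python
SPEEDS = (800, 1600, 2400, 3200)     # memory speed steps, MT/s (illustrative)

def adjust_speed(bytes_moved, seconds, speed, lo=1e9, hi=3e9):
    """Compute runtime bandwidth from observed transactions and step the
    memory speed against low/high threshold bandwidths."""
    bandwidth = bytes_moved / seconds
    i = SPEEDS.index(speed)
    if bandwidth > hi and i + 1 < len(SPEEDS):
        return SPEEDS[i + 1]             # near saturation: speed up
    if bandwidth < lo and i > 0:
        return SPEEDS[i - 1]             # underused: slow down, save power
    return speed
```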
20160117123 | DEVICE, METHOD, AND COMPUTER PROGRAM FOR SCHEDULING ACCESS REQUESTS TO SHARED MEMORY - A scheduling device according to one embodiment includes an access request accepting section and an access request selecting section. The access request accepting section is configured to accept access requests from requesters. The access request selecting section is configured to select a first access request as a reference for access request selection from among the accepted access requests, select an access request transferable in a bank interleave (BI) mode with respect to the first access request, and select an access request transferable in a continuous read/write (CN) mode in response to a determination that there is no access request transferable in the BI mode, or that the preceding access request was in the BI or the CN mode. The access request selecting section is configured to repeat the selections in response to a determination that there is no access request transferable in the BI mode and in the CN mode. | 04-28-2016 |
20160117129 | DISAGGREGATED MEMORY APPLIANCE - Example embodiments provide a disaggregated memory appliance, comprising: a plurality of leaf memory switches that manage one or more memory channels of one or more of leaf memory modules; a low-latency memory switch that arbitrarily connects one or more external processors to the plurality of leaf memory modules over a host link; and a management processor that responds to requests from one or more external processors for management, maintenance, configuration and provisioning of the leaf memory modules within the memory appliance. | 04-28-2016 |
20160124873 | MEMORY SYSTEM WITH REGION-SPECIFIC MEMORY ACCESS SCHEDULING - An integrated circuit device includes a memory controller coupleable to a memory. The memory controller to schedule memory accesses to regions of the memory based on memory timing parameters specific to the regions. A method includes receiving a memory access request at a memory device. The method further includes accessing, from a timing data store of the memory device, data representing a memory timing parameter specific to a region of the memory cell circuitry targeted by the memory access request. The method also includes scheduling, at the memory controller, the memory access request based on the data. | 05-05-2016 |
20160132265 | STORAGE DEVICE AND OPERATING METHOD OF THE SAME - A storage device includes a first memory, a second memory, and a memory controller. The memory controller may include a first controller configured to access the first memory according to a request of an external host device, and a second memory controller configured to access the second memory according to the request of the external host device. The first memory and first memory controller may be configured so that the first memory operates according to a first configuration type, and the second memory and second memory controller may be configured so that the second memory operates according to a second configuration type different from the first configuration type. The memory controller is configured to receive the request from the external host device and based on the request, to store write data to the first memory, and store metadata about the write data to the second memory. | 05-12-2016 |
20160132432 | Methods for Caching and Reading Data to be Programmed into a Storage Unit and Apparatuses Using the Same - A method for caching and reading data to be programmed into a storage unit, performed by a processing unit, including at least the following steps. A write command for programming at least a data page into a first address is received from a master device via an access interface. It is determined whether a block of data to be programmed has been collected, where the block contains a specified number of pages. The data page is stored in a DRAM (Dynamic Random Access Memory) and cache information is updated to indicate that the data page has not been programmed into the storage unit, and to also indicate the location of the DRAM caching the data page when the block of data to be programmed has not been collected. | 05-12-2016 |
20160139833 | MEMORY APPARATUS AND METHOD FOR ACCESSING MEMORY - A memory apparatus and a memory accessing method are provided. The memory accessing method includes: calculating an accessed times of each of a plurality of word line addresses; setting each of the corresponding word line addresses as an aggressor word line address by comparing the accessed times of the each of the word line addresses and a threshold accessed times; and setting a backup word line address, and replacing memory cells of the aggressor word line address by memory cells of the backup word line address. | 05-19-2016 |
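The count-compare-replace scheme above can be modeled as a small class (threshold, backup numbering, and the dict-based remap table are illustrative): once a word line's access count crosses the threshold it is treated as an aggressor and redirected to a backup word line.

```python
class WordLineGuard:
    """Count per-word-line accesses; remap aggressors to backup rows."""
    def __init__(self, threshold, backup_base=1000):
        self.threshold = threshold
        self.counts = {}
        self.remap = {}                  # aggressor -> backup word line
        self.next_backup = backup_base

    def access(self, wl):
        wl = self.remap.get(wl, wl)      # follow an existing remapping
        self.counts[wl] = self.counts.get(wl, 0) + 1
        if (self.counts[wl] > self.threshold
                and wl not in self.remap.values()):
            self.remap[wl] = self.next_backup    # aggressor -> backup row
            self.next_backup += 1
            return self.remap[wl]
        return wl
```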
20160140045 | PACKET CLASSIFICATION - Methods, systems, and computer readable media for packet classification are disclosed. According to one method, the method includes receiving a packet containing header information for packet classification. The method also includes determining, using the header information, a first memory address identifier. The method further includes determining, using the first memory address identifier, memory pointer information indicating a second memory address identifier. The method also includes obtaining, using the memory pointer information indicating the second memory address identifier, packet related information from a memory. The method further includes performing, using the packet related information, a packet classification action. | 05-19-2016 |
20160147442 | PERIPHERAL COMPONENT INTERCONNECT EXPRESS CONTROLLERS CONFIGURED WITH NON-VOLATILE MEMORY EXPRESS INTERFACES - Systems and methods presented herein provide for SSD data storage via PCIe controllers configured with NVMe interfaces. In one embodiment, a PCIe controller includes a plurality of buffers, a Dynamic Random Access Memory (DRAM) device, and an I/O processor operable to partition the DRAM device into a plurality of logical blocks. The controller also includes virtual function logic communicatively coupled to the logical blocks of the DRAM device and to the buffers. The virtual function logic is coupled to a host system through the I/O processor to process an I/O request from the host system to a logical block of the DRAM device, to retrieve data from the logical block to at least one of the buffers, and to transfer the data from the buffer to the host system. | 05-26-2016 |
20160147481 | BUFFER CIRCUIT WITH DATA BIT INVERSION - A buffer circuit ( | 05-26-2016 |
20160147663 | Graphics Deterministic Pre-Caching - An apparatus includes a computerized appliance having a processor, persistent storage storing one or more executable programs, and Dynamic Random Access Memory (DRAM) accessible by the processor, and caching software (SW) executing on the processor from a non-transitory medium, the SW providing a process of: storing Logical Block Address (LBA) tables associated with individual ones of existing programs executable on the processor; tracking program launch and close; managing caching of data for any program launched according to the associated LBA table; tracking data usage during execution of any program launched; on closing a program, removing any unused LBAs from the associated LBA table and adding any LBAs accessed that are not on the table; and saving the resulting LBA table for the closed program. | 05-26-2016 |
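The on-close table maintenance step in the entry above reduces to a simple set reconciliation. A hedged Python sketch, with the function name and argument shapes as assumptions:

```python
def close_program(lba_table, used_lbas, accessed_lbas):
    # Keep only the LBAs that were actually used this run...
    kept = [lba for lba in lba_table if lba in used_lbas]
    # ...and add any newly accessed LBAs not already on the table.
    for lba in accessed_lbas:
        if lba not in kept:
            kept.append(lba)
    return kept  # the resulting LBA table saved for the next launch
```

On the next launch, pre-caching the saved table deterministically warms the cache with exactly the blocks the program touched last time.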
20160162399 | SYSTEMS AND METHODS FOR PROVIDING IMPROVED LATENCY IN A NON-UNIFORM MEMORY ARCHITECTURE - Systems, methods, and computer programs are disclosed for allocating memory in a portable computing device having a non-uniform memory architecture. One embodiment of a method comprises: receiving from a process executing on a first system on chip (SoC) a request for a virtual memory page, the first SoC electrically coupled to a second SoC via an interchip interface, the first SoC electrically coupled to a first local volatile memory device via a first high-performance bus and the second SoC electrically coupled to a second local volatile memory device via a second high-performance bus; determining a free physical page pair comprising a same physical address available on the first and second local volatile memory devices; and mapping the free physical page pair to a single virtual page address. | 06-09-2016 |
20160162415 | SYSTEMS AND METHODS FOR PROVIDING IMPROVED LATENCY IN A NON-UNIFORM MEMORY ARCHITECTURE - Systems, methods, and computer programs are disclosed for allocating memory in a portable computing device having a non-uniform memory architecture. One embodiment of a method comprises: receiving from a process executing on a first system on chip (SoC) a request for a virtual memory page, the first SoC electrically coupled to a second SoC via an interchip interface, the first SoC electrically coupled to a first local volatile memory device via a first high-performance bus and the second SoC electrically coupled to a second local volatile memory device via a second high-performance bus; determining whether a number of available physical pages on the first and second local volatile memory devices exceeds a minimum threshold for initiating replication of memory data between the first and second local volatile memory devices; and if the minimum threshold is exceeded, allocating a first physical address on the first local volatile memory device and a second physical address on the second local volatile memory device to a single virtual page address. | 06-09-2016 |
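The threshold-gated replication decision described in the two entries above can be sketched as follows in Python. The free-list representation and return shape are illustrative assumptions:

```python
def allocate_virtual_page(free_pages_first, free_pages_second, min_threshold):
    # Replicate only when both local volatile memory devices have
    # available pages above the minimum threshold.
    available = min(len(free_pages_first), len(free_pages_second))
    if available > min_threshold:
        # Allocate one physical page on each device and map both
        # to a single virtual page address.
        return {"replicated": True,
                "phys": (free_pages_first.pop(), free_pages_second.pop())}
    # Below the threshold: fall back to a single local allocation.
    return {"replicated": False, "phys": (free_pages_first.pop(),)}
```

Replication lets a process on either SoC read the page from its own local device over the high-performance bus, avoiding the interchip interface on the read path.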
20160180900 | IMPLEMENTING DRAM ROW HAMMER AVOIDANCE | 06-23-2016 |
20160180916 | Reconfigurable Row Dram | 06-23-2016 |
20160188257 | DISABLING A COMMAND ASSOCIATED WITH A MEMORY DEVICE - In an embodiment, a memory device may contain device processing logic and a mode register. The mode register may be a register that specifies a mode of operation of the memory device. A field in the mode register may hold a value indicating whether a command associated with the memory device is disabled. The value may be held in the field until either the memory device is power-cycled or reset. The device processing logic may acquire an instance of the command. The device processing logic may determine whether the command is disabled based on the value held by the mode register. The device processing logic may not execute the instance of the command if the device processing logic determines the command is disabled. If the device processing logic determines the command is not disabled, the device processing logic may execute the instance of the command. | 06-30-2016 |
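A minimal Python model of the sticky disable field described above. The bit position, class names, and the command log are assumptions for illustration:

```python
DISABLE_FIELD_MASK = 0x1  # assumed bit position of the disable field

class ModeRegisterDevice:
    def __init__(self):
        self.mode_register = 0
        self.log = []  # executed commands (for illustration)

    def disable_command(self):
        # The disable value sticks until power-cycle or reset.
        self.mode_register |= DISABLE_FIELD_MASK

    def reset(self):
        self.mode_register = 0  # reset clears the disable field

    def execute(self, command):
        if self.mode_register & DISABLE_FIELD_MASK:
            return False  # command disabled: do not execute this instance
        self.log.append(command)
        return True
```

Because only a power cycle or reset clears the field, software running after the field is set cannot re-enable the command, which is the security property the scheme relies on.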
20160188258 | MEMORY INTERFACE SIGNAL REDUCTION - In some embodiments a controller includes a memory activate pin, one or more combined memory command/address signal pins, and a selection circuit adapted to select, in response to the memory activate pin, either a memory command signal or a memory address signal as each of the one or more combined memory command/address signal pins. Other embodiments are described and claimed. | 06-30-2016 |
20160188490 | COST-AWARE PAGE SWAP AND REPLACEMENT IN A MEMORY - Memory eviction that recognizes not all evictions have an equal cost on system performance. A management device keeps a weight and/or a count associated with each portion of memory. Each memory portion is associated with a source agent that generates requests to the memory portion. The management device adjusts the weight by a cost factor indicating a latency impact that could occur if the evicted memory portion is again requested after being evicted. The latency impact is a latency impact for the associated source agent to replace the memory portion. In response to detecting an eviction trigger for the memory device, the management device can identify a memory portion having a most extreme weight, such as a highest or lowest value weight. The management device replaces the identified memory portion with a memory portion that triggered the eviction. | 06-30-2016 |
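A Python sketch of the weight-based eviction the entry above describes, choosing the lowest weight as the "most extreme" value. Weights here combine an access count with a latency cost factor; both the formula and the names are illustrative assumptions:

```python
class CostAwareCache:
    def __init__(self):
        self.weights = {}  # memory portion id -> eviction weight

    def track(self, portion, access_count, cost_factor):
        # Adjust the weight by a cost factor reflecting the latency impact
        # on the source agent if this portion must be re-fetched after eviction.
        self.weights[portion] = access_count * cost_factor

    def evict(self, incoming):
        # On an eviction trigger, replace the portion with the most extreme
        # (here: lowest) weight, i.e. the cheapest one to lose.
        victim = min(self.weights, key=self.weights.get)
        del self.weights[victim]
        self.weights[incoming] = 0
        return victim
```

Under this policy, a rarely used portion that is expensive for its source agent to re-fetch can outlive a frequently used but cheap-to-replace one.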
20160203019 | Instruction and logic to test transactional execution status | 07-14-2016 |
20160254036 | FLEXIBLE COMMAND ADDRESSING FOR MEMORY | 09-01-2016 |
20160378396 | ACCELERATED ADDRESS INDIRECTION TABLE LOOKUP FOR WEAR-LEVELED NON-VOLATILE MEMORY - Embodiments are generally directed to accelerated address indirection table lookup for wear-leveled non-volatile memory. An embodiment of a memory device includes nonvolatile memory; a memory controller; and address indirection logic to provide address indirection for the nonvolatile memory, the address indirection logic to maintain an address indirection table (AIT) in the nonvolatile memory, the AIT including a plurality of levels, and to copy at least a portion of the AIT to a second memory, the second memory having lower latency than the first memory. | 12-29-2016 |
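The multi-level AIT with a cached upper level can be sketched in Python as a two-level table whose top level is copied into a faster memory (a dict standing in for the low-latency second memory). The 16-entry leaf size is an assumption:

```python
class AddressIndirectionTable:
    def __init__(self, level1_in_nvm, level2_in_nvm):
        # Leaf tables stay in the nonvolatile memory.
        self.level2_nvm = level2_in_nvm
        # The top level is copied to the lower-latency second memory,
        # so most lookups avoid one NVM access.
        self.level1_cache = dict(level1_in_nvm)

    def translate(self, logical):
        top, low = divmod(logical, 16)      # assumed 16-entry leaf tables
        leaf_base = self.level1_cache[top]  # fast lookup in the second memory
        return self.level2_nvm[leaf_base][low]
```

Wear leveling updates the leaf entries as physical locations migrate; only the small, hot top level needs to live in the faster memory.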
20160378660 | FLUSHING AND RESTORING CORE MEMORY CONTENT TO EXTERNAL MEMORY - A method and apparatus for flushing and restoring core memory content to and from, respectively, external memory are described. In one embodiment, the apparatus is an integrated circuit comprising a plurality of processor cores, the plurality of processor cores including one core having a first memory operable to store data of the one core, the one core to store data from the first memory to a second memory located externally to the processor in response to receipt of a first indication that the one core is to transition from a first low power idle state to a second low power idle state and receipt of a second indication generated externally from the one core indicating that the one core is to store the data from the first memory to the second memory, locations in the second memory at which the data is stored being accessible by the one core and inaccessible by other processor cores in the IC; and a power management controller coupled to the plurality of cores and located outside the plurality of cores. | 12-29-2016 |
20160378684 | MULTI-PAGE CHECK HINTS FOR SELECTIVE CHECKING OF PROTECTED CONTAINER PAGE VERSUS REGULAR PAGE TYPE INDICATIONS FOR PAGES OF CONVERTIBLE MEMORY - A processor of an aspect includes at least one translation lookaside buffer (TLB) and a memory management unit (MMU). Each TLB is to store translations of logical addresses to corresponding physical addresses. The MMU, in response to a miss in the at least one TLB for a translation of a first logical address to a corresponding physical address, is to check for a multi-page protected container page versus regular page (P/R) check hint. If the multi-page P/R check hint is found, then the MMU is to check a P/R indication. If the multi-page P/R check hint is not found, then the MMU does not check the P/R indication. Other processors, methods, and systems are also disclosed. | 12-29-2016 |
20160379690 | ACCESSING DATA STORED IN A COMMAND/ADDRESS REGISTER DEVICE - A register not connected to a data bus is read by transferring its data across an address bus to a device connected to the data bus, from which the data is then read over the data bus. The register resides in a register device connected via the address bus to a memory device that is connected to both the address bus and the data bus. A host processor triggers the register device to transfer information over the address bus to a register on the memory device. The host processor then reads the information from the register of the memory device. | 12-29-2016 |
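The two-step read path in the entry above can be modeled in Python, with the address-bus transfer represented as a copy between objects. Class and function names are assumptions:

```python
class RegisterDevice:
    # Connected only to the address bus; its register is not
    # directly readable over the data bus.
    def __init__(self, value):
        self.internal_reg = value

class DataBusMemory:
    # Connected to both the address bus and the data bus.
    def __init__(self):
        self.register = None  # readable by the host over the data bus

def host_read_register(reg_dev, mem_dev):
    # Step 1: host triggers the register device to transfer its value
    # over the address bus into a register on the memory device.
    mem_dev.register = reg_dev.internal_reg
    # Step 2: host reads that register over the data bus.
    return mem_dev.register
```

This lets a host observe state in a command/address-only device without adding data-bus pins to it.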
20170235515 | APPARATUSES AND METHODS FOR DATA MOVEMENT | 08-17-2017 |
20170236566 | DATA TRANSFER FOR MULTI-LOADED SOURCE SYNCHRONOUS SIGNAL GROUPS | 08-17-2017 |
20170236568 | ELECTRONIC DEVICE | 08-17-2017 |
20180024739 | Memory Sharing for Physical Accelerator Resources in a Data Center | 01-25-2018 |
20180024750 | WORKLOAD-AWARE PAGE MANAGEMENT FOR IN-MEMORY DATABASES IN HYBRID MAIN MEMORY SYSTEMS | 01-25-2018 |
20180024864 | Memory Module for a Data Center Compute Sled | 01-25-2018 |
20190146694 | DRAM Bank Activation Management | 05-16-2019 |
20190147937 | APPARATUSES AND METHODS FOR SHIFT DECISIONS | 05-16-2019 |