Document | Title | Date |
20080209117 | Nonvolatile RAM - A nonvolatile RAM allows a read/write operation to be performed in a random manner with respect to a memory area, which is divided into a plurality of memory arrays each including a plurality of memory cells. Upon detection of an initialization signal, initialization is performed on at least one memory array, which is selected in advance. In addition, a disconnection control signal is generated so as to disconnect access by an external device during a prescribed period for performing the initialization. The nonvolatile RAM is capable of protecting data from being irregularly read, modified, and reloaded with respect to at least one memory array, which is selected in advance, even when the nonvolatile RAM is frequently accessed by a prescribed application. | 08-28-2008 |
20080222351 | HIGH-SPEED OPTICAL CONNECTION BETWEEN CENTRAL PROCESSING UNIT AND REMOTELY LOCATED RANDOM ACCESS MEMORY - A data transmission assembly includes a first connection terminal coupled to a processing unit and a second connection terminal coupled to a random access memory (RAM) resource. The data transmission assembly also includes a first electrical/optical (EO) signal converter and a second EO signal converter. The first EO signal converter is coupled to the first connection terminal and the second EO signal converter is coupled to the second connection terminal. The data transmission assembly also includes an optical signal propagation medium with a first end and a second end. The first end is attached to the first EO signal converter, and the second end is attached to the second EO signal converter. The signal propagation medium carries signals between the first connection terminal and the second connection terminal to support memory accesses performed by the processing unit to access data at memory locations within the RAM resource. | 09-11-2008 |
20080229006 | High Bandwidth Low-Latency Semaphore Mapped Protocol (SMP) For Multi-Core Systems On Chips - A system and method for dynamically managing movement of semaphore data within the system. The system includes, but is not limited to, a plurality of functional units communicating over the network, a memory device communicating with the plurality of functional units over the network, and at least one semaphore storage unit communicating with the plurality of functional units and the memory device over the network. The plurality of functional units include a plurality of functional unit memory locations. The memory device includes a plurality of memory device memory locations. The at least one semaphore storage unit includes a plurality of semaphore storage unit memory locations. The at least one semaphore storage unit controls dynamic movement of the semaphore data among the plurality of functional unit memory locations, the plurality of memory device memory locations, the plurality of semaphore storage unit memory locations, and any combinations thereof. | 09-18-2008 |
20080244167 | Electronic device and method for installing software - A peripheral for a computer, and a method of using the peripheral, for installing software onto the computer using Direct Memory Access. The peripheral comprises a computer accessible medium and a program product. The program product has codes to read and write to the Random Access Memory of the computer, and to bypass restrictions of the host computer Operating System that prevent the peripheral from gaining full access to all portions of the host computer's Random Access Memory. The preferred methods of using the peripheral automatically install software on a computer or copy forensic data from the computer's Random Access Memory once the peripheral is connected to the computer. | 10-02-2008 |
20080263267 | SYSTEM ON CHIP WITH RECONFIGURABLE SRAM - A system on chip comprises N components, where N is an integer greater than one, and a storage module. The storage module comprises a first memory, a control module, and a connection module. The first memory includes M blocks of static random access memory, where M is an integer greater than one. The control module generates a first assignment of the M blocks to the N components during a first period and generates a second assignment of the M blocks to the N components during a second period. The first and second assignments are different. The connection module dynamically connects the M blocks to the N components based on the first and second assignments. | 10-23-2008 |
20080263268 | Digital signal processor - A digital signal processor is adapted to a working RAM, which is capable of storing a plurality of data in a rewritable manner and whose storage area is divided into a plurality of sub-areas that are designated by addresses in read/write operations, wherein an operation circuit performs calculations on the data of the working RAM in accordance with a program, and wherein upon detection of a non-access event in which the program does not need to access the working RAM, a write circuit compulsorily writes ‘0’ into the working RAM with regard to each of the prescribed addresses of the prescribed sub-areas subjected to initialization, which are designated by address data. Thus, it is possible to actualize the selective initialization on the prescribed sub-areas within the working RAM without increasing the scale of the peripheral circuitry, without requiring complicated controls, and without increasing the overall processing time therefor. | 10-23-2008 |
20080282027 | SECURE AND SCALABLE SOLID STATE DISK SYSTEM - A solid state disk system is disclosed. The system comprises a user token and at least one level secure virtual storage controller, coupled to the host system. The system includes a plurality of virtual storage devices coupled to at least one secure virtual storage controller. A system and method in accordance with the present invention could be utilized in flash based storage, disk storage systems, portable storage devices, corporate storage systems, PCs, servers, wireless storage, and multimedia storage systems. | 11-13-2008 |
20080288718 | METHOD AND APPARATUS FOR MANAGING MEMORY FOR DYNAMIC PROMOTION OF VIRTUAL MEMORY PAGE SIZES - A computer implemented method, apparatus, and computer usable program code for managing real memory. In response to a request for a page to be moved into real memory, a contiguous range of real memory is reserved for the page corresponding to a contiguous virtual memory range to form a reservation within a plurality of reservations for the real memory. This reservation enables efficient promotion of pages to a larger page size. The page only occupies a portion of the contiguous range of real memory for the reservation. In response to a need for real memory, a selected reservation is released within the plurality of reservations based on an age of the selected reservation within the plurality of reservations. | 11-20-2008 |
20080288719 | Memory Tracing in an Emulation Environment - A system and method are disclosed to trace memory in a hardware emulator. In one aspect, a first Random Access Memory is used to store data associated with a user design during emulation. At any desired point in time, the contents of the first Random Access Memory are captured in a second Random Access Memory. After the capturing, the contents of the second Random Access Memory are copied to a visibility system. During the copying, the user design may modify the data in the first Random Access Memory while the captured contents within the second Random Access Memory remain unmodifiable so that the captured contents are not compromised. In another aspect, memories of different sizes are provided in the emulator to emulate the user model. Larger memories have their ports monitored to reconstruct the contents of the memories, while smaller memories are captured in a snapshot RAM. Together, the two different modes of tracing memory provide the user with visibility into the entire user memory. | 11-20-2008 |
20080294839 | System and method for dumping memory in computer systems - A method and system for dumping computer memory includes receiving an instruction to perform a dump of memory in a partitioned computer system where each partition has at least one processor and associated memory. The associated memory has a first portion and a second portion, where the first portion is normally actively used for user application programs or data storage. After receipt of the dump request, the source memory content is protected against corruption or contention from other program sources and copied into the second portion of memory. Preferably, the first and second portions are co-located RAM to provide speedy transfers of information. Access to the first portion of memory is then permitted by removing the protections, and the user may have full access to run applications. The dump image is then transferred to any location as a background I/O task as the user executes his applications. | 11-27-2008 |
20080301360 | Random Access Memory for Use in an Emulation Environment - A Random Access Memory (RAM) and method of using the same are disclosed. The RAM includes a plurality of memory cells arranged in columns and in rows with each memory cell coupled to at least one word line and at least one bit line. The RAM includes a plurality of switches with at least one of the switches coupled between two of the memory cells to allow data to be copied from one of the two memory cells to the other of the two memory cells. In another aspect, the two memory cells can be considered a dual bit cell that contains a copying mechanism. There are two interleaved memory planes, assembled from bit cells that contain two bits of information. One bit is the primary bit that corresponds to the normal RAM bit. The second bit is able to receive a copy and hold the primary value. When the copying operation is complete, the two memory planes may act as two completely independent structures. | 12-04-2008 |
20080301361 | Dedicated flow manager between the processor and the random access memory - The invention proposes a flow manager between the main processor and the random access memory that improves performance and security. A memory access management interface processor is positioned as an interface between the main processor and the random access memory; this memory access management interface processor selects the relevant flow characteristics with which it feeds an interface-dedicated storage unit, which is accessible only by the memory access management interface processor. The embodiment of this invention may be either hardware or logic. | 12-04-2008 |
20090006728 | VIRTUAL MACHINE STATE SNAPSHOTS - Saving state of Random Access Memory (RAM) in use by guest operating system software is accomplished using state saving software that starts a plurality of compression threads for compressing RAM data blocks used by the guest. Each compression thread determines a compression level for a RAM data block based on a size of a queue of data to be written to disk, then compresses the RAM data block, and places the compressed block in the queue. | 01-01-2009 |
20090006729 | CACHE FOR A MULTI THREAD AND MULTI CORE SYSTEM AND METHODS THEREOF - According to one embodiment, the present disclosure generally provides a method for improving the performance of a cache of a processor. The method may include storing a plurality of data in a data Random Access Memory (RAM). The method may further include holding information for all outstanding requests forwarded to a next-level memory subsystem. The method may also include clearing information associated with a serviced request after the request has been fulfilled. The method may additionally include determining if a subsequent request matches an address supplied to one or more requests already in-flight to the next-level memory subsystem. The method may further include matching fulfilled requests serviced by the next-level memory subsystem to at least one requester who issued requests while an original request was in-flight to the next level memory subsystem. The method may also include storing information specific to each request, the information including a set attribute and a way attribute, the set and way attributes configured to identify where the returned data should be held in the data RAM once the data is returned, the information specific to each request further including at least one of thread ID, instruction queue position and color. The method may additionally include scheduling hit and miss data returns. Of course, various alternative embodiments are also within the scope of the present disclosure. | 01-01-2009 |
20090055580 | MULTI-LEVEL DRAM CONTROLLER TO MANAGE ACCESS TO DRAM - Providing for multi-tiered RAM control is provided herein. As an example, a RAM access management system can include multiple input controllers each having a request buffer and request scheduler. Furthermore, a request buffer associated with a controller can vary in size with respect to other buffers. Additionally, request schedulers can vary in complexity and can be optimized at least for a particular request buffer size. As a further example, a first controller can have a large memory buffer and simple scheduling algorithm optimized for scalability. A second controller can have a small memory buffer and a complex scheduler, optimized for efficiency and high RAM performance. Generally, RAM management systems described herein can increase memory system scalability for multi-core parallel processing devices while providing an efficient and high bandwidth RAM interface. | 02-26-2009 |
20090055581 | DATA STORAGE DEVICE AND DATA PROVIDING METHOD THEREIN - A data storage device, the data storage device may include: a data storage unit; a system data storage unit that stores an application program, an operating system (OS), and management information related to a processing of the stored data; a system control unit that performs an initialization, a control, and a system setting of the device; a central processing unit (CPU) that performs data processing including data read and data write and processes an instruction word; a random access memory (RAM) that loads the data from the data storage unit and the system data storage unit, loads the instruction word of the CPU, and temporarily stores a data processing result of the processed instruction word; and an output determination unit that determines to output at least one of the data stored in the data storage unit, the application program, and the data processing result. | 02-26-2009 |
20090063759 | SYSTEM AND METHOD FOR PROVIDING CONSTRAINED TRANSMISSION AND STORAGE IN A RANDOM ACCESS MEMORY - A system and method for providing constrained transmission and storage in a random access memory. A system includes a memory device for providing constrained transmission and storage. The memory device includes an interface to a data bus, the data bus having a previous state. The memory device also includes an interface to an address and command bus for receiving a request to read data at an address, and a mechanism for initiating a programmable mode. The programmable mode facilitates retrieving data at the address, and executing an exclusive or (XOR) using the retrieved data and the previous state of the data bus as input. The result of the XOR operation is transmitted to the requester via the data bus. | 03-05-2009 |
20090063760 | Systems, devices, and/or methods to access synchronous RAM in an asynchronous manner - Certain exemplary embodiments can provide a method, which can comprise, via a state machine implemented as an application specific integrated circuit, responsive to an automatically detected asynchronous RAM interface signal, automatically transmitting a corresponding synchronous RAM interface signal. The state machine can be communicatively coupled to a programmable logic controller. | 03-05-2009 |
20090138655 | METHOD AND TERMINAL FOR DEMAND PAGING AT LEAST ONE OF CODE AND DATA REQUIRING REAL-TIME RESPONSE - A method and terminal for demand paging at least one of code and data requiring a real-time response is provided. The method includes splitting and compressing at least one of code and data requiring a real-time response to a size of a paging buffer and storing the compressed at least one of code and data in a physical storage medium, if there is a request for demand paging for the at least one of code and data requiring the real-time response, classifying the at least one of code and data requiring the real-time response as an object of Random Access Memory (RAM) paging that pages from the physical storage medium to a paging buffer, and loading the classified at least one of code and data into the paging buffer. | 05-28-2009 |
20090144490 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING IMPROVED MEMORY USAGE - An apparatus for providing improved memory usage may include a processor. The processor may be configured to receive media content data, direct storage of up to a predetermined amount of a most recently received portion of the media content data into a first memory reservoir, and, in response to storing the predetermined amount in the first memory reservoir, transfer oldest portions of the received media content from the first memory reservoir to a second memory reservoir to maintain the storage in the first memory reservoir at the predetermined amount. | 06-04-2009 |
20090157954 | CACHE MEMORY UNIT WITH EARLY WRITE-BACK CAPABILITY AND METHOD OF EARLY WRITE BACK FOR CACHE MEMORY UNIT - A cache memory unit includes: a cache memory; an early write-back condition checking unit for checking whether an early write-back condition has been satisfied; and an early write-back execution unit for monitoring a memory bus connecting the cache memory unit and an external memory unit, and in response to the memory bus being idle and the early write-back condition being satisfied, for causing dirty data in the cache memory to be written back to the external memory unit using the memory bus. | 06-18-2009 |
20090193185 | Method for accessing the physical memory of an operating system - A method for accessing the physical memory with an operating system, providing for mapping the physical address to the linear address of the memory in the operating system. Accessing the user-space of the memory with an operating system thus amounts in practice to reading and writing data in the kernel-space of the memory, achieving quick access to the physical memory. | 07-30-2009 |
20090204751 | MULTIPROCESSOR SYSTEM AND PORTABLE TERMINAL USING THE SAME - [PROBLEMS] To provide a portable terminal designed to speed up the startup of a multiprocessor system which is configured to be started up by a program transferred from a specific processor to another processor. [MEANS OF SOLVING PROBLEMS] In the pattern used to store the program to be transferred to the other processor in memory (ROM), a header is given to each code section. The header stores information as to whether or not the section needs to be transferred in each startup mode, together with size information of the corresponding code section. The startup time for each mode is shortened by transferring only the necessary portion from the transfer-source processor to the transfer-destination processor for each startup mode. | 08-13-2009 |
20090210615 | OVERLAY MANAGEMENT IN A FLASH MEMORY STORAGE DEVICE - The operating firmware of a portable flash memory storage device is stored in the relatively large file storage memory, which is non-executable. It is logically parsed into overlays to fit into an executable memory. The overlays can be of differing sizes to organize function calls efficiently while minimizing dead space or unnecessarily separating functions that should be within one or a group of frequently accessed overlays. Eviction of the overlays is preferably carried out on a least-recently-loaded basis. These features minimize latency caused by calling overlays unnecessarily and minimize fragmentation of the random access memory used for the overlays. | 08-20-2009 |
20090222620 | MEMORY DEVICE, INFORMATION PROCESSING APPARATUS, AND ELECTRIC POWER CONTROLLING METHOD - A memory device includes a memory unit that is nonvolatile and is made up of a plurality of memory areas; first retaining units each of which is provided in correspondence with a different one of the memory areas and each of which retains first setting information that defines whether a corresponding one of the memory areas is in an active state or a stop state; and an electric power-source controlling unit that supplies electric power to one or more of the memory areas that correspond to the first setting information defining the memory areas to be in the active state, and stops electric power supply to one or more of the memory areas that correspond to the first setting information defining the memory areas to be in the stop state. | 09-03-2009 |
20090235018 | Increased Magnetic Damping for Toggle MRAM - Magnetic random access memory (MRAM) devices and techniques for use thereof are provided. In one aspect, a magnetic memory cell is provided. The magnetic memory cell comprises at least one fixed magnetic layer; at least one first free magnetic layer separated from the fixed magnetic layer by at least one barrier layer; at least one second free magnetic layer separated from the first free magnetic layer by at least one spacer layer; and at least one capping layer over a side of the second free magnetic layer opposite the spacer layer. One or more of the first free magnetic layer and the second free magnetic layer comprise at least one rare earth element, such that the at least one rare earth element makes up between about one percent and about 10 percent of one or more of the first free magnetic layer and the second free magnetic layer. | 09-17-2009 |
20090248968 | REDUCTION OF LATENCY IN STORE AND FORWARD ARCHITECTURES UTILIZING MULTIPLE INTERNAL BUS PROTOCOLS - Disclosed is a store and forward device that reduces latency. The store and forward device allows front end devices having various transfer protocols to be connected in a single path through a RAM, while reducing latency. Front end devices that transfer data on a piecemeal basis are required to transfer all of the data to a RAM prior to downloading data to a back end. Front end devices that transfer data in a single download begin the transfer of data out of a RAM as soon as a threshold value is reached. Hence, the latency associated with downloading all of the data into a RAM is reduced. | 10-01-2009 |
20090300277 | DEVICES AND METHODS FOR OPERATING A SOLID STATE DRIVE - The present disclosure includes methods and devices for operating a solid state drive. One method embodiment includes receiving an indication of a desired number of write input/output operations (IOPs) per unit time performed by the solid state drive. The method can also include managing the number of write IOPs performed by the solid state drive at least partially based on the desired number of write IOPs per unit time, a number of spare blocks in the solid state drive, and a desired operational life for the solid state drive. | 12-03-2009 |
20100030953 | HIGH-SPEED SOLID STATE STORAGE SYSTEM HAVING A NON-VOLATILE RAM FOR RAPIDLY STORING ADDRESS MAPPING INFORMATION - A solid state storage system incorporating a non-volatile random access memory (NVRAM) that exhibits a reduced storage time is presented. The solid state storage system includes a memory area, a controller, and an information storage area. The controller is configured to control the memory area. The information storage area is controlled by the controller and is configured to store logical address mapping information and physical address mapping information of the memory area. | 02-04-2010 |
20100030954 | INFORMATION PROCESSING SYSTEM AND SEMICONDUCTOR STORAGE DEVICE - A random access memory includes a data signal line, a data-synchronization signal line for a data synchronization signal which provides a synchronization signal when data is transmitted to the data signal line, and a setting module. The setting module determines whether the data signal line is set to be a data signal line for common input/output use, a data signal line for output-only use, or a data signal line for input-only use, and further determines whether the data-synchronization signal line is set to be a data-synchronization signal line for common input/output use, a data-synchronization signal line for output-only use, or a data-synchronization signal line for input-only use. | 02-04-2010 |
20100057981 | METHODS AND DEVICES FOR EXECUTING DECOMPRESSED OPTION MEMORY IN SHADOW MEMORY - Methods and systems for executing a decompressed portion of an option memory in a shadow memory. An area of system memory is allocated and a portion of the option memory is decompressed using the allocated area. The decompressed portion is stored in the shadow memory so the decompressed portion can be executed in shadow memory. | 03-04-2010 |
20100057982 | Hypervisor security using SMM - Methods, systems, apparatuses and program products are disclosed for protecting computers and similar equipment from undesirable occurrences, especially attacks by malware. Invariant information, such as pure code and some data tables, may be enrolled for later revalidation by code operating outside the normal context. For example, a periodic interrupt may invoke a system management mode interrupt service routine to discover whether code regions accessible to Protected Mode programs have become corrupted or otherwise changed, such as by tampering from untrusted or untrustworthy programs that have easy access only to protected mode operation. | 03-04-2010 |
20100070694 | COMPUTER SYSTEM HAVING RAM SLOTS WITH DIFFERENT SPECIFICATIONS - A computer system is able to adopt a RAM module belonging to a first specification with a RAM slot belonging to a second specification. The computer system comprises: a RAM module belonging to the first specification, a RAM slot belonging to the second specification, and a RAM controller connected to the RAM slot. The data, derived from the RAM module and existing only in the first specification, is transmitted to the RAM controller via the N/A pins of the RAM slot when the RAM module is plugged into the RAM slot. | 03-18-2010 |
20100070695 | POWER-EFFICIENT MEMORY MANAGEMENT FOR EMBEDDED SYSTEMS - Embodiments of the invention provide a memory allocation module that adopts memory-pool based allocation and is aware of the physical configuration of the memory blocks in order to manage the memory allocation intelligently while exploiting statistical characteristics of packet traffic. The memory-pool based allocation makes it easy to find empty memory blocks. Packet traffic characteristics are used to maximize the number of empty memory blocks. | 03-18-2010 |
20100077138 | Write Protection Method and Device for At Least One Random Access Memory Device - In a write protection method for at least one random access memory device, the inherent problems of such memory devices with regard to data integrity and security with respect to hacker attacks, such that they can also be used for secure archiving in particular of a large volume of data, are avoided by virtue of the fact that commands directed to the at least one memory device are received by a write protection device connected upstream of the at least one memory device before said commands are forwarded to the at least one memory device, wherein commands received in the write protection device are compared with a positive list of permitted commands previously stored in the write protection device, wherein in one case, where the comparison determines that a permitted command is present, said command is forwarded to the at least one memory device, and in the other case, where the comparison determines that no permitted command is present, said command is not forwarded to the at least one memory device. | 03-25-2010 |
20100088467 | MEMORY DEVICE AND OPERATING METHOD OF MEMORY DEVICE - A memory device may include a non-volatile memory and non-volatile RAM. The non-volatile memory may include a data block and a metadata block. Metadata information with respect to the data block may be included in the metadata block. A portion of metadata with respect to the data block or the metadata with respect to the metadata block may be stored in the non-volatile RAM. | 04-08-2010 |
20100095056 | RAM Control Device and Memory Device Using The Same - In a RAM control device, an arbiter circuit ( | 04-15-2010 |
20100095057 | NON-VOLATILE RESISTIVE SENSE MEMORY ON-CHIP CACHE - Various embodiments of the present invention are generally directed to an apparatus and associated method for a non-volatile resistive sense memory on-chip cache. In accordance with some embodiments, a processing circuit is formed on a first semiconductor substrate. A second semiconductor substrate is affixed to the first semiconductor substrate to form an encapsulated integrated chip package, wherein a non-volatile storage array of resistive sense memory (RSM) cells is formed on the second semiconductor substrate to cache data used by the processing circuit. | 04-15-2010 |
20100106899 | Global address space management - Methods, systems and computer program products for global address space management are described herein. A System on Chip (SOC) unit configured for a global address space is provided. The SOC includes an on-chip memory, a first controller and a second controller. The first controller is enabled to decode addresses that map to memory locations in the on-chip memory and the second controller is enabled to decode addresses that map to memory locations in an off-chip memory. | 04-29-2010 |
20100115195 | HARDWARE MEMORY LOCKS - Methods, systems and computer program products to implement hardware memory locks are described herein. A system to implement hardware memory locks is provided. The system comprises an off-chip memory coupled to a SOC unit that includes a controller and an on-chip memory. Upon receiving a request from a requester to access a first memory location in the off-chip memory, the controller is enabled to grant access to modify the first memory location based on an entry stored in a second memory location of the on-chip memory. In an embodiment, the on-chip memory is Static Random Access Memory (SRAM) and the off-chip memory is Random Access Memory (RAM). | 05-06-2010 |
20100122023 | PORTABLE ELECTRONIC DEVICE AND METHOD FOR PROTECTING DATA OF THE PORTABLE ELECTRONIC DEVICE - A portable electronic device includes a random access memory, a non-volatile random access memory, a detecting unit, and a processing unit. The detecting unit is configured to detect an acceleration of the portable electronic device. The processing unit is configured to compare a value of the acceleration of the portable electronic device with a predetermined parameter. If the value of the acceleration is greater or equal to the predetermined parameter, data is copied from the random access memory to the non-volatile random access memory. | 05-13-2010 |
20100138596 | INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processor includes a connector, a determination module, a recognition module, and a cache control module. The connector connects a storage device to the information processor. The storage device is used as a cache by an operating system which controls the information processor. The determination module determines whether to use the storage device connected to the information processor as a data readable and writable storage area. The recognition module causes the operating system to recognize the storage device as a storage area when the determination module determines to use the storage device as a storage area. The cache controller controls the operating system to use the storage device as a cache when the determination module determines not to use the storage device as a storage area. | 06-03-2010 |
20100146198 | OPTIMAL POWER USAGE IN DECODING A CONTENT STREAM STORED IN A SECONDARY STORAGE - Decoding a content of interest with optimal power usage. In an embodiment, a central processing unit (CPU) retrieves the frames of a data stream of interest from a secondary storage and stores them in a random access memory (RAM). The CPU forms an index table indicating the locations at which each of the frames is stored. The index table is provided to a decoder, which processes the frames in sequence to recover the original data from the encoded data. By using the index information, the power usage is reduced at least in an embodiment when the decoding is performed by an auxiliary processor. | 06-10-2010 |
20100146198 | PC architecture using fast NV RAM in main memory - Systems and methods for a PC or server architecture have been disclosed. The architecture is characterized by using non-volatile RAM modules, such as MRAM modules, for at least a part of the main memory, thus accelerating the power-on sequence of the computer. Components that in prior art were stored either in battery-backed CMOS modules or in flash memory are deployed in the non-volatile part of the main memory. Such components can be power-on self test codes, system configuration information, device drivers, a portion of the operating system, and a portion or all of application programs and related application data. | 06-17-2010 |
20100153634 | SYSTEM AND METHOD FOR DATA MIGRATION BETWEEN COMPUTER CLUSTER ARCHITECTURE AND DATA STORAGE DEVICES - An improved duty cycle, increased effective bandwidth, and minimized power consumption are attained in a system for data migration between a compute cluster and disk drives by inclusion of a buffer node coupled to the compute cluster to store data received therefrom in a random fashion. The buffer node signals the computer nodes to promptly return from the I/O cycle to the computing state to improve the duty cycle of the device. The system further includes a storage controller which is coupled between the buffer node and the disk drives to schedule data transfer activity between them in an optimal orderly manner. The data transfers are actuated in the sequence determined based on minimization of seeking time and tier usage, and harvest priority, when the buffer node either reaches a predetermined storage space minimal level or a predetermined time has elapsed since the previous I/O cycle. The storage controller deactivates the disk drives which are not needed for the data transfer. Since the writing on the disk drives is conducted in the orderly manner, the system avoids the usage of excessive number of disk drives. | 06-17-2010 |
20100153635 | Storage device with expandable solid-state memory capacity - In a particular embodiment, a circuit device is disclosed that includes a first interface to a high speed data bus of a host system and a second interface coupled to a first data storage device. The circuit device further includes a solid-state storage device having a first solid-state data storage medium and having at least one expansion slot to receive at least one second solid-state data storage medium to expand a memory capacity of the solid-state storage device. The circuit device also includes a control circuit adapted to receive data from the host system via the first interface and to selectively write the received data to one of the first data storage device and the solid-state storage device. | 06-17-2010 |
20100161892 | PSEUDO DUAL-PORTED SRAM - A memory is described which includes a main memory array made up of multiple single-ported memory banks connected by parallel read and write buses, and a sideband memory equivalent to a single dual-ported memory bank. Control logic and tag state facilitate a pattern of access to the main memory and the sideband memory such that the memory performs like a fully provisioned dual-ported memory capable of reading and writing any two arbitrary addresses on the same cycle. | 06-24-2010 |
20100161893 | DISK SYSTEM USING MEMORY CONTROL SIGNAL OF PROCESSOR - The disk system of the present invention reduces the drawbacks of volume restriction and volatility in a RAM disk by using a memory control signal of a host. The present invention provides a disk system including a central control unit that generates a memory control signal corresponding to a RAM memory and an external instruction and controls the RAM memory, wherein the RAM memory includes a RAM disk constituted by RAMs and storing a system program and data; and a control signal processing unit that converts the memory control signal into first and second memory control signals based on access information included in the memory control signal and controls the RAM disk to access the system program and the data by the second memory control signal. | 06-24-2010 |
20100185809 | Control System and Control Method of Virtual Memory - A control method of a virtual memory is adapted for use in a computer. The control method includes the following steps. First, a plurality of application programs executed in the computer are monitored. Second, the application programs are compared with at least a predetermined program, respectively. Third, the virtual memory of a solid state disk (SSD) is controlled to be turned on or turned off according to a comparing result. Herein, the virtual memory of the SSD is controlled to be turned on or turned off to enhance both the lifetime of the SSD and the operation efficiency of the computer. | 07-22-2010 |
20100191903 | MEMORIES FOR ELECTRONIC SYSTEMS | 07-29-2010 |
20100191904 | SYSTEM AND METHOD OF IMAGING A MEMORY MODULE WHILE IN FUNCTIONAL OPERATION - A memory module (e.g. a hard drive, an optical drive, a flash drive, etc.) associated with a computer system may be imaged without substantial interruption to the operation of the overall system. The imaging may include applying an image to the memory module while execution of one or more operations and/or algorithms that require at least intermittent access to information stored initially in the memory module is ongoing. This may enable a system associated with the memory module to continue with normal, or substantially normal, operation while the image is being applied to the memory module. The image applied to the memory module may, for example, update the system, restore the system to a previous state (e.g., to its state at a previous point in time), or otherwise modify the system with which it is associated. | 07-29-2010 |
20100199033 | SOLID-STATE DRIVE COMMAND GROUPING - A method and other embodiments associated with solid-state drive command grouping are described. In one embodiment, a first command and a second command are grouped into a command pack, where the first command and the second command do not share a common channel for execution. A solid-state drive is controlled to execute the command pack on the solid-state drive, where executing the command pack causes the first command and the second command to execute concurrently on separate channels. | 08-05-2010 |
20100205363 | MEMORY DEVICE AND WEAR LEVELING METHOD THEREOF - Disclosed is a memory device including a NVRAM and a page table, and a wear leveling method therefor. The page table includes mapping information which maps virtual addresses of the NVRAM with physical addresses of the NVRAM. A page table entry includes aging information which indicates the wear of a corresponding page. The aging information may be a remaining number of write operations allowed to the page. Whenever data is written in a page, a value indicating a remaining number of write operations allowed to that page is decremented. | 08-12-2010 |
20100211727 | INTEGRATED CIRCUIT BOARD WITH SECURED INPUT/OUTPUT BUFFER - An integrated circuit card including a processor unit associated with RAM and with data exchange means for exchanging data with an external device, the RAM including a memory zone dedicated to exchanged data, and the processor unit being arranged to secure the dedicated memory zone and to store the exchanged data in said zone, and a method of managing the RAM of such a card. | 08-19-2010 |
20100223425 | Monitoring Module - A system and associated method for monitoring the execution of software on one or more computers by receiving traffic from within the monitored computer(s). The monitoring may take place passively, such that the operation of the monitored computer or computers is completely unaffected by the monitoring. More intensive monitoring, such as maintenance of a shadow copy of the RAM of the monitored computer, may be initiated upon recognition of a pattern in the data received from the monitored computer. The execution of software on the monitored computer may be halted by the monitoring module. The monitoring module may also read from or write to the memories of the monitored computer. | 09-02-2010 |
20100241799 | MODULAR MASS STORAGE SYSTEM AND METHOD THEREFOR - A modular mass storage system and method that enables cableless mounting of ATA and/or similar high speed interface-based mass storage devices in a computer system. The system includes a printed circuit board, a system expansion slot interface on the printed circuit board and comprising power and data pins, a host bus controller on the printed circuit board and electrically connected to the system expansion slot interface, docking connectors connected with the host bus controller to receive power and exchange data therewith and adapted to electrically couple with industry-standard non-volatile memory devices without cabling therebetween, and features on the printed circuit board for securing the memory devices thereto once coupled to the docking connectors. | 09-23-2010 |
20100306457 | Microcontroller with CAN Module - A microcontroller has a random access memory, and a Controller Area Network (CAN) controller with a control unit receiving an assembled CAN message. The control unit generates a buffer descriptor table entry using the assembled CAN message and stores the buffer descriptor table entry in the random access memory, and the buffer descriptor table entry has at least a message identifier and load data from the CAN message and information of a following buffer descriptor table entry. | 12-02-2010 |
20100318731 | OVERRIDE BOOT SEQUENCE BY PRESENCE OF FILE ON USB MEMORY STICK - Consistent with embodiments of the present invention, systems and methods are disclosed for operating an override boot sequence. In some embodiments, a system may comprise a computing device. The computing device may contain client software configured to boot the computing device to a normal state. The computing device may further contain a first memory, wherein the client software may be stored on the first memory. The system may further comprise an interface capable of communicating with a portable memory. The portable memory may contain an override application. The system may further comprise a bootloader program associated with the computing device, wherein the bootloader program may be configured to detect the presence of a connection between the portable memory and the interface. The bootloader program may further be configured to copy the override application to a second memory associated with the computing device and execute the override application instead of the client software. | 12-16-2010 |
20110022790 | SYSTEMS AND METHODS FOR PROVIDING NONLINEAR JOURNALING - In one embodiment, systems and methods are provided for nonlinear journaling. In one embodiment, groups of data designated for storage in a data storage unit are journaled into persistent storage. In one embodiment, the journal data is recorded nonlinearly. In one embodiment, a linked data structure records data and data descriptors in persistent storage. | 01-27-2011 |
20110040933 | Secure Zero-Touch Provisioning of Remote Management Controller - Embodiments enable secure zero-touch remote provisioning/management of a computer system. A computer system is shipped to end customers with its remote management controller enabled but not provisioned. During automatic testing, for example, provisioning authentication data is embedded into the remote management controller. The computer system vendor harvests the provisioning authentication data or derivative data therefrom from the remote management controller and stores it in a database. Upon sale of the computer system, the computer system vendor provides to the end-customer the harvested data of the computer system's remote management controller. The end-customer can then remotely authenticate a remote provisioning/management console to the remote management controller. Once successfully authenticated, the remote provisioning/management console can provision the remote management controller with one or more user accounts/roles with corresponding authentication details, authenticate as one of the provisioned user accounts, and perform computer system provisioning using remote manageability functions as desired. | 02-17-2011 |
20110066795 | STREAM CONTEXT CACHE SYSTEM - The present invention is directed to a stream context cache system, which primarily includes a cache and a mapping table. The cache stores plural stream contexts, and the mapping table stores associated stream context addresses in a system memory. Consequently, a host may, according to the content of the mapping table, directly retrieve the stream context that is pre-fetched and stored in the cache, rather than read the stream context from the system memory. | 03-17-2011 |
20110078367 | CONFIGURABLE CACHE FOR MULTIPLE CLIENTS - One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory. | 03-31-2011 |
20110078368 | EFFICIENT CLOCKING SCHEME FOR A BIDIRECTIONAL DATA LINK - A method for communication via a bidirectional data link between a processing device and a memory device. The memory device includes a clock source to generate a clock signal for driving a latching at the memory device of data to and/or from the bidirectional data link. The memory device provides the clock signal to the processing device for driving a latching at the processing device of data to and/or from the bidirectional data link. | 03-31-2011 |
20110078369 | Systems and Methods for Using a Page Table in an Information Handling System Comprising a Semiconductor Storage Device - Systems and methods for using a page table in an information handling system including a semiconductor storage device are disclosed. A page table in an information handling system may be provided. The information handling system may include a memory, and the memory may include a semiconductor storage device. NonDRAM tag data may be stored in the page table. The nonDRAM tag data may indicate one or more attributes of one or more pages in the semiconductor storage device. | 03-31-2011 |
20110082970 | SYSTEM FOR DISTRIBUTING AVAILABLE MEMORY RESOURCE - A system for distributing available memory resource comprising at least two random access memory (RAM) elements and RAM routing logic. The RAM routing logic comprises configuration logic to dynamically distribute the available memory resource into a first memory area providing redundant memory storage and a second memory area providing non-redundant memory storage. | 04-07-2011 |
20110087833 | LOCAL NONVOLATILE WRITE-THROUGH CACHE FOR A DATA SERVER HAVING NETWORK-BASED DATA STORAGE, AND RELATED OPERATING METHODS - A data server, a host adapter system for the data server, and related operating methods facilitate data write and read operations for network-based data storage that is remotely coupled to the data server and for non-network-based data storage in a locally attached cache device. The host adapter system includes a local storage controller module and a network storage controller module. The local storage controller module is utilized for a locally attached, nonvolatile, write-through cache device of the data server. The network storage controller module is utilized for a network-based data storage architecture of the data server. The storage controller modules support concurrent writing of data to the local cache storage and the network-based storage architecture. The storage controller modules also support reading of server-maintained data from the local cache storage and the network-based storage architecture. | 04-14-2011 |
20110099327 | SYSTEM AND METHOD FOR LAUNCHING AN APPLICATION PROGRAMMING UTILIZING A HYBRID VERSION OF DEMAND PAGING - A system and method for launching a computer application program stored in a nonvolatile medium by reusing a page load scheme from a standard demand-paging-based launch. The system and method include launching a computer application program one or more times using demand paging to load memory pages of the nonvolatile medium associated with the computer application into a volatile memory for execution of the computer application program. Memory address information corresponding to the pages of the nonvolatile medium corresponding to portions of the computer application program accessed during use of the computer application program is stored in a launch record. The computer application program is then launched using the address information stored in the launch record to read the nonvolatile medium addresses stored in the launch record in a single or consecutive read step. | 04-28-2011 |
20110107019 | APP (A PRIORI PROBABILITY) STORAGE DESIGN FOR LTE TURBO DECODER WITH QUADRATIC PERMUTATION POLYNOMIAL INTERLEAVER - Systems and methodologies are described that facilitate ensuring contention and/or collision free memory within a turbo decoder. A Posteriori Probability (APP) Random Access Memory (RAM) can be segmented or partitioned into two or more files with an interleaving sub-group within each file. This enables parallel operation in a turbo decoder and allows a turbo decoder to access multiple files simultaneously without memory access contention. | 05-05-2011 |
20110107020 | HIBERNATION SOLUTION FOR EMBEDDED DEVICES AND SYSTEMS - An embedded device is hibernated by storing state data of the embedded device to a non-volatile data storage medium, and powering off the embedded device. The embedded device is later woken up in response to the detection of a wakeup event from a wakeup source. The state data stored in the RAM of the embedded device comprises data in one or more registers of a Central Processing Unit (CPU) of the embedded device, one or more registers of a system-on-chip (SOC) of the embedded device, and the system and applications code and data. Waking the embedded device comprises loading, from the non-volatile data storage medium, initial memory sections that are used to run a kernel of the embedded device. State data that is stored in the RAM of a system may be compressed by dividing the RAM into a plurality of sections and independently choosing, for each section in the plurality of sections, a corresponding compression arithmetic. | 05-05-2011 |
20110107021 | Column Oriented In-Memory Page Caching - A one-dimensional array is allocated in an in-memory cache for each column in a set of tabular data. The data type of each one-dimensional array is set to be the same as the data type of the corresponding column in the tabular data. Once the one-dimensional arrays have been allocated in memory, a portion of the data from each column in the tabular data is stored in a corresponding one-dimensional array. The tabular data stored in the one-dimensional arrays in the cache may then be utilized to generate an on-screen display of a portion of the tabular data. | 05-05-2011 |
20110119437 | Sequentially Written Journal in a Data Store - Systems, methods, and computer storage media for storing and retrieving data from a data store in a distributed computing environment are provided. An embodiment includes receiving data at a data store comprising a sequential journal store, RAM, and a non-sequential target store. When RAM utilization is below a threshold, received data is stored to the RAM as a write cache for the target store and the journal store. But, when the utilization is above the threshold, the data is stored to the journal store without write-caching to the RAM for the target store. When the RAM utilization falls below a threshold, data committed to the journal store, but not write-cached to the RAM for the target store, is later read from the journal store and write-cached to the RAM for a target store. | 05-19-2011 |
20110119438 | FLASH MEMORY FILE SYSTEM - Apparatus having corresponding methods and computer-readable media comprise: a plurality of flash modules, wherein each of the flash modules comprises a cache memory; a flash memory; and a flash controller in communication with the cache memory and the flash memory; wherein the flash controller of a first one of the flash modules is configured to operate the cache memories together as a global cache; wherein the flash controller of a second one of the flash modules is configured to operate a second one of the flash modules as a directory controller for the flash memories. | 05-19-2011 |
20110125960 | FPGA Co-Processor For Accelerated Computation - A co-processor module for accelerating computational performance includes a Field Programmable Gate Array (“FPGA”) and a Programmable Logic Device (“PLD”) coupled to the FPGA and configured to control start-up configuration of the FPGA. A non-volatile memory is coupled to the PLD and configured to store a start-up bitstream for the start-up configuration of the FPGA. A mechanical and electrical interface is for being plugged into a microprocessor socket of a motherboard for direct communication with at least one microprocessor capable of being coupled to the motherboard. After completion of a start-up cycle, the FPGA is configured for direct communication with the at least one microprocessor via a microprocessor bus to which the microprocessor socket is coupled. | 05-26-2011 |
20110145491 | METHOD FOR CONTROLLING ACCESS TO REGIONS OF A MEMORY FROM A PLURALITY OF PROCESSES AND A COMMUNICATION MODULE HAVING A MESSAGE MEMORY FOR IMPLEMENTING THE METHOD - A method for controlling access to regions of a memory from a plurality of processes. In order to allow a plurality of processes to access the most recent data packets stored in the memory without any loss of data and without a waiting period, according to the present invention a first one of the processes controls part of an address bus using which another one of the processes accesses the memory, the first process influencing which memory region is accessed by the other process by controlling the part of the address bus. | 06-16-2011 |
20110153923 | HIGH SPEED MEMORY SYSTEM - A high speed memory system includes a plurality of memory devices; a plurality of buffers; and a memory controller. The plurality of buffers is respectively coupled to the plurality of memory devices. The memory controller is coupled to the plurality of buffers, for generating a plurality of control signals to the plurality of buffers and sequentially controlling access to the plurality of memory devices in a time-sharing manner according to a clock. | 06-23-2011 |
20110161574 | SETTING CONTROL APPARATUS AND METHOD FOR OPERATING SETTING CONTROL APPARATUS - A setting control apparatus includes a setting control part, a special register, and a read-out control part. The setting control part causes a control value used in a processing circuit to be stored in a temporary storage part, in response to an input of the control value. The special register is electrically connected to the processing circuit and serves as a storage element capable of storing the control value. The read-out control part controls a read-out operation for reading out the control value from the temporary storage part into the special register. The read-out control part performs the read-out operation at a predetermined timing after storing of the control value in the temporary storage part is completed. | 06-30-2011 |
20110161575 | MICROCODE REFACTORING AND CACHING - Methods and apparatus relating to microcode refactoring and/or caching are described. In some embodiments, an off-chip structure that stores microcode is shared by multiple processor cores. Other embodiments are also described and claimed. | 06-30-2011 |
20110173384 | Internet-Safe Computer - The present invention eliminates the possibility of problems with viruses, worms, identity theft, and other hazards that may result from the connection of a computer to the Internet. It does so by creating a new configuration of components within the computer. In addition to commonly used components, two new components are added. These are a secondary hard drive and a secondary random access memory. When the computer is connected to the Internet these secondary components are used in place of their primary counterparts. The primary hard drive is electronically isolated from the Internet, thus preventing Internet contamination of the primary hard drive. | 07-14-2011 |
20110197019 | METHOD OF ACCELERATING ACCESS TO PRIMARY STORAGE AND STORAGE SYSTEM ADOPTING THE METHOD - A RAM disk driver | 08-11-2011 |
20110213922 | PHASE CHANGE RANDOM ACCESS MEMORY DEVICE AND RELATED METHODS OF OPERATION - A method of operating a phase change random access memory (PRAM) device includes performing a program operation to store data in selected PRAM cells of the device, wherein the program operation comprises a plurality of sequential program loops. The method further comprises suspending the program operation in the middle of the program operation, and after suspending the program operation, resuming the program operation in response to a resume command. | 09-01-2011 |
20110219181 | PRE-FETCHING DATA INTO A MEMORY - Systems and methods for pre-fetching of data in a memory are provided. By pre-fetching stored data from a slower memory into a faster memory, the amount of time required for data retrieval and/or processing may be reduced. First, data is received and pre-scanned to generate a sample fingerprint. Fingerprints stored in a faster memory that are similar to the sample fingerprint are identified. Data stored in the slower memory associated with the identified stored fingerprints is copied into the faster memory. The copied data may be compared to the received data. Various embodiments may be included in a network memory architecture to allow for faster data matching and instruction generation in a central appliance. | 09-08-2011 |
20110258374 | METHOD FOR OPTIMIZING THE MEMORY USAGE AND PERFORMANCE OF DATA DEDUPLICATION STORAGE SYSTEMS - A method and system of optimizing the memory usage and performance of data deduplication storage systems includes organizing the metadata of data blocks needed by deduplicating storage systems. A three level hierarchy is used. Level 1 stores the metadata on disk along with the user data. Level 2 uses low latency storage (e.g. RAM and Solid State Disks) to cache the on-disk metadata for faster direct access. Level 3 organizes the fingerprints using a Trie and is entirely resident in RAM. Thus, the search to determine whether a data block is unique and a candidate for transfer can be executed more efficiently, while ensuring that the metadata is transactionally secure. | 10-20-2011 |
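The Level-3 fingerprint index in 20110258374 lends itself to a brief illustration. The Python sketch below is only an illustrative model under assumed structure and names (TrieNode, meta_location, SHA-1 fingerprints), not the patented design: an in-RAM trie keyed on fingerprint bytes answers whether an incoming block is already stored and, if so, points at its cached or on-disk metadata.

```python
import hashlib

class TrieNode:
    __slots__ = ("children", "meta_location")
    def __init__(self):
        self.children = {}          # next fingerprint byte -> child TrieNode
        self.meta_location = None   # e.g. a Level-2 cache slot or Level-1 disk offset

def insert(root, fingerprint: bytes, meta_location):
    node = root
    for byte in fingerprint:
        node = node.children.setdefault(byte, TrieNode())
    node.meta_location = meta_location

def lookup(root, fingerprint: bytes):
    """Return the metadata location for a known block, or None for a unique block."""
    node = root
    for byte in fingerprint:
        node = node.children.get(byte)
        if node is None:
            return None             # unique block: candidate for storage/transfer
    return node.meta_location

root = TrieNode()
fp = hashlib.sha1(b"block contents").digest()
insert(root, fp, meta_location=("ssd-cache", 42))
print(lookup(root, fp))                                     # ('ssd-cache', 42)
print(lookup(root, hashlib.sha1(b"other block").digest()))  # None
```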
20110264853 | Signal control device and signal control method - A signal control device includes: a dual port RAM from or to which data signals are read and written at predetermined operation timings by first and second CPUs connected to two ports, respectively; an address collision detection unit detecting collision between addresses in which the first and second CPUs respectively read and write the data signal from and to the dual port RAM; a first storage unit storing the data signal read by the first CPU; a second storage unit storing the data signal read from the address in which the second CPU writes the data signal to the dual port RAM when the collision between the addresses is detected; and a switching unit switching a reading source outputting the data signal to the port to which the first CPU is connected and outputting the read data signal to the first CPU entering a readable state. | 10-27-2011 |
20110283059 | TECHNIQUES FOR ACCELERATING COMPUTATIONS USING FIELD PROGRAMMABLE GATE ARRAY PROCESSORS - Various embodiments are disclosed for accelerating computations using field programmable gate arrays (FPGA). Various tree traversal techniques, architectures, and hardware implementations are disclosed. Various disclosed embodiments comprise hybrid architectures comprising a central processing unit (CPU), a graphics processor unit (GPU), a field programmable gate array (FPGA), and variations or combinations thereof, to implement raytracing techniques. Additional disclosed embodiments comprise depth-breadth search tree tracing techniques, blocking tree branch traversal techniques to avoid data explosion, compact data structure representations for ray and node representations, and multiplexed processing of multiple rays in a programming element (PE) to leverage pipeline bubbles. | 11-17-2011 |
20110302365 | STORAGE SYSTEM USING A RAPID STORAGE DEVICE AS A CACHE - Provided is a storage system using a high speed storage device as a cache. The storage system includes a large-volume first storage device, a high speed second storage device, and a Random Access Memory (RAM). The large-volume first storage device corresponds to a Hard Disk Drive (HDD), and the high speed second storage device corresponds to a Solid State Drive (SSD). Also, the high speed second storage device is used as a cache. The first storage device manages content files super block by super block, and the second storage device manages cache files block by block. | 12-08-2011 |
20110314209 | DIGITAL SIGNAL PROCESSING ARCHITECTURE SUPPORTING EFFICIENT CODING OF MEMORY ACCESS INFORMATION - A digital signal processing architecture supporting efficient coding of memory access information is provided. In an example embodiment, a digital signal processor includes an adjustment value register to store an initial adjustment value and a succeeding adjustment value. The digital signal processor may also include an address generator circuit to retrieve an instruction including a memory address value that is greater than N, and a further instruction including a further memory address value that is less than or equal to N. In addition, the digital signal processor may include a memory, which includes a high bank address space defined by memory locations that are uniquely identified with memory address values greater than N. The address generator circuit may access the high bank address space, using the initial adjustment value and the memory address value, or using the succeeding adjustment value and the further memory address value. | 12-22-2011 |
20110320693 | Method For Paramaterized Application Specific Integrated Circuit (ASIC)/Field Programmable Gate Array (FPGA) Memory-Based Ternary Content Addressable Memory (TCAM) - A method and apparatus for providing TCAM functionality in a custom integrated circuit (IC) is presented. An incoming key is broken into a predefined number of sub-keys. Each sub-key is used to address a Random Access Memory (RAM), one RAM for each sub-key. An output of the RAM is collected for each sub-key, each output comprising a Partial Match Vector (PMV). The PMVs are bitwise ANDed to obtain a value which is provided to a priority encoder to obtain an index. The index is used to access a result RAM to return a result value for the key. | 12-29-2011 |
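The lookup flow in 20110320693 can be made concrete with a short sketch. The Python below is a behavioral illustration under assumed sizes and names (SUBKEY_BITS, program_entry, exact-match programming only; ternary "don't care" handling is omitted), not the patented hardware: the key is split into sub-keys, each sub-key reads a Partial Match Vector from its own RAM, the PMVs are bitwise ANDed, a priority encoder picks an index, and that index reads the result RAM.

```python
SUBKEY_BITS = 4     # width of each sub-key
NUM_SUBKEYS = 2     # an 8-bit key split into two 4-bit sub-keys
NUM_ENTRIES = 8     # number of TCAM entries = width of each Partial Match Vector

# One PMV RAM per sub-key: pmv_rams[i][subkey_value] -> bitmask of matching entries.
pmv_rams = [[0] * (1 << SUBKEY_BITS) for _ in range(NUM_SUBKEYS)]
result_ram = [None] * NUM_ENTRIES   # value returned for each entry index

def split_key(key):
    """Break the incoming key into fixed-width sub-keys, most significant first."""
    return [(key >> (SUBKEY_BITS * (NUM_SUBKEYS - 1 - i))) & ((1 << SUBKEY_BITS) - 1)
            for i in range(NUM_SUBKEYS)]

def program_entry(index, key, result):
    """Program one exact-match entry into the PMV RAMs and the result RAM."""
    for i, sub in enumerate(split_key(key)):
        pmv_rams[i][sub] |= 1 << index
    result_ram[index] = result

def lookup(key):
    """AND the PMVs read for each sub-key, priority-encode, then read the result RAM."""
    match_vector = (1 << NUM_ENTRIES) - 1
    for i, sub in enumerate(split_key(key)):
        match_vector &= pmv_rams[i][sub]
    if match_vector == 0:
        return None
    index = (match_vector & -match_vector).bit_length() - 1  # lowest set bit wins
    return result_ram[index]

program_entry(0, 0xA5, "route-A")
program_entry(1, 0x3C, "route-B")
print(lookup(0xA5))   # route-A
print(lookup(0xFF))   # None (no entry matches)
```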
20120059982 | INTEGRATED CIRCUIT FOR EXECUTING EXTERNAL PROGRAM CODES AND METHOD THEREOF - An integrated circuit for executing external program codes comprises a processor, a read only memory for storing program codes of a first routine and a second routine, and a random access memory comprising a first memory block and a second memory block. The processor executes the first routine and uses a plurality of first memory units in the first memory block for accessing data. The processor executes the second routine and uses a plurality of second memory units in the first memory block for accessing data. The first and second memory units comprise one or more common memory units. The processor executes a third routine stored in an external read only memory and accesses the data of the third routine in the second memory block. | 03-08-2012 |
20120072656 | MULTI-TIER CACHING - A method for maintaining an index in a multi-tier data structure includes providing a plurality of storage devices forming the multi-tier data structure, caching an index of key-value pairs across the multi-tier data structure, wherein each of the key-value pairs includes a key, and one of a data value and a data pointer, the key-value pairs stored in the multi-tier data structure, providing a journal for interfacing with the multi-tier data structure, providing a plurality of zone allocators recording which zones of the multi-tier data structure are in use, and providing a plurality of zone managers for controlling access to cache lines of the multi-tier data structure through the journal and zone allocators, wherein each zone manager maintains a header object pointing to data to be stored in an allocated zone. | 03-22-2012 |
20120072657 | SYSTEM AND METHOD TO WRITE DATA USING PHASE-CHANGE RAM - A data recording system includes a file system configured to manage block-based input/output of data, a phase-change random access memory (PRAM) configured to write first data among the data in units of sub blocks, and a block abstract layer configured to receive a write command of the first data to a first particular block in the PRAM from the file system and log changed data information to a second particular block in the PRAM in units of sub blocks, and a method to provide the same. | 03-22-2012 |
20120072658 | PROGRAM, CONTROL METHOD, AND CONTROL DEVICE - Provided are a program, control method, and control device that can shorten start-up time. Page table entries in a Memory Management Unit (MMU) table are rewritten, on a computer system equipped with an MMU, so that a page fault will occur at every page, for all the pages necessary for the operation of a software program. Upon start-up, the stored memory image is loaded into the RAM in page units as page faults occur on accessed pages. Loading of unnecessary pages is not executed, and the start-up time can be shortened by the time such loading would otherwise take. The program, control method, and control device can be applied to personal computers and to electronic devices equipped with built-in computers. | 03-22-2012 |
20120084496 | VALIDATING PERSISTENT MEMORY CONTENT FOR PROCESSOR MAIN MEMORY - Subject matter disclosed herein relates to validating memory content in persistent main memory of a processor. | 04-05-2012 |
20120110254 | OBJECT PERSISTENCY - There is provided a method and computer system for object persistency that includes: running a program; storing an object of the program into a random access memory in response to determining that the object is a non-persistent object; and storing the object into a phase change memory in response to determining that the object is a persistent object. The method and computer system of the present disclosure do not need separate persistency layers, such that the programming model is lightweight, the persistency of object data is simpler and faster, and implicit transaction processing is supported, thereby saving a great deal of development and runtime cost. | 05-03-2012 |
20120137059 | CONTENT LOCALITY-BASED CACHING IN A DATA STORAGE SYSTEM - A data storage caching architecture supports using native local memory such as host-based RAM, and if available, Solid State Disk (SSD) memory for storing pre-cache delta-compression based delta, reference, and independent data by exploiting content locality, temporal locality, and spatial locality of data accesses to primary (e.g. disk-based) storage. The architecture makes excellent use of the physical properties of the different types of memory available (fast r/w RAM, low cost fast read SSD, etc) by applying algorithms to determine what types of data to store in each type of memory. Algorithms include similarity detection, delta compression, least popularly used cache management, conservative insertion and promotion cache replacement, and the like. | 05-31-2012 |
20120144103 | Two-Port Memory Implemented With Single-Port Memory Blocks - A two-port memory having a read port, a write port and a plurality of identical single-port RAM banks. The capacity of one of the single-port RAM banks is used to resolve collisions between simultaneous read and write accesses to the same single-port RAM bank. A read mapping memory stores instance information that maps logical banks and a spare bank to the single-port RAM banks for read accesses. Similarly, a write mapping memory stores write instance information that maps logical banks and a spare bank to the single-port RAM banks for write accesses. If simultaneous read and write accesses are not mapped to the same single-port RAM bank, read and write are performed simultaneously. However, if a collision exists, the write access is re-mapped to a spare bank identified by the write instance information, allowing simultaneous read and write. Both read and write mapping memories are updated to reflect any re-mapping. | 06-07-2012 |
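A behavioral model may help clarify the spare-bank remapping that 20120144103 describes. The Python below is only a sketch under assumptions (per-row mapping tables, a per-row spare bank, and a same-address bypass), not the patented circuit: when a simultaneous read and write would land on the same single-port bank, the write is redirected to the spare bank and both mapping copies are updated so later reads follow it.

```python
NUM_LOGICAL = 4
WORDS_PER_BANK = 16

# N logical banks backed by N+1 physical single-port banks.
physical_banks = [[0] * WORDS_PER_BANK for _ in range(NUM_LOGICAL + 1)]
# Per-row mapping tables: map[row][logical_bank] -> physical bank index.
read_map = [list(range(NUM_LOGICAL)) for _ in range(WORDS_PER_BANK)]
write_map = [list(range(NUM_LOGICAL)) for _ in range(WORDS_PER_BANK)]
spare_of_row = [NUM_LOGICAL] * WORDS_PER_BANK   # the physical bank unused in each row

def access(read_addr, write_addr, write_data):
    """One cycle with a simultaneous read and write; addresses are (logical bank, row)."""
    r_bank, r_row = read_addr
    w_bank, w_row = write_addr
    r_phys = read_map[r_row][r_bank]
    w_phys = write_map[w_row][w_bank]
    if (r_bank, r_row) == (w_bank, w_row):
        # Both ports hit the same address: write once and bypass the data to the read.
        physical_banks[w_phys][w_row] = write_data
        return write_data
    if r_phys == w_phys:
        # Physical-bank collision: redirect the write to this row's spare bank.
        w_phys, spare_of_row[w_row] = spare_of_row[w_row], w_phys
        write_map[w_row][w_bank] = w_phys
        read_map[w_row][w_bank] = w_phys     # keep both mapping copies consistent
    data = physical_banks[r_phys][r_row]     # the read is never stalled
    physical_banks[w_phys][w_row] = write_data
    return data

access((0, 5), (0, 3), 0xAB)         # collision on physical bank 0: write goes to the spare
print(access((0, 3), (3, 7), 0xEF))  # 171 (0xAB), read back through the updated mapping
```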
20120159056 | POWER FILTER IN DATA TRANSLATION LOOK-ASIDE BUFFER BASED ON AN INPUT LINEAR ADDRESS - A method and an apparatus for power filtering in a Translation Look-aside Buffer (TLB) are described. In the method and apparatus, power consumption reduction is achieved by suppressing physical address (PA) reads from random access memory (RAM) if the previously translated linear address (LA), or virtual address (VA), is the same as the currently requested LA. To provide the correct translation, the output of the TLB is maintained if the previously translated LA and the LA currently requested for translation are the same. | 06-21-2012 |
20120159057 | MEMORY POWER TOKENS - Techniques are described for controlling availability of memory. As memory write operations are processed, the contents of memory targeted by the write operations are read and compared to the data to be written. The availability of the memory for subsequent write operations is controlled based on the outcomes of the comparing. How many concurrent write operations are being executed may vary according to the comparing. In one implementation, a pool of tokens is maintained based on the comparing. The tokens represent units of power. When write operations require more power, for example when they will alter the values of more cells in PCM memory, they draw (and eventually return) more tokens. The token pool can act as a memory-availability mechanism in that tokens must be obtained for a write operation to be executed. When and how many tokens are reserved or recycled can vary according to implementation. | 06-21-2012 |
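The token-pool idea in 20120159057 can be sketched briefly. The Python below is an illustrative model under assumed numbers and policy (a fixed 64-token budget, byte-granular change counting), not the patented mechanism: each write first reads the targeted memory, counts how much would actually change, and must draw that many tokens before proceeding, returning them when it completes.

```python
import threading

class PowerTokenPool:
    """A pool of tokens, each representing a unit of write power."""
    def __init__(self, tokens):
        self._tokens = tokens
        self._cond = threading.Condition()

    def acquire(self, n):
        """Block until n tokens are available, then reserve them."""
        with self._cond:
            while self._tokens < n:
                self._cond.wait()
            self._tokens -= n

    def release(self, n):
        """Return tokens to the pool once the write operation completes."""
        with self._cond:
            self._tokens += n
            self._cond.notify_all()

pool = PowerTokenPool(tokens=64)   # total power budget for concurrent writes
memory = bytearray(1024)           # stand-in for a region of PCM

def write(addr, data):
    # Read-compare-write: only cells whose value changes consume write power.
    changed = sum(1 for old, new in zip(memory[addr:addr + len(data)], data) if old != new)
    pool.acquire(changed)
    try:
        memory[addr:addr + len(data)] = data
    finally:
        pool.release(changed)

write(0, b"hello")   # five changed bytes -> five tokens drawn, then returned
```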
20120159058 | MEMORY SYSTEM AND METHOD FOR WRITING DATA INTO MEMORY SYSTEM - A memory system of one embodiment includes: a nonvolatile memory including a plurality of word lines each connected to memory cells, each one of the memory cells being capable of storing two bits, the memory cells connected to one of the plurality of word lines constituting an upper page and a lower page, each one of the pages being a unit of data programming; a random access memory configured to store an address translation table indicating relationships between logical addresses designated by a host and physical addresses in the nonvolatile memory. The memory system of the embodiment further includes a memory controller which executes data fixing for saving the address translation table from the random access memory to the nonvolatile memory, and writes dummy data to at least one page subsequent to the page in which valid data has been written in the nonvolatile memory before executing the data fixing. | 06-21-2012 |
20120166721 | SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE, METHOD OF CONTROLLING SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE, AND CACHE DEVICE - There are provided a semiconductor integrated circuit device, a method of controlling a semiconductor integrated circuit device, and a cache device capable of efficiently implementing power saving, wherein the cache device includes a low-voltage operation enabling cache ( | 06-28-2012 |
20120239870 | FIFO APPARATUS FOR THE BOUNDARY OF CLOCK TREES AND METHOD THEREOF - A FIFO apparatus uses a first clock signal in a first clock domain to receive an input signal and uses a second clock signal in a second clock domain to output an output signal. An example apparatus includes: at least three write registers belonging to the first clock domain for receiving the input signal. Each of the write registers has a first output. A first controller belonging to the first clock domain enables the registers, in accordance with an order, to generate an initial signal. A multiplexer receives the first outputs. A second controller belonging to the second clock domain, receives the initial signal through an asynchronous interface and controls the multiplexer to output the first outputs in accordance with the order to be the output signal, wherein the second clock domain is a clock tree generated based on the first clock domain. | 09-20-2012 |
20120239871 | VIRTUAL ADDRESS PAGER AND METHOD FOR USE WITH A BULK ERASE MEMORY - A virtual address pager and method for use with a bulk erase memory is disclosed. The virtual address pager includes a page protection controller configured with a heap manager interface configured to receive only bulk erase memory-backed page requests for a plurality of memory pages. A RAM object cache controller is configured to store and bulk write data for a portion of the bulk erase memory. The page protection controller may have an operating system interface configured to generate a page memory access permission for each of the plurality of memory pages. The page protection controller may be configured to receive a virtual memory allocation request and generate the page memory access permission based on the virtual memory allocation request. | 09-20-2012 |
20120239872 | PRE-FETCHING DATA INTO A MEMORY - Systems and methods for pre-fetching of data in a memory are provided. By pre-fetching stored data from a slower memory into a faster memory, the amount of time required for data retrieval and/or processing may be reduced. First, data is received and pre-scanned to generate a sample fingerprint. Fingerprints stored in a faster memory that are similar to the sample fingerprint are identified. Data stored in the slower memory associated with the identified stored fingerprints is copied into the faster memory. The copied data may be compared to the received data. Various embodiments may be included in a network memory architecture to allow for faster data matching and instruction generation in a central appliance. | 09-20-2012 |
20120246400 | METHOD AND APPARATUS FOR PACKET SWITCHING - A method for performing packet lookups is provided. Packets (which each have a body and a header) are received and parsed to extract headers. A hash function is applied to each header, and each hashed header is compared with a plurality of binary rules stored within a primary table, where each binary rule is a binary version of at least one ternary rule from a first set of ternary rules. For each match failure with the plurality of rules, a secondary table is searched using the header associated with each match failure, where the secondary table includes a second set of ternary rules. | 09-27-2012 |
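The two-stage lookup in 20120246400 is straightforward to sketch. The Python below is only an illustration under assumed rule and header formats (4-byte headers, (value, mask) ternary rules, Python's built-in hash standing in for the hardware hash function): a hashed header is checked against exact binary rules in a primary table, and any match failure falls back to scanning the ternary rules in a secondary table.

```python
primary_table = {}     # hash(header) -> (header, action), for exact binary rules
secondary_table = []   # list of (value, mask, action) ternary rules

def add_binary_rule(header: bytes, action):
    primary_table[hash(header)] = (header, action)

def add_ternary_rule(value: int, mask: int, action):
    secondary_table.append((value, mask, action))

def lookup(header: bytes):
    # Stage 1: hashed exact-match search of the primary table.
    entry = primary_table.get(hash(header))
    if entry is not None and entry[0] == header:   # guard against hash collisions
        return entry[1]
    # Stage 2: on a match failure, scan the ternary rules in the secondary table.
    header_int = int.from_bytes(header, "big")
    for value, mask, action in secondary_table:
        if header_int & mask == value & mask:
            return action
    return "default-drop"

add_binary_rule(bytes.fromhex("0a000001"), "forward-port-1")
add_ternary_rule(0x0a000100, 0xffffff00, "forward-port-2")    # matches 10.0.1.0/24
print(lookup(bytes.fromhex("0a000001")))   # forward-port-1 (primary-table hit)
print(lookup(bytes.fromhex("0a000105")))   # forward-port-2 (ternary fallback)
```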
20120254526 | ROUTING, SECURITY AND STORAGE OF SENSITIVE DATA IN RANDOM ACCESS MEMORY (RAM) - A method and apparatus for securely storing and accessing processor state information in random access memory (RAM) at a time when the processor enters an inactive power state. | 10-04-2012 |
20120260031 | ENHANCED PIPELINING AND MULTI-BUFFER ARCHITECTURE FOR LEVEL TWO CACHE CONTROLLER TO MINIMIZE HAZARD STALLS AND OPTIMIZE PERFORMANCE - This invention is a data processing system including a central processing unit, an external interface, a level one cache, level two memory including level two unified cache and directly addressable memory. A level two memory controller includes a directly addressable memory read pipeline, a central processing unit write pipeline, an external cacheable pipeline and an external non-cacheable pipeline. | 10-11-2012 |
20120278547 | Method and system for hierarchically managing storage resources - The disclosure discloses a method for hierarchically managing storage resources, which comprises: planning a storage space, establishing an address management index, and storing or reading data according to the index and a type of the data. The disclosure further discloses a system for hierarchically managing storage resources. Through the method and system of the disclosure, space can be better saved, storage requirements of data of different sizes can be met, and the storage space can be flexibly recorded and released. | 11-01-2012 |
20120290780 | Multithreaded Operation of A Microprocessor Cache - A method of fetching data from a cache begins by preparing to fetch a first set of cache ways for a first data word of a first cache line using a first thread. Next, in parallel, a second set of cache ways for a first data word of a second cache line is prepared to be fetched using a second thread, and data associated with each cache way of the first set of cache ways are fetched using the first thread. Also performed in parallel, data associated with each cache way of the second set of cache ways is fetched using the second thread and a third set of cache ways for a second data word of the first cache line is prepared to be fetched using the first thread based on a selected cache way, the selected cache way selected from the first set of cache ways. | 11-15-2012 |
20120290781 | NONVOLATILE MEMORY DEVICE WITH INCREASED ENDURANCE AND METHOD OF OPERATING THE SAME - A non-volatile memory device including a memory unit configured to store user data and metadata and a memory controller unit. The memory controller unit is configured to access the memory unit in response to a request from an external host, create metadata which is to be recorded in the memory unit, and convert a format of the metadata based on a result of counting the number of times the memory unit is accessed. | 11-15-2012 |
20120297130 | STACK PROCESSOR USING A FERROELECTRIC RANDOM ACCESS MEMORY (F-RAM) FOR BOTH CODE AND DATA SPACE - A stack processor using a ferroelectric random access memory (F-RAM) for both code and data space which presents the advantages of easy stack pointer management inasmuch as the stack pointer is itself a memory address. Further, the time for saving all critical registers to memory is also minimized in that all registers are already maintained in non-volatile F-RAM per se. | 11-22-2012 |
20120317350 | USING EXTENDED ASYNCHRONOUS DATA MOVER INDIRECT DATA ADDRESS WORDS - An abstraction for storage class memory is provided that hides the details of the implementation of storage class memory from a program, and provides a standard channel programming interface for performing certain actions, such as controlling movement of data between main storage and storage class memory or managing storage class memory. | 12-13-2012 |
20120324156 | METHOD AND SYSTEM OF ORGANIZING A HETEROGENEOUS MEMORY ARCHITECTURE - An exemplary embodiment of the present invention may build data blocks in non-volatile memory. The corresponding parity blocks may be built in a fast, high endurance memory. | 12-20-2012 |
20130031304 | DATA STORAGE IN NONVOLATILE MEMORY - A method for data storage in a nonvolatile memory device includes compressing current data. The compressed current data is written to a space of the nonvolatile memory device that does not include the most recently written data. If the compressed current data is successfully written, identification data is stored on the nonvolatile memory device. The identification data identifies the written compressed current data as a currently valid version. | 01-31-2013 |
20130046922 | CONTENT ADDRESSABLE MEMORY AND METHOD OF SEARCHING DATA THEREOF - The present invention discloses a content addressable memory and a method of searching data thereof. The method includes generating a hash index data item from a received input data item; searching the cache for presence of a row tag of the RAM data row corresponding to the hash index data item; in response to presence, searching the RAM for a RAM data item corresponding to the input data item according to the corresponding row tag of the RAM data row; in response to absence, searching the RAM for a RAM data item corresponding to the input data item by using the hash index data item; and in response to finding a RAM data item corresponding to the input data item in the RAM, outputting data corresponding to the RAM data item. The method can accelerate data search in the CAM. | 02-21-2013 |
20130067155 | Memory Type-Specific Access Control Of A Field Of A Record - A computing system includes computer memory of a number of different memory types. An application program compiled for execution on the computing system controls access to a field of a record in the computer memory of the computing system by defining a record that includes one or more fields, the one or more fields including a restricted field having a specification of restricted accessibility when the restricted field is allocated in a particular memory type; allocating an instance of the record in memory of the particular memory type; and denying each attempted access of the restricted field while the record is allocated in the particular memory type. | 03-14-2013 |
20130073802 | Methods and Apparatus for Transferring Data Between Memory Modules - A computer-implemented method for transferring data from a computer system programmed to perform the method includes receiving in a memory buffer in a first memory module hosted by the computer system, a request for data stored in RAM of the first memory module from a host controller of the computer system, retrieving with the memory buffer, the data from the RAM, in response to the request, formatting with the memory buffer, the data from the RAM into formatted data in response to a defined software transport protocol, and initiating with the memory buffer, transfer of the formatted data to a storage destination external to the first memory module via an auxiliary interface of the memory buffer, bypassing the host controller of the computer system. | 03-21-2013 |
20130073803 | COMBINED PARALLEL/SERIAL STATUS REGISTER READ - Methods and devices are disclosed, such as those involving a solid state memory device that includes a status register configured to be read with a combined parallel and serial read scheme. One such solid state memory includes a status register configured to store a plurality of bits indicative of status information of the memory. One such method of providing status information in the memory device includes providing the status information of a memory device in a parallel form. The method also includes providing the status information in a serial form after providing the status information in a parallel form in response to receiving at least one read command. | 03-21-2013 |
20130091324 | DATA PROCESSING APPARATUS AND VALIDITY VERIFICATION METHOD - A data processing apparatus includes an auxiliary storage device having target verification data stored therein, a program memory having a validity verification program stored therein, a first RAM (Random Access Memory), a second RAM, and an execution unit configured to execute a validity verification process in accordance with the validity verification program stored in the program memory. The execution unit is configured to copy the target verification data from the auxiliary storage device into the first RAM, execute the validity verification process on the copied target verification data in the first RAM, and use the second RAM as a work area in a case of executing the validity verification process. | 04-11-2013 |
20130111119 | RAM BLOCK DESIGNED FOR EFFICIENT GANGING | 05-02-2013 |
20130117504 | EMBEDDED MEMORY AND DEDICATED PROCESSOR STRUCTURE WITHIN AN INTEGRATED CIRCUIT - An integrated circuit can include a programmable circuitry operable according to a first clock frequency and a block random access memory. The block random access memory can include a random access memory (RAM) element having at least one data port and a memory processor coupled to the data port of the RAM element and to the programmable circuitry. The memory processor can be operable according to a second clock frequency that is higher than the first clock frequency. Further, the memory processor can be hardwired and dedicated to perform operations in the RAM element of the block random access memory. | 05-09-2013 |
20130132658 | Device For Executing Program Instructions and System For Caching Instructions - The system of the present invention includes an instruction fetch unit | 05-23-2013 |
20130132659 | MICROCONTROLLER AND METHOD OF CONTROLLING MICROCONTROLLER - A microcontroller includes a RAM control unit configured to: perform a RAM access operation when an address designated by a CPU is within a range of a designated area; and read a program from a Flash EEPROM when the address is out of the range of the designated area. As the RAM access operation, the RAM control unit is configured to: read the program from the Flash EEPROM, store the read program into the RAM, and change valid bit information into a valid state, when the valid bit information indicates an invalid state; and output the program stored in the RAM to the CPU when the valid bit information indicates the valid state. | 05-23-2013 |
20130138876 | COMPUTER SYSTEM WITH MEMORY AGING FOR HIGH PERFORMANCE - A memory manager in a computer system that ages memory for high performance. The efficiency of operation of the computer system can be improved by dynamically setting an aging schedule based on a predicted time for trimming pages from a working set. An aging schedule that generates aging information that better discriminates among pages in a working set based on activity level enables selection of pages to trim that are less likely to be accessed following trimming. As a result of being able to identify and trim less active pages, inefficiencies arising from restoring trimmed pages to the working set are avoided. | 05-30-2013 |
20130159614 | PAGE BUFFERING IN A VIRTUALIZED, MEMORY SHARING CONFIGURATION - An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory. | 06-20-2013 |
20130166833 | ELECTRONIC APPARATUS WITH A SAFE CONDITIONAL ACCESS SYSTEM (CAS) AND CONTROL METHOD THEREOF - An electronic apparatus is provided, which includes a central processing unit (CPU), a first memory unit which performs communication with the CPU, and a second memory unit which stores therein conditional access system (CAS) software and platform software. According to the method of controlling the apparatus, upon booting, the CPU copies the CAS software to an internal memory area which may be within the CPU, copies the platform software to the first memory unit and executes the CAS and platform software, and executes CAS operations through communication between the CAS software and the platform software. | 06-27-2013 |
20130185491 | MEMORY CONTROLLER AND A METHOD THEREOF - A memory controller includes a mixed buffer and an arbiter. The mixed buffer includes at least one single-port buffer and at least one multi-port buffer for managing data flow between a host and a storage device. The arbiter determines an order of access to the mixed buffer among a plurality of masters. The data to be written or read are partitioned into at least two parts, which are then moved to the single-port buffer and the multi-port buffer, respectively. | 07-18-2013 |
20130227209 | METHOD AND APPARATUS FOR CONTENT DERIVED DATA PLACEMENT IN MEMORY - Apparatus and method for placing data based on the content of the data in random access memory such that indexing operations are not required. A strong (e.g., cryptographic) hash is applied to a data element resulting in a signature. A weaker hash function is then applied to the signature to generate a storage location in memory for the data element. The weaker hash function assigns multiple data elements to the same storage location while the signature comprises a unique identifier for locating a particular data element at this location. In one embodiment a plurality of weak hash functions are applied successively to increase storage space utilization. In other embodiments, the assigned storage location can be determined by one or more attributes of the data element and/or the storage technology, e.g., long-lived versus short-lived data and/or different regions of the memory having different performance (e.g., access latency, memory lifetime) characteristics. | 08-29-2013 |
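The placement scheme in 20130227209 maps naturally onto a small sketch. The Python below is illustrative only, with assumed choices (SHA-256 as the strong hash, 4-byte slices of the signature as the successive weak hashes, 1024 buckets of 4 slots): the strong hash yields the signature, and successive weak hashes of that signature pick candidate storage locations, so no separate index is needed.

```python
import hashlib

NUM_BUCKETS = 1024
BUCKET_SLOTS = 4
buckets = [[] for _ in range(NUM_BUCKETS)]   # each bucket holds (signature, data) pairs

def signature(data: bytes) -> bytes:
    """Strong (cryptographic) hash of the data element."""
    return hashlib.sha256(data).digest()

def weak_hash(sig: bytes, attempt: int) -> int:
    """A family of weaker hashes: a different 4-byte slice of the signature per attempt."""
    offset = (attempt * 4) % (len(sig) - 4)
    return int.from_bytes(sig[offset:offset + 4], "big") % NUM_BUCKETS

def store(data: bytes) -> bytes:
    sig = signature(data)
    for attempt in range(4):                   # apply the weak hashes successively
        bucket = buckets[weak_hash(sig, attempt)]
        if any(s == sig for s, _ in bucket):   # already present: the signature identifies it
            return sig
        if len(bucket) < BUCKET_SLOTS:
            bucket.append((sig, data))
            return sig
    raise MemoryError("all candidate storage locations are full")

def load(sig: bytes) -> bytes:
    for attempt in range(4):
        for s, d in buckets[weak_hash(sig, attempt)]:
            if s == sig:
                return d
    raise KeyError("signature not found")

sig = store(b"some block of data")
assert load(sig) == b"some block of data"
```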
20130238847 | Interruptible Write Block - A disclosed embodiment is an interruptible write block comprising a primary register having an input coupled to an input of the interruptible write block, a secondary register having an input selectably coupled to an output of the primary register and to an output of the secondary register through an interrupt circuit. The interrupt circuit is utilized to interrupt flow of new data from the primary register to the secondary register during an interrupt of a write operation, such that upon resumption of the write operation the secondary register contains valid data. A method of utilizing an interruptible write block during a write operation comprises loading data into a primary register, interrupting the write operation to perform one or more other operations, loading the data into a secondary register while loading new data into the primary register, and resuming the write operation using valid data from the secondary register. | 09-12-2013 |
20130246696 | System and Method for Implementing a Low-Cost CPU Cache Using a Single SRAM - One embodiment of the present invention relates to a CPU cache system that stores tag information and cached data in the same SRAM. The system includes an SRAM memory device, a lookup buffer, and a cache controller. The SRAM memory device includes a cache data section and a cache tag section. The cache data section includes data entries and the tag section includes tag entries associated with the data entries. The tag entries include memory addresses that correspond to the data entries. The lookup buffer includes lookup entries associated with at least a portion of the data entries. The number of lookup entries is less than the number of tag entries. The cache controller is configured to perform a speculative read of the cache data section and a cache check of the lookup buffer simultaneously or in a single cycle. | 09-19-2013 |
20130246697 | Organizing Data in a Hybrid Memory for Search Operations - Methods, systems, and computer readable storage medium embodiments for configuring a lookup table, such as an access control list (ACL) for a network device are disclosed. Aspects of these embodiments include storing a plurality of data entries in a memory, each of the stored plurality of data entries including a header part and a body part, and encoding each of a plurality of bit-sequences in the header part of a stored data entry from the plurality of data entries to indicate a bit comparing action associated with a respective bit sequence in the body part of the stored data entry. Other embodiments include searching a lookup table in a network device. | 09-19-2013 |
20130254472 | ADJUSTING A MEMORY TRANSFER SETTING WITH LARGE MAIN MEMORY CAPACITY - An apparatus for adjusting a memory transfer setting includes a storage device storing machine-readable code and a processor executing the machine-readable code. The machine-readable code includes a determination module determining that an amount of main memory exceeds a threshold percentage of secondary storage on an information handling device. The machine readable code also includes an adjustment module adjusting a memory transfer setting on the information handling device in response to the determination module determining that the amount of main memory exceeds the threshold percentage. | 09-26-2013 |
20130262756 | SNAPSHOT CONTENT METADATA FOR APPLICATION CONSISTENT BACKUPS - At least one of configuration information of a storage volume stored on a storage system and characteristics of a snapshot, including characteristics of one or more files stored in the snapshot, are identified. Snapshot content metadata, comprising the at least one of the identified characteristics and the configuration information, is created. The snapshot content metadata is associated with the snapshot. | 10-03-2013 |
20130282969 | IMPLEMENTING STORAGE ADAPTER PERFORMANCE OPTIMIZATION WITH HARDWARE OPERATIONS COMPLETION COALESCENCE - A method and controller for implementing storage adapter performance optimization with chained hardware operations completion coalescence, and a design structure on which the subject controller circuit resides are provided. The controller includes a plurality of hardware engines, and a processor. A plurality of the command blocks are selectively arranged by firmware in a predefined chain including a plurality of simultaneous command blocks. All of the simultaneous command blocks are completed in any order by respective hardware engines, then the next command block in the predefined chain is started under hardware control without any hardware-firmware (HW-FW) interlocking with the simultaneous command block completion coalescence. | 10-24-2013 |
20130282970 | Pre-Fetching Data into a Memory - Systems and methods for pre-fetching of data in a memory are provided. By pre-fetching stored data from a slower memory into a faster memory, the amount of time required for data retrieval and/or processing may be reduced. First, data is received and pre-scanned to generate a sample fingerprint. Fingerprints stored in a faster memory that are similar to the sample fingerprint are identified. Data stored in the slower memory associated with the identified stored fingerprints is copied into the faster memory. The copied data may be compared to the received data. Various embodiments may be included in a network memory architecture to allow for faster data matching and instruction generation in a central appliance. | 10-24-2013 |
20130290619 | Apparatus and Method for Sequential Operation on a Random Access Device - The present disclosure involves a method. As a part of the method, a logically sequential range of memory blocks is allocated for sequential access. A pointer is initialized with an address of a first memory block that is within the range of the memory blocks. In response to a data write next request, data is written into the range of the memory blocks, starting with the first memory block and continuing sequentially in subsequent memory blocks within the range until the data write next request is completed. Thereafter, the pointer is updated based on a last memory block in which data is written. | 10-31-2013 |
20130290620 | STORAGE CONTROLLING APPARATUS, STORAGE APPARATUS AND PROCESSING METHOD - A storage controlling apparatus includes a command decoder and command processing section. The command decoder decides whether or not a plurality of access object addresses of different commands included in a command string correspond to words different from each other in a same one of blocks of a memory cell array which have a common plate. The command processing section collectively and successively executes, when it is decided that the access object addresses of the commands correspond to the words different from each other in the same block of the memory cell array, those of operations in processing of the commands in which an equal voltage is applied as a drive voltage between the plate and a bit line. | 10-31-2013 |
20130311717 | MAGNETIC RANDOM ACCESS MEMORY - A magnetic random access memory (MRAM), and a memory module, memory system including the same, and method for controlling the same are disclosed. The MRAM includes magnetic memory cells configured to change between at least two states according to a magnetization direction, and a mode register supporting a plurality of operational modes. | 11-21-2013 |
20130318290 | Garbage Collection Implemented in Hardware - A computing device is provided and includes a memory module, a sweep engine, a root snapshot module, and a trace engine. The memory module has a memory implemented as at least one hardware circuit. The memory module uses a dual-ported memory configuration. The sweep engine includes a stack pointer. The sweep engine is configured to send a garbage collection signal if the stack pointer falls below a specified level. The sweep engine is in communication with the memory module to reclaim memory. The root snapshot module is configured to take a snapshot of roots from at least one mutator if the garbage collection signal is received from the sweep engine. The trace engine receives roots from the root snapshot module and is in communication with the memory module to receive data. | 11-28-2013 |
20130326131 | Method for Security Context Switching and Management in a High Performance Security Accelerator System - A security context management system within a security accelerator that can operate with high latency memories and can provide line-rate processing on several security protocols. The method employed hides the memory latencies by having the processing engines working in a pipelined fashion. It is designed to auto-fetch security context from external memory, and will allow any number of simultaneous security connections by caching only limited contexts on-chip and fetching other contexts as needed. The module does the task of fetching and associating security context with ingress packet, and populates the security context RAM with data from the external memory. | 12-05-2013 |
20130332664 | System and Method for Managing Network Navigation - A file comprising an application and data corresponding to a status of the application at a particular time is maintained in a first memory of a user device, the first memory comprising a persistent storage. The application may be a software application, for example. In response to a request, the file is transferred to a second memory of the device, the second memory comprising a random-access memory. The file is activated, or set up, as a running application. The user device may be a cell phone, a wireless telephone, a personal digital assistant, a personal computer, a laptop computer, a workstation, a mainframe computer, etc. In one embodiment, the file is brought to a foreground of the user device. | 12-12-2013 |
20130332665 | MEMORY WITH BANK-CONFLICT-RESOLUTION (BCR) MODULE INCLUDING CACHE - A memory device includes a block of memory cells and a cache. The block of memory cells is not a random access memory with multiple ports. The block of memory cells is partitioned into subunits that have only a single port. The cache is coupled to the block of memory cells adapted to handle a plurality of accesses to a same subunit of memory cells without a conflict such that the memory appears to be a random access memory to said plurality of accesses. A method of operating the memory, and a memory with bank-conflict-resolution (BCR) module including cache are also provided. | 12-12-2013 |
20130332666 | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - According to one embodiment, an information processor configured to execute codes described in Open Computing Language (OpenCL) includes: a first cache; a second cache; a global memory; and an arithmetic module. The first cache is with local scope and configured to be capable of being referred to by all work items in one workgroup. The second cache is with global scope and configured to be capable of being referred to by all work items in a plurality of workgroups. The global memory is with global scope and configured to be capable of being referred to by all work items in a plurality of workgroups. The arithmetic module is configured to execute a code referring to the second cache as a scratch-pad memory. | 12-12-2013 |
20130339591 | RELAYING APPARATUS, RELAY HISTORY RECORDING METHOD, AND DATA PROCESSING APPARATUS - When a relaying apparatus receives communication unit data transmitted from a processing apparatus that performs data processing, the relaying apparatus extracts preset data from the received communication unit data as trace information and calculates the number of pieces of the received communication unit data. History information of the received communication unit data is selected from the extracted trace information and statistical information obtained from the result of the calculation. The selected information is recorded in a storage apparatus available to the processing apparatus. | 12-19-2013 |
20130346680 | EMULATED ELECTRICALLY ERASABLE MEMORY HAVING AN ADDRESS RAM FOR DATA STORED IN FLASH MEMORY - A memory system comprises a memory controller, an address RAM coupled to the memory controller, and a non-volatile memory coupled to the memory controller. The non-volatile memory has an address portion and a data portion. The address portion of the non-volatile memory provides data portion addresses and data portion addresses of valid data to the memory controller. The memory controller loads the data portion addresses and stores them in the address RAM at locations defined by the data portion addresses of valid data. The memory controller uses the data portion addresses, and locations of data blocks within the address RAM, to locate the data blocks within the data portion of non-volatile memory. The memory controller uses the data portion addresses, and locations of the data block addresses within the address RAM, to locate data blocks within the data portion of non-volatile memory. | 12-26-2013 |
20130346681 | MAGNETIC RANDOM ACCESS MEMORY - A magnetic random access memory is configured as a read/write memory and at least a first section of the magnetic random access memory is configured to be converted to a read only memory. | 12-26-2013 |
20140006697 | MEMORY BANK HAVING WORKING STATE INDICATION FUNCTION | 01-02-2014 |
20140025878 | Terminal for Accessing Wireless Network and Running Method thereof - Disclosed in the disclosure are a terminal for accessing a wireless network and a method for running the same, wherein the terminal includes an expanded external RAM and is configured to store the terminal firmware program obtained from the host side into the expanded external RAM, run the same and interact with the host side to complete a service. The terminal in the disclosure need not expand the FLASH storage space; it stores the terminal firmware program obtained from the host side into its expanded external RAM, then runs the program and interacts with the host side to complete the service. The terminal does not use FLASH to store the terminal firmware program, avoiding failures in which the terminal cannot be upgraded or used because of a FLASH exception, and reducing the cost of the wireless network access terminal. | 01-23-2014 |
20140032826 | METHOD OF TRAINING MEMORY CORE AND MEMORY SYSTEM - A method of training a memory device included in a memory system is provided. The method includes testing memory core parameters for a memory core of the memory device during a booting-up sequence of the memory system; determining trimmed memory core parameters based on the test results; storing the determined trimmed memory core parameters; and applying the trimmed memory core parameter to the memory device during a normal operation of the memory device. | 01-30-2014 |
20140032827 | DATA INVERSION BASED APPROACHES FOR REDUCING MEMORY POWER CONSUMPTION - Disclosed herein are approaches to reducing a guardband (margin) used for minimum voltage supply (Vcc) requirements for memory such as cache. | 01-30-2014 |
20140047173 | APPLICATION PRE-LAUNCH TO REDUCE USER INTERFACE LATENCY - A device stores a plurality of applications and a list of associations for those applications. The applications are preferably stored within a secondary memory of the device, and once launched each application is loaded into RAM. Each application is preferably associated to one or more of the other applications. Preferably, no applications are launched when the device is powered on. A user selects an application, which is then launched by the device, thereby loading the application from the secondary memory to RAM. Whenever an application is determined to be associated with a currently active state application, and that associated application has yet to be loaded from secondary memory to RAM, the associated application is pre-launched such that the associated application is loaded into RAM, but is set to an inactive state. | 02-13-2014 |
20140059282 | HYBRID NANOTUBE/CMOS DYNAMICALLY RECONFIGURABLE ARCHITECTURE AND SYSTEM THEREFORE - A hybrid nanotube, high-performance, dynamically reconfigurable architecture, NATURE, is provided, and a design optimization flow method and system, NanoMap. A run-time reconfigurable architecture is provided by associating a non-volatile universal memory to each logic element to enable cycle-by-cycle reconfiguration and logic folding, while remaining CMOS compatible. Through logic folding, significant logic density improvement and flexibility in performing area-delay tradeoffs are possible. NanoMap incorporates temporal logic folding during the logic mapping, temporal clustering and placement steps. NanoMap provides for automatic selection of a best folding level, and uses force-directed scheduling to balance resources across folding stages. Mapping can thereby target various optimization objectives and user constraints. A high-density, high-speed carbon nanotube RAM can be implemented as the universal memory, allowing on-chip multi-context configuration storage, enabling fine-grain temporal logic folding, and providing a significant increase in relative logic density. | 02-27-2014 |
20140059283 | CONTROLLING A MEMORY ARRAY - Methods and systems for controlling a memory array are provided. A method of controlling a memory array includes: providing a next index to be read that indicates a location in the memory array from which to retrieve an output; reading validity information from a validity memory unit; comparing the next index with a last read index stored in an index memory unit; reading the output from an output memory unit when the last read index is the same as the next index and the validity information indicates the output in the output memory unit is valid; and reducing power to the memory array when the output is read from the output memory unit. | 02-27-2014 |
20140059284 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS MEMORY SPACE MANAGEMENT FOR STORAGE CLASS MEMORY - Embodiments of the present invention provide a system, method and computer program products for memory space management for storage class memory. One embodiment comprises a method for information storage in an information technology environment. The method comprises storing data in a storage class memory (SCM) space, and storing storage management metadata corresponding to said data, in the SCM in a first data structure. The method further includes buffering storage management metadata corresponding to said data, in a main memory in a second data structure. | 02-27-2014 |
20140068164 | EFFICIENT MEMORY CONTENT ACCESS - A memory content access interface may include, but is not limited to: a read-path memory partition; a write-path memory partition; and a memory access controller configured to regulate access to at least one of the read-path memory partition and the write-path memory partition by an external controller. | 03-06-2014 |
20140068165 | SPLITTING A REAL-TIME THREAD BETWEEN THE USER AND KERNEL SPACE - A method is provided for exchanging large amounts of memory within an operating system containing consumer and producer threads located in a user space and a kernel space, by controlling ownership of a plurality of RAM banks shared by multiple processes or threads in a consumer-producer relationship. The method includes sharing at least two RAM banks between a consumer process or thread and a producer process or thread, thereby allowing memory to be exchanged between said consumer process or thread and said producer process or thread, and alternately assigning ownership of a shared RAM bank to either said consumer process or thread or said producer process or thread, thereby allowing said producer process or thread to insert data into said shared RAM bank and said consumer process or thread to access data from said shared RAM bank. | 03-06-2014 |
20140068166 | MEMORY CONTROL TECHNIQUE - A disclosed information processing apparatus includes: one or plural memories, each of which includes a self-refresh function; and a memory control unit that stops a patrol that includes reading and error correction with respect to a memory among the one or plural memories, upon starting self-refresh of the one or plural memories, and that restarts the patrol, upon stopping the self-refresh of the one or plural memories. A disclosed memory control unit includes: a patrol unit that performs a patrol including reading and error correction with respect to a memory among one or plural memories that has a self-refresh function; and a controller that stops the patrol, upon starting self-refresh of the one or plural memories, and that restarts the patrol, upon stopping the self-refresh of the one or plural memories. | 03-06-2014 |
20140095777 | SYSTEM CACHE WITH FINE GRAIN POWER MANAGEMENT - Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple small sections, and each section is supplied with power from a separately controllable power supply. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. Incoming requests are grouped together based on which section of the system cache they target. When enough requests that target a given section have accumulated, the voltage supplied to the given section is increased to a voltage sufficient for access. Then, once the given section has enough time to ramp-up and stabilize at the higher voltage, the waiting requests may access the given section in a burst of operations. | 04-03-2014 |
20140095778 | METHODS, SYSTEMS AND APPARATUS TO CACHE CODE IN NON-VOLATILE MEMORY - Methods and apparatus are disclosed to cache code in non-volatile memory. A disclosed example method includes identifying an instance of a code request for first code, identifying whether the first code is stored on non-volatile (NV) random access memory (RAM) cache, and when the first code is absent from the NV RAM cache, adding the first code to the NV RAM cache when a first condition associated with the first code is met and preventing storage of the first code to the NV RAM cache when the first condition is not met. | 04-03-2014 |
20140115243 | RESISTIVE RANDOM-ACCESS MEMORY DEVICES - A resistive random-access memory device includes a memory array, a read circuit, a write-back logic circuit and a write-back circuit. The read circuit reads the data stored in a selected memory cell and accordingly generates a first control signal. The write-back logic circuit generates a write-back control signal according to the first control signal and a second control signal. The write-back circuit performs a write-back operation on the selected memory cell according to the write-back control signal and a write-back voltage, so as to change a resistance state of the selected memory cell from a low resistance state to a high resistance state, and generates the second control signal according to the resistance state of the selected memory cell. | 04-24-2014 |
20140143485 | TECHNIQUE FOR OPTIMIZING STATIC RANDOM-ACCESS MEMORY PASSIVE POWER CONSUMPTION - A static random-access memory (SRAM) includes one or more bit cell rows that each include a collection of bit cells. Each bit cell row is coupled to two or more different wordlines, where each wordline associated with a given bit cell row provides memory access to a different subset of bit cells within that bit cell row. | 05-22-2014 |
20140143486 | FLEXIBLE ARBITRATION SCHEME FOR MULTI ENDPOINT ATOMIC ACCESSES IN MULTICORE SYSTEMS - The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. The invention unifies all transaction sizes belonging to a slave prior to arbitrating the transactions in order to reduce the complexity of the arbitration process and to provide optimum bandwidth management among all masters. Two consecutive slots are assigned per cache line access to automatically guarantee the atomicity of all transactions within a single cache line. The need for synchronization among all the banks of a particular SRAM is eliminated, as synchronization is accomplished by assigning back-to-back slots. | 05-22-2014 |
20140149650 | Caching Program Optimization - A method for optimizing performance of programs has steps for scanning storage mechanisms of the computing appliance by executing a configuration utility by a Central Processing Unit (CPU) of the computing appliance to find and identify installed programs, comparing the determined installed programs to a database (dB) of information and files prepared to optimize performance of specific programs through caching, and determining matches between the installed programs and specific programs having information and files in the dB, selecting installed programs to optimize for performance, partitioning a portion of system RAM of the computing appliance as cache, and loading information and files from local storage mechanisms for each program selected to the cache partitioned in system RAM, enabling the programs selected to at least read data in operation from the cache portion partitioned in system RAM. | 05-29-2014 |
20140164688 | SOC SYSTEM AND METHOD FOR OPERATING THE SAME - A SOC system includes a central processing unit; a memory management unit receiving a virtual address from the central processing unit and converting the virtual address into a physical address; a main memory implemented by a volatile memory and directly accessed through the physical address converted by the memory management unit; and a storage implemented by a nonvolatile memory separate from the main memory and including a first area directly accessed through the physical address converted by the memory management unit. | 06-12-2014 |
20140181384 | Memory Scheduling for RAM Caches Based on Tag Caching - A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison. | 06-26-2014 |
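The routing decision described above can be sketched roughly as follows; the tag-buffer size, linear lookup, and return codes are invented for illustration only, not the claimed scheduler.

    #include <stdbool.h>
    #include <stdint.h>

    #define TAG_BUFFER_ENTRIES 16

    struct tag_buffer {
        uint32_t tags[TAG_BUFFER_ENTRIES];
        bool     valid[TAG_BUFFER_ENTRIES];
    };

    /* Check whether a request's tag is already held in the tag buffer. */
    bool tag_buffer_hit(const struct tag_buffer *tb, uint32_t tag) {
        for (int i = 0; i < TAG_BUFFER_ENTRIES; i++)
            if (tb->valid[i] && tb->tags[i] == tag)
                return true;
        return false;
    }

    /* 0 = issue to the memory cache, 1 = issue to main memory (illustrative). */
    int route_request(const struct tag_buffer *tb, uint32_t request_tag) {
        return tag_buffer_hit(tb, request_tag) ? 0 : 1;
    }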
20140201434 | Managing Volatile File Copies - Persistent files are copied from persistent memory to volatile memory to yield volatile files. At least some requests to open for writing or to close to writing persistent files are redirected to the corresponding volatile files. Openings to writing and closings to writing of volatile files are tracked to yield a synchronization record. Persistent files are synchronized to volatile files based on the synchronization record. | 07-17-2014 |
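One way to picture the synchronization record above is a per-file dirty flag, as in this hedged C sketch; the fixed file table and stubbed copy routine are assumptions, not the described implementation.

    #include <stdbool.h>

    #define FILE_COUNT 32

    struct file_pair {
        bool dirty;                        /* opened for writing since last sync */
    };

    static struct file_pair files[FILE_COUNT];

    void on_open_for_write(int idx) { files[idx].dirty = true; }
    void on_close_writing(int idx)  { (void)idx; /* flag kept until next sync */ }

    static void copy_volatile_to_persistent(int idx) { (void)idx; /* stub */ }

    /* Synchronize persistent files to their volatile copies using the record. */
    void synchronize(void) {
        for (int i = 0; i < FILE_COUNT; i++)
            if (files[i].dirty) {
                copy_volatile_to_persistent(i);
                files[i].dirty = false;
            }
    }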
20140223090 | ACCESSING CONTROL REGISTERS OVER A DATA BUS - An electronic apparatus that includes a controlled device with a plurality of control registers. A data bus is coupled between the controlled device and a processor, and an interface is configured to receive a plurality of portions of data read from or to be written to the plurality of control registers. The electronic apparatus also includes a correlation circuit configured to associate at least some of the plurality of portions of data with respective physical addresses of the plurality of control registers based on respective positions of the respective portions of data within the plurality. | 08-07-2014 |
20140237173 | AGGREGATION OF WRITE TRAFFIC TO A DATA STORE - A method and a processing device are provided for sequentially aggregating data to a write log included in a volume of a random-access medium. When data of a received write request is determined to be suitable for sequentially aggregating to a write log, the data may be written to the write log and a remapping tree, for mapping originally intended destinations on the random-access medium to one or more corresponding entries in the write log, may be maintained and updated. At time periods, a checkpoint may be written to the write log. The checkpoint may include information describing entries of the write log. One or more of the checkpoints may be used to recover the write log, at least partially, after a dirty shutdown. Entries of the write log may be drained to respective originally intended destinations upon an occurrence of one of a number of conditions. | 08-21-2014 |
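A rough sketch of the aggregation path described above, using a flat array in place of the remapping tree and a stubbed checkpoint writer; entry sizes, counts, and the drain policy are assumptions for illustration.

    #include <stdint.h>

    #define LOG_ENTRIES      1024
    #define BLOCK_COUNT      4096
    #define CHECKPOINT_EVERY   64

    struct log_entry { uint64_t target_lba; uint8_t data[512]; };

    static struct log_entry write_log[LOG_ENTRIES];
    static int log_tail;                      /* next free slot in the write log */
    static int remap[BLOCK_COUNT];            /* LBA -> log index of newest copy */

    void remap_init(void) {
        for (int i = 0; i < BLOCK_COUNT; i++)
            remap[i] = -1;                    /* -1 means no logged copy yet */
    }

    static void write_checkpoint(int tail) { (void)tail; /* stub: describe log entries */ }

    /* Aggregate one incoming write sequentially into the log. */
    int log_write(uint64_t lba, const uint8_t data[512]) {
        if (lba >= BLOCK_COUNT || log_tail >= LOG_ENTRIES)
            return -1;                        /* a full log would be drained to its targets */
        write_log[log_tail].target_lba = lba;
        for (int i = 0; i < 512; i++)
            write_log[log_tail].data[i] = data[i];
        remap[lba] = log_tail++;              /* remember where this LBA now lives */
        if (log_tail % CHECKPOINT_EVERY == 0)
            write_checkpoint(log_tail);       /* periodic checkpoint for recovery */
        return 0;
    }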
20140244920 | SCHEME TO ESCALATE REQUESTS WITH ADDRESS CONFLICTS - Techniques for escalating a real time agent's request that has an address conflict with a best effort agent's request. A best effort request can be allocated in a memory controller cache but can progress slowly in the memory system due to its low priority. Therefore, when a real time request has an address conflict with an older best effort request, the best effort request can be escalated if it is still pending when the real time request is received at the memory controller cache. Escalating the best effort request can include setting the push attribute of the best effort request or sending another request with a push attribute to bypass or push the best effort request. | 08-28-2014 |
20140244921 | ASYMMETRIC MULTITHREADED FIFO MEMORY - A First-in First-out (FIFO) memory comprising a latch array and a RAM array and operable to buffer data for multiple threads. Each array is partitioned into multiple sections, and each array comprises a section designated to buffer data for a respective thread. A respective latch array section is assigned higher priority to receive data for a respective thread than the corresponding RAM array section. Incoming data for the respective thread are pushed into the corresponding latch array section while it has vacancies. Upon the latch array section becoming empty, incoming data are pushed into the corresponding RAM array section during a spill-over period. The RAM array section may comprise two spill regions with only one active to receive data at a spill-over period. The allocation of data among the latch array and the spill regions of the RAM array can be transparent to external logic. | 08-28-2014 |
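The push side of such a FIFO might look like the sketch below, under the assumption that data spills into the RAM section only when the latch section has no vacancy; the depths and single spill region are illustrative, not the patented design.

    #include <stdint.h>

    #define LATCH_DEPTH 8
    #define RAM_DEPTH   64

    struct thread_fifo {
        uint32_t latch[LATCH_DEPTH]; int latch_count;
        uint32_t ram[RAM_DEPTH];     int ram_count;
    };

    /* Returns 0 on success, -1 if both sections for this thread are full. */
    int fifo_push(struct thread_fifo *f, uint32_t word) {
        if (f->latch_count < LATCH_DEPTH) {
            f->latch[f->latch_count++] = word;   /* fast path: latch section */
            return 0;
        }
        if (f->ram_count < RAM_DEPTH) {
            f->ram[f->ram_count++] = word;       /* spill-over into the RAM section */
            return 0;
        }
        return -1;
    }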
20140258603 | ASYMMETRIC MEMORY MIGRATION IN HYBRID MAIN MEMORY - Main memory is managed by receiving a command from an application to read data associated with a virtual address that is mapped to the main memory. A memory controller determines that the virtual address is mapped to one of the symmetric memory components of the main memory, and accesses memory use characteristics indicating how the data associated with the virtual address has been accessed. The memory controller determines that the data associated with the virtual address has access characteristics suited to an asymmetric memory component of the main memory and loads the data associated with the virtual address to the asymmetric memory component of the main memory. After the loading and using the memory management unit, a command is received from the application to read the data associated with the virtual address, and the data associated with the virtual address is retrieved from the asymmetric memory component. | 09-11-2014 |
20140258604 | NON-VOLATILE STORAGE MODULE HAVING MAGNETIC RANDOM ACCESS MEMORY (MRAM) WITH A BUFFER WINDOW - A block storage system includes a host and comprises a block storage module that is coupled to the host. The block storage module includes an MRAM array and a bridge controller buffer coupled to communicate with the MRAM array. The MRAM array includes a buffer window that is moveable within the MRAM array to allow contents of the MRAM array to be read by the host through the bridge controller buffer even when the capacity of the bridge controller buffer is less than the size of the data being read from the MRAM array. | 09-11-2014 |
20140281180 | DATA COHERENCY MANAGEMENT - A data processing system | 09-18-2014 |
20140281181 | Enhanced Performance Monitoring Method and Apparatus - A high-performance-computer system includes a statistics accumulation apparatus configured to efficiently accumulate system performance data from a variety of system components, and periodically write such data to processor local memory for efficient subsequent software processing of the thus acquired data, thereby reducing the system hardware and software overhead needed for collection of such data as compared to prior art systems. | 09-18-2014 |
20140281182 | APPARATUSES AND METHODS FOR VARIABLE LATENCY MEMORY OPERATIONS - Apparatuses and methods for variable latency memory operations are disclosed herein. An example apparatus may include a memory configured to receive an activate command indicative of a type of a command during a first addressing phase and to receive the command during a second addressing phase. The memory may further be configured to provide information indicating that the memory is not available to perform a command responsive, at least in part, to receiving the command during a variable latency period and to provide information indicating that the memory is available to perform a command responsive, at least in part, to receiving the command after the variable latency period. | 09-18-2014 |
20140281183 | STAGING SORTED DATA IN INTERMEDIATE STORAGE - A data storage system includes data storage and random access memory. A sorting module is communicatively coupled to the random access memory and is configured to sort data blocks of incoming write data received in the random access memory. A storage controller is communicatively coupled to the random access memory and the data storage and is configured to write the sorted data blocks as individually-sorted data block sets to a staging area of the data storage. A method and processor-implemented process provide for sorting data blocks of incoming write data received in a random access memory of data storage and writing the sorted data blocks as individually-sorted data block sets to a staging area of the data storage. | 09-18-2014 |
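A minimal sketch of the sort-and-stage step above, assuming blocks are keyed by logical block address and the staging write is a stub; it is not the patented storage controller.

    #include <stdint.h>
    #include <stdlib.h>

    struct block { uint64_t lba; uint8_t payload[512]; };

    /* Order blocks by their logical block address. */
    static int cmp_lba(const void *a, const void *b) {
        uint64_t x = ((const struct block *)a)->lba;
        uint64_t y = ((const struct block *)b)->lba;
        return (x > y) - (x < y);
    }

    /* Placeholder for the controller's write to the staging area. */
    static void write_to_staging(const struct block *set, size_t n) { (void)set; (void)n; }

    /* Sort the blocks buffered in RAM and emit them as one individually-sorted set. */
    void flush_sorted_set(struct block *buffered, size_t count) {
        qsort(buffered, count, sizeof(struct block), cmp_lba);
        write_to_staging(buffered, count);
    }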
20140281184 | MIXED MEMORY TYPE HYBRID CACHE - A hybrid cache includes a static random access memory (SRAM) portion and a resistive random access memory portion. Cache lines of the hybrid cache are configured to include both SRAM macros and resistive random access memory macros. The hybrid cache is configured so that the SRAM macros are accessed before the resistive random access memory macros in each cache access cycle. While the SRAM macros are accessed, the slower resistive random access memory macros reach a data access ready state. | 09-18-2014 |
20140281185 | STAGING SORTED DATA IN INTERMEDIATE STORAGE - A data storage system includes data storage and random access memory. A sorting module is communicatively coupled to the random access memory and is configured to sort data blocks of incoming write data received in the random access memory. A storage controller is communicatively coupled to the random access memory and the data storage and is configured to write the sorted data blocks as individually-sorted data block sets to a staging area of the data storage. A method and processor-implemented process provide for sorting data blocks of incoming write data received in a random access memory of data storage and writing the sorted data blocks as individually-sorted data block sets to a staging area of the data storage. | 09-18-2014 |
20140281186 | DYNAMIC GRANULE-BASED INTERMEDIATE STORAGE - A data storage system includes data storage and random access memory. A sorting module is communicatively coupled to the random access memory and sorts data blocks of write data received in the random access memory of the data storage. A storage controller is communicatively coupled to the random access memory and the data storage and is configured to write the sorted data blocks into one or more individually-sorted granules in a granule storage area of the data storage, wherein each granule is dynamically constrained to a subset of logical block addresses. A method and processor-implemented process provide for sorting data blocks of write data received in random access memory of data storage. The method and processor-implemented process write the sorted data blocks into one or more individually-sorted granules in a granule storage area of the data storage, wherein each granule is dynamically constrained to a subset of logical block addresses. | 09-18-2014 |
20140281187 | ELECTRONIC APPARATUS, METHOD OF CREATING SNAPSHOT IMAGE, AND PROGRAM - An electronic apparatus includes a volatile memory, a swap device, and a control unit. The control unit is configured to divide data loaded in the volatile memory between an activation start and a specific time point after the activation start into data used to create a snapshot image and data stored in the swap device. | 09-18-2014 |
20140281188 | METHOD OF UPDATING MAPPING INFORMATION AND MEMORY SYSTEM AND APPARATUS EMPLOYING THE SAME - A method of updating mapping information for a memory system comprises generating write transaction information based on multiple write requests issued by a host, performing program operations in the memory system based on the write transaction information, and following completion of the program operations, updating mapping information based on an order in which the write requests were issued by the host. | 09-18-2014 |
20140281189 | PROCESSOR SYSTEM HAVING VARIABLE CAPACITY MEMORY - According to one embodiment, a processor system includes a variable capacity memory. The memory includes a memory cell array including basic units, each of the basic units including one cell transistor and one variable resistance element, a mode selector switching between first and second modes, a read/write of one bit executed in 2 | 09-18-2014 |
20140304462 | MEMORY MODULE HAVING MULTIPLE MEMORY BANKS SELECTIVELY CONNECTABLE TO A LOCAL MEMORY CONTROLLER AND AN EXTERNAL MEMORY CONTROLLER - A memory module includes memory banks, a local memory controller to access data in the memory banks, and an interface to an external memory controller that is configured to access the memory module. Multiplexing circuitry selectively connects the memory banks to the local memory controller and to the interface to the external memory controller. | 10-09-2014 |
20140304463 | Systems and Methods Involving Multi-Bank, Dual- or Multi-Pipe SRAMs - Systems and methods are disclosed for increasing the performance of static random access memory (SRAM). Various systems herein, for example, may include or involve dual- or multi-pipe, multi-bank SRAMs, such as Quad-B2 SRAMs. In one illustrative implementation, there is provided an SRAM memory device including a memory array comprising a plurality of SRAM banks and pairs of separate and distinct pipes associated with each of the SRAM banks, wherein each pair of pipes may provide independent access to its associated SRAM bank. | 10-09-2014 |
20140317342 | MICROCOMPUTER AND STORING APPARATUS - In a microcomputer provided with a program storing device for storing instruction codes and a micro-processor for reading and executing the instruction codes stored in the program storing device, the program storing device has plural memories for storing instruction codes, an output unit for receiving plural pieces of data output from the plural memories and selecting and outputting one of the plural pieces of data received from the plural memories, a selecting unit for receiving address data sent from the micro-processor to select one of the plural memories, an activating unit for activating the memory selected by the selecting unit, and a controlling unit for controlling the output unit to output data of the memory activated by the activating unit. | 10-23-2014 |
20140337568 | ELECTRONIC DEVICE AND METHOD FOR OPERATING THE SAME - Provided is an electronic device including a power supply circuit. The power supply circuit includes: a voltage driving unit configured to pull-up drive an output node and generate an output voltage; and a driving control unit configured to receive the output voltage, disable the voltage driving unit from the time at which a divided voltage obtained by dividing the output voltage at a set ratio becomes higher than a first level, and enable the voltage driving unit from the time at which the divided voltage becomes lower than a second level, which is higher than the first level. | 11-13-2014 |
20140365723 | RESISTANCE MEMORY DEVICE AND APPARATUS, FABRICATION METHOD THEREOF, OPERATION METHOD THEREOF, AND SYSTEM HAVING THE SAME - Resistance memory device and apparatus, a fabrication method thereof, an operation method thereof, and a system including the same are provided. The resistance memory device may include a data storage unit and a first interconnection connected to the data storage unit. A first access device may be connected in series with the data storage unit and a second access device may be connected in series with the first access device. A second interconnection may be connected to the second access device. A third interconnection may be connected to the first access device to drive the first access device and a fourth interconnection connected to the second access device to drive the second access device. | 12-11-2014 |
20140372690 | MEMORY SYSTEM, SEMICONDUCTOR DEVICE AND METHODS OF OPERATING THE SAME - A memory system, a semiconductor memory device and methods of operating the same may perform a read operation on the basis of flag data stored in a flag register, without reading the flag data stored in a memory array, when performing the read operation, so that a time taken for the read operation may be reduced. | 12-18-2014 |
20140379975 | PROCESSOR - According to one embodiment, a processor includes a core controlling processing data, a cache data area storing the processing data as cache data in a nonvolatile manner, a first tag area storing tag data of the cache data in a volatile manner, a second tag area storing the tag data in a nonvolatile manner, and a tag controller controlling the tag data. The tag controller determines whether the processing data is stored in the cache data area by acquiring the tag data from one of the first and second tag areas. | 12-25-2014 |
20150012693 | READ BASED TEMPORAL LOCALITY COMPRESSION - For read based temporal locality compression by a processor device in a computing environment, read operations are monitored, traced, and/or analyzed to identify repetitions of read patterns of compressed data. The compressed data is rearranged, based on the identified repetitions, so that it is stored in read order. | 01-08-2015 |
20150012694 | HARDWARE ASSISTED META DATA LOOKUP - A memory system including a memory device. The memory device includes a substrate. A memory array defines a plurality of pages, each page including a data area for storing data and a spare area for storing metadata. A compare circuit is configured to receive metadata retrieved from a plurality of pages sequentially and compare the retrieved metadata to a search pattern. The physical location of the page can be determined by finding the search pattern. The memory array and the compare circuit are formed in different layers of the substrate. | 01-08-2015 |
20150067246 | COHERENCE PROCESSING EMPLOYING BLACK BOX DUPLICATE TAGS - An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a plurality of state memories, a plurality of tag memories, and a control circuit. Each of the state memories may be configured to store coherency state information for a cache memory of a respective plurality of coherent agents. Each of the tag memories may be configured to store duplicate tag information for a cache memory of the respective plurality of coherent agents. The control circuit may be configured to receive a tag address, access tag information in each of the tag memories in parallel dependent upon the received tag address, determine, for each cache memory, new coherency state information for a cache entry corresponding to the received tag address, and store the new coherency state information for each of the cache memories into a respective one of the plurality of state memories. | 03-05-2015 |
20150081963 | Allocating a Timeslot - An interface of a receiving module in an FPGA chip receives data. The interface writes the data to a buffer of the receiving module, in which the buffer is implemented by a single piece of RAM whose bit width is B bits. A first sub-module of the receiving module reads B-bit data from the buffer each timeslot and writes the B-bit data to a data storage of a scheduling module in the FPGA chip, in which the data storage is formed by M pieces of RAM which are numbered in sequence, each of the M pieces of RAM is divided into address spaces which are numbered in sequence, and the timeslot is allocated by a timing generator of the scheduling module and the timeslot cycle is N. A second sub-module of the scheduling module reads data from the data storage, processes the data read out and sends the processed data. | 03-19-2015 |
20150089125 | FRAMEWORK FOR NUMA AFFINITIZED PARALLEL QUERY ON IN-MEMORY OBJECTS WITHIN THE RDBMS - Techniques are provided for performing parallel processing on in-memory objects within a database system. In one embodiment, a plurality of in-memory chunks are maintained on a plurality of non-uniform memory access (NUMA) nodes. In response to receiving a query, a set of clusters is determined for the plurality of in-memory chunks. Each respective cluster in the set of clusters corresponds to a particular NUMA node of the plurality of NUMA nodes and includes a set of one or more in-memory chunks from the plurality of in-memory chunks. For each respective cluster in the set of clusters, a query coordinator assigns, to the respective cluster, a set of one or more processes associated with the particular NUMA node that corresponds to the respective cluster. | 03-26-2015 |
20150100722 | UTILIZING DESTRUCTIVE FEATURES AS RAM CODE FOR A STORAGE DEVICE - A host including a controller configured to be connected to a storage device separate from the host. The controller is configured to maintain random access memory (RAM) code on the host, the RAM code configured to provide a destructive function, temporarily load the RAM code onto a volatile memory in the storage device during a manufacturing process, wherein the loaded RAM code, when executed by a processor in the storage device, is configured to cause the processor in the storage device to perform a destructive function on the storage device, and remove the loaded RAM code from the volatile memory after the manufacturing process, wherein the destructive function is unable to be performed by the processor when the loaded RAM code is removed from the volatile memory. | 04-09-2015 |
20150120995 | DATA STORAGE DEVICE STARTUP - When a read command is received from a host requesting data stored on a disk of a Data Storage Device (DSD), it is determined whether the DSD is in a startup period and whether the requested data is stored in a solid state memory of the DSD. The requested data is designated for storage in the solid state memory if it is determined that the DSD is in the startup period and the requested data is not stored in the solid state memory. | 04-30-2015 |
20150127897 | MANAGING OPEN TABS OF AN APPLICATION - Systems and methods for managing open tabs of an application are provided. In some aspects, a page is presented in a first tab from among multiple tabs open in an application at a computing device. That a content of the page presented in the first tab is different from a default content of the page stored at a web server is determined. Contents of the multiple tabs are retained in a random access memory (RAM). A request is received to reduce an amount of the RAM used by the application. The content of the page presented in the first tab is stored. In response to the request to reduce the amount of the RAM used by the application, a content presented in a second tab from among the plurality of tabs is removed from the RAM. | 05-07-2015 |
20150149713 | MEMORY INTERFACE DESIGN - An improved memory interface design is provided. In some implementations, an integrated circuit includes a first cache memory unit, a second cache memory unit located in parallel with the first cache memory unit, and a floorsweeping module configured to be able to select between the first cache memory unit and the second cache memory unit for cache requests, wherein the selection is based at least partially on the presence or absence of one or more manufacturing defects in the first cache memory unit or the second cache memory unit. | 05-28-2015 |
20150293841 | EFFICIENT RECLAMATION OF PRE-ALLOCATED DIRECT MEMORY ACCESS (DMA) MEMORY - For efficient reclamation of pre-allocated direct memory access (DMA) memory in a computing environment, hot-add random access memory (RAM) is emulated for a general purpose use by reclamation of pre-allocated DMA memory reserved at boot time for responding to an emergency by notifying a non-kernel use device user that the non-kernel use device has a smaller window, stopping and remapping to the smaller window, and notifying a kernel that new memory has been added, wherein the new memory is a region left after the remap. | 10-15-2015 |
20150309750 | TWO-STAGE READ/WRITE 3D ARCHITECTURE FOR MEMORY DEVICES - Some embodiments of the present disclosure relate to a memory device wherein a single memory cell array is partitioned between two or more tiers which are vertically integrated on a single substrate. The memory device also includes support circuitry including a control circuit configured to read and write data to the memory cells on each tier, and a shared input/output (I/O) architecture which is connected to the memory cells within each tier and configured to receive an input data word prior to a write operation, and further configured to provide an output data word after a read operation. Other devices and methods are also disclosed. | 10-29-2015 |
20150331609 | TIME MANAGEMENT USING TIME-DEPENDENT CHANGES TO MEMORY - A time manager controls one or more timing functions on a circuit. The time manager includes a data storage and a time calculator. The data storage device stores a first indication of a performance characteristic of a memory cell at a first time. The data storage device also stores a second indication of the performance characteristic of the memory cell at a second time. The time calculator is coupled to the data storage device. The time calculator calculates a time duration between the first time and the second time based on a change in the performance characteristic of the memory cell from the first indication to the second indication. | 11-19-2015 |
20150331809 | Method and System for Enforcing Kernel Mode Access Protection - A non-transitory computer-readable storage medium storing a set of instructions executable by a processor, the set of instructions, when executed by the processor, causing the processor to perform operations including mapping a memory area storing a segment of code for a kernel of the system during an initialization time of a system. The operations also include executing the segment of code during the initialization time. The operations also include unmapping a portion of the memory area for the kernel after the segment of code has been executed. | 11-19-2015 |
20150363314 | System and Method for Concurrently Checking Availability of Data in Extending Memories - A memory system for use in a system-in-package device (SiP) is disclosed. The memory system includes two cache memories. The first cache memory is on a first die of the SiP and the second cache memory is on a second die of the SiP. Both cache memories include tag random access memories (RAMs) corresponding to data stored in the corresponding cache memories. The second cache memory is of a different cache level from the first cache memory. Alternatively, the first cache memory is on a first die of the SiP, and the second cache memory includes a first portion on the first die of the SiP and a second portion on a second die of the SiP. Both cache memories can be checked concurrently for data availability by a single physical address. | 12-17-2015 |
20150370483 | SYSTEM AND METHOD FOR REPLICATING DATA - A system for replicating data includes a first and a second computing device. The first computing device has a first storage unit configured to store block level data, a second storage unit and a volatile memory. The second computing device has a third storage unit and a fourth storage unit configured to store block level data, the third storage unit being communicatively coupled to the second storage unit. The first computing device is configured to receive write requests each containing payload data, write the payload data of the write requests to the volatile memory and append the payload data to the second storage unit, and acknowledge the write requests prior to writing the respective payload data to the second storage unit. The second computing device is configured to detect new data in the third storage unit and apply the new data to the fourth storage unit. | 12-24-2015 |
20150378639 | INTER-PROCESSOR MEMORY - Embodiments relate to an inter-processor memory. An aspect includes a plurality of memory banks, each of the plurality of memory banks comprising a respective plurality of parallel memory modules, wherein a number of the plurality of memory banks is equal to a number of read ports of the inter-processor memory, and a number of parallel memory modules within a memory bank is equal to a number of write ports of the inter-processor memory. Another aspect includes each memory bank corresponding to a single respective read port of the inter-processor memory, and wherein, within each memory bank, each memory module of the plurality of parallel memory modules is writable in parallel by a single respective write port of the inter-processor memory. | 12-31-2015 |
20150378883 | IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - An image processing apparatus and control method are provided. An image processing apparatus including: a storage configured to store data which is divided into a plurality of units of code; a random access memory (RAM) configured to be loaded with the data; a central processing unit (CPU) configured to execute the data; and a storage controller configured to read a requested unit of code from the storage in response to receiving a request from the CPU for the unit of code to be currently executed, and load the read unit of code to the RAM so that the unit of code can be processed by the CPU, wherein the storage controller performs validation with regard to the unit of code when reading the unit of code from the storage, and loads the unit of code, when the validation passes, to the RAM. | 12-31-2015 |
20160011782 | SEMICONDUCTOR STORAGE | 01-14-2016 |
20160026742 | SYSTEM-ON-CHIP INTELLECTUAL PROPERTY BLOCK DISCOVERY - An integrated circuit (IC) includes a bridge circuit configured to receive a first request from an external system, a discover circuit coupled to the bridge circuit and configured to process the first request received from the bridge circuit, and a memory map coupled to the discover circuit. The memory map stores a record for each of a plurality of Intellectual Property (IP) blocks implemented within the IC. The discover circuit is configured to generate a list of the IP blocks implemented within the IC from the records of the memory map responsive to the first request. The bridge circuit is configured to send the list to the external system. | 01-28-2016 |
20160041770 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS MEMORY SPACE MANAGEMENT FOR STORAGE CLASS MEMORY - Embodiments of the present invention provide a system, method and computer program products for memory space management for storage class memory. One embodiment comprises a method for information storage in an information technology environment. The method comprises storing data in a storage class memory (SCM) space, and storing storage management metadata corresponding to said data, in the SCM in a first data structure. The method further includes buffering storage management metadata corresponding to said data, in a main memory in a second data structure. | 02-11-2016 |
20160043313 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME - An electronic device includes a semiconductor memory. The semiconductor memory includes first lines extending in a first direction; second lines extending in a second direction crossing the first direction; insulating patterns interposed between the first and second lines at first intersections of the first and second lines; and variable resistance patterns interposed between the first and the second lines at second intersections of the first and second lines. A central intersection is defined by respective central lines of the first and second lines and corresponds to a coordinate (0, 0). The first intersections are located on first to (n+1) | 02-11-2016 |
20160048447 | MAGNETORESISTIVE RANDOM-ACCESS MEMORY CACHE WRITE MANAGEMENT - Technologies are generally described to manage MRAM cache writes in processors. In some examples, when a write request is received with data to be stored in an MRAM cache, the data may be evaluated to determine whether the data is to be further processed. In response to a determination that the data is to be further processed, the data may be stored in a write cache associated with the MRAM cache. In response to a determination that the data is not to be further processed, the data may be stored in the MRAM cache. | 02-18-2016 |
20160054917 | MOBILE ELECTRONIC DEVICE INCLUDING EMBEDDED MEMORY - An electronic device may include first and second semiconductor chips. The first semiconductor chip may include a processor and a first memory. The second semiconductor chip may include a second memory. The first memory and second memory may be configured to exchange first data and second data with the processor, respectively. The processor may be configured to exchange target data processed or to be processed with the first and second memories. The processor may be configured to determine the target data as the first data if the number of accesses of the target data is equal to or greater than a first reference value. The processor may be configured to determine the target data as the second data if the number of accesses of the target data is less than the first reference value. | 02-25-2016 |
20160054919 | METHOD, APPARATUS, AND SYSTEM FOR READING AND WRITING DATA - Embodiments of the present invention provide a method, an apparatus, and a system for reading and writing data, which relate to the computer field and can resolve a problem in the prior art that different algorithms need to be configured for write operations on storage devices of different optimization granularities. The method includes: acquiring first data to be written into a storage device and an address for the first data; acquiring second data from the address of the storage device; acquiring configuration information; generating, according to the configuration information, a candidate data set; comparing data in the candidate data set with the second data, so as to acquire third data that is in the candidate data set and meets a preset rule; and writing the third data into the storage device according to the address. | 02-25-2016 |
20160062704 | SEMICONDUCTOR DEVICE AND INFORMATION PROCESSING DEVICE - In a semiconductor device in which components to be a basic configuration unit are arranged in an array shape for calculating an interaction model, a technique capable of changing a topology between the components is provided. A semiconductor device includes a plurality of units each of which includes a first memory cell for storing a value indicating a state of one node of an interaction model, a second memory cell for storing an interaction coefficient indicating an interaction from a node connected to the one node, and a calculation circuit for determining a value indicating a next state of the one node based on a value indicating a state of the connected node and on the interaction coefficient. In addition, the semiconductor device includes a plurality of switches for connecting or disconnecting the plurality of units to/from each other. | 03-03-2016 |
20160062914 | ELECTRONIC SYSTEM WITH VERSION CONTROL MECHANISM AND METHOD OF OPERATION THEREOF - An electronic system includes: a storage device configured to store a descriptor, including a key and a value, having multiple versions linked on the storage device; a storage interface, coupled to the storage device, configured to provide an entry having a location; and retrieve the descriptor, including the key and the value, based on the entry having the location for selecting one of the versions of the descriptor. | 03-03-2016 |
20160070494 | HIGHLY PERFORMANT RELIABLE MESSAGE STORAGE USING IN-MEMORY REPLICATION TECHNOLOGY - A system and method can provide a scalable data storage in a middleware environment. The system can include a cluster of replicated store daemon processes in a plurality of processing nodes, wherein each machine node can host a replicated store daemon process of the cluster of replicated store daemon processes. Additionally, the system can include one or more replicated stores associated with an application server on the processing node. The replicated store daemon cluster can persist data from a replicated store to another node, the other node also being associated with the replicated store daemon cluster. The system and method can additionally support a messaging service in a middleware environment. The messaging service can use the replicated store to store a copy of a message in the local processing node and on another processing node associated with the same replicated store daemon cluster. | 03-10-2016 |
20160070916 | MALWARE-PROOF DATA PROCESSING SYSTEM - A data processing system may have a strict separation of processor tasks and data categories, wherein processor tasks are separated into software loading and initialisation (loading processor) and data processing (main processor) and data categories are separated into address data, instructions, internal function data, target data of the main processor and target data of the loading processor. In this way, protection is provided against malware, irrespective of the transmission medium and of the type of malware, and also against future malware and without performance losses in the computer system. | 03-10-2016 |
20160071905 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME - An electronic device including a semiconductor memory that includes: a selection element; a first plug and a second plug that are coupled with two different sides of the selection element, respectively; a variable resistance element formed over the first plug and configured to store data; and a dummy variable resistance element formed over the second plug and configured to include a conductive path coupled with the second plug. | 03-10-2016 |
20160079363 | TRANSISTOR, ELECTRONIC DEVICE HAVING THE TRANSISTOR, AND METHOD FOR FABRICATING THE SAME - An electronic device includes a semiconductor memory unit that includes: a gate including at least a portion buried in a substrate; a junction portion formed in the substrate on both sides of the gate; and a memory element coupled with the junction portion on one side of the gate, wherein the junction portion includes: a recess having a bottom surface protruded in a pyramid shape; an impurity region formed in the substrate and under the recess; and a contact pad formed in the recess. | 03-17-2016 |
20160092355 | SPLIT WRITE OPERATION FOR RESISTIVE MEMORY CACHE - A method of reading from and writing to a resistive memory cache includes receiving a write command and dividing the write command into multiple write sub-commands. The method also includes receiving a read command and executing the read command before executing a next write sub-command. | 03-31-2016 |
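The split-write ordering described above can be pictured with the stubbed C sketch below; the number of sub-commands and the read-queue check are assumptions for illustration, not the claimed method.

    #include <stdbool.h>

    #define SUB_WRITES 4                   /* one long write split into four parts */

    static bool read_pending(void) { return false; }          /* stub: check read queue */
    static void execute_read(void) { }                        /* stub: service one read */
    static void execute_write_part(int part) { (void)part; }  /* stub: one write sub-command */

    /* Execute one long write as sub-commands, letting reads run in between. */
    void split_write(void) {
        for (int part = 0; part < SUB_WRITES; part++) {
            while (read_pending())
                execute_read();            /* pending reads go before the next part */
            execute_write_part(part);
        }
    }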
20160098360 | Information Handling System Secret Protection Across Multiple Memory Devices - Information handling system secret protection is enhanced by encrypting secrets into a common file and breaking up the encrypted file into plural portions stored at plural memory devices, such as across plural DIMMs disposed in the information handling system. In one embodiment, a decryption key to decrypt the encrypted file is broken into plural portions stored at the plural memory devices. Upon detection of a predetermined security factor, such as an indication of removal of a memory device, the encrypted file is removed from the plural portions. | 04-07-2016 |
20160117116 | ELECTRONIC DEVICE AND A METHOD FOR MANAGING MEMORY SPACE THEREOF - The present invention provides a method for managing memory space in an electronic device including: selecting a candidate page from a first memory space for swapping the candidate page out of the first memory space into the second memory space; compressing the candidate page to obtain a first compressed page and a first hash value of the first compressed page; performing a comparison using the first hash value of the first compressed page and the hash values of the pages stored in a second memory space to find whether the pages have the same content as the first compressed page or the candidate page; and if a page is found to have the same content as the first compressed page or the candidate page, mapping a virtual address of the first compressed page or the candidate page to the found page. | 04-28-2016 |
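A rough sketch of the hash-and-compare step above; the fixed-size page store, FNV-1a hash, and byte-for-byte verification are illustrative choices, not the claimed method.

    #include <stdint.h>
    #include <string.h>

    #define STORED_PAGES 128

    struct stored_page { uint32_t hash; size_t len; uint8_t data[4096]; };

    static struct stored_page store[STORED_PAGES];
    static size_t store_count;

    /* Simple FNV-1a hash of the compressed page (illustrative choice). */
    static uint32_t hash_bytes(const uint8_t *p, size_t n) {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 16777619u; }
        return h;
    }

    /* Returns the index of an existing identical page, or -1 if none matches. */
    int find_duplicate(const uint8_t *compressed, size_t len) {
        uint32_t h = hash_bytes(compressed, len);
        for (size_t i = 0; i < store_count; i++)
            if (store[i].hash == h && store[i].len == len &&
                memcmp(store[i].data, compressed, len) == 0)
                return (int)i;             /* map the candidate to this existing page */
        return -1;                         /* no match: the new page must be stored */
    }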
20160117271 | SMART HOLDING REGISTERS TO ENABLE MULTIPLE REGISTER ACCESSES - A multiple access mechanism allows sources to simultaneously access different target registers at the same time without using a semaphore. The multiple access mechanism is implemented using N holding registers and source identifiers. The N holding registers are located in each slave engine. Each of the N holding registers is associated with a source and is configured to receive partial updates from the source before pushing the full update to a target register. After the source is finished updating the holding register and the holding register is ready to commit to the target register, a source identifier is added to a register bus. The source identifier identifies the holding register as the originator of the transaction on the register bus. The N holding registers are able to simultaneously handle N register transactions. The max value of N is 2 | 04-28-2016 |
20160118575 | ELECTRONIC DEVICES HAVING SEMICONDUCTOR MAGNETIC MEMORY UNITS - A semiconductor device includes a resistance variable element including a free magnetic layer, a tunnel barrier layer and a pinned magnetic layer; and a magnetic correction layer disposed over the resistance variable element to be separated from the resistance variable element, and having a magnetization direction which is opposite to a magnetization direction of the pinned magnetic layer. | 04-28-2016 |
20160124658 | SYSTEM AND METHOD FOR STORING REDUNDANT INFORMATION - A method and system for reducing storage requirements and speeding up storage operations by reducing the storage of redundant data includes receiving a request that identifies one or more data objects to which to apply a storage operation. For each data object, the storage system determines if the data object contains data that matches another data object to which the storage operation was previously applied. If the data objects do not match, then the storage system performs the storage operation in a usual manner. However, if the data objects do match, then the storage system may avoid performing the storage operation. | 05-05-2016 |
20160133304 | CALIBRATION MEMORY CONTROL METHOD AND APPARATUS OF ELECTRONIC CONTROL UNIT - A calibration memory control method of an ECU connected to an external calibration device may include receiving a download command from the external calibration device, checking a sub reference page corresponding to the download command, determining whether a sub working page corresponding to the checked sub reference page is allocated, allocating the sub working page corresponding to the checked sub reference page to a RAM region upon determining that the sub working page is not allocated, and copying data stored in the checked sub reference page to the allocated sub working page. As such, according to the present invention, restrictive memory resources may be efficiently used for calibration. | 05-12-2016 |
20160147461 | INTELLIGENT MEMORY BLOCK REPLACEMENT - A framework for intelligent memory replacement of loaded data blocks by requested data blocks is provided. For example, various factors are taken into account to optimize the selection of loaded data blocks to be discarded from the memory, in favor of the requested data blocks to be loaded into the memory. In some implementations, correlations between the requested data blocks and the loaded data blocks are used to determine which of the loaded data blocks may become candidates to be discarded from memory. | 05-26-2016 |
20160149121 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME - This technology provides an electronic device and method for fabricating the same. A method for fabricating an electronic device comprising a transistor includes forming a junction region which is partially amorphized in the semiconductor substrate at a side of the gate; forming a metal layer over the junction region; and performing a heat treatment process to convert the metal layer into a metal-semiconductor compound layer while crystallizing the junction region. | 05-26-2016 |
20160154602 | INFORMATION PROCESSING APPARATUS, IMAGE PROCESSING APPARATUS WITH INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD FOR INFORMATION PROCESSING APPARATUS | 06-02-2016 |
20160155933 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME | 06-02-2016 |
20160179374 | System and Method for Performance Optimal Partial Rank/Bank Interleaving for Non-Symmetrically Populated DIMMs Across DDR Channels | 06-23-2016 |
20160179709 | PROCESSING ELEMENT DATA SHARING | 06-23-2016 |
20160181514 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME | 06-23-2016 |
20160202930 | METHOD OF CONTROLLING VOLATILE MEMORY AND SYSTEM THEREOF | 07-14-2016 |
20160378403 | SILENT STORE DETECTION AND RECORDING IN MEMORY STORAGE - An aspect includes receiving a write request that includes a memory address and write data. Stored data is read from a memory location at the memory address. Based on determining that the memory location was not previously modified, the stored data is compared to the write data. Based on the stored data matching the write data, the write request is completed without writing the write data to the memory, and a corresponding silent store bit in a silent store bitmap is set. Based on the stored data not matching the write data, the write data is written to the memory location, the silent store bit is reset, and a corresponding modified bit is set. At least one of an application and an operating system is provided access to the silent store bitmap. | 12-29-2016 |
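The comparison-and-bitmap flow above maps onto a short sketch; word-granularity tracking, the bitmap helpers, and the array sizes are assumptions made only for illustration, not the patented hardware.

    #include <stdbool.h>
    #include <stdint.h>

    #define WORDS 1024

    static uint32_t memory[WORDS];
    static uint8_t  silent_bitmap[WORDS / 8];
    static uint8_t  modified_bitmap[WORDS / 8];

    static void set_bit(uint8_t *bm, int i)       { bm[i / 8] |= (uint8_t)(1u << (i % 8)); }
    static void clear_bit(uint8_t *bm, int i)     { bm[i / 8] &= (uint8_t)~(1u << (i % 8)); }
    static bool get_bit(const uint8_t *bm, int i) { return (bm[i / 8] >> (i % 8)) & 1u; }

    /* Handle one write request to word index addr. */
    void handle_write(int addr, uint32_t data) {
        if (!get_bit(modified_bitmap, addr) && memory[addr] == data) {
            set_bit(silent_bitmap, addr);  /* silent store: no write performed */
            return;
        }
        memory[addr] = data;               /* real update */
        clear_bit(silent_bitmap, addr);
        set_bit(modified_bitmap, addr);
    }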
20160379689 | MEMORY SYSTEM PERFORMING STATUS READ OPERATION AND METHOD OF OPERATING THE SAME - A method of operating a controller includes, determining whether an address corresponding to a program operation indicates a lower page or an upper page; waiting for a first waiting time for the program operation to the lower page when the address indicates the lower page; waiting for a second waiting time for the program operation to the upper page when the address indicates the upper page, wherein the second waiting time is longer than the first waiting time; and performing a status read operation on the semiconductor memory device after one of the first waiting time or the second waiting time. | 12-29-2016 |
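A minimal sketch of the waiting policy above; the page-parity test, the microsecond values, and the stubbed status poll are assumptions, not the described controller.

    #include <stdbool.h>

    #define LOWER_PAGE_WAIT_US   500
    #define UPPER_PAGE_WAIT_US  2000       /* upper-page programming takes longer */

    static bool is_lower_page(unsigned addr) { return (addr & 1u) == 0; }  /* assumed mapping */
    static void delay_us(unsigned us)        { (void)us; /* platform-specific wait */ }
    static int  status_read(void)            { return 0; /* stub: poll device status */ }

    /* Wait the page-appropriate time before polling program status. */
    int program_then_check(unsigned page_addr) {
        if (is_lower_page(page_addr))
            delay_us(LOWER_PAGE_WAIT_US);
        else
            delay_us(UPPER_PAGE_WAIT_US);
        return status_read();
    }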
20170236582 | SEMICONDUCTOR APPARATUS COMPRISING A PLURALITY OF CURRENT SINK UNITS | 08-17-2017 |
20170236583 | SEMICONDUCTOR APPARATUS COMPRISING A PLURALITY OF CURRENT SINK UNITS | 08-17-2017 |
20180024830 | HARDWARE SUPPORT FOR NON-DISRUPTIVE UPGRADES | 01-25-2018 |