Patents - stay tuned to the technology



1st week of 2009 patent application highlights, part 68
Patent application number - Title - Published
20090006714 - METHOD FOR OPTIMIZING VIRTUALIZATION TECHNOLOGY AND MEMORY PROTECTIONS USING PROCESSOR-EXTENSIONS FOR PAGE TABLE AND PAGE DIRECTORY STRIPING - In a virtualized processor based system causing a transition to a virtual machine monitor executing on the processor based system in response to a modification of a page table of a guest executing in a virtual machine of the processor based system, and the virtual machine monitor responding to the transition by performing a verification action, and for each bit modified in the page table of the guest, reading a status indicator for the bit to determine if the bit is significant; and causing the transition only if the status indicator for any bit modified in the page table indicates that the bit is significant. (published 2009-01-01)
20090006715 - Memory Chip for High Capacity Memory Subsystem Supporting Multiple Speed Bus - A memory module contains an interface for receiving memory access commands from an external source, in which a first portion of the interface receives memory access data at a first bus frequency and a second portion of the interface receives memory access data at a second different bus frequency. Preferably, the memory module contains a second interface for re-transmitting memory access data, also operating at dual frequency. The memory module is preferably used in a high-capacity memory subsystem organized in a tree configuration in which data accesses are interleaved. Preferably, the memory module has multiple-mode operation, one of which supports dual-speed buses for receiving and re-transmitting different parts of data access commands, and another of which supports conventional daisy-chaining. (published 2009-01-01)
20090006716 - PROCESSING WRONG SIDE I/O COMMANDS - A dual ported active-active array controller apparatus is provided having a first policy processor partnered with a first ISP having a first plurality of dedicated purpose FCs, a second policy processor partnered with a second ISP having a second plurality of dedicated purpose FCs, a communication bus interconnecting the ISPs, and programming instructions stored in memory and executed by the array controller to maintain the first policy processor in top level control of transaction requests from both the first plurality of FCs and the second plurality of FCs that are associated with network input/output (I/O) commands directed to a storage logical unit number (LUN) of which the first ISP is a logical unit master. (published 2009-01-01)
20090006717 - EMULATION OF READ-ONCE MEMORIES IN VIRTUALIZED SYSTEMS - The subject matter herein relates to computer systems and, more particularly, to emulation of read-once memories in virtualized systems. Various embodiments described herein provide systems, methods, and software that leverage the value of read-once memory for purposes such as keeping data or instructions secret and protected from unauthorized viewers, applications, hackers, and other processes. Some such embodiments include a virtual machine manager that emulates hardware memories in a system memory to facilitate virtual access to the hardware memories. (published 2009-01-01)
20090006718 - SYSTEM AND METHOD FOR PROGRAMMABLE BANK SELECTION FOR BANKED MEMORY SUBSYSTEMS - A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each the respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment to access memory storage distributed across the one or more memory storage structures. (published 2009-01-01)
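The bank-selection logic described above (a programmable first logic device that matches pre-determined address bit values to produce a storage-structure select signal) can be sketched in software. The class, method names, and the bit positions below are illustrative assumptions, not details from the application:

```python
class BankSelector:
    """Hypothetical model of a programmable bank-select logic device."""

    def __init__(self):
        self.rules = []  # list of (mask, match, bank_id) rules

    def program(self, mask, match, bank_id):
        """Program a rule: select bank_id when (addr & mask) == match."""
        self.rules.append((mask, match, bank_id))

    def select(self, phys_addr):
        """Return the bank selected for a physical address, or None."""
        for mask, match, bank_id in self.rules:
            if phys_addr & mask == match:
                return bank_id
        return None

# Example: use address bits 12-13 to spread accesses over four banks.
sel = BankSelector()
for bank in range(4):
    sel.program(0b11 << 12, bank << 12, bank)
```

Programming one rule per bank on a pair of address bits, as here, spreads consecutive 4 KB regions round-robin across four banks.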
20090006719 - SCHEDULING METHODS OF PHASED GARBAGE COLLECTION AND HOUSE KEEPING OPERATIONS IN A FLASH MEMORY SYSTEM - An embodiment of a non-volatile memory storage system comprises a memory controller, and a flash memory module. The memory controller manages the storage operations of the flash memory module. The memory controller is configured to assign a priority level to one or more types of housekeeping operations that may be higher than a priority level of one or more types of commands received from a host coupled to the storage system, and to service all operations required of the flash memory module according to priority. (published 2009-01-01)
20090006720 - SCHEDULING PHASED GARBAGE COLLECTION AND HOUSE KEEPING OPERATIONS IN A FLASH MEMORY SYSTEM - An embodiment of a non-volatile memory storage system comprises a memory controller, and a flash memory module. The memory controller manages the storage operations of the flash memory module. The memory controller is configured to assign a priority level to one or more types of housekeeping operations that may be higher than a priority level of one or more types of commands received from a host coupled to the storage system, and to service all operations required of the flash memory module according to priority. (published 2009-01-01)
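The two applications above assign housekeeping operations a priority that can exceed that of host commands, then service everything strictly by priority. A minimal sketch of such a scheduler; the priority values and operation names are invented for the example:

```python
import heapq

class OpScheduler:
    """Serve flash operations in priority order (lower number = more urgent)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break within a priority level

    def submit(self, op, priority):
        heapq.heappush(self._heap, (priority, self._seq, op))
        self._seq += 1

    def next_op(self):
        """Pop the most urgent pending operation, or None if idle."""
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = OpScheduler()
sched.submit("host-write", priority=5)
sched.submit("urgent-garbage-collect", priority=1)  # outranks the host command
sched.submit("host-read", priority=5)
```

Here the garbage-collection pass is serviced before either host command, which is the inversion the abstracts allow when housekeeping becomes critical (e.g. no free blocks left).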
20090006721 - METHODS OF AUTO STARTING WITH PORTABLE MASS STORAGE DEVICE - A portable flash memory storage device such as a memory card can configure a host device upon insertion. The configuration may specify applications or other sequences of operations to be executed by the host upon insertion of the card. Files on the card may be associated with an appropriate application and then automatically opened with the appropriate application. A secure configuration may override a more freely modifiable configuration in certain embodiments. (published 2009-01-01)
20090006722 - AUTO START CONFIGURATION WITH PORTABLE MASS STORAGE DEVICE - A portable flash memory storage device such as a memory card can configure a host device upon insertion. The configuration may specify applications or other sequences of operations to be executed by the host upon insertion of the card. Files on the card may be associated with an appropriate application and then automatically opened with the appropriate application. A secure configuration may override a more freely modifiable configuration in certain embodiments. (published 2009-01-01)
20090006723 - METHOD FOR COMMUNICATING WITH A NON-VOLATILE MEMORY STORAGE DEVICE - A method for a storage device is provided. The method includes interpreting a command from a host system, wherein a command parser module for a storage device interprets the command; and extracting information regarding an operation from the command, wherein the command parser module extracts the information and interfaces with the host system. (published 2009-01-01)
20090006724 - Method of Storing and Accessing Header Data From Memory - Methods of storing and accessing data using a header portion of a file are disclosed. In an embodiment, a method of storing content in a non-volatile memory is disclosed. The method includes reading a content file including media content and including a trailer, storing information related to the trailer together with secure data in a header portion of a file, and storing the file to a storage element of the non-volatile memory or a memory area of a host device coupled to the non-volatile memory device. (published 2009-01-01)
20090006725 - MEMORY DEVICE - A memory device includes a nonvolatile memory and a controller. The nonvolatile memory includes a storage area having a plurality of memory blocks each including a plurality of nonvolatile memory cells, and a buffer including a plurality of nonvolatile memory cells and configured to temporarily store data, and in which data is erased for each block. If a size of write data related to one write command is not more than a predetermined size, the controller writes the write data to the buffer. (published 2009-01-01)
20090006726 - MULTIPLE ADAPTER FOR FLASH DRIVE AND ACCESS METHOD FOR SAME - A multiple adapter is used for assembling a plurality of flash drives. The multiple adapter includes a multiple expansion port, a detector, a file manager, and a controller. The multiple expansion port is coupled to the flash drives. The detector is coupled to the multiple expansion port for detecting store information of the flash drives. The file manager is coupled to the multiple expansion port and the detector for receiving the store information and calculating total memory capacity and total spare capacity of the flash drives. The controller is used for controlling the detector and the file manager. A writing procedure and a reading procedure of an access method are also provided. (published 2009-01-01)
20090006727 - IN-SYSTEM PROGRAMMING PROCESS FOR AT LEAST ONE NON-VOLATILE MEANS OF STORAGE OF A WIRELESS COMMUNICATION DEVICE, CORRESPONDING PROGRAMMING EQUIPMENT AND PACKET TO BE DOWNLOADED - An in-system programming process is proposed, performed by programming equipment on at least one non-volatile storage memory of a communication device. The process includes the following steps: transmission, by the programming equipment to the communication device, of at least one extension file; transmission, by at least one of the extension files, called an enlightening extension file, of at least one first item of configuration information for the communication device; selection, by the programming equipment depending on the first item(s) of configuration information for the communication device, of at least one data file associated with an internal application of the communication device; and transmission, by the programming equipment to the storage memory, of the selected data file(s). (published 2009-01-01)
20090006728 - VIRTUAL MACHINE STATE SNAPSHOTS - Saving state of Random Access Memory (RAM) in use by guest operating system software is accomplished using state saving software that starts a plurality of compression threads for compressing RAM data blocks used by the guest. Each compression thread determines a compression level for a RAM data block based on a size of a queue of data to be written to disk, then compresses the RAM data block, and places the compressed block in the queue. (published 2009-01-01)
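The adaptive step in the abstract, choosing a compression level from the depth of the disk-write queue, can be sketched with zlib: a short queue leaves time for better compression, a long queue favours speed. The thresholds and level choices below are invented for the illustration, not taken from the patent:

```python
import zlib
from queue import Queue

def pick_level(queue_size, low=4, high=16):
    """Map write-queue depth to a zlib compression level (9 = best, 1 = fastest)."""
    if queue_size < low:
        return 9
    if queue_size < high:
        return 6
    return 1

def compress_block(block, write_queue):
    """Compress one RAM data block at a queue-depth-dependent level and enqueue it."""
    level = pick_level(write_queue.qsize())
    write_queue.put(zlib.compress(block, level))
    return level

q = Queue()
lvl = compress_block(b"\x00" * 4096, q)  # empty queue: highest level chosen
```

In the patented scheme several such compression threads would feed the same queue concurrently; `queue.Queue` is thread-safe, so the sketch extends naturally to that case.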
20090006729 - CACHE FOR A MULTI THREAD AND MULTI CORE SYSTEM AND METHODS THEREOF - According to one embodiment, the present disclosure generally provides a method for improving the performance of a cache of a processor. The method may include storing a plurality of data in a data Random Access Memory (RAM). The method may further include holding information for all outstanding requests forwarded to a next-level memory subsystem. The method may also include clearing information associated with a serviced request after the request has been fulfilled. The method may additionally include determining if a subsequent request matches an address supplied to one or more requests already in-flight to the next-level memory subsystem. The method may further include matching fulfilled requests serviced by the next-level memory subsystem to at least one requester who issued requests while an original request was in-flight to the next level memory subsystem. The method may also include storing information specific to each request, the information including a set attribute and a way attribute, the set and way attributes configured to identify where the returned data should be held in the data RAM once the data is returned, the information specific to each request further including at least one of thread ID, instruction queue position and color. The method may additionally include scheduling hit and miss data returns. Of course, various alternative embodiments are also within the scope of the present disclosure. (published 2009-01-01)
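The request-tracking part of the abstract (holding information for outstanding requests, matching later requests to ones already in flight, and clearing entries once serviced) can be sketched as a small miss-status table. The class name and return conventions are illustrative assumptions:

```python
class MissTracker:
    """Track cache misses in flight to the next-level memory subsystem."""

    def __init__(self):
        self.pending = {}  # address -> list of waiting requester ids

    def request(self, addr, requester):
        """Record a miss. Returns True if a new next-level request must be
        issued, False if the address is already in flight and the requester
        was merely merged onto the pending entry."""
        if addr in self.pending:
            self.pending[addr].append(requester)
            return False
        self.pending[addr] = [requester]
        return True

    def fill(self, addr):
        """Data returned from the next level: wake every waiting requester
        and clear the entry, as the abstract's clearing step requires."""
        return self.pending.pop(addr, [])

m = MissTracker()
issued_first = m.request(0x40, "thread0")
issued_second = m.request(0x40, "thread1")  # same line already in flight
```

The abstract additionally stores set/way attributes and thread ID per request; those would be extra fields on each pending entry and are omitted here for brevity.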
20090006730 - DATA EYE MONITOR METHOD AND APPARATUS - An apparatus and method for providing a data eye monitor. The data eye monitor apparatus utilizes an inverter/latch string circuit and a set of latches to save the data eye for providing an infinite persistent data eye. In operation, incoming read data signals are adjusted in the first stage individually and latched to provide the read data to the requesting unit. The data is also simultaneously fed into a balanced XOR tree to combine the transitions of all incoming read data signals into a single signal. This signal is passed along a delay chain and tapped at constant intervals. The tap points are fed into latches, capturing the transitions at a delay element interval resolution. Using XORs, differences between adjacent taps and therefore transitions are detected. The eye is defined by segments that show no transitions over a series of samples. The eye size and position can be used to readjust the delay of incoming signals and/or to control environment parameters like voltage, clock speed and temperature. (published 2009-01-01)
20090006731 - SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device is capable of controlling addresses and data mask information through the use of a common part, thereby reducing chip size. The semiconductor memory device for receiving the addresses and data mask information via a common pin includes a buffer unit and a shift register unit. The buffer unit receives the addresses and data mask information. The shift register unit is comprised of a plurality of latch stages connected in series, for sequentially latching the addresses and data mask information being inputted in series, and an address output unit and a data mask information output unit for outputting information from different latch stages. (published 2009-01-01)
20090006732 - STORAGE SYSTEM WITH SYNCHRONIZED PROCESSING ELEMENTS - A storage system is provided with an ASIC having an interconnect selectively coupling a plurality of dedicated purpose function controllers in the ASIC to a policy processor, via a list manager in the ASIC communicating on a peripheral device bus to which the policy processor is connected, and an event ring buffer to which all transaction requests from each of the plurality of function controllers to the policy processor are collectively posted in real time. (published 2009-01-01)
20090006733 - Drive Resources in Storage Library Behind Virtual Library - Embodiments include methods, apparatus, and systems for managing resources in a physical storage library behind a virtual storage library. In one embodiment, priorities are assigned to copy applications, and rules determine when applications are assigned to resources in the physical storage library. (published 2009-01-01)
20090006734 - APPARATUS, SYSTEM, AND METHOD FOR SELECTING A CLUSTER - An apparatus, system, and method are disclosed for selecting a source cluster in a distributed storage configuration. A measurement module measures system factors for a plurality of clusters over a plurality of instances. The clusters are in communication over a network and each cluster comprises at least one tape volume cache. A smoothing module applies a smoothing function to the system factors, wherein recent instances have higher weights. A lifespan module calculates a mount-to-dismount lifespan for each cluster from the smoothed system factors. A selection module selects a source cluster for accessing an instance of a specified volume in response to the mount-to-dismount lifespans and a user policy. (published 2009-01-01)
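The smoothing step above, where recent instances carry higher weights, is essentially an exponentially weighted average. The sketch below also assumes one simple selection policy (pick the cluster with the shortest smoothed mount-to-dismount lifespan); the real selection module additionally consults a user policy, which is omitted here:

```python
def smooth(samples, alpha=0.5):
    """Exponentially weighted average of samples (oldest first):
    each newer sample gets weight alpha against the running average,
    so recent instances dominate."""
    acc = samples[0]
    for s in samples[1:]:
        acc = alpha * s + (1 - alpha) * acc
    return acc

def pick_cluster(history):
    """Select the cluster whose smoothed lifespan is shortest (toy policy)."""
    return min(history, key=lambda c: smooth(history[c]))

# Hypothetical mount-to-dismount lifespans per cluster, oldest first.
history = {"clusterA": [10.0, 10.0, 2.0], "clusterB": [5.0, 5.0, 9.0]}
```

Note how the recency weighting changes the outcome: clusterA's raw average is worse, but its most recent lifespan is much better, so it wins.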
20090006735 - Storage unit and disk control method - A storage unit is provided which is connected to a host computer through a network, having one or more disks in which read and write operations are performed during rotation, and a control unit for controlling the rotation of the disks. In the storage unit, when receiving a message which is sent from the host computer and predicts that at least one of the disks will come in use, the control unit causes the at least one of the disks which will come in use, to rotate. (published 2009-01-01)
20090006736 - SYSTEMS AND METHODS FOR MANAGING DATA STORAGE - This invention is directed to a system by which data received by an electronic device from a server may be selectively stored in cache. The electronic device may define an anchor that is related to the current position of a playhead reading data stored in cache. The electronic device may then dynamically assign values to each data block of the received file based on the position of the anchor. As the anchor moves the value of data blocks changes, and new incoming data may replace less valuable data previously stored in cache. (published 2009-01-01)
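A toy version of the anchor-relative valuation described above: blocks ahead of the playhead anchor are valued higher than blocks behind it, value decays with distance, and the least valuable cached block becomes the replacement candidate. The value function itself is an invented illustration, not the patent's formula:

```python
def block_value(block_pos, anchor, ahead_weight=2.0):
    """Value a cached block by its position relative to the anchor:
    upcoming blocks (d >= 0) are worth more than already-played ones,
    and value falls off with distance in both directions."""
    d = block_pos - anchor
    return ahead_weight / (1 + d) if d >= 0 else 1.0 / (1 - d)

def evict_candidate(cached_blocks, anchor):
    """Pick the least valuable cached block to replace with incoming data."""
    return min(cached_blocks, key=lambda p: block_value(p, anchor))

blocks = [0, 5, 10, 20]  # hypothetical cached block positions
```

With the anchor at position 10, the block at position 0 (long since played) is the cheapest to sacrifice, while the block at the anchor itself is the most valuable.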
20090006737 - Implementing A Redundant Array Of Inexpensive Drives - Methods, apparatus, and products are disclosed for implementing a redundant array of inexpensive drives (‘RAID’) with an external RAID controller and hard disk drives from separate computers, including configuring by the external RAID controller a RAID array, the RAID array comprising hard disk drives from the separate computers, the external RAID controller comprising a hardware RAID controller installed externally with respect to the separate computers, and storing, by one or more of the separate computers through the external RAID controller, computer data on the RAID array. (published 2009-01-01)
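Independently of where the drives physically live, the core RAID mechanics the abstract relies on (striping data across drives and keeping redundancy so a lost drive can be rebuilt) look like this RAID-4-style sketch with XOR parity on a dedicated drive. The block size and layout are illustrative choices:

```python
def xor_parity(blocks):
    """XOR equal-length blocks byte by byte."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def stripe_write(data, n_drives, block=4):
    """Split data into block-sized chunks, stripe them across the first
    n_drives - 1 drives, and keep XOR parity on the last drive."""
    chunks = [data[i:i + block].ljust(block, b"\x00")
              for i in range(0, len(data), block)]
    drives = [[] for _ in range(n_drives)]
    for row in range(0, len(chunks), n_drives - 1):
        stripe = chunks[row:row + n_drives - 1]
        while len(stripe) < n_drives - 1:            # pad a short final stripe
            stripe.append(b"\x00" * block)
        for d, c in enumerate(stripe):
            drives[d].append(c)
        drives[-1].append(xor_parity(stripe))
    return drives

def recover(drives, lost):
    """Rebuild a lost drive by XORing the surviving drives row by row."""
    others = [drives[d] for d in range(len(drives)) if d != lost]
    return [xor_parity([col[r] for col in others]) for r in range(len(others[0]))]

drives = stripe_write(b"ABCDEFGH", n_drives=3, block=4)
```

XOR parity works for recovery because XOR is its own inverse: the parity block XORed with the surviving data blocks yields exactly the missing block.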
20090006738 - HOST ADAPTIVE SEEK TECHNIQUE ENVIRONMENT - A data storage system and associated method implement a HASTE with a policy engine that continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load, and continuously correlates a command profile to a data storage device of the data storage system in relation to the characterization. (published 2009-01-01)
20090006739 - REQUEST PRIORITY SEEK MANAGER - An apparatus and associated method are provided for a dual active-active array storage system with a first controller with top level control of a first memory space and a second controller with top level control of a second memory space different than the first memory space. A seek manager residing in only one of the controllers defines individual command profiles derived from a combined list of data transfer requests from both controllers. A policy engine continuously collects qualitative information about a network load to both controllers to dynamically characterize the load, and governs the seek manager to continuously correlate each command profile in relation to the load characterization. (published 2009-01-01)
20090006740 - DATA STRUCTURE FOR HIGHLY EFFICIENT DATA QUERIES - Apparatus and method for highly efficient data queries. In accordance with various embodiments, a data structure is provided in a memory space with a first portion characterized as a virtual data space storing non-sequential entries and a second portion characterized as a first data array of sequential entries. At least a first sequential entry of the data array points to a skip list, at least a second sequential entry of the data array points to a second data array, and at least a third sequential entry points to a selected non-sequential entry in the first portion. (published 2009-01-01)
20090006741 - PREFERRED ZONE SCHEDULING - A data storage system and associated method are provided wherein a policy engine continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load and continuously correlates the load characterization to the content of a command queue of transfer requests for writeback commands and host read commands, selectively limiting the content with respect to writeback commands to only those transfer requests for writeback data that are selected on a physical zone basis of a plurality of predefined physical zones of a storage media. (published 2009-01-01)
20090006742 - Method and apparatus improving performance of a digital memory array device - A method for improving performance of a digital memory array device including a plurality of memory cells; each respective memory cell storing a first digital value and a second digital value being an inverse of the first digital value; storing of the first and second digital values being controlled by a first digital signal effecting selection of a specified memory cell for storing; includes: (a) determining an extant value relating to the first digital signal; (b) if the extant value has a first value, effecting a bit flip operation in the specified memory cell to invert values of at least one of the stored first digital and the second digital values; (c) if the extant value does not have the first value, foregoing the bit flip operation in the specified memory cell. (published 2009-01-01)
20090006743 - Writing data to multiple storage devices - In one embodiment, the present invention includes a method to write a first block group of multiple data blocks of a write request into a first disk in a first time period, and write a second block group of the multiple data blocks into a second disk in the first time period. Later a flip write process may be performed to write the first and second block groups into the other disk. Other embodiments are described and claimed. (published 2009-01-01)
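A sketch of the two-phase write described above: the first pass splits the block groups across the two disks so both can work in parallel, and the later flip pass writes each group to the other disk, leaving a full copy on each. The even/odd split below is one assumed grouping, not necessarily the patent's:

```python
def mirrored_write(blocks, disks=2):
    """Two-phase mirrored write. Pass 1 distributes the blocks across the
    disks; pass 2 (the 'flip') writes each block to the other disk, so
    every disk ends up holding every block."""
    disk = [dict() for _ in range(disks)]  # per-disk map: block index -> data
    # Pass 1: split the write across the disks (even blocks to disk 0, odd to disk 1).
    for i, b in enumerate(blocks):
        disk[i % disks][i] = b
    # Pass 2 (flip): copy each block group to the other disk.
    for i, b in enumerate(blocks):
        disk[(i + 1) % disks][i] = b
    return disk

d = mirrored_write(["b0", "b1", "b2", "b3"])
```

The appeal of the scheme is that the host sees the write complete after the fast first pass, while full redundancy arrives with the deferred flip pass.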
20090006744 - Automated intermittent data mirroring volumes - Methods and apparatus relating to automated intermittent data mirroring volumes are described. In one embodiment, data mirroring may be suspended in response to occurrence of a scheduled or predefined event. Other embodiments are also disclosed. (published 2009-01-01)
20090006745 - Accessing snapshot data image of a data mirroring volume - Methods and apparatus relating to accessing snapshot data image of a data mirroring volume are described. In one embodiment, a host computer is allowed to access a first data volume and a second data volume. The second data volume may comprise data corresponding to a snapshot image of the first data volume prior to a suspension of data mirroring. Other embodiments are also disclosed. (published 2009-01-01)
20090006746 - Online Restriping Technique for Distributed Network Based Virtualization - A technique is provided for implementing online restriping of a volume in a storage area network. A first instance of the volume is instantiated at a first port of the fibre channel fabric for enabling I/O operations to be performed at the volume. While restriping operations are being performed at the volume, the first port is able to concurrently perform I/O operations at the volume. (published 2009-01-01)
20090006747 - INFORMATION PROCESSING APPARATUS AND CONTROL METHOD FOR THE SAME - An information processing apparatus capable of connecting to a plurality of terminal devices over a network includes a recording/reproducing unit configured to receive a removable memory medium, a detector configured to detect insertion and removal of the removable memory medium in and from the recording/reproducing unit, and a controller configured to acquire first information identifying a terminal device which is recorded in the removable memory medium when the detector detects the insertion of the removable memory medium and to control permission and prohibition of access to the removable memory medium from the terminal devices based on the first information. (published 2009-01-01)
20090006748 - METHOD FOR OPERATING A MEMORY INTERFACE WITH SIM FUNCTIONS - A method for operating a host device includes inserting a plug-in adapter, having a subscriber identity module (SIM) component disposed thereon, into a host receptacle of the host device. A memory card is inserted into a memory receptacle on the plug-in adapter. After inserting the plug-in adapter and the memory card, communications are conveyed between the host device and the SIM component via the adapter and the memory card. (published 2009-01-01)
20090006749 - DRIVE TRACKING SYSTEM FOR REMOVABLE MEDIA - A system, and associated methods, comprises a storage drive adapted to accommodate a removable storage medium and a central processing unit (“CPU”) configured to execute code. The code causes the storage drive to record audit information onto the storage medium. The audit information may comprise an identifying value identifying the storage drive and a time value indicative of when data was recorded to the storage medium. (published 2009-01-01)
20090006750 - Leveraging transactional memory hardware to accelerate virtualization and emulation - Various technologies and techniques are disclosed for using transactional memory hardware to accelerate virtualization or emulation. State isolation can be facilitated by providing isolated private state on transactional memory hardware and storing the stack of a host that is performing an emulation in the isolated private state. Memory accesses performed by a central processing unit can be monitored by software to detect that a guest being emulated has made a self modification to its own code sequence. Transactional memory hardware can be used to facilitate dispatch table updates in multithreaded environments by taking advantage of the atomic commit feature. An emulator is provided that uses a dispatch table stored in main memory to convert a guest program counter into a host program counter. The dispatch table is accessed to see if the dispatch table contains a particular host program counter for a particular guest program counter. (published 2009-01-01)
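The dispatch-table lookup described above can be sketched as a guest-PC-to-host-PC map with translate-on-miss. The patent publishes updates via transactional memory's atomic commit; the plain dict update below merely stands in for that, and the translator is a toy stand-in for real binary translation:

```python
def make_emulator(translate):
    """Build a dispatch-table lookup: guest program counter -> host program
    counter, translating the guest code block on a miss."""
    dispatch = {}

    def lookup(guest_pc):
        host_pc = dispatch.get(guest_pc)
        if host_pc is None:              # miss: translate, then publish the entry
            host_pc = translate(guest_pc)
            dispatch[guest_pc] = host_pc
        return host_pc

    return lookup, dispatch

# Toy translator: pretend host code for guest block g lives at 0x1000 + g * 16.
lookup, table = make_emulator(lambda g: 0x1000 + g * 16)
```

In a multithreaded emulator, the publish step is exactly where racing threads could observe a half-written entry, which is why the patent leans on the hardware's atomic commit for it.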
20090006751 - Leveraging transactional memory hardware to accelerate virtualization and emulation - Various technologies and techniques are disclosed for using transactional memory hardware to accelerate virtualization or emulation. A central processing unit is provided with the transactional memory hardware. Code backpatching can be facilitated by providing transactional memory hardware that supports a facility to maintain private memory state and an atomic commit feature. Changes made to certain code are stored in the private state facility. Backpatching changes are enacted by attempting to commit all the changes to memory at once using the atomic commit feature. An efficient call return stack can be provided by using transactional memory hardware. A call return cache stored in the private state facility captures a host address to return to after execution of a guest function completes. A direct-lookup hardware-based hash table is used for the call return cache. (published 2009-01-01)
20090006752 - High Capacity Memory Subsystem Architecture Employing Hierarchical Tree Configuration of Memory Modules - A high-capacity memory subsystem architecture utilizes multiple memory modules arranged in a hierarchical tree configuration, in which at least some communications from an external source traverse successive levels of the tree to reach memory modules at the lowest level. Preferably, the memory system employs buffered memory chips having dual-mode operation, one of which supports a tree configuration in which data is interleaved and the communications buses operate at reduced bus width and/or reduced bus frequency to match the level of interleaving. (published 2009-01-01)
20090006753 - DESIGN STRUCTURE FOR ACCESSING A CACHE WITH AN EFFECTIVE ADDRESS - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for accessing a processor cache is provided. The design structure comprises a processor having a processor core, a level one cache, and circuitry. The circuitry is configured to execute an access instruction in the processor's core, wherein the access instruction provides an untranslated effective address of data to be accessed by the access instruction, determine whether the processor core's level one cache includes the data corresponding to the effective address of the access instruction, wherein the effective address of the access instruction is used without address translation to determine whether the processor core's level one cache includes the data corresponding to the effective address, and provide the data for the access instruction from the level one cache if the level one cache includes the data corresponding to the effective address. (published 2009-01-01)
20090006754 - DESIGN STRUCTURE FOR L2 CACHE/NEST ADDRESS TRANSLATION - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for accessing a processor's cache memory is provided. The design structure comprises a processor having one or more level one caches, a lookaside buffer configured to include a corresponding entry for each cache line placed in each of the processor's one or more level one caches. The corresponding entry indicates a translation from the effective addresses to the real addresses for the cache line. The processor also comprises circuitry configured to access requested data in the processor's one or more level one caches using requested effective addresses of the requested data, translate the requested effective addresses to real addresses if the processor's one or more level one caches do not contain requested data corresponding to the requested effective addresses, and use the translated real addresses to access the level two cache. (published 2009-01-01)
20090006755 - Providing application-level information for use in cache management - In one embodiment, the present invention includes a method for associating a first identifier with data stored by a first agent in a cache line of a cache to indicate the identity of the first agent, and storing the first identifier with the data in the cache line and updating at least one of a plurality of counters associated with the first agent in a metadata storage in the cache, where the counter includes information regarding inter-agent interaction with respect to the cache line. Other embodiments are described and claimed. (published 2009-01-01)
20090006756 - CACHE MEMORY HAVING CONFIGURABLE ASSOCIATIVITY - A processor cache memory subsystem includes a cache memory having a configurable associativity. The cache memory may operate in a fully associative addressing mode and a direct addressing mode with reduced associativity. The cache memory includes a data storage array including a plurality of independently accessible sub-blocks for storing blocks of data. For example, each of the sub-blocks implements an n-way set associative cache. The cache memory subsystem also includes a cache controller that may programmably select a number of ways of associativity of the cache memory. When programmed to operate in the fully associative addressing mode, the cache controller may disable independent access to each of the independently accessible sub-blocks and enable concurrent tag lookup of all independently accessible sub-blocks, and when programmed to operate in the direct addressing mode, the cache controller may enable independent access to one or more subsets of the independently accessible sub-blocks. (published 2009-01-01)
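The associativity trade-off above comes down to how an address is split into tag, set index, and line offset: many sets gives direct-style addressing, while a single set (every line eligible for every address) is the fully associative limit. A sketch of that split, assuming power-of-two sizes; the parameter values are illustrative:

```python
def cache_index(addr, line_size, n_sets):
    """Split a physical address into (tag, set index, line offset) for a
    set-associative cache. With n_sets == 1 every address maps to the same
    set, i.e. the fully associative addressing mode."""
    offset = addr % line_size
    set_idx = (addr // line_size) % n_sets
    tag = addr // (line_size * n_sets)
    return tag, set_idx, offset

# Direct-addressing mode example: 64-byte lines, 4 sets.
tag, set_idx, offset = cache_index(0x1234, line_size=64, n_sets=4)
```

Reprogramming the number of ways, as the patent's cache controller does, effectively moves bits between the set-index field and the tag field of this split.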
20090006757 - HIERARCHICAL CACHE TAG ARCHITECTURE - An apparatus, system, and method are disclosed. In one embodiment, the apparatus includes a cache memory coupled to a processor. The apparatus additionally includes a tag storage structure that is coupled to the cache memory. The tag storage structure can store a tag associated with a location in the cache memory. The apparatus additionally includes a cache of cache tags coupled to the processor. The cache of cache tags can store a smaller subset of the tags stored in the tag storage structure. (published 2009-01-01)
20090006758 - SYSTEM BUS STRUCTURE FOR LARGE L2 CACHE ARRAY TOPOLOGY WITH DIFFERENT LATENCY DOMAINS - A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. (published 2009-01-01)
20090006759SYSTEM BUS STRUCTURE FOR LARGE L2 CACHE ARRAY TOPOLOGY WITH DIFFERENT LATENCY DOMAINS - A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.2009-01-01
20090006760Structure for Dual-Mode Memory Chip for High Capacity Memory Subsystem - A design structure is provided for a dual-mode memory chip supporting a first operation mode in which received data access commands contain chip select data to identify the chip addressed by the command, and control logic in the memory chip determines whether the command is addressed to the chip, and a second operation mode in which the received data access command addresses a set of multiple chips. Preferably, the first mode supports a daisy-chained configuration of memory chips. Preferably the second mode supports a hierarchical interleaved memory subsystem, in which each addressable set of chips is configured as a tree, command and write data being propagated down the tree, the number of chips increasing at each succeeding level of the tree.2009-01-01
20090006761Cache pollution avoidance - Embodiments of the present invention are directed to a scheme in which information as to the future behavior of particular software is used in order to optimize cache management and reduce cache pollution. Accordingly, a certain type of data can be defined as “short life data” by using knowledge of the expected behavior of particular software. Short life data can be a type of data which, according to the ordinary expected operation of the software, is not expected to be used by the software often in the future. Data blocks which are to be stored in the cache can be examined to determine if they are short life data blocks. If the data blocks are in fact short life data blocks they can be stored only in a particular short life area of the cache.2009-01-01
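The short-life scheme above can be sketched as a cache with a small dedicated region that confines short-life blocks so they cannot evict long-lived entries. This is a minimal illustration assuming an LRU policy per region; the class name and capacities are illustrative, not taken from the application.

```python
from collections import OrderedDict

class PollutionAwareCache:
    """Cache split into a main region and a small short-life region."""

    def __init__(self, main_capacity, short_life_capacity):
        self.main = OrderedDict()    # LRU region for ordinary blocks
        self.short = OrderedDict()   # small region for short-life blocks
        self.main_cap = main_capacity
        self.short_cap = short_life_capacity

    def put(self, key, value, short_life=False):
        region, cap = (
            (self.short, self.short_cap) if short_life
            else (self.main, self.main_cap)
        )
        region[key] = value
        region.move_to_end(key)
        if len(region) > cap:
            region.popitem(last=False)   # evict LRU within this region only

    def get(self, key):
        for region in (self.main, self.short):
            if key in region:
                region.move_to_end(key)
                return region[key]
        return None
```

Because eviction is confined to the region a block was stored in, a burst of short-life data can at most churn the small short-life area while the main region stays warm.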
20090006762METHOD AND APPARATUS OF PREFETCHING STREAMS OF VARYING PREFETCH DEPTH - Method and apparatus of prefetching streams of varying prefetch depth dynamically changes the depth of prefetching so that the number of multiple streams as well as the hit rate of a single stream are optimized. The method and apparatus in one aspect monitor a plurality of load requests from a processing unit for data in a prefetch buffer, determine an access pattern associated with the plurality of load requests and adjust a prefetch depth according to the access pattern.2009-01-01
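One way to picture varying prefetch depth is a controller that deepens prefetching while recent requests hit the prefetch buffer and backs off on misses. The feedback rule and bounds below are assumptions for illustration, not the application's actual policy.

```python
class PrefetchDepthController:
    """Adjusts prefetch depth from observed prefetch-buffer hits/misses."""

    def __init__(self, min_depth=1, max_depth=8):
        self.depth = min_depth
        self.min_depth = min_depth
        self.max_depth = max_depth

    def record_access(self, hit):
        # A hit suggests a useful stream: fetch further ahead.
        # A miss suggests a broken or competing stream: pull back.
        if hit:
            self.depth = min(self.depth + 1, self.max_depth)
        else:
            self.depth = max(self.depth - 1, self.min_depth)
        return self.depth
```

Bounding the depth keeps a single hot stream from monopolizing buffer space needed by the other concurrent streams.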
20090006763Arrangement And Method For Update Of Configuration Cache Data - An arrangement and method for update of configuration cache data in a disk storage subsystem in which a cache memory (2009-01-01
20090006764INSERTION OF COHERENCE REQUESTS FOR DEBUGGING A MULTIPROCESSOR - A method and system are disclosed to insert coherence events in a multiprocessor computer system, and to present those coherence events to the processors of the multiprocessor computer system for analysis and debugging purposes. The coherence events are inserted in the computer system by adding one or more special insert registers. By writing into the insert registers, coherence events are inserted in the multiprocessor system as if they were generated by the normal coherence protocol. Once these coherence events are processed, the processing of coherence events can continue in the normal operation mode.2009-01-01
20090006765METHOD AND SYSTEM FOR REDUCING CACHE CONFLICTS - Disclosed is a system and method for storing a plurality of data packets in a plurality of memory buffers in a cache memory for reducing cache conflicts. The method includes determining size of each of a plurality of data packets; storing a first data packet of the plurality of data packets starting from a first address in a first memory buffer of the plurality of memory buffers; determining an offset based on the size of the first data packet; and storing a second data packet in a second buffer starting from a second address based on the offset.2009-01-01
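The offset idea can be sketched as follows: derive the second packet's start address from the first packet's size so the two packets begin in different cache sets. The line size, set count, and skip-one-line rule are illustrative assumptions.

```python
LINE_SIZE = 64   # bytes per cache line (assumed)
NUM_SETS = 8     # sets in the cache (assumed)

def cache_set(addr):
    """Set index for a direct-mapped cache: (addr / line) mod sets."""
    return (addr // LINE_SIZE) % NUM_SETS

def place_second_packet(first_addr, first_size):
    """Pick a start address for the second packet from the first's size."""
    end = first_addr + first_size
    # Round up to the next line boundary, then skip one extra line so the
    # second packet starts in a different set than the first.
    aligned = (end + LINE_SIZE - 1) // LINE_SIZE * LINE_SIZE
    return aligned + LINE_SIZE
```

With a first packet of 100 bytes at address 0, the second packet lands at 192, which maps to a different set than address 0.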
20090006766DATA PROCESSING SYSTEM AND METHOD FOR PREDICTIVELY SELECTING A SCOPE OF BROADCAST OF AN OPERATION UTILIZING A HISTORY-BASED PREDICTION - According to a method of data processing, a predictor is maintained that indicates a historical scope of broadcast for one or more previous operations transmitted on an interconnect of a data processing system. A scope of broadcast of a subsequent operation is predictively selected by reference to the predictor.2009-01-01
20090006769USING EPHEMERAL STORES FOR FINE-GRAINED CONFLICT DETECTION IN A HARDWARE ACCELERATED STM - A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have any arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during the pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations including an ephemeral/private store operation to set the filter word to a second state are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate a previous access occurred.2009-01-01
20090006768Method and Apparatus for Accessing a Split Cache Directory - A method and apparatus for accessing a cache. The method includes receiving a request to access the cache. The request includes an address of requested data to be accessed. The method also includes using a first portion of the address to perform an access to a first directory for the cache and using a second portion of the address to perform an access to a second directory for the cache. Results from the access to the first directory for the cache and results from the access to the second directory for the cache are used to determine whether the cache includes the requested data to be accessed.2009-01-01
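The split-directory lookup can be illustrated by storing two halves of each tag in two separate directories, indexed by the same set bits, and declaring a hit only when both halves match. The bit widths here are arbitrary assumptions to keep the sketch small.

```python
SET_BITS = 4   # set-index width (assumed)

class SplitDirectoryCache:
    """Toy directory split into a low-tag half and a high-tag half."""

    def __init__(self):
        n = 1 << SET_BITS
        self.dir_lo = [None] * n   # lower 8 tag bits per set
        self.dir_hi = [None] * n   # remaining tag bits per set

    def _parts(self, addr):
        set_idx = addr & ((1 << SET_BITS) - 1)
        tag = addr >> SET_BITS
        return set_idx, tag & 0xFF, tag >> 8

    def fill(self, addr):
        s, lo, hi = self._parts(addr)
        self.dir_lo[s], self.dir_hi[s] = lo, hi

    def hit(self, addr):
        # Both directory accesses use different portions of the address;
        # the cache contains the data only if both results agree.
        s, lo, hi = self._parts(addr)
        return self.dir_lo[s] == lo and self.dir_hi[s] == hi
```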
20090006769PROGRAMMABLE PARTITIONING FOR HIGH-PERFORMANCE COHERENCE DOMAINS IN A MULTIPROCESSOR SYSTEM - A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.2009-01-01
20090006770NOVEL SNOOP FILTER FOR FILTERING SNOOP REQUESTS - A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having one or more local cache memories associated and operatively connected therewith. The method comprises providing a snoop filter device associated with each processing unit, each snoop filter device having a plurality of dedicated input ports for receiving snoop requests from dedicated memory writing sources in the multiprocessor computing environment. Each snoop filter device includes a plurality of parallel operating port snoop filters in correspondence with the plurality of dedicated input ports, each port snoop filter implementing one or more parallel operating sub-filter elements that are adapted to concurrently filter snoop requests received from respective dedicated memory writing sources and forward a subset of those requests to its associated processing unit.2009-01-01
20090006771Digital data management using shared memory pool - Memory management techniques involve establishing a memory pool having an amount of sharable memory, and dynamically allocating the sharable memory to concurrently manage multiple sets of sequenced units of digital data. In an exemplary scenario, the sets of sequenced units of digital data are sets of time-ordered media samples forming clips of media content, and the techniques are applied when media samples from two or more clips are simultaneously presentable to a user as independently-controlled streams. Variable amounts of sharable memory are dynamically allocated for preparing upcoming media samples for presentation to the user. In one possible implementation, a ratio of average data rates of individual streams is calculated, and amounts of sharable memory are allocated to rendering each stream based on the ratio. Then, the sharable memory allocated to rendering individual streams is reserved as needed to prepare particular upcoming media samples for presentation to the user.2009-01-01
20090006772Memory Chip for High Capacity Memory Subsystem Supporting Replication of Command Data - A memory module contains a first interface for receiving data access commands and a second interface for re-transmitting data access commands to other memory modules, the second interface propagating multiple copies of received data access commands to multiple other memory modules. The memory module is preferably used in a high-capacity memory subsystem organized in a tree configuration in which data accesses are interleaved. Preferably, the memory module has multiple-mode operation, one of which supports multiple replication of commands and another of which supports conventional daisy-chaining.2009-01-01
20090006773Signal Processing Apparatus - A signal processing apparatus able to raise a processing capability in processing accompanying access to a storing means is provided. Stream control units (SCU) 2009-01-01
20090006774High Capacity Memory Subsystem Architecture Employing Multiple-Speed Bus - A high-capacity memory subsystem architecture utilizes multiple memory modules coupled to one or more access modules by a communications medium, in which at least some data is transferred between an access module and memory modules at a first bus frequency, and at least some data is transferred between the access module and memory modules at a second bus frequency different from the first. Preferably, data is interleaved to reduce the required bus speed for read/write data, and the higher bus frequency is used to transfer command/address data. Preferably, the memory system employs memory chips having dual-mode operation, one of which supports a dual-speed bus.2009-01-01
20090006775Dual-Mode Memory Chip for High Capacity Memory Subsystem - A dual-mode memory chip supports a first operation mode in which received data access commands contain chip select data to identify the chip addressed by the command, and control logic in the memory chip determines whether the command is addressed to the chip, and a second operation mode in which the received data access command addresses a set of multiple chips. Preferably, the first mode supports a daisy-chained configuration of memory chips. Preferably the second mode supports a hierarchical interleaved memory subsystem, in which each addressable set of chips is configured as a tree, command and write data being propagated down the tree, the number of chips increasing at each succeeding level of the tree.2009-01-01
20090006776MEMORY LINK TRAINING - An apparatus and method are disclosed. In one embodiment, the apparatus trains a memory link using a signal alignment unit. The signal alignment unit aligns a read data strobe signal that is transmitted on the link with the center of a read data eye transmitted on the link. Next, the signal alignment unit aligns a receive enable signal that is transmitted on the link with the absolute time at which data returns on the data lines of the link after a column address strobe signal is sent to the memory coupled to the link. Next, the signal alignment unit aligns a write data strobe signal transmitted on the link with the link's clock signal. Finally, the signal alignment unit aligns the center of the write data eye transmitted on the link with the write data strobe transmitted on the link.2009-01-01
20090006777APPARATUS FOR REDUCING CACHE LATENCY WHILE PRESERVING CACHE BANDWIDTH IN A CACHE SUBSYSTEM OF A PROCESSOR - A processor cache memory subsystem includes a cache controller coupled to a tag logic unit. The cache controller may monitor read request resources associated with the cache subsystem and receive read requests for data stored in a data storage array of the cache subsystem. The tag logic unit may determine whether one or more requested address bits match any address tag stored within a tag array of the cache subsystem. The cache controller may, in response to determining the read request resources associated with the cache subsystem are available, selectably send the request for data with an implicit request indication being asserted. In response to determining the read request resources associated with the cache subsystem are not available, the cache controller may send the request for data without an implicit request indication being asserted.2009-01-01
20090006778METHODS AND APPARATUS FOR H-ARQ PROCESS MEMORY MANAGEMENT - Methods and apparatus are presented for H-ARQ process dynamic memory management. A method for dynamically managing memory for storing data associated with H-ARQ processes is presented, which includes receiving a packet associated with a H-ARQ process, determining if a free memory location is available in a H-ARQ buffer, assigning the packet to the free memory location, determining if the packet was successfully decoded, and retaining the packet in the assigned memory location for combination with a subsequent packet retransmission if the packet was not successfully decoded. Also presented are apparatus having logic configured to perform the presented methods.2009-01-01
20090006779Memory control system and memory data fetching method - The invention discloses a memory control system and a method to read data from memory. The memory control system comprises a control unit, a storage device, and a microprocessor. The memory control system and the method to read data from memory according to the invention utilize an unbalanced microprocessor clock signal with phases of different durations to control the microprocessor so as to increase the speed of reading memory.2009-01-01
20090006780Storage system and path management method - A storage system and a path management method that can facilitate node replacement are proposed. In the storage system, the host sets plural paths between the host and the volume and holds path information composed of management information on each of the paths; and the management apparatus includes an integrated path management unit that collects the path information on each of the paths defined between the host and the volume from the corresponding host to manage all the collected information as integrated path information; retrieves an alternate path going through a node other than a specified node but that has the same function as the specified node, for the path going through the specified node, based on the integrated path information; and displays results of the retrieval.2009-01-01
20090006781Structure for Memory Chip for High Capacity Memory Subsystem Supporting Multiple Speed Bus - A design structure is provided for a memory module containing an interface for receiving memory access commands from an external source, in which a first portion of the interface receives memory access data at a first bus frequency and a second portion of the interface receives memory access data at a second different bus frequency. Preferably, the memory module contains a second interface for re-transmitting memory access data, also operating at dual frequency. The memory module is preferably used in a high-capacity memory subsystem organized in a tree configuration in which data accesses are interleaved. Preferably, the memory module has multiple-mode operation, one of which supports dual-speed buses for receiving and re-transmitting different parts of data access commands, and another of which supports conventional daisy-chaining.2009-01-01
20090006782APPARATUS AND METHOD FOR ACCESSING A MEMORY DEVICE - An apparatus and a corresponding method for coupling a memory device being addressable by means of an address space to a processing unit, the apparatus consisting: 2009-01-01
20090006783Information Processing System, Reader/Writer, Information Processing Apparatus, Access Control Management Method and Program - There is provided an information processing system having a reader/writer and an information processing apparatus. The reader/writer includes a processing section for executing service processing, a processing completion determining section for determining completion of the processing, a control information generating section for generating control information depending on the determination result, and a control information transmitting section for transmitting the control information; and the information processing apparatus includes an internal memory having an access control area, an in-chip communication section for receiving the control information, an internal memory managing section for storing the received control information in the internal memory, a control information obtaining section for obtaining the control information from the internal memory, and an access control managing section for setting the access control for the access control area based on the control information.2009-01-01
20090006784ADDRESS EXCLUSIVE CONTROL SYSTEM AND ADDRESS EXCLUSIVE CONTROL METHOD - An address lock register managing address exclusive control is made to retain not only an address but also a request type, an access destination, and a cache block. Upon receiving a new request, the address lock register is first consulted to judge whether an exclusive condition is satisfied, that is, whether an address match, CPU match, LINE match or SX-WAY match is present, and whether the address lock is busy in accordance with the output of an AND circuit. Further, the configuration is such that, upon receiving a response request, the address lock register is consulted to confirm that the addresses are identical to each other; additionally, the response source is validated to be identical to a lock flag and the new request causing the lock is validated to be consistent with the response request, so that the lock is not released unless a correct response is made.2009-01-01
20090006785APPARATUS, METHOD AND SYSTEM FOR COMPARING SAMPLE DATA WITH COMPARISON DATA - An apparatus, method and system for comparing sample data with comparison data is disclosed. One embodiment provides a plurality of storage locations, an interface coupled to the plurality of storage locations for an exchange of data between the plurality of storage locations and external circuitry coupled to the interface, and a data comparator for comparing comparison data stored in the plurality of storage locations and sample data.2009-01-01
20090006786SYSTEM FOR COMMUNICATING WITH A NON-VOLATILE MEMORY STORAGE DEVICE - A storage device is provided. The storage device includes a command parser module for interpreting a command from a host system in a platform independent format; and for extracting information regarding an operation from the command, wherein the command parser module interfaces with the host system.2009-01-01
20090006787Storage device with write barrier sensitive write commands and write barrier insensitive commands - The invention is a storage device which implements a write barrier command and provides means for a host to designate other write commands as being sensitive or insensitive to the existence of write barrier commands. The device can optimize the execution of commands by changing the order of execution of write commands that are insensitive to write barrier commands. In an embodiment of the invention a flag associated with the write command indicates whether the command is sensitive or insensitive to the existence of write barrier commands. In an embodiment of the invention the write barrier command can be implemented as a write command with a flag that indicates whether the command is a write barrier command. In one embodiment of the invention the queue of commands and data to be written to the media is stored in a non-volatile cache.2009-01-01
20090006788ASSOCIATING A FLEXIBLE DATA HIERARCHY WITH AN AVAILABILITY CONDITION IN A GRANTING MATRIX - Systems and methods are presented that may involve specifying an availability condition associated with a data hierarchy in a database. It may also involve storing the availability condition in a matrix and using the matrix to determine access to data in the data hierarchy. In embodiments, the data hierarchy may be a flexible data hierarchy wherein a selected dimension of data within the hierarchy may be held temporarily fixed while flexibly accessing other dimensions of the data. In embodiments, the process may further involve specifying an availability condition, wherein the specification of the availability condition does not require modification of the datum or restatement of the database.2009-01-01
20090006789COMPUTER PROGRAM PRODUCT AND A SYSTEM FOR A PRIORITY SCHEME FOR TRANSMITTING BLOCKS OF DATA - Provided are techniques for transmitting blocks of data. It is determined whether any high priority out of sync (HPOOS) indicator is set to indicate that a number of modified segments associated with a block of data are less than or equal to a modified segments threshold. In response to determining that at least one high priority out of sync indicator is set, one or more sub-blocks of data in the modified segments associated with the block of data and with one set high priority out of sync indicator are transferred.2009-01-01
20090006790High Capacity Memory Subsystem Architecture Storing Interleaved Data for Reduced Bus Speed - A high-capacity memory subsystem architecture utilizes multiple memory modules arranged in one or more clusters, each attached to a respective hub which in turn is attached to a memory controller. Within a cluster, data is interleaved so that each data access command accesses all modules of the cluster. The hub communicates with the memory modules at a lower bus frequency, but the distributing of data among multiple modules enables the cluster to maintain the composite data rate of the memory-controller-to-hub bus. Preferably, the memory system employs buffered memory chips having dual-mode operation, one of which supports a cluster configuration in which data is interleaved and the communications buses operate at reduced bus width and/or reduced bus frequency to match the level of interleaving.2009-01-01
20090006791DATA MOVEMENT AND INITIALIZATION AGGREGATION - A system and method for copying and initializing a block of memory. To copy several data entities from a source region of memory to a destination region of memory, an instruction may copy each data entity one at a time. If an aggregate condition is determined to be satisfied, multiple data entities may be copied simultaneously. The aggregate condition may rely on an aggregate data size, the size of the data entities to be copied, and the alignment of the source and destination addresses.2009-01-01
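The aggregate condition can be sketched as a copy loop that moves whole chunks while enough data remains and the source and destination offsets share alignment, then falls back to one element at a time. The chunk size and the exact condition are assumptions for illustration.

```python
AGGREGATE_SIZE = 8  # elements moved per aggregated copy (assumed value)

def copy_block(src, dst, src_off, dst_off, count):
    """Copy count elements, aggregating when the condition allows."""
    i = 0
    # Aggregate condition: enough remaining data, and both the source and
    # destination positions are aligned to the aggregate size.
    while (count - i >= AGGREGATE_SIZE
           and (src_off + i) % AGGREGATE_SIZE == 0
           and (dst_off + i) % AGGREGATE_SIZE == 0):
        dst[dst_off + i : dst_off + i + AGGREGATE_SIZE] = \
            src[src_off + i : src_off + i + AGGREGATE_SIZE]
        i += AGGREGATE_SIZE
    while i < count:                     # unaligned tail: element at a time
        dst[dst_off + i] = src[src_off + i]
        i += 1
```

When the offsets never align, the aggregate loop simply never fires and the routine degrades to the element-by-element copy described first.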
20090006792System and Method to Identify Changed Data Blocks - Differences between data objects stored on a mass storage device can be identified quickly and efficiently by comparing block numbers stored in data structures that describe the data objects. Bit-by-bit or byte-by-byte comparisons of the objects' actual data need only be performed if the block numbers are different. Objects that share many data blocks can be compared much faster than by a direct comparison of all the objects' data. The fast comparison techniques can be used to improve storage server mirrors and database storage operations, among other applications.2009-01-01
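The fast comparison reduces to this: blocks whose numbers match are physically shared and need no data comparison; only blocks with differing numbers are read and compared. A minimal sketch, assuming each object is described by a list of block numbers and `read_block` is a caller-supplied accessor:

```python
def changed_blocks(blocks_a, blocks_b, read_block):
    """Return indices where the two objects' data actually differs."""
    changed = []
    for i, (a, b) in enumerate(zip(blocks_a, blocks_b)):
        if a == b:
            continue                     # same physical block: identical data
        if read_block(a) != read_block(b):
            changed.append(i)            # different blocks with different data
    return changed
```

For objects that share most of their blocks (e.g. a snapshot and its parent), nearly every index is skipped without touching the data at all.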
20090006793Method And Apparatus To Enable Runtime Memory Migration With Operating System Assistance - In a method for switching to a spare memory module during runtime, a processing system determines that utilization of an active memory module in the processing system should be discontinued. The processing system may then activate a mirror copy mode that causes a memory controller in the processing system to copy data from the active memory module to the spare memory module when the data is accessed in the active memory module. An operating system (OS) in the processing system may then access data in the active memory module to cause the memory controller to copy data from the active memory module to the spare memory module. The processing system may then reconfigure the memory controller to direct reads and writes to the spare memory module instead of the active memory module. Other embodiments are described and claimed.2009-01-01
20090006794Asynchronous remote copy system and control method for the same - Provided is a control method for an asynchronous remote copy system, the method including: fixing data in a secondary volume; determining a certain area in the secondary volume that contains data required to be copied to a primary volume from the secondary volume in which the data is fixed; copying the data in the certain area to the primary volume; and swapping the primary volume and the secondary volume for restarting the asynchronous remote copy.2009-01-01
20090006795Security protection for computer long-term memory devices - A security protection device provides protection for computer long-term storage devices, such as hard drives. The security protection device is placed between a host computer and the storage device. The security protection device intercepts communications between the host and the storage device and examines any commands from the host to the storage device. Only “safe” commands that match commands on a pre-approved list are passed to the storage device. All other commands may be discarded.2009-01-01
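The pass/discard behaviour is a simple whitelist filter. A minimal sketch, with illustrative ATA-style command names that are assumptions, not taken from the application:

```python
# Pre-approved "safe" commands (illustrative names).
SAFE_COMMANDS = {"READ_SECTORS", "WRITE_SECTORS", "IDENTIFY_DEVICE", "FLUSH_CACHE"}

def filter_commands(commands):
    """Split intercepted commands into those forwarded and those discarded."""
    passed, discarded = [], []
    for cmd in commands:
        (passed if cmd in SAFE_COMMANDS else discarded).append(cmd)
    return passed, discarded
```

A whitelist (rather than a blacklist) means any command the device does not recognize is dropped by default, which is the safer failure mode.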
20090006796Media Content Processing System and Non-Volatile Memory That Utilizes A Header Portion of a File - A computer readable media storing operational instructions is disclosed. The instructions include at least one instruction to store data of an encrypted computer readable file that includes a header portion and associated content data into a storage area of a non-volatile memory. The storage area includes a secure memory area to store data from the header portion including at least one encryption ID. The storage area further includes a memory area to store the content data. The header portion further includes trailer data derived from a portion of the content data. The instructions also include at least one instruction to provide data read access to the header portion and to the content data with respect to a host device.2009-01-01
20090006797FENCING USING A HIERARCHICAL RELATIONSHIP - A method and apparatus for processing a write request at a storage device is provided. A write request that identifies a sender of the write request is received at a storage device. The write request is examined to determine the identity of the sender. A determination is made as to whether, within a hierarchical relationship, the sender is subordinate to any entity that has been designated as being unable to perform write requests at the storage device. Upon determining that (a) the sender is not subordinate to any entity that has been designated as being unable to perform write requests at the storage device, and (b) the sender has not been designated as being unable to perform write requests at the storage device, the sender is allowed to write to the storage device. Thereafter, the write request from the sender may be performed at the storage device.2009-01-01
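The hierarchical check can be sketched as a walk from the sender up through its ancestors: the write is refused if the sender or any ancestor appears in the fenced set. The hierarchy representation (a child-to-parent map) is an assumption for illustration.

```python
def may_write(sender, parent_of, fenced):
    """True if neither sender nor any ancestor has been fenced."""
    node = sender
    while node is not None:
        if node in fenced:
            return False          # sender, or an entity it is subordinate
                                  # to, is designated unable to write
        node = parent_of.get(node)
    return True
```

Fencing one high-level entity (e.g. a host) thereby blocks every subordinate sender without enumerating them individually.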
20090006798Structure for Memory Chip for High Capacity Memory Subsystem Supporting Replication of Command Data - A design structure is provided for a memory module containing a first interface for receiving data access commands and a second interface for re-transmitting data access commands to other memory modules, the second interface propagating multiple copies of received data access commands to multiple other memory modules. The memory module is preferably used in a high-capacity memory subsystem organized in a tree configuration in which data accesses are interleaved. Preferably, the memory module has multiple-mode operation, one of which supports multiple replication of commands and another of which supports conventional daisy-chaining2009-01-01
20090006799HANDLING MULTI-RANK POOLS AND VARYING DEGREES OF CONTROL IN VOLUME ALLOCATION ON STORAGE CONTROLLERS - Techniques are disclosed for optimizing volume allocation on storage controllers that may have varying degrees of control over directing storage on ranks of pools attached storage components. A performance-based volume allocation algorithm can optimize allocation for such various controllers in a smooth, uniform manner allowing changes from one degree of control to another without incurring costly code changes and re-architecting costs. Where control is not available a surrogate set of possible ranks where the allocation could be made is developed and employed to calculate an adjusted utilization cost. In turn, the adjusted utilization cost is used to calculate a space limit value limited by a target performance threshold.2009-01-01
20090006800CONFIGURABLE MEMORY SYSTEM AND METHOD FOR PROVIDING ATOMIC COUNTING OPERATIONS IN A MEMORY DEVICE - A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provide all this functionality through a combination of software and hardware.2009-01-01
20090006801SYSTEM, METHOD AND PROGRAM TO MANAGE MEMORY OF A VIRTUAL MACHINE - Management of virtual memory allocated by a virtual machine control program to a plurality of virtual machines. Each of the virtual machines has an allocation of virtual private memory divided into working memory, cache memory and swap memory. The virtual machine control program determines that it needs additional virtual memory allocation, and in response, makes respective requests to the virtual machines to convert some of their respective working memory and/or cache memory to swap memory. At another time, the virtual machine control program determines that it needs less virtual memory allocation, and in response, makes respective requests to the virtual machines to convert some of their respective swap memory to working memory and/or cache memory.2009-01-01
20090006802VIRTUAL STORAGE SPACE WITH CYCLICAL WRAPPING GRID FUNCTION - Apparatus and method for arranging a virtual storage space with a cyclical wrapping grid function. The virtual storage space is formed from a physical memory and comprises a plurality of larger grains of selected storage capacity, each divided into a power-of-two number of smaller grains. Each of the larger grains is distributed across a non-power-of-two number of storage elements so that each of the storage elements receives the same number of smaller grains.2009-01-01
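Since a power-of-two grain count cannot divide evenly across a non-power-of-two element count within one larger grain, the equal distribution must emerge over a cycle of larger grains. A minimal sketch of such a cyclical wrapping assignment (round-robin that wraps across larger-grain boundaries; the exact grid function in the patent may differ):

```python
def distribute(num_large, grains_per_large, num_elements):
    """Round-robin ('cyclical wrapping') placement of smaller grains
    onto storage elements, wrapping across larger-grain boundaries.
    Returns the placement map and per-element grain counts."""
    placement = {}                      # (large_idx, small_idx) -> element
    counts = [0] * num_elements
    elem = 0
    for lg in range(num_large):
        for sg in range(grains_per_large):
            placement[(lg, sg)] = elem
            counts[elem] += 1
            elem = (elem + 1) % num_elements  # wrap cyclically
    return placement, counts
```

After a full cycle of 3 larger grains of 4 smaller grains each over 3 storage elements, every element holds exactly 4 smaller grains, even though 4 is not divisible by 3.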
20090006803L2 Cache/Nest Address Translation - A method and apparatus for accessing cache memory in a processor. The method includes accessing requested data in one or more level one caches of the processor using requested effective addresses of the requested data. If the one or more level one caches of the processor do not contain requested data corresponding to the requested effective addresses, the requested effective addresses are translated to real addresses. A lookaside buffer includes a corresponding entry for each cache line in each of the one or more level one caches of the processor. The corresponding entry indicates a translation from the effective addresses to the real addresses for the cache line. The translated real addresses are used to access a level two cache.2009-01-01
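This scheme (together with the related effective-address L1 entry, 20090006812, below) can be modeled as an L1 indexed purely by effective address, with a per-line lookaside table supplying the real address only when the L2 must be consulted. A toy sketch; the `translate` rule, table sizes, and miss handling are assumptions:

```python
class EACache:
    """L1 keyed by effective address (EA); on an L1 miss, a lookaside
    entry (one per L1 line) gives the real address (RA) for the L2
    lookup. 'translate' stands in for a page-table walk."""
    def __init__(self, translate):
        self.l1 = {}         # EA -> data (no translation on a hit)
        self.lookaside = {}  # EA -> RA, one entry per cached line
        self.l2 = {}         # RA -> data
        self.translate = translate

    def load(self, ea):
        if ea in self.l1:                  # L1 hit: EA used directly
            return self.l1[ea]
        ra = self.lookaside.get(ea)
        if ra is None:
            ra = self.translate(ea)        # miss in lookaside: walk tables
            self.lookaside[ea] = ra
        data = self.l2.get(ra)             # L2 is accessed by RA
        if data is not None:
            self.l1[ea] = data             # fill L1 for future EA hits
        return data
```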
20090006804BI-LEVEL MAP STRUCTURE FOR SPARSE ALLOCATION OF VIRTUAL STORAGE - Apparatus and method for accessing a virtual storage space. The space is arranged across a plurality of storage elements, and a skip list is used to map as individual nodes each of a plurality of non-overlapping ranges of virtual block addresses of the virtual storage space from a selected storage element.2009-01-01
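The per-node mapping of non-overlapping virtual block address ranges can be sketched with a sorted structure standing in for the skip list (a skip list gives the same expected O(log n) lookup; `bisect` over parallel arrays is used here purely for brevity):

```python
import bisect

class RangeMap:
    """Maps non-overlapping [start, start+length) virtual block
    address ranges to storage elements; unmapped addresses return
    None, reflecting sparse allocation."""
    def __init__(self):
        self._starts = []   # sorted range start addresses
        self._nodes = []    # parallel (length, element) node data

    def insert(self, start, length, element):
        i = bisect.bisect_left(self._starts, start)
        self._starts.insert(i, start)
        self._nodes.insert(i, (length, element))

    def lookup(self, vba):
        i = bisect.bisect_right(self._starts, vba) - 1
        if i >= 0:
            length, element = self._nodes[i]
            if vba < self._starts[i] + length:
                return element
        return None   # hole in the sparse virtual space
```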
20090006805Method and apparatus for supporting address translation in a virtual machine environment - In one embodiment, a method includes receiving control transitioned from a virtual machine (VM) due to a privileged event pertaining to a translation-lookaside buffer (TLB), and determining which entries in a guest translation data structure were modified by the VM. The determination is made based on metadata extracted from a shadow translation data structure maintained by a virtual machine monitor (VMM) and attributes associated with entries in the shadow translation data structure. The method further includes synchronizing entries in the shadow translation data structure that correspond to the modified entries in the guest translation data structure with the modified entries in the guest translation data structure.2009-01-01
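The selective resynchronization above can be sketched as: on the VM exit, only the shadow entries whose guest entries the metadata flags as modified are rewritten. The guest-to-host mapping `g2h` and the flat page-table dictionaries are stand-ins, not the patent's data structures:

```python
def sync_shadow(guest_pt, shadow_pt, modified_vpns, g2h):
    """Resynchronize only the shadow entries corresponding to guest
    page-table entries the VMM's metadata marks as modified.
    vpn: virtual page number; g2h: guest-to-host frame mapping."""
    for vpn in modified_vpns:
        shadow_pt[vpn] = g2h(guest_pt[vpn])   # resync modified entry only
    return shadow_pt
```

Untouched shadow entries keep their current (possibly stale-looking but actually valid) values, which is what makes the exit cheap: the VMM avoids rebuilding the whole shadow structure.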
20090006806Local Memory And Main Memory Management In A Data Processing System - A data processing system (2009-01-01
20090006807METHOD FOR MEMORY ADDRESS ARRANGEMENT - A method for memory address arrangement is provided. Data of different Y coordinates is moved to operation units divided by different X coordinates, or data of different X coordinates is moved to operation units divided by different Y coordinates, so that a plurality of batches of data can be read and written both longitudinally and laterally at the same time, removing the limitation of reading and writing a plurality of batches of data in only one of the two directions.2009-01-01
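A standard way to achieve simultaneous row and column access is a skewed bank assignment, where element (x, y) lives in bank (x + y) mod N; every row and every column then touches N distinct banks and can be transferred in one cycle. This skew rule is a common technique consistent with, but not quoted from, the abstract:

```python
def bank_of(x, y, num_banks):
    """Skewed arrangement: element (x, y) is stored in bank
    (x + y) mod num_banks, so the num_banks elements of any single
    row, and of any single column, all land in distinct banks."""
    return (x + y) % num_banks
```

With 4 banks, row y=2 maps its four elements to banks {2, 3, 0, 1} and column x=1 to banks {1, 2, 3, 0}: no bank conflict in either direction.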
20090006808ULTRASCALABLE PETAFLOP PARALLEL SUPERCOMPUTER - A novel massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. Novel use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.2009-01-01
20090006809NON-DISRUPTIVE CODE UPDATE OF A SINGLE PROCESSOR IN A MULTI-PROCESSOR COMPUTING SYSTEM - Updating code of a single processor in a multi-processor system includes halting transactions processed by a first processor in the system while processing of transactions by a second processor in the system is maintained. The first processor then receives new code, and an operating system running on the first processor is terminated, whereby all processes and threads being executed by the first processor are terminated. Execution of a self-reset of the first processor is commenced and interrupts associated with the first processor are disabled. Only those system resources exclusively associated with the first processor are reset, and memory transactions associated with the first processor are disabled. An image of the new code is copied into memory associated with the first processor, registers associated with the first processor are reset, and the new code is booted by the first processor.2009-01-01
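The ordering of those steps is the substance of the method, so it can be captured as an explicit sequence. The step names below are paraphrases of the abstract, not an actual firmware API:

```python
def code_update_steps():
    """Ordered steps of the non-disruptive single-processor code
    update; the partner processor keeps serving I/O throughout."""
    return [
        "halt transactions on the first processor",
        "receive new code image",
        "terminate the first processor's operating system",  # all threads end
        "commence self-reset",
        "disable interrupts",
        "reset only resources exclusive to the first processor",
        "disable memory transactions",
        "copy new code image into the first processor's memory",
        "reset registers",
        "boot the new code",
    ]
```

The key constraint is that only resources exclusively owned by the target processor are reset, which is what keeps the update invisible to the second processor.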
20090006810MECHANISM TO SUPPORT GENERIC COLLECTIVE COMMUNICATION ACROSS A VARIETY OF PROGRAMMING MODELS - A system and method for supporting collective communications on a plurality of processors that use different parallel programming paradigms, in one aspect, may comprise a schedule defining one or more tasks in a collective operation, an executor that executes the tasks, a multisend module to perform one or more data transfer functions associated with the tasks, and a connection manager that controls one or more connections and identifies an available connection. The multisend module uses the available connection in performing the one or more data transfer functions. A plurality of processors that use different parallel programming paradigms can use a common implementation of the schedule module, the executor module, the connection manager and the multisend module via a language adaptor specific to a parallel programming paradigm implemented on a processor.2009-01-01
20090006811Method and System for Expanding a Conditional Instruction into an Unconditional Instruction and a Select Instruction - A method of expanding a conditional instruction having a plurality of operands within a pipeline processor is disclosed. The method identifies the conditional instruction prior to an issue stage and determines whether the plurality of operands exceeds a predetermined threshold. If so, the method expands the conditional instruction into a non-conditional instruction and a select instruction. The method further executes the non-conditional instruction and the select instruction in separate pipelines.2009-01-01
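The expansion can be sketched symbolically: compute the result unconditionally into a temporary, then select between the temporary and the old destination value based on the predicate. The instruction tuple encoding, threshold value, and temporary-register naming below are all illustrative assumptions:

```python
def expand_conditional(op, dest, srcs, cond, threshold=3):
    """If a conditional instruction carries more operands than a
    (hypothetical) issue-stage threshold, split it into an
    unconditional compute plus a select. Instructions are modeled
    as (opcode, dest, sources) tuples."""
    operands = [dest] + srcs + [cond]
    if len(operands) <= threshold:
        return [(f"{op}.{cond}", dest, srcs)]     # issue unchanged
    tmp = f"tmp_{dest}"
    return [
        (op, tmp, srcs),                          # unconditional compute
        ("select", dest, [cond, tmp, dest]),      # dest = cond ? tmp : dest
    ]
```

The two resulting instructions have no more operands each than the original, so they fit the issue stage and can proceed down separate pipelines.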
20090006812Method and Apparatus for Accessing a Cache With an Effective Address - A method and apparatus for accessing a processor cache. The method includes executing an access instruction in a processor core of the processor. The access instruction provides an untranslated effective address of data to be accessed by the access instruction. The method also includes determining whether a level one cache for the processor core includes the data corresponding to the effective address of the access instruction. The effective address of the access instruction is used without address translation to determine whether the level one cache for the processor core includes the data corresponding to the effective address. If the level one cache includes the data corresponding to the effective address, the data for the access instruction is provided from the level one cache.2009-01-01
20090006813DATA FORWARDING FROM SYSTEM MEMORY-SIDE PREFETCHER - An apparatus, system, and method are disclosed. In one embodiment, the apparatus includes a system memory-side prefetcher that is coupled to a memory controller. The system memory-side prefetcher includes a stride detection unit to identify one or more patterns in a stream. The system memory-side prefetcher also includes a prefetch injection unit to insert prefetches into the memory controller based on the detected one or more patterns. The system memory-side prefetcher also includes a prefetch data forwarding unit to forward the prefetched data to a cache memory coupled to a processor.2009-01-01
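The stride-detection and prefetch-injection units can be sketched with a minimal detector that waits for the same address delta to repeat before injecting prefetches ahead of the stream. The confirmation rule (one repeat) and fixed prefetch depth are simplifying assumptions:

```python
class StridePrefetcher:
    """Minimal stride detector: once the same non-zero delta is seen
    twice in a row, inject 'depth' prefetch addresses ahead of the
    current access."""
    def __init__(self, depth=2):
        self.depth = depth
        self.last = None        # previous address in the stream
        self.stride = None      # last observed delta
        self.confirmed = False  # stride seen twice in a row?

    def access(self, addr):
        prefetches = []
        if self.last is not None:
            delta = addr - self.last
            self.confirmed = (delta == self.stride and delta != 0)
            self.stride = delta
        if self.confirmed:      # pattern detected: inject prefetches
            prefetches = [addr + self.stride * i
                          for i in range(1, self.depth + 1)]
        self.last = addr
        return prefetches
```

On the stream 100, 164, 228 (stride 64), the third access confirms the pattern and yields prefetches for 292 and 356, which the forwarding unit would then push toward the processor's cache.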