52nd week of 2011 patent application highlights part 62
Patent application number | Title | Published |
20110320685 | Use of Guard Bands and Phased Maintenance Operations to Avoid Exceeding Maximum Latency Requirements in Non-Volatile Memory Systems - Techniques are presented for performing maintenance operations, such as garbage collection, on non-volatile memory systems while still respecting the maximum latency, or time-out, requirements of a protocol. A safety guard band in the space available for storing host data, control data, or both, is provided. If, on an access of the memory, it is determined that the guard band space is exceeded, the system uses a recovery back to the base state by triggering and prioritising clean-up operations to re-establish all safety guard bands without breaking the timing requirements. To respect these timing requirements, the operations are split into portions and done in a phased manner during allowed latency periods. | 2011-12-29 |
20110320686 | FRDY PULL-UP RESISTOR ACTIVATION - A method and apparatus for reducing power consumption during an operation in a non-volatile storage device is disclosed. A non-volatile storage device controller that is in communication with a non-volatile memory in the non-volatile storage device receives a characteristic corresponding to a time duration required for the non-volatile memory to complete an operation. The controller disables a circuit that indicates when an operation by the non-volatile memory is complete. The controller then initiates the operation in the non-volatile memory, and maintains the circuit in a disabled state for a first predetermined time that is a portion of the time duration. The controller enables the circuit upon expiration of the first predetermined time and prior to the completion of the operation. The controller receives an indication of the completion of the operation via the circuit. | 2011-12-29 |
20110320687 | REDUCING WRITE AMPLIFICATION IN A CACHE WITH FLASH MEMORY USED AS A WRITE CACHE - Embodiments of the invention are directed to reducing write amplification in a cache with flash memory used as a write cache. An embodiment of the invention includes partitioning at least one flash memory device in the cache into a plurality of logical partitions. Each of the plurality of logical partitions is a logical subdivision of one of the at least one flash memory device and comprises a plurality of memory pages. Data are buffered in a buffer. The data includes data to be cached, and data to be destaged from the cache to a storage subsystem. Data to be cached are written from the buffer to the at least one flash memory device. A processor coupled to the buffer is provided with access to the data written to the at least one flash memory device from the buffer, and a location of the data written to the at least one flash memory device within the plurality of logical partitions. The data written to the at least one flash memory device are destaged from the buffer to the storage subsystem. | 2011-12-29 |
20110320688 | Memory Systems And Wear Leveling Methods - Wear leveling methods in memory systems with nonvolatile memory devices including a plurality of physical blocks and memory controllers controlling the nonvolatile memory devices. The wear leveling method increases a stress index of the physical blocks according to operations the physical blocks have undergone and performs wear leveling of the physical block on the basis of the stress index. | 2011-12-29 |
20110320689 | Data Storage Devices and Data Management Methods for Processing Mapping Tables - Methods of operating integrated circuit devices include updating a mapping table with physical address information by reading forward link information from a plurality of spare sectors in a corresponding plurality of pages within a nonvolatile memory device and then writing mapping table information derived from the forward link information into the mapping table. This forward link information may be configured as absolute address information (e.g., next physical address) and/or relative address information (e.g., change in physical address). This updating of the mapping table may include updating a mapping table within a volatile memory, in response to a resumption of power within the integrated circuit device. This resumption of power may follow a power failure during which the contents of the volatile memory are lost. | 2011-12-29 |
20110320690 | MASS STORAGE SYSTEM AND METHOD USING HARD DISK AND SOLID-STATE MEDIA - Methods and systems for mass storage of data over two or more tiers of mass storage media that include nonvolatile solid-state memory devices, hard disk devices, and optionally volatile memory devices or nonvolatile MRAM in an SDRAM configuration. The mass storage media interface with a host through one or more PCIe lanes on a single printed circuit board. | 2011-12-29 |
20110320691 | MEMORY SYSTEM, MULTI-BIT FLASH MEMORY DEVICE, AND ASSOCIATED METHODS - A memory system includes a multi-bit flash memory device and a flash controller configured to control the multi-bit flash memory device. The flash controller is configured to output a series of commands, pointers, and addresses to the multi-bit flash memory device for read/program operations. | 2011-12-29 |
20110320692 | ACCESS DEVICE, INFORMATION RECORDING DEVICE, CONTROLLER, REAL TIME INFORMATION RECORDING SYSTEM, ACCESS METHOD, AND PROGRAM - Provided is a method for stabilizing and increasing the speed of processing for writing a plurality of different-sized files such as a video file and a management file in parallel in the case where the area in a non-volatile memory of an information recording module is managed by a file system. An access module ( | 2011-12-29 |
20110320693 | Method For Parameterized Application Specific Integrated Circuit (ASIC)/Field Programmable Gate Array (FPGA) Memory-Based Ternary Content Addressable Memory (TCAM) - A method and apparatus for providing TCAM functionality in a custom integrated circuit (IC) is presented. An incoming key is broken into a predefined number of sub-keys. Each sub-key is used to address a Random Access Memory (RAM), one RAM for each sub-key. An output of the RAM is collected for each sub-key, each output comprising a Partial Match Vector (PMV). The PMVs are bitwise ANDed to obtain a value which is provided to a priority encoder to obtain an index. The index is used to access a result RAM to return a result value for the key. | 2011-12-29 |
20110320694 | CACHED LATENCY REDUCTION UTILIZING EARLY ACCESS TO A SHARED PIPELINE - A method of performing operations in a shared cache coupled to a first requestor and a second requestor includes receiving at the shared cache a first request from the second requestor; assigning the request to a state machine; transmitting a first pipe pass request from the state machine to an arbiter; providing a first instruction from the first pipe pass request to a cache pipeline, the first instruction causing a first pipe pass; and providing a second pipe pass request to the arbiter before the first pipe pass is completed. | 2011-12-29 |
20110320695 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 2011-12-29 |
20110320696 | EDRAM REFRESH IN A HIGH PERFORMANCE CACHE ARCHITECTURE - A memory refresh requestor, a memory request interpreter, a cache memory, and a cache controller on a single chip. The cache controller configured to receive a memory access request, the memory access request for a memory address range in the cache memory, detect that the cache memory located at the memory address range is available, and send the memory access request to the memory request interpreter when the memory address range is available. The memory request interpreter configured to receive the memory access request from the cache controller, determine if the memory access request is a request to refresh a contents of the memory address range, and refresh data in the memory address range when the memory access request is a request to refresh memory. | 2011-12-29 |
20110320697 | DYNAMICALLY SUPPORTING VARIABLE CACHE ARRAY BUSY AND ACCESS TIMES - Various embodiments of the present invention manage access to a cache memory. In one or more embodiments, a request for a targeted interleave within a cache memory is received. The request is associated with an operation of a given type. The target is determined to be available. The request is granted in response to the determining that the target is available. A first interleave availability table associated with a first busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. A second interleave availability table associated with a second busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. | 2011-12-29 |
20110320698 | Multi-Channel Multi-Port Memory - A multi-channel multi-port memory is disclosed. In a particular embodiment, the multi-channel memory includes a plurality of channels responsive to a plurality of memory controllers. The multi-channel memory may also include a first multi-port multi-bank structure accessible to a first set of the plurality of channels and a second multi-port multi-bank structure accessible to a second set of the plurality of channels. | 2011-12-29 |
20110320699 | System Refresh in Cache Memory - System refresh in a cache memory includes generating a refresh time period (RTIM) pulse at a centralized refresh controller of the cache memory, activating a refresh request at the centralized refresh controller in response to generating the RTIM pulse, the refresh request associated with a single cache memory bank of the cache memory, receiving a refresh grant in response to activating the refresh request, and transmitting the refresh grant to a bank controller, the bank controller associated, and localized, at the single cache memory bank of the cache memory. | 2011-12-29 |
20110320700 | Concurrent Refresh In Cache Memory - Concurrent refresh in a cache memory includes calculating a refresh time interval at a centralized refresh controller, the centralized refresh controller being common to all cache memory banks of the cache memory, transmitting a starting time of the refresh time interval to a bank controller, the bank controller being local to, and associated with, only one cache memory bank of the cache memory, sampling a continuous refresh status indicative of a number of refreshes necessary to maintain data within the cache memory bank associated with the bank controller, requesting a gap in a processing pipeline of the cache memory to facilitate the number of refreshes necessary, receiving a refresh grant in response to the requesting, and transmitting an encoded refresh command to the bank controller, the encoded refresh command indicating a number of refresh operations granted to the cache memory bank associated with the bank controller. | 2011-12-29 |
20110320701 | OPTIMIZING EDRAM REFRESH RATES IN A HIGH PERFORMANCE CACHE ARCHITECTURE - Optimizing refresh request transmission rates in a high performance cache comprising: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval, the interval comprising receiving a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate, and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate, in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold. | 2011-12-29 |
20110320702 | Operation Frequency Adjusting System and Method - Techniques pertaining to adjusting the operation frequency of a DRAM are disclosed. According to one embodiment, the DRAM operation frequency adjusting system includes a statistic module counting effective operations of a DRAM to obtain a bandwidth utilization rate of the DRAM at a present operation frequency; a parameter configuration module including a target frequency configuration sub-module configured to generate a target operation frequency; and a frequency switch controller for switching a present operation frequency of the DRAM to the target operation frequency. The invention adjusts the operation frequency of a DRAM according to the application environment and creates a balance between performance and power consumption of DRAMs, thus improving the operation speed of system-on-chips while decreasing power consumption. | 2011-12-29 |
20110320703 | ASSOCIATING INPUT/OUTPUT DEVICE REQUESTS WITH MEMORY ASSOCIATED WITH A LOGICAL PARTITION - An address controller includes a bit selector that receives a first portion of a requester identifier (RID) and selects a bit from a vector that identifies whether a requesting function is an SR-IOV device or a standard PCIe device. The controller also includes a selector coupled to the bit selector that forms an output comprised of either a second portion of the RID or a first portion of the address portion based on an input received from the selector and an address control unit that receives the first portion of the RID and the output and determines the LPAR that owns the requesting function based thereon, the address control unit providing the corrected memory request to the memory. | 2011-12-29 |
20110320704 | CONTENT ADDRESSABLE MEMORY SYSTEM - A content addressable memory system, method and computer program product is described. The memory system comprises a location addressable store having data identified by location and multiple levels of content addressable stores each holding ternary content words. The content words are associated with references to data in the location addressable store. The content store levels might be implemented using different technologies that have different performance, capacity, and cost attributes. The memory system includes a content based cache for improved performance and a content addressable memory management unit for managing memory access operations and virtual memory addressing. | 2011-12-29 |
20110320705 | METHOD FOR TCAM LOOKUP IN MULTI-THREADED PACKET PROCESSORS - A method, apparatus and computer program product for performing TCAM lookups in multi-threaded packet processors is presented. A Ternary Content Addressable Memory (TCAM) key is constructed for a packet and a Packet Reference Number (PRN) is generated. The TCAM key and the packet are tagged with the PRN. The TCAM key and the PRN are sent to a TCAM and in parallel the packet and the PRN are sent to a packet processing thread. The PRN is used to read the TCAM result when it is ready. | 2011-12-29 |
20110320706 | STORAGE APPARATUS AND METHOD FOR CONTROLLING THE SAME - A storage apparatus capable of improving the reliability of a large-scale storage system, and a method for controlling such a storage apparatus are suggested. A storage apparatus including a storage device for storing data, and a multiplexer for multiplexing a port for the storage device, the multiplexer being connected to one or more host controllers, and a method for controlling such a storage apparatus, wherein the multiplexer judges whether a command sent from the host controller to the storage device is proper or not; and if the command is improper, the multiplexer discards the command without transferring it to the storage device, and sends an error response to the host controller. | 2011-12-29 |
20110320707 | STORAGE APPARATUS AND STORAGE MANAGEMENT METHOD - The performance of transferring data to external storage media in thin provisioning is enhanced. | 2011-12-29 |
20110320708 | STORAGE SYSTEM - A storage expander apparatus for accessing storage units includes first interfaces for accessing the storage units, a second interface for accessing subordinate expander apparatus, and a processor for executing receiving from an external apparatus a first request for obtaining first information indicative of a state of a connection of the storage expander apparatus, transmitting a second request for obtaining second information indicative of a state of a connection of the subordinate expander apparatus, measuring an elapsing time that has elapsed since transmitting the second request, storing a first response corresponding to the second request upon receiving the first response, starting a process for obtaining third information indicative of a state of a connection to be connected with the first interfaces upon the elapsing time exceeding a predetermined time, and transmitting a second response including the third information to the external apparatus upon receiving the third response. | 2011-12-29 |
20110320709 | REALIZING A STORAGE SYSTEM - A storage system and a method for realizing a storage system is disclosed, the storage system comprising: a disk array comprising at least one solid state disk and at least one non-solid state disk; and a storage control means configured to: in response to entering a scrubbing mode, scan and move data blocks in the at least one non-solid state disk in the disk array to form more continuous free blocks. The storage system of the present invention has good read and write performances, higher data reliability and availability, and lower cost. | 2011-12-29 |
20110320710 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - A storage system including: a virtualization apparatus having a control unit, said control unit setting an actual volume for storing data sent from a host apparatus, formed in a storage area provided by a physical disk; and a virtual volume paired with the actual volume, for storing replicated data for the data; and an external storage apparatus having a logical volume that functions as an actual storage area for the virtual volume; and a tape associated with the logical volume, for storing the replicated data; wherein the external storage apparatus has a copy unit for copying the replicated data stored in the logical volume to the tape. | 2011-12-29 |
20110320711 | Method and System for Rebuilding Data in a Distributed RAID System - Embodiments of the systems and methods disclosed provide a distributed RAID system comprising a set of data banks. More particularly, in certain embodiments of a distributed RAID system each data bank has a set of associated storage media and executes a similar distributed RAID application. The distributed RAID applications on each of the data banks coordinate among themselves to distribute and control data flow associated with implementing a level of RAID in conjunction with a volume stored on the associated storage media of the data banks. Migration of volumes, or portions thereof, from one configuration to another configuration may be accomplished according to a priority associated with the volume. | 2011-12-29 |
20110320712 | METHOD AND APPARATUS FOR CONTROLLING STATE OF STORAGE DEVICE AND STORAGE DEVICE - The embodiments of the present invention provide a method and an apparatus for controlling a state of a storage device, and a storage device, and relate to the field of electronic technologies. State control information of logic disks in the storage device is obtained; it is judged whether the state control information of all the logic disks in the storage device includes sleep instructions; and the storage device is controlled to switch into a sleep state when the state control information of all the logic disks includes the sleep instructions. The technical solutions may effectively control the storage device to switch into the sleep state, overcome the inconvenience of read and write operations when the storage device automatically switches into the sleep state, and reduce the power consumption of the storage device while keeping it convenient to use. | 2011-12-29 |
20110320713 | Smartconnect Flash Card Adapter - A multi-memory media adapter to read a plurality of different types of memory media cards. Signals are mapped to the contact pins depending upon the type of memory media card. In one embodiment, a controller connected to an interconnection means maps at least one signal to the contact pins depending upon the type of memory card inserted. | 2011-12-29 |
20110320714 | MAINFRAME STORAGE APPARATUS THAT UTILIZES THIN PROVISIONING - Each actual page inside a pool is configured from a plurality of actual tracks, and each virtual page inside a virtual volume is configured from a plurality of virtual tracks. A storage control apparatus of a mainframe system has management information that includes information denoting a track in which there exists a user record, which is a record including user data (the data used by a host apparatus of a mainframe system). Based on the management information, a controller identifies an actual page that is configured only from tracks that do not comprise the user record, and cancels the allocation of the identified actual page to the virtual page. | 2011-12-29 |
20110320715 | IDENTIFYING TRENDING CONTENT ITEMS USING CONTENT ITEM HISTOGRAMS - Within a content item set, particular content items may be identified as trending, based on changes in a frequency of references to the content items. For example, users of a social network may reference web resources by posting the uniform resource locators (URLs) thereof in messages, and trending web resources may be identified by detecting changes in the frequencies of such references. These trends may be tracked by counting such references in content item histograms, and by computing trend scores at the time of detecting each reference to a content item. Trending content items may then be identified at a second time by comparing the trend scores after decaying the trend scores of respective content items, based on the period between the second time and the last reference time of the last detected reference to the content item. | 2011-12-29 |
20110320716 | LOADING AND UNLOADING A MEMORY ELEMENT FOR DEBUG - A method of debugging a memory element is provided. The method includes initializing a line fetch controller with at least one of write data and read data; utilizing at least two separate clocks for performing at least one of write requests and read requests based on the at least one of the write data and the read data; and debugging the memory element based on the at least one of write requests and read requests. | 2011-12-29 |
20110320717 | STORAGE CONTROL APPARATUS, STORAGE SYSTEM AND METHOD - A storage control apparatus includes a memory configured to store access management information concerning access from a host to each of a plurality of logical volumes, and a controller configured to refer to the access management information read from the memory, when receiving an entirety of updated data from the host, to set a write mode for data transfer from each of the plurality of logical volumes to the corresponding physical volume on the basis of the access management information to one of a difference data write mode in which difference data indicating a difference between an entirety of data stored in a storage apparatus and the entirety of updated data is written into a storage apparatus and an entire data write mode in which the entirety of updated data is written into the storage apparatus. | 2011-12-29 |
20110320718 | READING OR WRITING TO MEMORY - To increase the efficiency of a running application, it is determined, on a block-size-specific basis, whether using a cache or accessing storage directly is more efficient; the determined memory type is then used for a data stream having a corresponding block size. | 2011-12-29 |
20110320719 | PROPAGATING SHARED STATE CHANGES TO MULTIPLE THREADS WITHIN A MULTITHREADED PROCESSING ENVIRONMENT - A circuit arrangement and method make state changes to shared state data in a highly multithreaded environment by propagating or streaming the changes to multiple parallel hardware threads of execution in the multithreaded environment using an on-chip communications network and without attempting to access any copy of the shared state data in a shared memory to which the parallel threads of execution are also coupled. Through the use of an on-chip communications network, changes to the shared state data may be communicated quickly and efficiently to multiple threads of execution, enabling those threads to locally update their local copies of the shared state. Furthermore, by avoiding attempts to access a shared memory, the interface to the shared memory is not overloaded with concurrent access attempts, thus preserving memory bandwidth for other activities and reducing memory latency. Particularly for larger shared states, propagating the changes, rather than an entire shared state, further improves performance by reducing the amount of data communicated over the on-chip communications network. | 2011-12-29 |
20110320720 | Cache Line Replacement In A Symmetric Multiprocessing Computer - Cache line replacement in a symmetric multiprocessing computer, the computer having a plurality of processors, a main memory that is shared among the processors, a plurality of cache levels including at least one high level of private caches and a low level shared cache, and a cache controller that controls the shared cache, including receiving in the cache controller a memory instruction that requires replacement of a cache line in the low level shared cache; and selecting for replacement by the cache controller a least recently used cache line in the low level shared cache that has no copy stored in any higher level cache. | 2011-12-29 |
20110320721 | DYNAMIC TRAILING EDGE LATENCY ABSORPTION FOR FETCH DATA FORWARDED FROM A SHARED DATA/CONTROL INTERFACE - A computer-implemented method for managing data transfer in a multi-level memory hierarchy that includes receiving a fetch request for allocation of data in a higher level memory, determining whether a data bus between the higher level memory and a lower level memory is available, bypassing an intervening memory between the higher level memory and the lower level memory when it is determined that the data bus is available, and transferring the requested data directly from the higher level memory to the lower level memory. | 2011-12-29 |
20110320722 | MANAGEMENT OF MULTIPURPOSE COMMAND QUEUES IN A MULTILEVEL CACHE HIERARCHY - An apparatus for controlling access to a pipeline includes a plurality of command queues, including a first subset of the plurality of command queues being assigned to process commands of a first command type, a second subset of the plurality of command queues being assigned to process commands of a second command type, and a third subset of the plurality of the command queues not being assigned to either the first subset or the second subset. The apparatus also includes an input controller configured to receive requests having the first command type and the second command type and assign requests having the first command type to command queues in the first subset until all command queues in the first subset are filled and then assign requests having the first command type to command queues in the third subset. | 2011-12-29 |
20110320723 | METHOD AND SYSTEM TO REDUCE THE POWER CONSUMPTION OF A MEMORY DEVICE - A method and system to reduce the power consumption of a memory device. In one embodiment of the invention, the memory device is an N-way set-associative level one (L1) cache memory and there is logic coupled with the data cache memory to facilitate access to only part of the N-ways of the N-way set-associative L1 cache memory in response to a load instruction or a store instruction. By reducing the number of ways to access the N-way set-associative L1 cache memory for each load or store request, the power requirements of the N-way set-associative L1 cache memory are reduced in one embodiment of the invention. In one embodiment of the invention, when a prediction is made that the accesses to cache memory only require the data arrays of the N-way set-associative L1 cache memory, the access to the fill buffers is deactivated or disabled. | 2011-12-29 |
20110320724 | DMA-BASED ACCELERATION OF COMMAND PUSH BUFFER BETWEEN HOST AND TARGET DEVICES - Direct Memory Access (DMA) is used in connection with passing commands between a host device and a target device coupled via a push buffer. Commands passed to a push buffer by a host device may be accumulated by the host device prior to forwarding the commands to the push buffer, such that DMA may be used to collectively pass a block of commands to the push buffer. In addition, a host device may utilize DMA to pass command parameters for commands to a command buffer that is accessible by the target device but is separate from the push buffer, with the commands that are passed to the push buffer including pointers to the associated command parameters in the command buffer. | 2011-12-29 |
20110320725 | DYNAMIC MODE TRANSITIONS FOR CACHE INSTRUCTIONS - A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter, selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine, determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache, determining that the location in the cache is unavailable, and replacing the mode with a modified mode that only includes the second step. | 2011-12-29 |
20110320726 | STORAGE APPARATUS AND METHOD FOR CONTROLLING STORAGE APPARATUS - A storage apparatus has a channel board | 2011-12-29 |
20110320727 | DYNAMIC CACHE QUEUE ALLOCATION BASED ON DESTINATION AVAILABILITY - An apparatus for controlling operation of a cache includes a first command queue, a second command queue and an input controller configured to receive requests having a first command type and a second command type and to assign a first request having the first command type to the first command queue and a second command having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available. | 2011-12-29 |
20110320728 | PERFORMANCE OPTIMIZATION AND DYNAMIC RESOURCE RESERVATION FOR GUARANTEED COHERENCY UPDATES IN A MULTI-LEVEL CACHE HIERARCHY - A cache includes a cache pipeline, a request receiver configured to receive off chip coherency requests from an off chip cache and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter coupled between the plurality of state machines and the cache pipeline and is configured to give priority to off chip coherency requests as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit. | 2011-12-29 |
20110320729 | CACHE BANK MODELING WITH VARIABLE ACCESS AND BUSY TIMES - Various embodiments of the present invention manage access to a cache memory. In one embodiment, a set of cache bank availability vectors are generated based on a current set of cache access requests currently operating on a set of cache banks and at least a variable busy time of a cache memory that includes the set of cache banks. The set of cache bank availability vectors indicate an availability of the set of cache banks. A set of cache access requests for accessing a set of given cache banks within the set of cache banks is received. At least one cache access request in the set of cache access requests is selected to access a given cache bank based on the cache bank availability vectors associated with the given cache bank and the set of access request parameters associated with the at least one cache access request that has been selected. | 2011-12-29 |
20110320730 | NON-BLOCKING DATA MOVE DESIGN - A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the one of the buffer regions of the cache according to an instruction of the processor. The data block is stored from the one of the buffer regions of the cache to the memory. | 2011-12-29 |
20110320731 | ON DEMAND ALLOCATION OF CACHE BUFFER SLOTS - Dynamic allocation of cache buffer slots includes receiving a request to perform an operation that requires a storage buffer slot, the storage buffer slot residing in a level of storage. The dynamic allocation of cache buffer slots also includes determining availability of the storage buffer slot for the cache index as specified by the request. Upon determining the storage buffer slot is not available, the dynamic allocation of cache buffer slots includes evicting data stored in the storage buffer slot, and reserving the storage buffer slot for data associated with the request. | 2011-12-29 |
20110320732 | USER-CONTROLLED TARGETED CACHE PURGE - User-controlled targeted cache purging includes receiving a request to perform an operation to purge data from a cache, the request including an index identifier identifying an index associated with the cache. The index specifies a portion of the cache to be purged. The user-controlled targeted cache purging also includes purging the data from the cache, and providing notification of successful completion of the operation. | 2011-12-29 |
20110320733 | CACHE MANAGEMENT AND ACCELERATION OF STORAGE MEDIA - Examples of described systems utilize a cache media in one or more computing devices that may accelerate access to other storage media. A solid state drive may be used as the local cache media. In some embodiments, the solid state drive may be used as a log structured cache, may employ multi-level metadata management, and may use read and write gating. | 2011-12-29 |
20110320734 | SYSTEM AND METHOD FOR SUPPORTING MUTABLE OBJECT HANDLING - A computer-implemented method and system can support mutable object handling. The system comprises a cache space that is capable of storing one or more mutable cache objects, and one or more cached object graphs. Each said mutable cache object is reachable via one or more retrieval paths in the one or more cached object graphs. The system further comprises a mutable-handling decorator that maintains an internal instance map that transparently translates between the one or more cached object graphs and the one or more mutable cache objects stored in the cache space. | 2011-12-29 |
20110320735 | DYNAMICALLY ALTERING A PIPELINE CONTROLLER MODE BASED ON RESOURCE AVAILABILITY - A mechanism for dynamically altering a request received at a hardware component is provided. The request is received at the hardware component, and the request includes a mode option. It is determined whether an action of the request requires an unavailable resource and it is determined whether the mode option is for the action requiring the unavailable resource. In response to the mode option being for the action requiring the unavailable resource, the action is automatically removed from the request. The request is passed for pipeline arbitration without the action requiring the unavailable resource. | 2011-12-29 |
20110320736 | PREEMPTIVE IN-PIPELINE STORE COMPARE RESOLUTION - A computer-implemented method that includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request. | 2011-12-29 |
20110320737 | Main Memory Operations In A Symmetric Multiprocessing Computer - Main memory operation in a symmetric multiprocessing computer, the computer comprising one or more processors operatively coupled through a cache controller to at least one cache of main memory, the main memory shared among the processors, the computer further comprising input/output (‘I/O’) resources, including receiving, in the cache controller from an issuing resource, a memory instruction for a memory address, the memory instruction requiring writing data to main memory; locking by the cache controller the memory address against further memory operations for the memory address; advising the issuing resource of completion of the memory instruction before the memory instruction completes in main memory; issuing by the cache controller the memory instruction to main memory; and unlocking the memory address only after completion of the memory instruction in main memory. | 2011-12-29 |
20110320738 | Maintaining Cache Coherence In A Multi-Node, Symmetric Multiprocessing Computer - Maintaining cache coherence in a multi-node, symmetric multiprocessing computer, the computer composed of a plurality of compute nodes, including, broadcasting upon a cache miss by the first compute node to other compute nodes a request for the cache line; if at least two of the compute nodes have a correct copy of the cache line, selecting which compute node is to transmit the correct copy of the cache line to the first node, and transmitting from the selected compute node to the first node the correct copy of the cache line; and updating by each node the state of the cache line in each node, in dependence upon one or more of the states of the cache line in all the nodes. | 2011-12-29 |
20110320739 | DISCOVERY OF NETWORK SERVICES - Discovery of network services consumable by a client executing on a first device. A request is received from the client for a list of services. There is a determination of whether a second device on the network which maintains a current list of services can or can not be located. Responsive to a determination that the second device can not be located, a local cached copy of a list of services is returned to the client. Responsive to a determination that the second device can be located, a request for the current list of services is sent to the second device, and a response containing the current list of services is received from the second device. The current list of services is returned to the client. | 2011-12-29 |
20110320740 | METHOD FOR OPTIMIZING SEQUENTIAL DATA FETCHES IN A COMPUTER SYSTEM - A computer implemented method of optimizing sequential data fetches in a computer system is provided. The method includes fetching a data segment from a main memory, the data segment having a plurality of target data entries; extracting a first portion of the data segment and storing the first portion into a target data cache, the first portion having a first target data entry; and storing the data segment into an intermediate cache line buffer in communication with the target data cache to enable subsequent fetches to a number of target data entries in the data segment. | 2011-12-29 |
20110320741 | METHOD AND APPARATUS PROVIDING FOR DIRECT CONTROLLED ACCESS TO A DYNAMIC USER PROFILE - An apparatus may include a profile determiner configured to determine a user profile. A contextual characteristic determiner may be configured to determine contextual characteristics relating to the apparatus and/or the user of the apparatus such that the profile determiner may infer user preferences and thereby create a dynamic portion of the user profile. An index builder may be configured to build an index of profile categories included within the user profile. A subscription registrar may cause the user profile to be registered for sharing with a service provider. Thereby a profile manager may provide for direct controlled access to the user profile which may be limited by user selection of permission levels and/or profile categories which are shared. Thereby access to the user profile may occur directly with the apparatus without storing the user profile on a separate server. | 2011-12-29 |
20110320742 | METHOD, APPARATUS AND SYSTEM FOR GENERATING ACCESS INFORMATION FROM AN LRU TRACKING LIST - Techniques for generating access information indicating a least recently used (LRU) memory region in a set of memory regions. In an embodiment, data is stored in an entry of an LRU tracking list (LTL) based on a touch message indicating when a memory group has been touched—e.g. read from, written to and/or associated with a memory region. The data stored in an LTL entry may include an identifier of a memory group and/or validity data specifying whether that LTL entry stores a set of default data. In another embodiment, access information may be generated based on the memory group identifier and the validity data. | 2011-12-29 |
20110320743 | MEMORY ORDERED STORE SYSTEM IN A MULTIPROCESSOR COMPUTER SYSTEM - A system and computer implemented method for storing data in the memory of a computer system in order at a fast rate is provided. The method includes launching a first store to memory. A wait counter is initiated. A second store to memory is speculatively launched when the wait counter expires. The second store to memory is cancelled when the second store achieves coherency prior to the first store to memory. | 2011-12-29 |
20110320744 | DIAGNOSTIC DATA COLLECTION AND STORAGE PUT-AWAY STATION IN A MULTIPROCESSOR SYSTEM - A computer-implemented method for collecting diagnostic data within a multiprocessor system that includes capturing diagnostic data via a plurality of collection points disposed at a source location within the multiprocessor system, routing the captured diagnostic data to a data collection station at the source location, providing a plurality of buffers within the data collection station, and temporarily storing the captured diagnostic data on at least one of the plurality of buffers, and transferring the captured diagnostic data to a target storage location on a same chip as the source location or another storage location on a same node. | 2011-12-29 |
20110320745 | DATA-SCOPED DYNAMIC DATA RACE DETECTION - A dynamic shared-memory data race detection tool with data-scoping capabilities to reduce runtime overheads is disclosed. The tool allows users to restrict analysis of memory locations to heap and/or stack variables that are of interest to them using explicit calls to functions provided in a library that is part of the race detection tool. The application code is instrumented to insert probes at all memory instructions and linked with the data race detection library to perform data-scoped race detection. | 2011-12-29 |
20110320746 | HANDLING CONTENT ASSOCIATED WITH CONTENT IDENTIFIERS - Apparatus having at least one processor and at least one memory having computer-readable code stored thereon which when executed controls the at least one processor: to cause a content identifier associated with content to be displayed; to cause a first indicator and a second indicator to be displayed in association with the content identifier; to cause the first indicator to indicate whether or not some or all of the content is stored in a first memory; to cause the second indicator to indicate whether or not some or all of the content is stored in a second memory; and to be responsive, when the first indicator indicates that none of the content is stored in the first memory and the second indicator indicates that some or all of the content is stored in the second memory, to selection of the first indicator to cause the content associated with the first content identifier to be copied from the second memory into the first memory. | 2011-12-29 |
20110320747 | IDENTIFYING REPLACEMENT MEMORY PAGES FROM THREE PAGE RECORD LISTS - A replacement memory page is identified by accessing a first list of page records, and if the first list is not empty, identifying a replacement page from a next page record indicator of the first list. A second list of page records is accessed if the first list is empty, and if the second list is not empty, the replacement page is identified from a next page record indicator of the second list. A third list of page records is accessed if the first and second lists are empty, and the replacement page is identified from a next page record indicator of the third list. | 2011-12-29 |
20110320748 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD OF DATA PROCESSING APPARATUS - When one of a plurality of storage units is not available, execution of operation modes can be switched according to an option status. A data processing apparatus that can respectively store data to a first storage unit and a second storage unit, includes a control unit configured to execute a first operation mode for limiting data processing using the second storage unit and enabling data processing using the first storage unit in a case where the second storage unit is not available and an option for storing encrypted data in the second storage unit is not used, and execute a second operation mode for limiting the data processing using the first storage unit and the data processing using the second storage unit, in a case where the second storage unit is not available and the option is used. | 2011-12-29 |
20110320749 | PAGE FAULT PREDICTION FOR PROCESSING VECTOR INSTRUCTIONS - The described embodiments comprise a processor that handles a TLB miss while executing a vector read instruction. In the described embodiments, the processor performs a lookup in a TLB for addresses in active elements in the vector read instruction. The processor then determines that a TLB miss occurred for the address from an active element other than a first active element. Upon predicting that a page table walk for the vector read instruction will result in a page fault, the processor sets a bit in a corresponding bit position in an FSR. In the described embodiments, a set bit in a bit position in the FSR indicates that data in a corresponding element of the vector read instruction is invalid. The processor then immediately performs memory reads for at least one of the first active element and other active elements for which TLB misses did not occur. | 2011-12-29 |
20110320750 | INFORMATION PROCESSING SYSTEM AND METHOD - The present invention provides an information storage system and method capable of changing an information storage format to a format suitable for the data utilization form. There are provided a means that records history information of information processing on data, a plurality of information storing means that store information in mutually different information storage formats, and an information storage format control means that changes an information storage format of data, on the basis of a history of processing relating to the data. | 2011-12-29 |
20110320751 | Dynamic Interleaving Of Multi-Channel Memory - In a particular embodiment, a dynamic interleaving system changes the number of interleaving channels of a multi-channel memory based on a detected level of bandwidth requests from a plurality of master ports to a plurality of slave ports. At a low level of bandwidth requests, the number of interleaving channels is reduced. | 2011-12-29 |
20110320752 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND RECORDING MEDIUM - The present invention relates to an information processing apparatus, an information processing method, a program, and a recording medium in which information to be processed by an application program recorded on a recording medium can be taken over and used by an application program recorded on a different recording medium. | 2011-12-29 |
20110320753 | DATA PROCESSING APPARATUS, COMPUTER PROGRAM THEREFOR, AND DATA PROCESSING METHOD - A data processing apparatus uses a characteristic whereby an OS or an application program divides a file into units of clusters and writes information when information is written to an HDD, and changes (redirects) the writing place in units of clusters, thereby classifying and storing confidential information with a small consumption amount of the HDD. Therefore, the present invention provides a data processing apparatus that can classify and store confidential information and normal information with a small consumption amount of the HDD. | 2011-12-29 |
20110320754 | MANAGEMENT SYSTEM FOR STORAGE SYSTEM AND METHOD FOR MANAGING STORAGE SYSTEM - Provided is a management system for a storage system, the storage system including a first storage subsystem and a second storage subsystem each including logical storage areas for storing data to be processed by a host computer, the logical storage areas in each of the first and second storage subsystems having storage tiers associated respectively with a plurality of storage area characteristic information pieces which are information pieces characterizing the corresponding logical storage areas and which are different from each other, the management system comprising a data migration management part, wherein when the data is migrated from the first storage subsystem to the second storage subsystem, the data migration management part: acquires a configuration of the storage tiers of the logical storage areas of the first storage subsystem in which the data of a migration target is stored; compares the configuration with a configuration of the storage tiers of the logical storage areas of the second storage subsystem; and then migrates the migration target data stored in the logical storage areas of the first storage subsystem to the logical storage areas of the second storage subsystem in accordance with a result of the comparison. | 2011-12-29 |
20110320755 | TRACKING DYNAMIC MEMORY REALLOCATION USING A SINGLE STORAGE ADDRESS CONFIGURATION TABLE - Tracking dynamic memory de-allocation using a single configuration table having a first register and a second register includes setting the first register as an active register, initiating a de-allocation of desired storage increments from a memory partition, setting the storage increments in the second register as invalid, purging all caches associated with the single configuration table, setting the second register as the active register and the first register as an inactive register, setting the desired storage increments in the first register as invalid, switching the active register from the second register to the first register to complete memory de-allocation using the single configuration table. | 2011-12-29 |
20110320756 | RUNTIME DETERMINATION OF TRANSLATION FORMATS FOR ADAPTER FUNCTIONS - Various address translation formats are available for use in obtaining system memory addresses for use by requestors, such as adapter functions, in accessing system memory. The particular address translation format to be used by a given requestor is pre-registered in a device table entry associated with that requestor. | 2011-12-29 |
20110320757 | STORE/STORE BLOCK INSTRUCTIONS FOR COMMUNICATING WITH ADAPTERS - Communication with adapters of a computing environment is facilitated. Instructions are provided that explicitly target the adapters. Information provided in an instruction is used to steer the instruction to an appropriate location within the adapter. | 2011-12-29 |
20110320758 | TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES - An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits is ignored for the conversion. The first portion of bits is used to validate the address. | 2011-12-29 |
20110320759 | MULTIPLE ADDRESS SPACES PER ADAPTER - A plurality of address spaces are assigned to an adapter. To select a particular address space for the adapter, a requestor identifier and address space identifier provided in a request by the adapter are used. Each address space may have a different address translation mechanism associated therewith. | 2011-12-29 |
20110320760 | TRANSLATING REQUESTS BETWEEN FULL SPEED BUS AND SLOWER SPEED DEVICE - Methods and apparatus related to techniques for translating requests between a full speed bus and a slower speed device are described. In one embodiment, a translation logic translates requests between a full speed bus (such as a front side bus, e.g., running relatively higher frequencies, for example at MHz levels) and a much slower speed device (such as a System On Chip (SOC) device (or SOC Device Under Test (DUT)), e.g., logic provided through emulation, which may be running at much lower frequency, for example kHz levels). Other embodiments are also disclosed. | 2011-12-29 |
20110320761 | ADDRESS TRANSLATION, ADDRESS TRANSLATION UNIT, DATA PROCESSING PROGRAM, AND COMPUTER PROGRAM PRODUCT FOR ADDRESS TRANSLATION - A lookup operation is performed in a translation look aside buffer based on a first translation request as current translation request, wherein a respective absolute address is returned to a corresponding requestor for the first translation request as translation result in case of a hit. A translation engine is activated to perform at least one translation table fetch in case the current translation request does not hit an entry in the translation look aside buffer, wherein the translation engine is idle waiting for the at least one translation table fetch to return data, reporting the idle state of the translation engine as lookup under miss condition and accepting a currently pending translation request as second translation request, wherein a lookup under miss sequence is performed in the translation look aside buffer based on said second translation request. | 2011-12-29 |
20110320762 | REGION BASED TECHNIQUE FOR ACCURATELY PREDICTING MEMORY ACCESSES - In one embodiment, the present invention includes a processor comprising a page tracker buffer (PTB), the PTB including a plurality of entries to store an address to a cache page and to store a signature to track an access to each cache line of the cache page, and a PTB handler, the PTB handler to load entries into the PTB and to update the signature. Other embodiments are also described and claimed. | 2011-12-29 |
20110320763 | USING ADDRESSES TO DETECT OVERLAPPING MEMORY REGIONS - The described embodiments determine if two addressed memory regions overlap. First, a first address for a first memory region and a second address for a second memory region are received. Then a composite address is generated from the first and second addresses. Next, an upper subset and a lower subset of the bits in the addresses are determined. Then, using the upper and lower subsets of the addresses, a determination is made whether the addresses meet a condition from a set of conditions. If so, a determination is made whether the lower subset of the bits in the addresses meet a criteria from a set of criteria. Based on the determination whether the lower subset of the bits in the addresses meet a criteria, a determination is made whether the memory regions overlap or do not overlap. | 2011-12-29 |
20110320764 | LOAD INSTRUCTION FOR COMMUNICATING WITH ADAPTERS - Communication with adapters of a computing environment is facilitated. Instructions are provided that explicitly target the adapters. Information provided in an instruction is used to steer the instruction to an appropriate location within the adapter. | 2011-12-29 |
20110320765 | VARIABLE WIDTH VECTOR INSTRUCTION PROCESSOR - A computer processor, method, and computer program product for executing vector processing instructions on a variable width vector register file. An example embodiment is a computer processor that includes an instruction execution unit coupled to a variable width vector register file which contains a number of vector registers, the width of the vector registers is changeable during operation of the computer processor. | 2011-12-29 |
20110320766 | APPARATUS, METHOD, AND SYSTEM FOR IMPROVING POWER, PERFORMANCE EFFICIENCY BY COUPLING A FIRST CORE TYPE WITH A SECOND CORE TYPE - An apparatus and method are described herein for coupling a processor core of a first type with a co-designed core of a second type. Execution of program code on the first core is monitored and hot sections of the program code are identified. Those hot sections are optimized for execution on the co-designed core, such that upon subsequently encountering those hot sections, the optimized hot sections are executed on the co-designed core. When the co-designed core is executing optimized hot code, the first processor core may be placed in a low-power state to save power or may execute other code in parallel. Furthermore, multiple threads of cold code may be pipelined on the first core while multiple threads of hot code are pipelined on the co-designed core to achieve maximum performance. | 2011-12-29 |
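The monitor/identify/redirect pattern in 20110320766 can be illustrated with a deliberately simple software analogue: count how often a code section runs and, once it is "hot", dispatch to an optimized variant. This is a toy model only; the patent's mechanism is hardware plus binary translation across heterogeneous cores, and the threshold and names below are invented for illustration.

```python
# Toy model of "monitor, identify hot sections, redirect to an optimized
# version": a dispatcher counts executions of a section and, once a hotness
# threshold is crossed, runs the optimized variant instead of the baseline.
HOT_THRESHOLD = 1000

class HotSectionDispatcher:
    def __init__(self, baseline_fn, optimized_fn):
        self.baseline_fn = baseline_fn     # stands in for the "first core"
        self.optimized_fn = optimized_fn   # stands in for the "co-designed core"
        self.exec_count = 0

    def __call__(self, *args):
        self.exec_count += 1
        if self.exec_count > HOT_THRESHOLD:
            return self.optimized_fn(*args)    # hot: use optimized translation
        return self.baseline_fn(*args)         # cold: use original code

def slow_sum(n):
    return sum(range(n))

def fast_sum(n):
    return n * (n - 1) // 2

section = HotSectionDispatcher(slow_sum, fast_sum)
results = [section(10_000) for _ in range(2000)]   # switches after 1000 calls
print(results[0] == results[-1])                   # True: same result either way
```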
20110320767 | Parallelization of Online Learning Algorithms - Methods, systems, and media are provided for a dynamic batch strategy utilized in parallelization of online learning algorithms. The dynamic batch strategy provides a merge function on the basis of a threshold level difference between the original model state and an updated model state, rather than according to a constant or pre-determined batch size. The merging includes reading a batch of incoming streaming data, retrieving any missing model beliefs from partner processors, and training on the batch of incoming streaming data. The steps of reading, retrieving, and training are repeated until the measured difference in states exceeds a set threshold level. The measured differences which exceed the threshold level are merged for each of the plurality of processors according to attributes. The merged differences which exceed the threshold level are combined with the original partial model states to obtain an updated global model state. | 2011-12-29 |
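The dynamic batch idea in 20110320767 (merge when the local model has drifted past a threshold, not after a fixed number of batches) can be sketched for a single worker as below. The drift metric, the placeholder training step, the threshold value, and all names are illustrative assumptions, not the paper's exact formulation.

```python
# Threshold-triggered merging for parallel online learning: a worker keeps
# training on small batches and only merges with the shared model once its
# local state has drifted far enough from the state it started from.
def drift(state_a, state_b):
    return sum(abs(a - b) for a, b in zip(state_a, state_b))

def train_on_batch(state, batch, lr=0.1):
    # Placeholder "update": nudge each weight toward the batch mean.
    mean = sum(batch) / len(batch)
    return [w + lr * (mean - w) for w in state]

def worker(global_state, batches, threshold=0.5):
    local = list(global_state)          # snapshot of the original model state
    merged_deltas = []
    for batch in batches:
        local = train_on_batch(local, batch)
        if drift(local, global_state) > threshold:
            # Merge: publish the accumulated difference, then re-snapshot.
            merged_deltas.append([l - g for l, g in zip(local, global_state)])
            global_state = [g + d for g, d in zip(global_state, merged_deltas[-1])]
            local = list(global_state)
    return global_state, merged_deltas

state, deltas = worker([0.0, 0.0], [[1.0, 2.0], [3.0], [2.0, 2.0], [4.0]])
print(len(deltas), state)   # number of merges depends on drift, not batch count
```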
20110320768 | METHOD OF, AND APPARATUS FOR, MITIGATING MEMORY BANDWIDTH LIMITATIONS WHEN PERFORMING NUMERICAL CALCULATIONS - There is provided a method of, and apparatus for, processing a computation on a computing device comprising at least one processor and a memory, the method comprising: storing, in said memory, plural copies of a set of data, each copy of said set of data having a different compression ratio and/or compression scheme; selecting a copy of said set of data; and performing, on a processor, a computation using said selected copy of said set of data. By providing such a method, different compression ratios and/or compression schemes can be selected as appropriate. For example, if high precision is required in a computation, a copy of the set of data can be chosen which has a low compression ratio at the expense of processing time and memory transfer time. In the alternative, if low precision is acceptable, then the speed benefits of a high compression ratio and/or lossy compression scheme may be utilised. | 2011-12-29 |
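A minimal sketch of the multi-copy idea in 20110320768 is shown below: the same array is kept at three compression levels and the computation picks a copy according to the precision it needs. The copy names, the 8-bit quantization scheme, and the selection rule are assumptions made for illustration.

```python
# Keep several differently compressed copies of the same data and pick one per
# computation: a float64 master copy, a float32 copy, and an 8-bit quantized
# (lossy) copy that is decompressed on use.
import numpy as np

data64 = np.linspace(0.0, 1.0, 1_000_000)                 # full precision
data32 = data64.astype(np.float32)                        # ~2x smaller
scale = data64.max() / 255.0
data8 = np.round(data64 / scale).astype(np.uint8)         # ~8x smaller, lossy

copies = {
    "high":   lambda: data64,
    "medium": lambda: data32.astype(np.float64),
    "low":    lambda: data8.astype(np.float64) * scale,    # decompress on use
}

def compute_sum(precision):
    """Pick the copy that matches the required precision, then compute."""
    return copies[precision]().sum()

print(compute_sum("high"), compute_sum("low"))   # low-precision result is approximate
```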
20110320769 | PARALLEL COMPUTING DEVICE, INFORMATION PROCESSING SYSTEM, PARALLEL COMPUTING METHOD, AND INFORMATION PROCESSING DEVICE - A computing section is provided with a plurality of computing units, correlatively stores entries of configuration information that describe configurations of the plurality of computing units with physical configuration numbers that represent those entries, and executes a computation in the configuration corresponding to a designated physical configuration number. A status management section designates, for the computing section, the physical configuration number corresponding to the status to which the computing section needs to advance next, and outputs that next status as a logical status number that uniquely identifies it in an object code. A determination section determines, based on the logical status number output from the status management section, whether or not the computing section has stored an entry of configuration information corresponding to the next status. When the determination section determines that the entry has not been stored, a rewriting section correlatively stores the entry of configuration information and a corresponding physical configuration number in the computing section. | 2011-12-29 |
20110320770 | DATA PROCESSING DEVICE - An internal buffer is provided for a DRP core. A selector SEL switches the input/output destination of the DRP core between external memory and the internal buffer. Control software executed by a CPU core receives information on a pipeline of configurations for a sequence of target processing and generates, as transfer manners, the combinations of whether the processing result is transferred between configurations via the external memory or via the internal buffer. Next, for each manner, the bandwidth of the external memory used by the DRP core and the resulting performance are calculated. The manner with the best performance that satisfies a previously specified bandwidth restriction is selected from among the manners, and the selector SEL is switched in accordance with that manner. | 2011-12-29 |
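The selection step in 20110320770 amounts to enumerating the external-memory/internal-buffer combinations and keeping the fastest one that respects the bandwidth budget. The sketch below mimics that search with invented sizes, costs, and a buffer-capacity constraint; none of the numbers or names come from the patent.

```python
# Enumerate, for each stage boundary, whether the intermediate result goes
# through external memory or the internal buffer, then keep the fastest
# combination that fits the internal buffer and the external-bandwidth budget.
from itertools import product

RESULT_SIZES_KB = [256, 64, 512]        # one intermediate result per boundary
INTERNAL_BUFFER_KB = 300                # internal buffer can't hold everything
EXT_BW_PER_KB = 2                       # external-memory traffic cost per KB
TIME_EXTERNAL_PER_KB, TIME_INTERNAL_PER_KB = 5, 1
BANDWIDTH_LIMIT = 2600

best = None
for manner in product(("external", "internal"), repeat=len(RESULT_SIZES_KB)):
    internal_kb = sum(s for s, m in zip(RESULT_SIZES_KB, manner) if m == "internal")
    if internal_kb > INTERNAL_BUFFER_KB:
        continue                                   # doesn't fit in the internal buffer
    ext_kb = sum(s for s, m in zip(RESULT_SIZES_KB, manner) if m == "external")
    bandwidth = ext_kb * EXT_BW_PER_KB
    time = ext_kb * TIME_EXTERNAL_PER_KB + internal_kb * TIME_INTERNAL_PER_KB
    if bandwidth <= BANDWIDTH_LIMIT and (best is None or time < best[0]):
        best = (time, bandwidth, manner)

print(best)   # fastest feasible manner under the bandwidth restriction
```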
20110320771 | INSTRUCTION UNIT WITH INSTRUCTION BUFFER PIPELINE BYPASS - A circuit arrangement and method selectively bypass an instruction buffer for selected instructions so that bypassed instructions can be dispatched without having to first pass through the instruction buffer. Thus, for example, in the case that an instruction buffer is partially or completely flushed as a result of an instruction redirect (e.g., due to a branch mispredict), instructions can be forwarded to subsequent stages in an instruction unit and/or to one or more execution units without the latency associated with passing through the instruction buffer. | 2011-12-29 |
20110320772 | CONTROLLING THE SELECTIVELY SETTING OF OPERATIONAL PARAMETERS FOR AN ADAPTER - An instruction is provided to establish various operational parameters for an adapter. These parameters include adapter interruption parameters, input/output address translation parameters, resetting error indications, setting measurement parameters, and setting an interception control, as examples. The instruction specifies a function information block, which is a program representation of a device table entry used by the adapter, to be used in certain situations in establishing the parameters. A store instruction is also provided that stores the current contents of the function information block. | 2011-12-29 |
20110320773 | FUNCTION VIRTUALIZATION FACILITY FOR BLOCKING INSTRUCTION FUNCTION OF A MULTI-FUNCTION INSTRUCTION OF A VIRTUAL PROCESSOR - In a processor supporting execution of a plurality of functions of an instruction, an instruction blocking value is set for blocking one or more of the plurality of functions, such that an attempt to execute one of the blocked functions results in a program exception and the instruction does not execute; however, the same instruction is still able to execute any of the functions that are not blocked. | 2011-12-29 |
20110320774 | OPERAND FETCHING CONTROL AS A FUNCTION OF BRANCH CONFIDENCE - A system for data operand fetching control includes a computer processor that includes a control unit for determining memory access operations. The control unit is configured to perform a method. The method includes calculating a summation weight value for each instruction in a pipeline, the summation weight value calculated as a function of branch uncertainty and of how long the instruction has been pending in the pipeline relative to other instructions in the pipeline. The method also includes mapping the summation weight value of a selected instruction that is attempting to access system memory to a memory access control, each memory access control specifying a manner of handling data fetching operations. The method further includes performing a memory access operation for the selected instruction based upon the mapping. | 2011-12-29 |
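The weight-to-policy mapping described in 20110320774 can be sketched as two small functions: one combines branch uncertainty with time spent in the pipeline, the other maps the resulting weight to a fetch policy. The formula, thresholds, and policy names below are illustrative assumptions, not the patented control logic.

```python
# Map a per-instruction "summation weight" (branch uncertainty accumulated over
# the instruction's time in the pipeline) to a data-fetch policy.
def summation_weight(branch_uncertainties, cycles_in_pipeline):
    """Sum the uncertainty of every older unresolved branch, scaled down the
    longer the instruction has already been waiting (older = less speculative)."""
    return sum(branch_uncertainties) / (1 + cycles_in_pipeline)

def fetch_policy(weight):
    if weight < 0.25:
        return "fetch_now"              # confident path: fetch operands immediately
    if weight < 0.75:
        return "fetch_exclusive_later"  # defer the costly exclusive fetch
    return "suppress_fetch"             # highly speculative: don't touch memory yet

w = summation_weight([0.4, 0.6], cycles_in_pipeline=1)   # 1.0 / 2 = 0.5
print(w, fetch_policy(w))                                # 0.5 fetch_exclusive_later
```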
20110320775 | ACCELERATING EXECUTION OF COMPRESSED CODE - Methods and apparatus relating to accelerating execution of compressed code are described. In one embodiment, a two-level embedded code decompression scheme that eliminates bubbles is utilized, which may increase speed and/or reduce power consumption. Other embodiments are also described and claimed. | 2011-12-29 |
20110320776 | MECHANISM FOR IRREVOCABLE TRANSACTIONS - A method and apparatus for designating and handling irrevocable transactions is herein described. In response to detecting an irrevocable event, such as an I/O operation, a user-defined irrevocable designation, and a dynamic failure profile, a transaction is designated as irrevocable. In response to designating a transaction as irrevocable, Single Owner Read Locks (SORLs) are acquired for previous and subsequent reads in the irrevocably designated transaction to ensure the transaction is able to complete without modification to locations read from, while permitting remote resources to load from those locations to continue execution. | 2011-12-29 |
20110320777 | DIRECT MEMORY ACCESS ENGINE PHYSICAL MEMORY DESCRIPTORS FOR MULTI-MEDIA DEMULTIPLEXING OPERATIONS - The architecture and techniques described herein can improve system performance in the following respects: communication between two interdependent hardware engines that are part of a pipeline, such that the engines are synchronized to consume resources only when they are done with their work; reduction of the role of software/firmware in feeding each stage of the hardware pipeline when the previous stage of the pipeline has completed; and reduction in the memory allocation for software-initialized hardware descriptors, which improves performance by reducing pipeline stalls due to software interaction. | 2011-12-29 |
20110320778 | CENTRALIZED SERIALIZATION OF REQUESTS IN A MULTIPROCESSOR SYSTEM - Serializing instructions in a multiprocessor system includes receiving a plurality of processor requests at a central point in the multiprocessor system. Each of the plurality of processor requests includes a needs register having a requestor needs switch and a resource needs switch. The method also includes establishing a tail switch indicating the presence of the plurality of processor requests at the central point, establishing a sequential order of the plurality of processor requests, and processing the plurality of processor requests at the central point in the sequential order. | 2011-12-29 |
20110320779 | PERFORMANCE MONITORING IN A SHARED PIPELINE - A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers. | 2011-12-29 |
20110320780 | HYBRID COMPARE AND SWAP/PERFORM LOCKED OPERATION QUEUE ALGORITHM - Systems, methods, and computer program products are disclosed for intermixing different types of machine instructions. One embodiment of the invention provides a protocol for intermixing the different types of machine instructions. By adhering to the protocol, different types of machine instructions may be intermixed to concurrently update data structures without leading to unpredictable results. | 2011-12-29 |
20110320781 | DYNAMIC DATA SYNCHRONIZATION IN THREAD-LEVEL SPECULATION - In one embodiment, the present invention introduces a speculation engine to parallelize serial instructions by creating separate threads from the serial instructions and inserting processor instructions to set a synchronization bit before a dependence source and to clear the synchronization bit after the dependence source, where the synchronization bit is designed to stall a dependence sink in a thread running on a separate core. | 2011-12-29 |
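The set/clear/stall pattern in 20110320781 can be modelled with two ordinary threads: the bit is set before the dependence source (the producer's store), cleared after it, and the dependence sink spins until the bit clears. This is a toy software model, not the proposed hardware support; here the bit is set up front by the spawning code so the sink can never race ahead, and the busy-wait and names are illustrative.

```python
# Two-thread model of the inserted synchronization bit around a dependence
# source, with the dependence sink stalling until the bit is cleared.
import threading, time

sync_bit = threading.Event()   # set => sink must stall
shared = {"value": None}

def producer():
    time.sleep(0.01)                 # pretend to do earlier speculative work
    shared["value"] = 42             # dependence source: the store
    sync_bit.clear()                 # clear the sync bit after the source

def consumer():
    while sync_bit.is_set():         # dependence sink stalls while bit is set
        pass
    print("consumer read", shared["value"])   # safe: the store has completed

sync_bit.set()                       # set the sync bit before the source runs
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start()
t1.join(); t2.join()
```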
20110320782 | PROGRAM STATUS WORD DEPENDENCY HANDLING IN AN OUT OF ORDER MICROPROCESSOR DESIGN - A computer implemented method of processing instructions of a computer program. The method comprises providing at least two copies of program status data; identifying a first update instruction of the instructions that writes to at least one field of the program status data; and associating the first update instruction with a first copy of the at least two copies of program status data. | 2011-12-29 |
20110320783 | VERIFICATION USING OPCODE COMPARE - A verification method is provided that includes randomly choosing a hardware-executed instruction in a predefined program on which to force Opcode Compare; determining the identity of the corresponding opcode from the chosen instruction and initializing Opcode Compare logic to trap the chosen instruction to firmware; and creating firmware to initiate performance of hardware verification in the firmware and to re-initiate performance of the hardware verification in hardware. | 2011-12-29 |
20110320784 | VERIFICATION OF PROCESSOR ARCHITECTURES ALLOWING FOR SELF MODIFYING CODE - A verification operation including generating a predefined instruction, initializing a relevant self modifying code (SMC) target memory location to form an SMC trap, binding the SMC trap to the predefined instruction to form an SMC trap source and propagating initialization of instruction code into the SMC trap source. | 2011-12-29 |