24th week of 2013 patent application highlights part 63 |
Patent application number | Title | Published |
20130151723 | STREAM MEDIA CHANNEL SWITCH METHOD, SWITCH AGENT, CLIENT AND TERMINAL - The channel switch method includes: playing stream media data of a current channel by a playing module of a stream media client; after a channel switch agent receives a channel switch request message sent from a service module of the stream media client, sending the channel switch request message to a stream media server by the channel switch agent, wherein the channel switch request message carries switched channel ID information and client ID information of the stream media client; sending stream media data corresponding to the switched channel ID to the playing module of the stream media client by the stream media server through a stream playing session corresponding to the client ID, and playing the stream media data corresponding to the switched channel ID by the playing module. | 2013-06-13 |
20130151724 | Streaming media delivery system - Streaming media, such as audio or video files, is sent via the Internet. The media are immediately played on a user's computer. Audio/video data is transmitted from the server under control of a transport mechanism. A server buffer is prefilled with a predetermined amount of the audio/video data. When the transport mechanism causes data to be sent to the user's computer, it is sent more rapidly than it is played out by the user system. The audio/video data in the user buffer accumulates; and interruptions in playback as well as temporary modem delays are avoided. | 2013-06-13 |
20130151725 | Method and System for Handling a Domain Name Service Request - A method and system is provided for handling Domain Name Service (“DNS”) requests. A network interface device can broadcast multiple virtual addresses to a client device, where the multiple virtual addresses correlate to multiple actual DNS server addresses. The network interface device can process a DNS request originating from the client device, where the DNS request is directed to one of the multiple virtual addresses and where the DNS request is based on DNS server management logic executing on the client device. | 2013-06-13 |
20130151726 | Establishing Unique Sessions for DNS Subscribers - A system establishes virtual DNS servers that are supported by a DNS server. Target IP addresses are assigned for the virtual DNS servers. Network capable devices are uniquely assigned to the virtual DNS servers for domain name resolution. Each network capable device accesses the communication network through a corresponding network device associated with a corresponding source IP address. A client's service plan is assigned to a first network capable device used by the client. The service plan is implemented through a DNS request under a session established between the first network capable device and its assigned first virtual DNS server. The session is uniquely identified by a first source IP address of a first network device used by the first network capable device to access the communication network and a first target IP address of the first virtual DNS server. | 2013-06-13 |
20130151727 | MEDIA PROCESSING SYSTEM SUPPORTING DIFFERENT MEDIA FORMATS VIA SERVER-BASED TRANSCODING - Systems and methods that reformat media are described. In one embodiment, a system may include, for example, a server, a first communications device and a second communications device. The server, the first communications device and the second communications device may be operatively coupled to a network. The second communications device may receive, from the first communications device, a device profile relating to the first communications device and may send the device profile and media content to the server. The server may reformat the media content based on the device profile. | 2013-06-13 |
20130151728 | PROVIDING SYNCHRONOUS CONTENT AND SUPPLEMENTAL EXPERIENCES - Methods, systems, computer readable media, and apparatuses for providing synchronous supplemental experiences are presented. According to one or more aspects, a video signal may be transmitted to a display device, and a sync signal may be transmitted to at least one other device. The sync signal may include an identifier of a program currently being displayed and a time value indicating a current point in time of the program. In at least one arrangement, the sync signal may cause the at least one other device to access content synchronization data; determine, based on the content synchronization data, that at least one supplemental experience corresponds to the identifier of the program and the time value; and present the at least one supplemental experience to a user. | 2013-06-13 |
20130151729 | THROTTLING TO REDUCE SYNCHRONIZATIONS OF EXCESSIVELY CHANGING DATA - Embodiments of the invention determine if a user preference or other piece of data is being synchronized too frequently. If the user preference is being synchronized too frequently, synchronization of the user preference is throttled to prevent it from synchronizing for some number of synchronization cycles. If the user preference rarely changes, the user preference is rewarded by throttling it less often. | 2013-06-13 |
20130151730 | APPARATUS, METHOD AND PROCESS OF INFLUENCING INFORMATION GATHERED BY A TETHERED ITEM AND COMPUTER-READABLE MEDIUM THEREOF - A tethered item is associated with an identifier that uniquely identifies the item, and one or more content processing devices obtain an identifier of the item and correlate the obtained item identifier with information related to the tethered item. | 2013-06-13 |
20130151731 | USB CHARGING MODULE - An apparatus is provided for charging a Universal Serial Bus (USB) device according to an optimal charging mode. The apparatus includes a charging module that is configured to obtain a descriptor from the USB device upon detection of the USB device on a USB bus. The charging module includes one or more descriptor entries disposed in a memory and a controller. The one or more descriptor entries include descriptor data, for matching the descriptor to a specific descriptor entry, and charging data, that specifies the optimal charging mode for the USB device. The controller is coupled to the memory, and is configured to match the descriptor to the specific descriptor entry, and is configured to initiate the optimal charging mode on the USB bus according to the charging data. | 2013-06-13 |
20130151732 | ELECTRONIC DEVICE AND PORT REDUCING METHOD - An exemplary port reducing method is for removing unselected ports of an original S-parameter file and generating an optimized S-parameter file. The method controls a display unit to display a user interface to receive commands from a user in response to user operation; the commands comprise a calling command, a selecting command, and an executing command. The method obtains the original S-parameter file in response to the calling command. Next, the method determines which of the ports of the original S-parameter file are unselected in response to the selecting command, and connects each unselected port to ground through one terminal impedance. The method then generates an optimized S-parameter file that comprises only the selected ports in response to the executing command. | 2013-06-13 |
20130151733 | REUSING SYSTEM CONFIGURATION INFORMATION AND METADATA FOR RELATED OPERATIONS - Reusing system configuration information and metadata for related operations is disclosed. It is determined that a group of content management system commands may be treated as a related set for purposes of updating content management system configuration information and/or metadata. The content management system configuration information and/or metadata are updated once for purposes of processing the group. | 2013-06-13 |
20130151734 | MEMORY DEVICE INITIATE AND TERMINATE BOOT COMMANDS - Methods of operating memory devices and electronic systems having memory devices include initiating a boot mode of operation of the memory device in response to receiving a first command, wherein the first command comprises a pattern of two or more command signals, and terminating the boot mode of operation in response to receiving a second command, wherein the second command comprises a pattern of two or more command signals. | 2013-06-13 |
20130151735 | I/O VIRTUALIZATION AND SWITCHING SYSTEM - Described herein is an I/O virtualization and switching system. | 2013-06-13 |
20130151736 | DEVICE CONFIGURATION WITH CACHED PRE-ASSEMBLED DRIVER STATE - A computer with cached pre-assembled device configurations for a faster and more reliable user experience. Pre-assembled device configurations may be obtained in a variety of ways, for example, by pre-processing installation information obtained from driver packages, or by being retrieved from a suitable source. Pre-processing driver packages may involve, for example, copying binary files to their run-time locations and computing settings for the device and driver. The pre-processed device configuration settings may be cached and indexed in a database. When a device connects to the computer, a cached device configuration may be applied to the device without performing a full installation process. Pre-assembly of device configurations may be performed before a device first connects to the computer, for example, upon detecting an applicable driver or during manufacture of the computer, and is not restricted to being performed on the same computer on which the device configuration will be used. | 2013-06-13 |
20130151737 | Multi-function Device ID with Unique Identifier - A computer system that recognizes multi-function devices and associates functions with multi-function devices. Each multi-function device may be represented by a multi-function object, allowing tools, applications or other components within the computer to take actions relating to the entire device or relating to a function based on the association of that function with other functions in the same device. These actions include displaying information about devices, instead of or in addition to information about functions. Actions also include selecting functions based on proximity within a device. Functions may be associated with a multi-function device using a unique device identifier provided by the device or generated for the function based on a connection hierarchy between functions and the computer. Devices may be configured to provide the same identifier regardless of the transport over which the device is accessed. | 2013-06-13 |
20130151738 | APPARATUS AND MANAGING METHOD USING A PRESET POLICY BASED ON AN INVALIDATED I/O PATH BY USING CONFIGURATION INFORMATION ACQUIRED FROM STORAGE SYSTEM - To appropriately manage configuration information acquired from a storage system for the purpose of performance management, etc., an information processing apparatus managing the configuration information, i.e., information indicative of a configuration of resources making up the storage system in a database, detects a change in setting of an I/O path to extract resources making up an invalidated I/O path, which is the I/O path subject to the change, as monitoring object resources, acquires performance information that is information indicative of operation statuses of the monitoring object resources from the storage system, judges whether the performance information of the monitoring object resource matches a preset policy, determines a timing to make invalidated configuration information, which is the configuration information related to the invalidated I/O path, deletable from the storage device based on the result of the judgement, and deletes the invalidated configuration information from the database when the determined timing comes. | 2013-06-13 |
20130151739 | METHODS AND SYSTEMS FOR SECURE INTEROPERABILITY BETWEEN MEDICAL DEVICES - An interface device is configured to provide one or more links to first-party medical devices, each of which communicates using a proprietary protocol. The interface device can translate between the proprietary protocol and a second protocol that is accessible via a second link to the interface device. Details of the second protocol can be provided to third parties for configuring third-party medical devices to connect to the interface device via the second link. Using the second link, one or more third-party medical devices can send information to and/or receive information from the first-party medical devices without the need for the third-party device (or devices) to have any information about the proprietary protocol(s) of the first-party medical device(s). The first-party medical devices can include surgical tools and related support equipment and the third-party medical device can include a control station used to monitor and control the tools and support equipment. | 2013-06-13 |
20130151740 | AUTONOMIC ASSIGNMENT OF COMMUNICATION BUFFERS BY AGGREGATING SYSTEM PROFILES - A method, system and apparatus for autonomic buffer configuration. In accordance with the present invention, an autonomic buffer configuration method can include monitoring data flowing through buffers in a communications system and recording in at least one buffer profile different data sizes for different ones of the data flowing through the buffers during an established interval of time. An optimal buffer size can be computed based upon a specification of a required percentage of times a buffer must be able to accommodate data of a particular size. Subsequently, at least one of the buffers can be re-sized without re-initializing the at least one resized buffer. | 2013-06-13 |
20130151741 | MEMORY APPARATUSES, COMPUTER SYSTEMS AND METHODS FOR ORDERING MEMORY RESPONSES - Memory apparatuses that may be used for receiving commands and ordering memory responses are provided. One such memory apparatus includes response logic that is coupled to a plurality of memory units by a plurality of channels and may be configured to receive a plurality of memory responses from the plurality of memory units. Ordering logic may be coupled to the response logic and be configured to cause the plurality of memory responses in the response logic to be provided in an order based, at least in part, on a system protocol. For example, the ordering logic may enforce bus protocol rules on the plurality of memory responses stored in the response logic to ensure that responses are provided from the memory apparatus in a correct order. | 2013-06-13 |
20130151742 | SEMICONDUCTOR DEVICE AND CONTROLLING METHOD THEREOF - According to one embodiment, a semiconductor device includes a storing section that stores a setting state that is one of a first connecting state in which another end of a first outbound system bus is connected to an outbound output terminal and another end of a first inbound system bus is connected to an inbound output terminal, and a second connecting state in which another end of a second outbound system bus is connected to the outbound output terminal and another end of a second inbound system bus is connected to the inbound output terminal; and a control section that controls an outbound path switching section and an inbound path switching section based on the setting state so as to assume one of the first connecting state and the second connecting state. | 2013-06-13 |
20130151743 | NETWORK ADAPTOR OPTIMIZATION AND INTERRUPT REDUCTION - A method and system are disclosed for network adaptor optimization and interrupt reduction. The method may build an outbound buffer list based on outgoing data and add the outgoing data to an outbound buffer queue. Furthermore, the method may set a buffer state from an empty state to a primed state to indicate that the outgoing data is prepared for transmission, and signal a network adaptor with a notification signal. | 2013-06-13 |
20130151744 | Interrupt Moderation - A technique for interrupt moderation allows coalescing interrupts from a device into groups to be processed as a batch by a host processor. Receive and send completions may be processed differently. When the host is interrupted for receive completions, it may check for send completions, reducing the need for interrupts related to send completions. Timers and a counter allow coalescing interrupts into a single interrupt that can be used to signal the host to process multiple completions. The technique is suitable for both dedicated interrupt line and message-signaled interrupts. | 2013-06-13 |
20130151745 | SERIAL ADVANCED TECHNOLOGY ATTACHMENT DUAL IN-LINE MEMORY MODULE ASSEMBLY - A serial advanced technology attachment dual-in-line memory module (SATA DIMM) assembly includes a SATA DIMM module with a first circuit board, an expansion slot, and an expansion card with a second circuit board. A first edge connector is arranged on a bottom edge of the first circuit board and includes first power pins connected to a control chip and first storage chips, and first ground pins. A second edge connector connected to the expansion slot is arranged on a top edge of the first circuit board and includes second power pins connected to the first power pins, second ground pins, and four first signal pins connected to the control chip. A third edge connector engaged in the expansion slot is arranged on a bottom edge of the second circuit board and includes third power pins and four second signal pins connected to the second storage chips, and third ground pins. | 2013-06-13 |
20130151746 | ELECTRONIC DEVICE WITH GENERAL PURPOSE INPUT OUTPUT EXPANDER AND SIGNAL DETECTION METHOD - An electronic device includes a general purpose input output (GPIO) expander and a baseboard management controller (BMC). The GPIO expander includes a number of GPIO interfaces and a gathering interface connected to the GPIO interfaces. The BMC includes a public interface and a scanning interface connected to the gathering interface. Each element is connected to the public interface and a different one of the GPIO interfaces. The BMC periodically detects whether there is a signal input from the public interface, scans the GPIO interfaces when there is a signal input from the public interface to determine a GPIO interface with a logic high level, an element connected to the GPIO interface, and a signal input from the element, and records an event including the GPIO interface, the element connected to the GPIO interface, and the signal, and stores the event. | 2013-06-13 |
20130151747 | CO-PROCESSING ACCELERATION METHOD, APPARATUS, AND SYSTEM - An embodiment of the present invention discloses a co-processing acceleration method, including: receiving a co-processing request message which is sent by a compute node in a computer system and carries address information of to-be-processed data; according to the co-processing request message, obtaining the to-be-processed data, and storing the to-be-processed data in a public buffer card; and allocating the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing. An added public buffer card is used as a public data buffer channel between a hard disk and each co-processor card of a computer system, and to-be-processed data does not need to be transferred by a memory of the compute node, which avoids overheads of the data in transmission through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed. | 2013-06-13 |
20130151748 | STRUCTURE FOR TRANSMITTING SIGNALS OF PCI EXPRESS AND METHOD THEREOF - A structure for transmitting signals of PCI Express, and a method thereof, provides a converting device and a high-definition multimedia interface (HDMI) cable. The converting device has a plug connector to be inserted into a PCI Express slot, along with an HDMI connector. The signal converting circuit connects the signal pins of the PCI Express slot to the signal pins of the HDMI connector. One end of the HDMI cable is connected to the HDMI connector of the converting device. The present invention can extend the signal distance of PCI Express so that signal tests can be performed properly. | 2013-06-13 |
20130151749 | APPARATUS FOR COUPLING TO A USB DEVICE AND A HOST AND METHOD THEREOF - An apparatus is provided for coupling a Universal Serial Bus (USB) device and a USB host. The apparatus includes a memory and a controller. The memory includes one or more descriptor entries. The controller is configured to obtain a descriptor of the USB device upon detection of the USB device on a USB bus, and compare the descriptor to a specific descriptor entry to generate a comparison result. The controller then enables or disables a link path between the USB host and the USB device according to the comparison result. | 2013-06-13 |
20130151750 | MULTI-ROOT INPUT OUTPUT VIRTUALIZATION AWARE SWITCH - A system having a multi-protocol multi-root aware (MP-MRA) switch. | 2013-06-13 |
20130151751 | HIGH SPEED SERIAL PERIPHERAL INTERFACE MEMORY SUBSYSTEM - A memory subsystem is disclosed. The memory subsystem includes a serial peripheral interface (SPI) double data rate (DDR) volatile memory component, an SPI DDR non-volatile memory component coupled to the SPI DDR volatile memory component, and an SPI DDR interface. The SPI DDR interface accesses the SPI DDR volatile memory component and the SPI DDR non-volatile memory component, where data is accessed on leading and falling edges of a clock signal. | 2013-06-13 |
20130151752 | BIT-LEVEL MEMORY CONTROLLER AND A METHOD THEREOF - The present invention is directed to a bit-level memory controller and method adaptable to managing defect bits of a non-volatile memory. A bad column management (BCM) unit retrieves a bit-level mapping table, in which defect bits are respectively marked, based on which the BCM unit constructs a bit-level script (BLS) that contains a plurality of entries denoting defect-bit groups respectively. An internal buffer is configured to store data managed by the BCM unit according to the BLS. | 2013-06-13 |
20130151753 | SYSTEMS AND METHODS OF UPDATING READ VOLTAGES IN A MEMORY - A method includes receiving hard bit data and soft bit data corresponding to a portion of a memory, where each storage element of the memory stores multiple bits per storage element. The hard bit data and the soft bit data is received in connection with reading a single bit of the multiple bits from each storage element in the portion of the memory based on one or more first read voltages. One or more second read voltages based on the hard bit data and the soft bit data are generated in response to a read voltage update operation. The memory reads data from the portion of the memory using the one or more second read voltages. | 2013-06-13 |
20130151754 | LBA BITMAP USAGE - Systems and methods are disclosed for logical block address ("LBA") bitmap usage for a system having non-volatile memory ("NVM"). A bitmap can be stored in volatile memory of the system, where the bitmap can store the mapping statuses of one or more logical addresses. By using the bitmap, the system can determine the mapping status of an LBA without having to access the NVM. In addition, the system can update the mapping status of an LBA with minimal NVM accesses. By reducing the number of NVM accesses, the system can avoid triggering a garbage collection process, which can improve overall system performance. | 2013-06-13 |
20130151755 | Non-Volatile Storage Systems with Go To Sleep Adaption - A non-volatile memory system goes into a low-power standby sleep mode to reduce power consumption if a host command is not received within a delay period. The duration of this delay period is adjustable. In one set of embodiments, host commands can specify the delay value, the operation types to which it applies, and whether the value applies only to the current power session or is also used to reset the default value. In other aspects, the parameters related to the delay value are kept in a host-resettable parameter file. In other embodiments, the memory system monitors the time between host commands and adjusts the delay automatically. | 2013-06-13 |
20130151756 | Data de-duplication and solid state memory device - Example methods and apparatus concern identifying placement and/or erasure data for a flash memory based solid state device that supports de-duplication. One example apparatus includes a processor, a memory, a set of logics, and an interface to connect the processor, the memory, and the set of logics. The apparatus may include an SSD placement logic configured to determine placement data for a de-duplication data set. The placement data may be based on forensic data acquired for the de-duplication data set. The apparatus may also include a write logic configured to write at least a portion of the de-duplication data set to an SSD as controlled by the placement data. The forensic data may identify, for example, the order in which sub-blocks are accessed, reference counts, access frequency, access groups, and other access information. | 2013-06-13 |
20130151757 | INDEPENDENT WRITE AND READ CONTROL IN SERIALLY-CONNECTED DEVICES - A memory device, comprising a first control input port, a second control input port, a third control input port, a data input port, a data output port, an internal memory and control circuitry. The control circuitry is responsive to a control signal on the first control input port to capture command and address information via the data input port. When the command is a read command, the control circuitry is further responsive to a read control signal on the second control input port to transfer data associated with the address information from the internal memory onto the data output port. When the command is a write command, the control circuitry is responsive to a write control signal on the third control input port to write data captured via the data input port into the internal memory at a location associated with the address information. | 2013-06-13 |
20130151758 | NONVOLATILE MEMORY DEVICE - A nonvolatile memory device includes: N (N is an integer equal to or greater than 2) number of nonvolatile memory cells disposed in a flag area of a page, N number of flag page buffers configured to input and output flag data to and from the nonvolatile memory cells of the flag area, and a data input/output control unit configured to select R number of flag page buffers so that the flag data is inputted and outputted from the R selected flag page buffers and no flag data is inputted and outputted through unselected N−R number of flag page buffers, wherein no one flag page buffer of the R selected flag page buffers is immediately adjacent to another one of the R selected flag page buffers. | 2013-06-13 |
20130151759 | STORAGE DEVICE AND OPERATING METHOD ELIMINATING DUPLICATE DATA STORAGE - A storage device includes storage media and a controller. The controller includes a de-duplication table that manages hash information for data stored in the storage media, and compares hash information for received write-requested data with hash information managed by the de-duplication table to determine whether the write-requested data is duplicate data. | 2013-06-13 |
20130151760 | NONVOLATILE MEMORY DEVICE AND OPERATING METHOD THEREOF - Disclosed is a memory system which includes a nonvolatile memory device configured to store data information; and a memory controller configured to control the nonvolatile memory device. The memory controller provides the nonvolatile memory device with a program command sequence including program speed information according to an urgency level of an internally requested program operation. | 2013-06-13 |
20130151761 | DATA STORAGE DEVICE STORING PARTITIONED FILE BETWEEN DIFFERENT STORAGE MEDIUMS AND DATA MANAGEMENT METHOD - A data management method for a data storage device includes receiving a write request for a file; partitioning the file into first and second portions; encrypting the first portion; and storing the encrypted first portion in a first storage medium and the second portion in a second storage medium. | 2013-06-13 |
20130151762 | STORAGE DEVICE - The present invention aims to improve the performance of accessing flash memory used as a storage medium in a storage device. In the storage device in accordance with the present invention, a storage controller, before accessing the flash memory, queries a flash controller as to whether the flash memory is accessible. | 2013-06-13 |
20130151763 | STORAGE SYSTEM HAVING A PLURALITY OF FLASH PACKAGES - A storage system having a plurality of flash packages. | 2013-06-13 |
20130151764 | SYSTEMS AND METHODS FOR STORING DATA IN A MULTI-LEVEL CELL SOLID STATE STORAGE DEVICE - This disclosure is related to systems and methods for storing data in multi-level cell solid state storage devices, such as Flash memory devices. In one example, a multi-level cell memory array has programmable pages, a first page having a first programming time, and a second page having a second programming time that is different than the first programming time. In one embodiment, the first programming time is faster than the second programming time. Further, a controller coupled to the multi-level cell memory array may be configured to select the first page to store the data when a priority level of a write operation indicates a first priority level and select the second page to store the data when the priority level indicates a second priority level. | 2013-06-13 |
20130151765 | CLUSTER BASED NON-VOLATILE MEMORY TRANSLATION LAYER - Methods of operating non-volatile memory devices including dividing the non-volatile memory device into a plurality of sequentially addressed clusters, wherein each cluster contains a plurality of sequentially addressed logical blocks, and where at least one cluster of the plurality of sequentially addressed clusters addresses a different number of sequentially addressed logical blocks than another one of the clusters of the plurality of sequentially addressed clusters. | 2013-06-13 |
20130151766 | CONVERGENCE OF MEMORY AND STORAGE INPUT/OUTPUT IN DIGITAL SYSTEMS - Embodiments of the present invention relate to CPU and/or digital memory architecture. Specifically, embodiments of the present invention relate to various approaches for adapting current designs to provide connection of a storage unit to a CPU via a memory unit through the use of controllers. This allows for system data to flow from the CPU to the memory unit to the storage unit. Such a configuration is enabled by the use of an extended memory access scheme that comprises a plurality of row address strobes (RAS) and a column address strobe (CAS) (and, optionally, one or more data bit line DQs). | 2013-06-13 |
20130151767 | MEMORY CONTROLLER-INDEPENDENT MEMORY MIRRORING - A method of memory controller-independent memory mirroring includes providing a mirroring association between a first memory segment and a second memory segment that is independent of a memory controller. A memory buffer receives data from the memory controller that is directed to a first memory location in the first memory segment. The memory buffer writes the data, independent of the memory controller, to both the first memory segment and the second memory segment according to the mirroring association. The memory buffer receives a plurality of read commands from the memory controller that are directed to the first memory location in the first memory segment and, in response, reads data from an alternating one of the first memory segment and the second memory segment and stores both first data from the first memory segment and second data from the second memory segment. | 2013-06-13 |
20130151768 | SYSTEM AND METHOD FOR MANAGING SELF-REFRESH IN A MULTI-RANK MEMORY - Multi-rank memories and methods for self-refreshing multi-rank memories are disclosed. One such multi-rank memory includes a plurality of ranks of memory and self-refresh logic coupled to the plurality of ranks of memory. The self-refresh logic is configured to refresh a first rank of memory that is in a self-refresh state in response to receiving a non-self-refresh refresh command for, and refreshing, a second rank of memory that is not in a self-refresh state. | 2013-06-13 |
20130151769 | Hard Disk Drive Reliability In Server Environment Using Forced Hot Swapping - An approach is provided to inactivate a selected drive included in a RAID configuration. While inactive, write requests are handled by identifying data blocks to be written to each of the RAID drives. The identification also identifies a data block address corresponding to the data blocks. Data blocks destined for non-selected drives are written to the non-selected drives. The data blocks destined for the selected drive are written to a memory area outside of the RAID configuration. The data block addresses corresponding to the data blocks are also written to the memory area. After a period of time, the selected drive is reactivated. During reactivation, the data block addresses and their corresponding data blocks that were written to the memory area are read from the memory area and each of the data blocks is written to the selected drive at the corresponding data block address. | 2013-06-13 |
20130151770 | REMOTE COPY SYSTEM AND REMOTE COPY CONTROL METHOD - A first storage system comprises a first RAID group comprising multiple first storage devices, which constitute the basis of a first logical volume. A second storage system comprises a second RAID group comprising multiple second storage devices, which constitute the basis of a second logical volume. The RAID configuration of the first RAID group and the RAID configuration of the second RAID group are the same, and the type of a compression/decompression function of the respective first storage devices and the type of a compression/decompression function of the respective second storage devices are the same. Compressed data is read from a first storage device without being decompressed with respect to the data inside a first logical volume, and the read compressed data is written to a second storage device, which is in the same location in RAID in the second RAID group as the location in RAID of this first storage device. | 2013-06-13 |
20130151771 | DISK ARRAY DEVICE, CONTROL DEVICE AND DATA WRITE METHOD - A disk array device includes: a plurality of disk devices, each including a strip that stores divided data or a parity; a control device to divide a stripe for each of the plurality of disk devices into the divided data having the size of the strip and write the divided data; and a memory to store new data that corresponds to the divided data stored in the strip, wherein the control device detects whether or not the new data is discrete and performs a first write operation or a second write operation when the new data is discrete. | 2013-06-13 |
20130151772 | STORAGE CONTROL APPARATUS AND STORAGE METHOD THEREFOR - A storage control apparatus which connects a portable storage medium and stores content data acquired from the portable storage medium in a storage unit communicates with another apparatus to set the correspondence between a storage location within the portable storage medium and that within the storage unit. The storage control apparatus acquires, among content data within the portable storage medium, content data to be stored in the storage unit, and stores the content data acquired from the portable storage medium by the acquisition unit in a storage location within the storage unit corresponding to that within the portable storage medium, in which the acquired content data is stored, based on the correspondence between the storage location within the portable storage medium and that within the storage unit. | 2013-06-13 |
20130151773 | DETERMINING AVAILABILITY OF DATA ELEMENTS IN A STORAGE SYSTEM - Data elements are stored at a plurality of nodes. Each data element is a member data element of one of a plurality of layouts. Each layout indicates a unique subset of nodes. All member data elements of the layout are stored on each node in the unique subset of nodes. A stored dependency list includes every layout that has member data elements. The dependency list is used to determine availability of data elements based on ability to access data from nodes from the plurality of nodes. | 2013-06-13 |
20130151774 | Controlling a Storage System - A method, computer-readable storage medium and computer system for controlling a storage system, the storage system comprising a plurality of logical storage volumes, the method comprising: monitoring, for each of the logical storage volumes, one or more load parameters; receiving, for each of the logical storage volumes, one or more load parameter threshold values; comparing, for each of the logical storage volumes, the first load parameter values of said logical storage volume with the corresponding one or more load parameter threshold values; in case at least one of the first load parameter values of one of the logical storage volumes violates the load parameter threshold value it is compared with, automatically executing a corrective action. | 2013-06-13 |
20130151775 | Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory including a buffer area, a first external storage, a second external storage, and a driver. The driver, which controls the first and second external storages, comprises a cache reservation module configured to reserve a cache area in the memory. The cache area is logically between the buffer area and the first external storage and between the buffer area and the second external storage. The driver is configured to use the cache area, reserved in the memory by the cache reservation module, as a primary cache for the second external storage and as a cache for the first external storage, and to use part or all of the first external storage as a secondary cache for the second external storage. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. | 2013-06-13 |
20130151776 | RAPID MEMORY BUFFER WRITE STORAGE SYSTEM AND METHOD - Efficient and convenient storage systems and methods are presented. In one embodiment a storage system includes a host for processing information, a memory controller and a memory. The memory controller controls communication of the information between the host and the memory, wherein the memory controller routes data rapidly to a buffer of the memory without buffering in the memory controller. The memory stores the information. The memory includes a buffer for temporarily storing the data while corresponding address information is determined. | 2013-06-13 |
20130151777 | Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Hit Rate - A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N-sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate. | 2013-06-13 |
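The hit-rate-driven policy selection described in this abstract resembles set dueling: two sampled subsets each run a fixed allocation policy, and the remaining "follower" sets periodically adopt whichever policy is currently winning. The sketch below is an illustrative simplification; the policy labels, counters, and workloads are assumptions, not the patented mechanism.

```python
# Minimal sketch of hit-rate-based policy selection: monitor hit rates
# of two subsets with different allocation policies, then have the third
# (follower) subset adopt the better-performing policy.
class PolicySelector:
    def __init__(self):
        self.counts = {"A": [0, 0], "B": [0, 0]}  # policy -> [hits, accesses]

    def record(self, subset, hit):
        self.counts[subset][1] += 1
        if hit:
            self.counts[subset][0] += 1

    def hit_rate(self, subset):
        hits, accesses = self.counts[subset]
        return hits / accesses if accesses else 0.0

    def follower_policy(self):
        # Periodic comparison: followers use whichever policy hits more often.
        return "A" if self.hit_rate("A") >= self.hit_rate("B") else "B"

sel = PolicySelector()
for hit in [True, True, False, True]:    # subset under policy A: 3/4 hits
    sel.record("A", hit)
for hit in [True, False, False, False]:  # subset under policy B: 1/4 hits
    sel.record("B", hit)
print(sel.follower_policy())  # A
```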
20130151778 | Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Bandwidth - A mechanism is provided for dynamic cache allocation using bandwidth. A bandwidth between a higher level cache and a lower level cache is monitored. Responsive to bandwidth usage between the higher level cache and the lower level cache being below a predetermined low bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a first allocation policy. Responsive to bandwidth usage between the higher level cache and the lower level cache being above a predetermined high bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a second allocation policy. | 2013-06-13 |
20130151779 | Weighted History Allocation Predictor Algorithm in a Hybrid Cache - A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted. | 2013-06-13 |
20130151780 | Weighted History Allocation Predictor Algorithm in a Hybrid Cache - A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted. | 2013-06-13 |
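The weighted-history predictor in the two abstracts above can be sketched as a counter table: each member's counter is seeded by the operation type that allocated it, incremented on every access, and the member with the lowest count is the eviction victim. The initial weights and operation names below are assumptions for illustration only.

```python
# Hedged sketch of a weighted-history allocation predictor: reference
# counters are initialized per operation type, incremented on access,
# and the lowest-count member is evicted on a conflicting allocation.
INITIAL_WEIGHT = {"prefetch": 0, "store": 1, "demand_load": 2}  # assumed weights

class WeightedHistoryCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counters = {}  # member -> reference count

    def allocate(self, member, op_type):
        if member not in self.counters and len(self.counters) >= self.capacity:
            victim = min(self.counters, key=self.counters.get)
            del self.counters[victim]  # evict lowest reference count
        self.counters[member] = INITIAL_WEIGHT[op_type]

    def access(self, member):
        self.counters[member] += 1

cache = WeightedHistoryCache(capacity=2)
cache.allocate("x", "demand_load")  # counter starts at 2
cache.allocate("y", "prefetch")     # counter starts at 0
cache.access("x")                   # counter for "x" becomes 3
cache.allocate("z", "store")        # "y" has the lowest count, so it is evicted
print(sorted(cache.counters))       # ['x', 'z']
```

Seeding the counter by operation type is what makes the history "weighted": a never-touched prefetch is victimized before a once-allocated demand load.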
20130151781 | Cache Implementing Multiple Replacement Policies - In an embodiment, a cache stores tags for cache blocks stored in the cache. Each tag may include an indication identifying which of two or more replacement policies supported by the cache is in use for the corresponding cache block, and a replacement record indicating the status of the corresponding cache block in the replacement policy. Requests may include a replacement attribute that identifies the desired replacement policy for the cache block accessed by the request. If the request is a miss in the cache, a cache block storage location may be allocated to store the corresponding cache block. The tag associated with the cache block storage location may be updated to include the indication of the desired replacement policy, and the cache may manage the block in accordance with the policy. For example, in an embodiment, the cache may support both an LRR and an LRU policy. | 2013-06-13 |
20130151782 | Providing Common Caching Agent For Core And Integrated Input/Output (IO) Module - In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed. | 2013-06-13 |
20130151783 | INTERFACE AND METHOD FOR INTER-THREAD COMMUNICATION - The interface for inter-thread communication between a plurality of threads including a number of producer threads for producing data objects and a number of consumer threads for consuming the produced data objects includes a specifier and a provider. The specifier is configured to specify a certain relationship between a certain producer thread of the number of producer threads which is adapted to produce a certain data object and a consumer thread of the number of consumer threads which is adapted to consume the produced certain data object. Further, the provider is configured to provide direct cache line injection of a cache line of the produced certain data object to a cache allocated to the certain consumer thread related to the certain producer thread by the specified certain relationship. | 2013-06-13 |
20130151784 | DYNAMIC PRIORITIZATION OF CACHE ACCESS - Some embodiments of the inventive subject matter are directed to determining that a memory access request results in a cache miss and determining an amount of cache resources used to service cache misses within a past period in response to determining that the memory access request results in the cache miss. Some embodiments are further directed to determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed a threshold. In some embodiments, the threshold corresponds to reservation of a given amount of cache resources for potential cache hits. Some embodiments are further directed to rejecting the memory access request in response to the determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed the threshold. | 2013-06-13 |
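The throttling idea in this abstract — reject a missing request when servicing it would push miss-resource usage in a past period over a threshold that reserves capacity for potential hits — can be sketched with a sliding window. The window length, threshold, and class name are illustrative assumptions.

```python
# Illustrative sketch of miss throttling: track resources spent servicing
# misses in a sliding window and reject a miss when accepting it would
# exceed a threshold, keeping resources free for potential hits.
from collections import deque

class MissThrottle:
    def __init__(self, window, threshold):
        self.threshold = threshold           # max miss resources in the window
        self.history = deque(maxlen=window)  # 1 per serviced miss, 0 per hit

    def on_request(self, is_miss):
        """Return True if the request is serviced, False if rejected."""
        if not is_miss:
            self.history.append(0)
            return True
        if sum(self.history) + 1 > self.threshold:
            return False                     # reject: window budget exhausted
        self.history.append(1)
        return True

t = MissThrottle(window=4, threshold=2)
results = [t.on_request(m) for m in [True, True, True, False, True]]
print(results)  # [True, True, False, True, False]
```

The third and fifth misses are rejected because two serviced misses already sit inside the four-request window.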
20130151785 | DIRECTORY REPLACEMENT METHOD AND DEVICE - The present invention provides a directory replacement method and device. An HA receives a data access request including a first address from a first CA. If a designated storage where a directory is located is entirely occupied by the directory, and a first directory entry corresponding to the first address is not in the directory, the HA selects a second directory entry from the directory, deletes it, and adds the first directory entry into the directory. Before the HA replaces the directory entry in the directory, no matter what share status (for example, I status, S status or A status) the cache line corresponding to the address in the directory entry to be replaced has, the HA does not need to request the corresponding CA to perform an invalidating operation on the data, but directly replaces the directory entry in the directory, thereby improving replacement efficiency. | 2013-06-13 |
20130151786 | LOGICAL BUFFER POOL EXTENSION - A method for logical buffer pool extension identifies a page in a memory for eviction, and analyzes characteristics of the page to form a differentiated page. The characteristics of the page include descriptors that include a workload type, a page weight, a page type, frequency of access and timing of most recent access. The method also identifies a target location for the differentiated page from a set of locations including a fastcache storage and a hard disk storage to form an identified target location. The method further selects an eviction operation from a set of eviction operations using the characteristics of the differentiated page and the identified target location. The differentiated page is written to the identified target location using the selected eviction operation, where the differentiated page is written only to the fastcache storage. | 2013-06-13 |
20130151787 | Mechanism for Using a GPU Controller for Preloading Caches - Provided is a method and system for preloading a cache on a graphical processing unit. The method includes receiving a command message, the command message including data related to a portion of memory. The method also includes interpreting the command message, identifying policy information of the cache, identifying a location and size of the portion of memory, and creating a fetch message including data related to contents of the portion, wherein the fetch message causes the cache to preload data of the portion of memory. | 2013-06-13 |
20130151788 | DYNAMIC PRIORITIZATION OF CACHE ACCESS - Some embodiments of the inventive subject matter are directed to a cache comprising a tracking unit and cache state machines. In some embodiments, the tracking unit is configured to track an amount of cache resources used to service cache misses within a past period. In some embodiments, each of the cache state machines is configured to, determine whether a memory access request results in a cache miss or cache hit, and in response to a cache miss for a memory access request, query the tracking unit for the amount of cache resources used to service cache misses within the past period. In some embodiments, the each of the cache state machines is configured to service the memory access request based, at least in part, on the amount of cache resources used to service the cache misses within the past period according to the tracking unit. | 2013-06-13 |
20130151789 | MANAGING A REGION CACHE - A method, system or computer usable program product for managing a cache region including receiving a new region to be stored within the cache, the cache including multiple regions defined by one or more ranges having a starting index and an ending index, and storing the new region in the cache in accordance with a cache invariant, the cache invariant ensuring that regions in the cache are not overlapping and that the regions are stored in a specified order. | 2013-06-13 |
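The cache invariant in this abstract — regions defined by index ranges, kept non-overlapping and in a specified order — can be sketched as sorted-interval insertion. Merging overlapping regions, as below, is one way to preserve the invariant; the abstract does not mandate merging, so treat this as an assumption.

```python
# Small sketch of a region-cache invariant: regions are (start, end)
# index ranges kept sorted and non-overlapping. A new region is folded
# together with any regions it overlaps before being inserted in order.
import bisect

def store_region(regions, new):
    """regions: sorted list of non-overlapping (start, end) tuples."""
    start, end = new
    kept = []
    for s, e in regions:
        if e < start or s > end:            # disjoint: keep unchanged
            kept.append((s, e))
        else:                               # overlapping: absorb into new region
            start, end = min(start, s), max(end, e)
    bisect.insort(kept, (start, end))       # insert preserving sorted order
    return kept

cache = [(0, 4), (10, 14), (20, 24)]
print(store_region(cache, (12, 21)))  # [(0, 4), (10, 24)]
```

The new region (12, 21) bridges two existing regions, so all three collapse into (10, 24) and the invariant holds.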
20130151790 | Efficient Storage of Meta-Bits Within a System Memory - Mechanisms are provided for efficient storage of meta-bits within a system memory. The mechanisms combine an L/G bit and an SUE bit to form meta-bits. The mechanisms then determine the local/global state of a cache line on the first cycle of data. The mechanisms forward the data to the requesting cache, and the requesting cache may reissue the request globally based on the local/global state of the cache line. The mechanisms then determine the special uncorrectable error state of the cache line on the second or subsequent cycle of data. The mechanisms perform error processing regardless of whether the request was reissued globally. | 2013-06-13 |
20130151791 | TRANSACTIONAL MEMORY CONFLICT MANAGEMENT - A computing device initiates a transaction, corresponding to an application, which includes operations for accessing data stored in a shared memory and buffering alterations to the data as speculative alterations to the shared memory. The computing device detects a transaction abort scenario corresponding to the transaction and notifies the application regarding the transaction abort scenario. The computing device determines whether to abort the transaction based on instructions received from the application regarding the transaction abort scenario. When the transaction is to be aborted, the computing device restores the transaction to an operation prior to accessing the data stored in the shared memory and buffering alterations to the data as speculative alterations to the shared memory. When the transaction is not to be aborted, the computing device enables the transaction to continue. | 2013-06-13 |
20130151792 | PROCESSOR COMMUNICATIONS - A processor module including a processor configured to share data with at least one further processor module processor; and a memory mapped peripheral configured to communicate with at least one further processor memory mapped peripheral to control the sharing of the data, wherein the memory mapped peripheral includes a sender part including a data request generator configured to output a data request indicator to the further processor module dependent on a data request register write signal from the processor; and an acknowledgement waiting signal generator configured to output an acknowledgement waiting signal to the processor dependent on a data acknowledgement signal from the further processor module, wherein the data request generator data request indicator is further dependent on the data acknowledgement signal and the acknowledgement waiting signal generator acknowledgement waiting signal is further dependent on the acknowledgement waiting register write signal. | 2013-06-13 |
20130151793 | Multi-Context Configurable Memory Controller - The exemplary embodiments provide a multi-context configurable memory controller comprising: an input-output data port array comprising a plurality of input queues and a plurality of output queues; at least one configuration and control register to store, for each context of a plurality of contexts, a plurality of configuration bits; a configurable circuit element configurable for a plurality of data operations, each data operation corresponding to a context of a plurality of contexts, the plurality of data operations comprising memory address generation, memory write operations, and memory read operations, the configurable circuit element comprising a plurality of configurable address generators; and an element controller, the element controller comprising a port arbitration circuit to arbitrate among a plurality of contexts having a ready-to-run status, and the element controller to allow concurrent execution of multiple data operations for multiple contexts having the ready-to-run status. | 2013-06-13 |
20130151794 | MEMORY CONTROLLER AND MEMORY CONTROL METHOD - Provided is a memory controller that manages memory access requests between the processor and the memory. In response to receiving two or more memory access requests for the same area of memory, the memory controller is configured to stall and sequentially process the memory access requests. | 2013-06-13 |
20130151795 | APPARATUS AND METHOD FOR CONTROLLING MEMORY - Disclosed herein are an apparatus and method for controlling memory. The apparatus includes a memory access request buffer unit, a memory access request control unit, and a bank control unit. The memory access request buffer unit determines and stores memory access request order so that the plurality of memory access requests is processed in the order of input except that memory access requests attempting to access the same bank and the same row are successively processed. The memory access request control unit reads the memory access requests from the memory access request buffer unit in the determined order, distributes the memory access requests to banks, and transfers the memory access requests to memory. The bank control unit stores a preset number of memory access requests in each of buffer units for respective banks, and controls the operating state of each of the banks. | 2013-06-13 |
20130151796 | SYSTEM AND METHOD FOR CALIBRATION OF SERIAL LINKS USING A SERIAL-TO-PARALLEL LOOPBACK - A system and method for calibration of serial links using serial-to-parallel loopback. Embodiments of the present invention are operable for calibrating serial links using parallel links thereby reducing the number of links that need calibration. The method includes sending serialized data over a serial interface and receiving parallel data via a parallel interface. The serialized data is looped back via the parallel interface. The method further includes comparing the parallel data and the serialized data for a match thereof and calibrating the serial interface by adjusting the sending of the serialized data until the comparing detects the match. The adjusting of the sending is operable to calibrate the sending of the serialized data over the serial interface. | 2013-06-13 |
20130151797 | METHOD AND APPARATUS FOR CENTRALIZED TIMESTAMP PROCESSING - Method and apparatus for centralized timestamp processing is described herein. A graphics processing system includes multiple graphics engines and a timestamp module. For each task, a graphics driver assigns the task to a graphics engine and writes a task command packet to a memory buffer associated with the graphics engine. The graphics driver also writes a timestamp command packet for each task to a timestamp module memory buffer. A command processor associated with the graphics engine signals the timestamp module memory buffer upon completion of the task. If the read pointer is at the appropriate position in the timestamp module memory buffer, the timestamp module/timestamp module memory buffer executes the timestamp command packet and writes the timestamp to a timestamp memory. The timestamp memory is accessible by the graphics driver. | 2013-06-13 |
20130151798 | Expedited Module Unloading For Kernel Modules That Execute Read-Copy Update Callback Processing Code - A technique for expediting the unloading of an operating system kernel module that executes read-copy update (RCU) callback processing code in a computing system having one or more processors. According to embodiments of the disclosed technique, an RCU callback is enqueued so that it can be processed by the kernel module's callback processing code following completion of a grace period in which each of the one or more processors has passed through a quiescent state. An expediting operation is performed to expedite processing of the RCU callback. The RCU callback is then processed and the kernel module is unloaded. | 2013-06-13 |
20130151799 | Auto-Ordering of Strongly Ordered, Device, and Exclusive Transactions Across Multiple Memory Regions - Efficient techniques are described for controlling ordered accesses in a weakly ordered storage system. A stream of memory requests is split into two or more streams of memory requests and a memory access counter is incremented for each memory request. A memory request requiring ordered memory accesses is identified in one of the two or more streams of memory requests. The memory request requiring ordered memory accesses is stalled upon determining a previous memory request from a different stream of memory requests is pending. The memory access counter is decremented for each memory request guaranteed to complete. A count value in the memory access counter that is different from an initialized state of the memory access counter indicates there are pending memory requests. The memory request requiring ordered memory accesses is processed upon determining there are no further pending memory requests. | 2013-06-13 |
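The counter rule in this abstract — increment per issued request, decrement per request guaranteed to complete, and allow an ordered access only when the counter is back at its initialized state — is simple enough to sketch directly. The class and method names are illustrative assumptions.

```python
# Hedged sketch of counter-based ordering: a strongly ordered request
# stalls while the memory access counter differs from its initialized
# (zero) state, i.e. while any request from another stream is pending.
class OrderingCounter:
    def __init__(self):
        self.pending = 0  # initialized state: no pending requests

    def issue(self):
        self.pending += 1      # increment per issued memory request

    def complete(self):
        self.pending -= 1      # decrement per request guaranteed to complete

    def ordered_request_may_proceed(self):
        return self.pending == 0

c = OrderingCounter()
c.issue(); c.issue()
stalled = not c.ordered_request_may_proceed()  # True: two requests pending
c.complete(); c.complete()
print(stalled, c.ordered_request_may_proceed())  # True True
```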
20130151800 | NUCLEAR MEDICINE IMAGING APPARATUS AND CONTROL METHOD - According to one embodiment, a nuclear medicine imaging apparatus includes a counting information collection unit, a determination unit, and a discarding unit. The counting information collection unit collects counting information including detection time of a gamma ray from a counting result output by a detector for counting light derived from a gamma ray, and stores the counting information in a buffer. The determination unit determines whether the volume of the counting information stored in the buffer exceeds a threshold. The discarding unit, in a case that the determination unit determines that the volume exceeds the threshold, intermittently discards, in chronological order, counting information whose detection time is within longer duration than predetermined duration used for generating two pieces of counting information obtained by counting pair annihilation gamma rays nearly coincidentally as coincidence counting information among the counting information collected from the detector. | 2013-06-13 |
20130151801 | ARCHIVE SYSTEMS AND METHODS - Archive systems and methods are presented. In one embodiment, an archival information storage configuration method comprises: performing an information accessing process including determining if the information is associated with an archive process; and performing an archive storage boundary determination process including establishing archive storage boundaries based upon characteristics indicating potential sharing of the information and potential impacts on performance of archival storage operations. In one exemplary implementation, the archive storage boundary determination process comprises: performing an information mining process including identifying an indication the information is potentially shared; and performing an archival boundary selection process including selecting an archive storage boundary based at least in part upon results of the information mining process. | 2013-06-13 |
20130151802 | FORMAT-PRESERVING DEDUPLICATION OF DATA - Data blocks are copied from a source (e.g., a source virtual disk) to a target (e.g., a target virtual disk). The source virtual disk format is preserved on the target virtual disk. Offsets for extents stored in the target virtual disk are converted to offsets for corresponding extents in the source virtual disk. A map of the extents for the source virtual disk can therefore be used to create, for deduplication, segments of data that are aligned to boundaries of the extents in the target virtual disk. | 2013-06-13 |
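The offset-conversion step in this abstract — translating a target-disk offset back to its source-disk offset so deduplication segments can align to extent boundaries — can be sketched with a flat extent map. The triple-based map format below is an assumption for illustration, not the patented data structure.

```python
# Illustrative sketch of offset conversion for format-preserving
# deduplication: each map entry is (target_offset, source_offset, length),
# and a target offset is converted to the corresponding source offset.
def target_to_source(extent_map, target_off):
    for t_off, s_off, length in extent_map:
        if t_off <= target_off < t_off + length:
            return s_off + (target_off - t_off)
    raise ValueError("offset not covered by any extent")

# Target extents 0..99 and 100..149 map to source extents at 4096 and 0.
extent_map = [(0, 4096, 100), (100, 0, 50)]
print(target_to_source(extent_map, 120))  # 20
```

With the conversion in hand, dedup segment boundaries chosen from the source extent map land exactly on extent boundaries in the target disk.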
20130151803 | Frequency and migration based re-parsing - Example apparatus and methods associated with frequency and migration based re-parsing are provided. One example data de-duplication apparatus includes a migration logic and a parsing logic. The migration logic may be configured to perform a data transfer according to an access frequency to the data. The parsing logic may be configured to re-parse the data based on the access frequency to the data. In different examples, parsing the data may be performed in response to migrating the data. In one example, parsing the data may be performed during or after the migration. Additional examples illustrate parsing the data to balance performance against reduction in light of access frequency to the data block. | 2013-06-13 |
20130151804 | Controlling the Placement of Data in a Storage System - A method, computer readable storage medium and computer system for controlling the allocation of data to one of a plurality of storage units of a storage system, the method comprising: accessing a source storage unit comprising the data; gathering file system level (FS-level) metadata from the source storage unit; analyzing the gathered FS-level metadata for determining if the data should be moved to one of the other storage units, said other storage unit acting as a destination storage unit; and in case the data should be moved, displaying an indication of the destination storage unit and/or automatically moving the data to the determined destination storage unit. | 2013-06-13 |
20130151805 | REORGANIZATION OF SOFTWARE IMAGES BASED ON PREDICTED USE THEREOF - A solution for managing a software image being stored in a plurality of physical blocks of a storage system comprises monitoring each access to the physical blocks, calculating a predicted sequence of access to the physical blocks according to the monitored accesses, and reorganizing the physical blocks according to the predicted sequence. The monitoring may be performed as the physical blocks are accessed during the booting of virtual images on the software image. | 2013-06-13 |
20130151806 | TIERED STORAGE POOL MANAGEMENT AND CONTROL FOR LOOSELY COUPLED MULTIPLE STORAGE ENVIRONMENT - A system comprises a first storage system including a first storage controller, which receives input/output commands from host computers and provides first storage volumes to the host computers; and a second storage system including a second storage controller which receives input/output commands from host computers and provides second storage volumes to the host computers. A first data storing region of one of the first storage volumes is allocated from a first pool by the first storage controller. A second data storing region of another one of the first storage volumes is allocated from a second pool by the first storage controller. A third data storing region of one of the second storage volumes is allocated from the first pool by the second storage controller. A fourth data storing region of another one of the second storage volumes is allocated from the second pool by the second storage controller. | 2013-06-13 |
20130151807 | Storage Router and Method for Providing Virtual Local Storage - A storage router and method for providing virtual local storage on remote storage devices to devices are provided. Devices are connected to a first transport medium, and a plurality of storage devices are connected to a second transport medium. In one embodiment, the storage router maintains a map to allocate storage space on the remote storage devices to devices connected to the first transport medium by associating representations of the devices connected to the first transport medium with representations of storage space on the remote storage devices, wherein each representation of a device connected to the first transport medium is associated with one or more representations of storage space on the remote storage devices. The storage router can control access from the devices connected to the first transport medium to the storage space on the remote storage devices in accordance with the access controls. | 2013-06-13 |
20130151808 | ALLOCATION DEVICE, ALLOCATION METHOD AND STORAGE DEVICE - An allocation device includes a memory which stores a program, and a processor which executes, based on the program, a procedure including determining an allocation of partial memory spaces to physical memory spaces included in each of N physical memory devices when the number of physical memory devices is changed, the physical memory space allocation being determined based on one or more sets of N partial memory spaces, each partial memory space in a set of N partial memory spaces being allocated to a respective one of the N physical memory devices. | 2013-06-13 |
20130151809 | ARITHMETIC PROCESSING DEVICE AND METHOD OF CONTROLLING ARITHMETIC PROCESSING DEVICE - A processing unit configured to execute threads and output a memory request including a virtual address; a buffer configured to register some of address translation pairs stored in a memory, each of the address translation pairs including a virtual address and a physical address; a controller configured to issue requests for obtaining the corresponding address translation pairs to the memory for individual threads when an address translation pair corresponding to the virtual address included in the memory request output from the processing unit is not registered in the buffer; table fetch units configured to obtain the corresponding address translation pairs from the memory for individual threads when the requests for obtaining the corresponding address translation pairs are issued; and a registration controller configured to register one of the obtained address translation pairs in the buffer. | 2013-06-13 |
20130151810 | HYBRID HASH TABLES - A hash table system having a first hash table and a second hash table is provided. The first hash table may be in-memory and the second hash table may be on-disk. Inserting an entry to the hash table system comprises inserting the entry into the first hash table, and, when the first hash table reaches a threshold load factor, flushing entries into the second hash table. Flushing the first hash table into the second hash table may comprise sequentially flushing the first hash table segments into corresponding second hash table segments. When looking up a key/value pair corresponding to a selected key in the hash table system, the system checks both the first and second hash tables for values corresponding to the selected key. The first and second hash tables may be divided into hash table segments and collision policies may be implemented within the hash table segments. | 2013-06-13 |
20130151811 | Optimized Deletion And Insertion For High-Performance Resizable RCU-Protected Hash Tables - Concurrent resizing and modification of a first RCU-protected hash table includes allocating a second RCU-protected hash table, populating it by linking each hash bucket of the second hash table to all hash buckets of the first hash table containing elements that hash to the second hash table bucket, and publishing the second hash table. If the modifying comprises insertion, a new element is inserted at the head of a corresponding bucket in the second hash table. If the modifying comprises deletion, then within an RCU read-side critical section: (1) all pointers in hash buckets of the first and second hash tables that reference the element being deleted are removed or redirected, and (2) the element is freed following a grace period that protects reader references to the deleted element. The first table is freed from memory after awaiting a grace period that protects reader references to the first hash table. | 2013-06-13 |
20130151812 | NODE INTERCONNECT ARCHITECTURE TO IMPLEMENT HIGH-PERFORMANCE SUPERCOMPUTER - Node Interconnect architectures to implement a high performance supercomputer are provided. For example, a node interconnect architecture for connecting a multitude of nodes (or processors) of a supercomputer is implemented using an all-to-all electrical and optical connection network which provides two independent communication paths between any two processors of the supercomputer, wherein a communication path includes at most two electrical links and one optical link. | 2013-06-13 |
20130151813 | SWITCH SYSTEM FOR DUAL CENTRAL PROCESSING UNITS - An exemplary switch system includes a first central processing unit (CPU), a second CPU, a first switch unit, a second switch unit, and a microcontroller. The first CPU provides an identification signal to the first switch unit and the second switch unit when the first CPU is associated with a motherboard of an electronic device. Both the first switch unit and the second switch unit selectably and electronically connect to the first CPU or the second CPU according to whether or not both the first switch unit and the second switch unit detect the identification signal. The microcontroller is electronically connected between the first switch unit and the second switch unit, and accordingly communicates with the first CPU or the second CPU via the first switch unit and the second switch unit. | 2013-06-13 |
20130151814 | MULTI-CORE PROCESSOR - A multi-core processor includes a monitored processor core whose process result is to be monitored; a monitoring processor core group including two or more monitoring processors which can perform a process for monitoring the monitored processor core; an evaluating part configured to evaluate a processing load of the monitoring processor core group; and a controlling part configured to make the monitoring processor core group perform the process for monitoring the monitored processor core in a distributed manner if the processing load of the monitoring processor core group evaluated by the evaluating part is low, and make the monitoring processor of the monitoring processor core group perform the process for monitoring the monitored processor core if the processing load of the monitoring processor core group evaluated by the evaluating part is high, the monitoring processor performing a process whose priority is relatively low. | 2013-06-13 |
20130151815 | RECONFIGURABLE PROCESSOR AND MINI-CORE OF RECONFIGURABLE PROCESSOR - A reconfigurable processor includes a plurality of mini-cores and an external network to which the mini-cores are connected. Each of the mini-cores includes a first function unit including a first group of operation elements, a second function unit including a second group of operation elements that is different from the first group of operation elements, and an internal network to which the first function unit and the second function unit are connected. | 2013-06-13 |
20130151816 | DELAY IDENTIFICATION IN DATA PROCESSING SYSTEMS - Methods, systems, and computer program products may provide delay-identification in data processing systems. An apparatus may include a delay-identification unit having a delay counter, a threshold register, a delay register, and a delay detector. The delay detector may be configured to start the delay counter in response to detecting that one group of instructions is delayed, and stop the delay counter in response to detecting that the one group of instructions is no longer delayed. The delay detector may additionally be configured to compare the number of cycles counted by the delay counter with a threshold number of cycles in the threshold register, and store at least one effective address of one of the instructions of the one group of instructions when the number of cycles counted by the delay counter is greater than the threshold number of cycles stored in the threshold register. | 2013-06-13 |
20130151817 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR PARALLEL FUNCTIONAL UNITS IN MULTICORE PROCESSORS - Method, apparatus, and computer program product embodiments of the invention maximize the use of functional processing units in a multicore processor integrated circuit architecture. Example embodiments of the invention determine that instructions to be executed in a functional processor of a local processor core of a multicore processor, are capable of execution in a functional processor of a neighbor processor core of the multicore processor. A compute request is sent to the neighbor processor core to initiate execution of the instructions in the functional processor. A compute response is received from the neighbor processor core, if the functional processor has been able to execute the instructions. | 2013-06-13 |
20130151818 | MICRO ARCHITECTURE FOR INDIRECT ACCESS TO A REGISTER FILE IN A PROCESSOR - A method and system for improving performance and latency of instruction execution within an execution pipeline in a processor. The method includes finding, while decoding an instruction, a pointer register used by the instruction; reading the pointer register; validating a pointer register entry; reading, if the pointer register entry is valid, a register file entry; validating a register file entry; validating, if the register file entry is invalid, a valid register file entry wherein the valid register file entry is in the register file's future file; bypassing, if the valid register file entry is valid, a valid register file value from the register file's future file to the execution pipeline wherein the valid register file value is in the valid register file entry; and executing the instruction using the valid register file value; wherein at least one of the steps is carried out using a computer device. | 2013-06-13 |
20130151819 | RECOVERING FROM EXCEPTIONS AND TIMING ERRORS - A data processing apparatus with a processing pipeline, the pipeline including exception control circuitry and error detection circuitry. An exception storage unit is configured to maintain an age-ordered list of entries corresponding to instructions issued to the processing pipeline for execution. The unit is configured to store, in association with each entry, an exception indicator indicating whether the instruction is an exception instruction and whether it has generated an exception and an error indicator indicating whether the instruction has generated an error. The apparatus is configured to indicate to the exception storage unit that an instruction is resolved when processing of the instruction has reached a stage such that it is known whether the instruction will generate an error and whether the instruction will generate an exception; and the exception control circuitry is configured to sequentially retire oldest resolved entries from the list in the exception storage unit. | 2013-06-13 |
20130151820 | METHOD AND APPARATUS FOR ROTATING AND SHIFTING DATA DURING AN EXECUTION PIPELINE CYCLE OF A PROCESSOR - A method and apparatus are described for processing data during an execution pipeline cycle of a processor. Valid bits of the data are generated according to a designated data size. Each of the valid bits is inserted into at least one of a plurality of bit positions. The valid bits are rotated in a predetermined direction (i.e., left or right rotation) by a designated number of bit positions. Valid bits are removed from a portion of the plurality of bit positions after being rotated. Zeros or most significant bits (MSBs) of the data may be inserted in the bit positions from which the valid bits were removed. The number of bit positions to rotate the valid bits by may be designated by a first bit subset and a second bit subset. The first bit subset may indicate a number of bytes, and the second bit subset may indicate a number of bits. | 2013-06-13 |
20130151821 | METHOD AND INSTRUCTION SET INCLUDING REGISTER SHIFTS AND ROTATES FOR DATA PROCESSING - A method includes identifying at least one first register with M bits and identifying at least one second register with N bits. The process also includes shifting K bits, where K ≤ N, from the second register into the first register. The shifting operation executes a left shift or a right shift operation. For a left shift operation, bits K . . . N−1 from the first register are read, the bits K . . . N−1 are written into bit positions 0 . . . N−K−1 of the first register, the K bits from the second register are read, and the K bits from the second register are written into bit positions N−K . . . N−1 of the first register. The right shift includes reading bits 0 . . . N−K−1 from the first register, writing the bits 0 . . . N−K−1 into bit positions K . . . N−1 of the first register, reading the K bits from the second register, and writing the K bits from the second register into bit positions 0 . . . K−1 of the first register. | 2013-06-13 |
20130151822 | Efficient Enqueuing of Values in SIMD Engines with Permute Unit - Mechanisms, in a data processing system having a processor, for generating enqueued data for performing computations of a conditional branch of code are provided. Mask generation logic of the processor operates to generate a mask representing a subset of iterations of a loop of the code that results in a condition of the conditional branch being satisfied. The mask is used to select data elements from an input data element vector register corresponding to the subset of iterations of the loop of the code that result in the condition of the conditional branch being satisfied. Furthermore, the selected data elements are used to perform computations of the conditional branch of code. Iterations of the loop of the code that do not result in the condition of the conditional branch being satisfied are not used as a basis for performing computations of the conditional branch of code. | 2013-06-13 |
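The two-tier scheme of 20130151810 (HYBRID HASH TABLES) — an in-memory first table that flushes into an on-disk second table once a threshold load factor is reached, with lookups checking both tables — can be sketched as follows. This is a minimal illustration, not the patented design: the class and method names are invented, and the on-disk table is simulated with an in-process dictionary.

```python
class HybridHashTable:
    """Illustrative sketch of a two-tier hash table: a bounded
    in-memory first table that flushes into a larger second table
    (standing in for on-disk storage) at a threshold load factor."""

    def __init__(self, capacity=4, threshold=0.75):
        self.capacity = capacity
        self.threshold = threshold
        self.memory = {}   # first (in-memory) hash table
        self.disk = {}     # second table; simulates on-disk storage

    def insert(self, key, value):
        self.memory[key] = value
        # When the in-memory table reaches the threshold load factor,
        # flush its entries into the second table and clear it.
        if len(self.memory) / self.capacity >= self.threshold:
            self.disk.update(self.memory)
            self.memory.clear()

    def lookup(self, key):
        # A lookup checks both tables, the in-memory one first,
        # since it holds the most recently inserted entries.
        if key in self.memory:
            return self.memory[key]
        return self.disk.get(key)
```

The abstract's segment-by-segment flushing and per-segment collision policies are omitted here; this shows only the insert/flush/lookup flow.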
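The rotate operation of 20130151820, whose rotate amount is specified as a first bit subset (a byte count) plus a second bit subset (a bit count), can be mimicked in software. A minimal sketch under assumed semantics — the function name, signature, and masking are illustrative, not the claimed hardware mechanism:

```python
def rotate_left(value, width, nbytes, nbits):
    """Rotate a width-bit value left. The rotate amount is given as a
    byte count plus a bit count, mirroring the two bit subsets in the
    abstract (first subset = bytes, second subset = bits)."""
    amount = (nbytes * 8 + nbits) % width
    mask = (1 << width) - 1       # keep only the designated data size
    value &= mask
    # Bits shifted out on the left re-enter on the right.
    return ((value << amount) | (value >> (width - amount))) & mask
```

For example, rotating the 8-bit value 0x80 left by one bit wraps the top bit around to produce 0x01.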
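The mask-and-select idea of 20130151822 — generate a mask of loop iterations that satisfy the branch condition, then gather only those elements for the conditional computation — reduces to a compress operation. Below is a scalar Python stand-in for the SIMD permute-unit mechanism; the function name and list-based representation are assumptions for illustration:

```python
def enqueue_taken_iterations(inputs, condition):
    """Build a mask of iterations satisfying the branch condition,
    then select (compress) only the corresponding input elements,
    so the conditional branch computes on no wasted lanes."""
    mask = [condition(x) for x in inputs]           # mask generation
    return [x for x, m in zip(inputs, mask) if m]   # permute/compress
```

Iterations whose mask bit is false contribute nothing to the enqueued data, matching the abstract's statement that non-satisfying iterations are not used in the conditional branch's computations.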