22nd week of 2013 patent application highlights part 58
Patent application number | Title | Published |
20130138830 | METHOD AND NETWORK DEVICE FOR CONTROLLING TRANSMISSION RATE OF COMMUNICATION INTERFACE - A method for controlling a transmission rate of a communication interface includes detecting, for a plurality of times, data traffic that passes through a first communication interface of a first device within a preset period; when the traffic rates at which the data traffic passes through the first communication interface within the preset period are lower than a first threshold, sending a rate reduction request message to a second device that includes a second communication interface, so that the second device configures a rate of the second communication interface as a first transmission rate that is lower than a current transmission rate of the second communication interface and that is supported by both communication interfaces after receiving the rate reduction request message. In this way, power consumption of the communication interface may be reduced. | 2013-05-30 |
20130138831 | METHODS AND APPARATUS TO CHANGE PEER DISCOVERY TRANSMISSION FREQUENCY BASED ON CONGESTION IN PEER-TO-PEER NETWORKS - A method, a computer program product, and an apparatus are provided. The apparatus determines a resource congestion level based on signals received on a plurality of resources of a peer discovery channel. In addition, the apparatus adjusts a duty cycle of a peer discovery transmission based on the determined congestion level. Furthermore, the apparatus transmits peer discovery signals at the adjusted duty cycle. | 2013-05-30 |
20130138832 | METHOD, ROUTER BRIDGE, AND SYSTEM FOR TRILL NETWORK PROTECTION - Embodiments of the present invention disclose a method, a router bridge, and a system for TRILL network protection. An active RB node and a standby RB node share a virtual Nickname and a virtual MAC address, and construct a protection group. The active RB node, through the TRILL protocol, obtains a network topology and generates a forwarding path to perform forwarding of a data packet. When the active RB node is faulty, the standby RB node is raised to be active and the data packet is forwarded through the standby RB node, so that the time for fault recovery is shortened, thereby solving the problem in an existing TRILL network that, when a root RB node is faulty, the long fault recovery time causes a service interruption and affects network performance. | 2013-05-30 |
20130138833 | METHOD, APPARATUS AND SYSTEM TO DYNAMICALLY MANAGE LOGICAL PATH RESOURCES - System, apparatus, and methods for dynamically managing logical path resources are provided. The logical path resources are managed by adding, removing, and establishing logical paths based on specified priority schemes associated with the logical path resources. Information associated with the logical path resources is updated in a logical path resource table. | 2013-05-30 |
20130138834 | SERVER DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND METHOD OF ASSURING DATA ORDER - A stop unit stops transmitting data to a plurality of nodes for every predetermined period. An acquisition unit acquires versions of routing tables, which are updated in accordance with movement of a query, from the plurality of nodes when the transmission of the data is stopped. A comparison unit compares the versions of the routing tables of the plurality of nodes that are acquired. When there is a node in which the routing table of an old version is stored as a result of the comparison, an update unit updates the routing table of the node. | 2013-05-30 |
20130138835 | Masking of deceptive indicia in a communication interaction - Systems, methods, computer-readable storage mediums including computer-readable instructions and/or circuitry for masking deceptive indicia in communications content may implement operations including, but not limited to: receiving one or more signals associated with communications content provided by a first participant in a communications interaction; modifying the communications content according to at least one indicia of deception; and providing communications content modified according to the at least one indicia of deception to a second participant in the communications interaction. | 2013-05-30 |
20130138836 | Remote Shared Server Peripherals Over an Ethernet Network For Resource Virtualization - Provided is a novel approach for connecting servers to peripherals, such as NICs, HBAs, and SAS/SATA controllers. Also provided are methods of arranging peripherals within one or more I/O directors, which are connected to the servers over an Ethernet network. Such arrangement allows sharing the same resource among multiple servers. | 2013-05-30 |
20130138837 | ACCESSING DEVICE - An accessing device communicating with a host device and including a connector, a storage unit and a control unit is disclosed. The connector connects to the host device. The storage unit stores data. The control unit communicates with the storage unit according to a first communication protocol and communicates with the host device via the connector according to a second communication protocol. The control unit determines the kind of the second communication protocol according to selection information. | 2013-05-30 |
20130138838 | APPARATUS AND METHOD FOR CONTROLLING USB SWITCHING CIRCUIT IN PORTABLE TERMINAL - An apparatus and method for automatically switching the operation mode of a switching circuit in a portable terminal are provided. If an external device is connected to a connector interface unit, a signal is detected from the connector interface unit. The type of external device is identified by the detected signal. If the identified external device is a communication device, a communication mode is activated and a signal path is established between an internal module and the external device during the communication mode. A determination is made as to whether an internal event occurs in the portable terminal and an external event occurs in the external device, during the communication mode. The mode of the connector switching circuit is switched to a sleep mode if the internal and external events have not occurred. | 2013-05-30 |
20130138839 | CABLE IDENTIFICATION USING DATA TRAFFIC ACTIVITY INFORMATION - A cable identification system is provided. The cable identification system includes a cable having a plurality of conductors with an electrical connector on at least one end of the cable. The electrical connector is adapted to connect all conductors in the cable to a mating connector. The cable identification system further includes a signal generator connectable between the electrical connector and the mating connector on a network device. The signal generator includes a controller configured to measure and analyze parameters indicative of data traffic in the cable. The cable identification system further includes a cable sleeve adapted to receive the cable therein and coupled to the electrical connector. The cable sleeve has one or more segments which are electrically activatable to change an appearance based on a signal sent by the electrical connector in response to the measurements of the parameters indicative of traffic in the cable. | 2013-05-30 |
20130138840 | Efficient Memory and Resource Management - The present system enables passing a pointer, associated with accessing data in a memory, to an input/output (I/O) device via an input/output memory management unit (IOMMU). The I/O device accesses the data in the memory via the IOMMU without copying the data into a local I/O device memory. The I/O device can perform an operation on the data in the memory based on the pointer, such that the I/O device accesses the memory without expensive copies. | 2013-05-30 |
20130138841 | MESSAGE PASSING USING DIRECT MEMORY ACCESS UNIT IN A DATA PROCESSING SYSTEM - A method includes generating, by a first software process of the data processing system, a source partition descriptor for a DMA job which requires access to a first partition of a memory which is assigned to a second software process of the data processing system and not assigned to the first software process. The source partition descriptor comprises a partition identifier which identifies the first partition of the memory. The DMA unit receives the source partition descriptor and generates a destination partition descriptor for the DMA job. Generating the destination partition descriptor includes translating, by the DMA unit, the partition identifier to a buffer pool identifier which identifies a physical address within the first partition of the memory which is assigned to the second software process; and storing, by the DMA unit, the buffer pool identifier in the destination partition descriptor. | 2013-05-30 |
20130138842 | MULTI-PASS SYSTEM AND METHOD SUPPORTING MULTIPLE STREAMS OF VIDEO - Systems and methods are disclosed for performing multiple processing of data in a network. In one embodiment, the network comprises a first display pipeline that is formed in real time from a plurality of possible display pipelines and that performs at least a first processing step on received data. A buffer stores the processed data and a second display pipeline that is formed in real time from a plurality of possible display pipelines performs at least a second processing step on stored data. | 2013-05-30 |
20130138843 | DELEGATING A POLL OPERATION TO ANOTHER DEVICE - In one embodiment, the present invention includes a method for handling a registration message received from a host processor, where the registration message delegates a poll operation with respect to a device from the host processor to another component. Information from the message may be stored in a poll table, and the component may send a read request to poll the device and report a result of the poll to the host processor based on a state of the device. Other embodiments are described and claimed. | 2013-05-30 |
20130138844 | NON-VOLATILE TYPE MEMORY MODULES FOR MAIN MEMORY - A computing system is disclosed that includes a memory controller in a processor socket normally reserved for a processor. A plurality of non-volatile memory modules may be plugged into memory sockets normally reserved for DRAM memory modules. The non-volatile memory modules may be accessed using a data communication protocol. The memory controller controls read and write accesses to the non-volatile memory modules. The memory sockets are coupled to the processor socket by printed circuit board traces. | 2013-05-30 |
20130138845 | INTEGRATED CIRCUIT NETWORK NODE CONFIGURATION - An electronic device for transmitting and/or receiving data through an electronic networking bus-system having a network topology includes an analog input connector for receiving an analog input signal representative of a network location in a network topology, and a processing unit for handling network data traffic transmitted through an electronic networking bus-system having said network topology, wherein the processing unit is further adapted for determining at least one network configuration parameter for the handling of network data traffic taking into account said input signal or a digitized version thereof. | 2013-05-30 |
20130138846 | ENHANCED DATA STORAGE DEVICE - A data storage device includes one or more data paths through electrical contacts of the data storage device. The data paths are operably connected to allow bits to be transferred into and out of the data storage device. The data storage device stores an indication of a number of the one or more data paths in a configuration register. A method includes, while the data storage device is operatively coupled to a host device, receiving a command of the host device to read the configuration register and providing the indication via at least one of the one or more data paths. Providing the indication enables indicating to the host device the number of the one or more data paths. | 2013-05-30 |
20130138847 | BUS APPARATUS WITH DEFAULT SPECULATIVE TRANSACTIONS AND NON-SPECULATIVE EXTENSION - A bus apparatus is provided, which includes a bus master and a bus slave coupled to the bus master through a bus interface. When the bus master sends a bus transaction to the bus slave, the bus slave executes the bus transaction. The bus transaction is speculative by default. The command of the bus transaction indicates whether the bus transaction is a write transaction or a read transaction. When the bus transaction is a write transaction, the bus slave stores the write data of the bus transaction at the address of the bus transaction. When the bus transaction is a read transaction, the bus slave responds to the bus transaction with the read data stored at the address of the bus transaction. The bus slave informs the bus master that the bus slave will not recognize further bus transactions in a specific period of time by asserting a bus wait signal. | 2013-05-30 |
20130138848 | ASYNCHRONOUS BRIDGE - An asynchronous bridge includes a transmission unit and a receiving unit. The transmission unit receives a write valid signal and input data from a master circuit, outputs write addresses that increment under control of the write valid signal, sequentially stores the input data in memory cells, as directed by write addresses, and then sequentially outputs the stored input data, as directed by read addresses. The receiving unit receives a read ready signal from a slave circuit, determines whether memory cells are valid, based on the write addresses and the read addresses, and then outputs a read valid signal and the input data, based on the determination. | 2013-05-30 |
20130138849 | MULTICORE PROCESSOR SYSTEM, COMPUTER PRODUCT, ASSIGNING METHOD, AND CONTROL METHOD - A multicore processor system includes a core configured to detect a process assignment instruction; acquire a remaining time obtained by subtracting a processing time of interrupt processing assigned to an arbitrary core of a multicore processor from a period that is from a calling time of the interrupt processing to an execution time limit of the interrupt processing, upon detecting the process assignment instruction; judge if the remaining time acquired at the acquiring is greater than or equal to a processing time of processing defined to limit an interrupt in the process; and assign the process to the arbitrary core, upon judging that the remaining time is greater than or equal to the processing time of the processing defined to limit an interrupt in the process. | 2013-05-30 |
20130138850 | INTERRUPT CONTROL METHOD AND MULTICORE PROCESSOR SYSTEM - In an interrupt control method of a multicore processor system including cores, a cache coherency mechanism, and a device, a first core detecting an interrupt signal from the device writes into an area prescribing an interrupt flag in the cache memory of the first core, first data indicating detection of the interrupt signal, and notifies the other cores of an execution request for interrupt processing corresponding to the interrupt signal, consequent to the cache coherency mechanism establishing coherency among at least cache memories of the other cores when the first data is written; and a second core different from the first core, maintaining the first data written as the interrupt flag, and notified of the execution request executes the interrupt processing, and writes over the area prescribing the interrupt flag written in the cache memory of the second core, with second data indicating no-detection of the interrupt signal. | 2013-05-30 |
20130138851 | METHOD AND APPARATUS FOR EXPANDER-BASED DATA DUPLICATION - A data-duplicating expander device attachable to a storage topology and a method. The data-duplicating expander device may include a direct-attached SAS expander configured for direct duplication of data from source disks to destination disks by bypassing transfer to or from a host system. The device may include dedicated expander phys and a processor. The device may be configured to receive instructions from an initiator storage-topology-connected device to configure or start a data transfer. The data-duplicating expander device may be configured to receive source data from source disks by utilizing dedicated expander phys and may be configured to transfer destination data directly and simultaneously to the destination disks by utilizing dedicated expander phys, said destination data being associated with the source data. Directly transferring destination data bypasses transfer of the source data or the destination data to or from a host system. | 2013-05-30 |
20130138852 | ELECTRONIC DEVICE WITH BASEBOARD MANAGEMENT CONTROLLER - An electronic device includes an IOM and a BMC. The IOM includes at least one pair of ports, each port capable of being connected to an external device, and each pair of ports is configured for exchanging data between two external devices connected to two ports of the pair. The BMC is electrically connected to the IOM, to respond to a user input to provide an interface for the user to input commands to control a working mode of each pair of ports of the IOM. | 2013-05-30 |
20130138853 | ADJUSTMENT DEVICE AND METHOD FOR ADJUSTING INTERFACE EXPANDER - An adjustment device for automatically adjusting an interface expander with a signal port and a firmware is provided. The adjustment device includes a MCU connected to the signal port, a serial port, and an analysis unit connected to the MCU via the serial port. The MCU receives signals output by the signal port, converts the received signals to serial digital signals, and transmits the serial digital signals to the analysis unit. The analysis unit stores a digital signal reflecting a signal standard of the interface expander, compares the received serial digital signals with the stored digital signal to determine whether the received serial digital signals accord with the stored digital signal, and produces an adjustment signal to the firmware to adjust a register value of the firmware when determining that the received serial digital signals do not accord with the stored digital signal. | 2013-05-30 |
20130138854 | AUTOMATED NETWORK CONFIGURATION IN A DYNAMIC VIRTUAL ENVIRONMENT - A computer-implemented method, and computer program product, for switching the I/O protocol of a multiprotocol I/O adapter while a computer system including the multiprotocol I/O adapter is running. The method comprises running a multiprotocol I/O adapter using a first I/O protocol while a computer system including the multiprotocol I/O adapter is running, and logically removing the adapter from the system while the computer system continues running. The multiprotocol I/O adapter is then caused to switch to a second I/O protocol while the adapter is logically removed and the computer system continues running. While the computer system still continues to run, the multiprotocol I/O adapter is restarted. After restarting, the multiprotocol I/O adapter runs using the second I/O protocol while the computer system continues running. In a virtualization environment, the method allows a multiprotocol I/O adapter to meet the varying I/O requirements of one or more virtual machines. | 2013-05-30 |
20130138855 | STORAGE DEVICE WITH HOT-SPARE DISK, METHOD FOR STORAGE DEVICE DISK REPLACEMENT, AND DATA TRANSMISSION METHOD FOR DISK REPLACEMENT - A storage device includes a casing, non-hot-pluggable hard disks, and an operating portion. The casing includes a fringe area next to an exterior of the casing, and a central area in an interior of the casing. The non-hot-pluggable hard disks are positioned at the central area of the casing, and the hot-pluggable slots are positioned at the fringe area of the casing. The operating portion is located at the fringe area. Hot-pluggable slots are defined in the operating portion and are adapted for insertion of hot-pluggable hard disks therein. The disclosure also provides a related method for storage device disk replacement, and a related data transmission method for storage device disk replacement. | 2013-05-30 |
20130138856 | METHOD AND APPARATUS FOR NODE HOT-SWAPPING - The present invention discloses a method and an apparatus for node hot-swapping, which simplify the node hot-adding procedure, and improve the operation efficiency of the hot-adding procedure. The present invention includes: obtaining, by a server from a baseboard management controller BMC of a node device to be added, static hardware information of the node device to be added, and storing it into a storage device of the server, where the static hardware information is obtained through an out-band channel by the BMC of the node device to be added; receiving, by the server, a node hot-adding command sent by a user, where the hot-adding command carries identifier information of the node device to be added; obtaining the static hardware information of the node device to be added corresponding to the identifier information from the storage device; and adding, according to the static hardware information, the node device to be added. | 2013-05-30 |
20130138857 | Extensive Battery management system - Disclosed is an extensive battery management system for second-use applications of automobile batteries. The extensive battery management system includes a process unit, a record unit, an input interface and an output interface. The record unit is connected to the process unit so that the former stores data from the latter. The input interface is connected to the process unit. The output interface is connected to the process unit. | 2013-05-30 |
20130138858 | Providing A Sideband Message Interface For System On A Chip (SoC) - According to one embodiment, a system on a chip includes multiple agents each corresponding to an intellectual property (IP) logic and a fabric to couple the agents. The fabric can include a primary message interface and a sideband message interface. The fabric further includes one or more routers to provide out-of-band communications between the agents via this sideband message interface. To effect such communication, the router can perform a subset of ordering rules of a personal computer (PC)-based specification for sideband messages. Other embodiments are described and claimed. | 2013-05-30 |
20130138859 | SYSTEMS AND METHODS FOR INTEGRATING UNIVERSAL SERIAL BUS DEVICES - A mechanism for integrating Universal Serial Bus (USB) devices is disclosed. A method of the invention includes retrieving an identifier of the USB device connected to a computer system, matching the identifier with a device identification stored in a systems library of the computer system. The systems library includes an application identifier corresponding to the device identification and an attribute corresponding to the application identifier. The method also includes executing instructions associated with the attribute corresponding to the application identifier associated with the device identification matched to the identifier. | 2013-05-30 |
20130138860 | USB CLASS PROTOCOL MODULES - A computer system includes USB class protocol-aware modules for USB devices as part of a xHCI host controller. The protocol-aware modules serve as accelerators by implementing critical portions of the device class protocols, which includes fetching higher level protocol data directly from client buffers for transmission and delivering decoded data to client buffers on receipt, and emulating a register-based interface for the benefit of system software on the host computer. | 2013-05-30 |
20130138861 | ADAPTER FOR ELECTRONIC DEVICES - An adapter for connecting an accessory to a portable electronic device includes a first connector compatible with a connector of the portable electronic device and a second connector compatible with a connector of the accessory. The connectors of the accessory and the portable electronic device are otherwise incompatible with each other. The adapter provides two levels of authentication. First, the adapter authenticates itself to the portable electronic device. If this first authentication is successful, then the adapter authenticates the accessory to the adapter. | 2013-05-30 |
20130138862 | Transferring Encoded Data Slices in a Distributed Storage Network - A method begins by a distributed storage (DS) processing module identifying encoded data slices of stored encoded data slices to transfer, wherein the stored encoded data slices are assigned addresses within a local distributed storage network (DSN) address range, wherein a global DSN address space is divided into a plurality of address sectors, and wherein the local DSN address range is a portion of an address sector. The method continues with the DS processing module determining whether another local DSN address range in the address sector exists and when the other local DSN address range in the address sector exists, determining whether to transfer identified encoded data slices into the other local DSN address range. When at least some of the identified encoded data slices are to be transferred, the method continues with the DS processing module initiating a data transfer protocol to transfer the identified encoded data slices. | 2013-05-30 |
20130138863 | MECHANISM FOR ON-DEMAND HYPERVISOR MEMORY MAPPING - A mechanism for on-demand hypervisor memory mapping is disclosed. A method of the invention includes trapping an access instruction to a memory location from a virtual machine (VM) managed by a hypervisor of a host machine, determining whether a number of accesses to the memory location by the VM exceeds a threshold, if the number of accesses to the memory location by the VM does not exceed the threshold, then emulating the access instruction to the memory location on behalf of the VM, and if the number of accesses to the memory location by the VM exceeds the threshold, then allocating guest physical memory for the VM associated with the memory location. | 2013-05-30 |
20130138864 | SYSTEM AND METHOD TO REDUCE TRACE FAULTS IN SOFTWARE MMU VIRTUALIZATION - A system for identifying an exiting process and removing traces and shadow page table pages corresponding to the process' page table pages. An accessed minimum virtual address is maintained corresponding to an address space. In one embodiment, whenever a page table entry corresponding to the accessed minimum virtual address changes from present to not present, the process is determined to be exiting and removal of corresponding trace and shadow page table pages is begun. In a second embodiment, consecutive present to not-present PTE transitions are tracked for guest page tables on a per address space basis. When at least two guest page tables each has at least four consecutive present to not-present PTE transitions, a next present to not-present PTE transition event in the address space leads to the corresponding guest page table trace being dropped and the shadow page table page being removed. | 2013-05-30 |
20130138865 | SYSTEMS, METHODS, AND DEVICES FOR RUNNING MULTIPLE CACHE PROCESSES IN PARALLEL - Certain embodiments of the present disclosure relate to systems, methods, and devices for increasing data access speeds. | 2013-05-30 |
20130138866 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING BASIC INPUT/OUTPUT SYSTEM (BIOS) DATA AND NON-BIOS DATA ON THE SAME NON-VOLATILE MEMORY - Methods, systems, and computer readable media for providing BIOS data and non-BIOS data on the same non-volatile memory. According to one aspect, a system for providing BIOS data and non-BIOS data on the same non-volatile memory includes a controller for controlling access by a host to a non-volatile memory for storing data, the data including BIOS data and non-BIOS data. The controller includes a first bus interface for communicating data to and from the host via a first bus of a first bus protocol, a second bus interface for communicating data to and from the host via a second bus of a second bus protocol, and a third interface for communicating data to and from the non-volatile memory. The first bus comprises a bus that is operable after power-on reset and before BIOS is accessed. | 2013-05-30 |
20130138867 | Storing Multi-Stream Non-Linear Access Patterns in a Flash Based File-System - Accesses to logical pages of memory are monitored. Each logical page corresponds to a logical memory address, and the accesses define an access pattern. The logical memory addresses are logged in ordered pairs of consecutive logical pages in the access pattern. Upon receipt of a request to write data to a given logical page, a given ordered pair of consecutive logical pages containing the logical memory address of the given logical page as a first logical memory address in the ordered pair of logical memory addresses associated with that consecutive pair is obtained. A first physical memory address mapping to the first logical memory address is identified, as is a second logical memory address from that identified consecutive pair. A second physical memory address mapping to the second logical memory address is identified, and the data and the second physical memory address are written to the first physical memory address. | 2013-05-30 |
20130138868 | SYSTEMS AND METHODS FOR IMPROVED COMMUNICATIONS IN A NONVOLATILE MEMORY SYSTEM - Systems and methods are provided for improved communications in a nonvolatile memory (“NVM”) system. The system can toggle between multiple communications channels to provide point-to-point communications between a host device and NVM dies included in the system. The host device can toggle between multiple communications channels that extend to one or more memory controllers of the system, and the memory controllers can toggle between multiple communications channels that extend to the NVM dies. Power islands may be incorporated into the system to electrically isolate system components associated with inactive communications channels. | 2013-05-30 |
20130138869 | NONVOLATILE MEMORY AND MEMORY DEVICE INCLUDING THE SAME - A nonvolatile memory has a first memory block including a plurality of sub memory blocks stacked in a direction perpendicular to a substrate, and a second memory block including a plurality of sub memory blocks stacked in a direction perpendicular to the substrate, the second memory block being parallel to the first memory block. Management data, which is unchanged after it is programmed once, is stored in at least one sub memory block of the first memory block, and main data is stored in sub memory blocks of the second memory block. Meta data may be stored in a sub memory block of the first memory block or in any memory block that does not contain the management data. | 2013-05-30 |
20130138870 | MEMORY SYSTEM, DATA STORAGE DEVICE, MEMORY CARD, AND SSD INCLUDING WEAR LEVEL CONTROL LOGIC - Disclosed is a memory system which includes a nonvolatile memory having a user area and a buffer area; and wear level control logic managing a mode change operation in which memory blocks of the user area are partially changed into the buffer area, based on wear level information of the nonvolatile memory. | 2013-05-30 |
20130138871 | Flash Memory Device and Data Access Method for Same - The invention provides a flash memory device. In one embodiment, the flash memory device is coupled to a host, and comprises a flash memory, a controller, and a random access memory. The flash memory comprises a plurality of blocks for data storage. The random access memory stores a read count table for recording read counts of the blocks. When the read counts of a plurality of original blocks are greater than a threshold according to the read count table, the controller obtains a plurality of spare blocks from the flash memory as mirror blocks respectively corresponding to the original blocks, and copies a portion of a plurality of data pages of the original blocks to the mirror blocks whenever the original blocks are read until all of the data pages of the original blocks have been copied to the mirror blocks. | 2013-05-30 |
20130138872 | APPARATUS WITH A MEMORY CONTROLLER CONFIGURED TO CONTROL ACCESS TO RANDOMLY ACCESSIBLE NON-VOLATILE MEMORY - An apparatus includes a printed circuit board with a plurality of printed circuit board traces, a memory controller mounted on the printed circuit board coupled to one or more of the plurality of printed circuit board traces, a plurality of non-volatile type of memory integrated circuits coupled to the printed circuit board, and a plurality of support integrated circuits coupled between the memory controller and the plurality of non-volatile type of memory integrated circuits. | 2013-05-30 |
20130138873 | MEMORY SYSTEMS - Memory systems having a volatile memory, a non-volatile memory arranged in blocks, and a controller coupled to the volatile memory and to the non-volatile memory. The controller is configured to maintain, in the volatile memory, a list of addresses of erased blocks of the non-volatile memory. The list of addresses of erased blocks of the non-volatile memory is limited to a maximum number of list entries. The controller is further configured to transfer the list of addresses of erased blocks of the non-volatile memory from the volatile memory to the non-volatile memory in response to the list containing its maximum number of list entries and/or in response to an operation that would increase the number of list entries to a number equal to or greater than the maximum number of list entries. | 2013-05-30 |
20130138874 | SYSTEMS WITH PROGRAMMABLE HETEROGENEOUS MEMORY CONTROLLERS FOR MAIN MEMORY - A translating memory module is disclosed including a printed circuit board, at least one memory integrated circuit coupled to the printed circuit board, and at least one support chip coupled to the printed circuit board and coupled between the edge connector and the at least one memory integrated circuit. The at least one support chip includes a bi-directional translator to translate between a first memory communication protocol for the at least one memory integrated circuit and a second memory communication protocol for a memory channel differing from the first memory communication protocol. The second memory communication protocol is used to communicate data, address, and control signals over the memory channel bus to read and write data into the memory of the translating memory module. | 2013-05-30 |
20130138875 | STORING/READING SEVERAL DATA STREAMS INTO/FROM AN ARRAY OF MEMORIES - High speed mass storage devices using NAND flash memories (MDY.X) are suitable for recording and playing back a video data stream under real-time conditions, wherein the data are handled page-wise in the flash memories and are written in parallel to multiple memory buses (MBy). However, for operating with multiple independent data streams a significant buffer size is required. According to the invention, data from different data streams are collected in corresponding different buffers (FIFO | 2013-05-30 |
20130138876 | COMPUTER SYSTEM WITH MEMORY AGING FOR HIGH PERFORMANCE - A memory manager in a computer system that ages memory for high performance. The efficiency of operation of the computer system can be improved by dynamically setting an aging schedule based on a predicted time for trimming pages from a working set. An aging schedule that generates aging information that better discriminates among pages in a working set based on activity level enables selection of pages to trim that are less likely to be accessed following trimming. As a result of being able to identify and trim less active pages, inefficiencies arising from restoring trimmed pages to the working set are avoided. | 2013-05-30 |
20130138877 | METHOD AND APPARATUS FOR DISTRIBUTED DIRECT MEMORY ACCESS FOR SYSTEMS ON CHIP - A distributed direct memory access (DMA) method, apparatus, and system is provided within a system on chip (SOC). DMA controller units are distributed to various functional modules desiring direct memory access. The functional modules interface to a systems bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are utilized to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory. | 2013-05-30 |
20130138878 | Method for Scheduling Memory Refresh Operations Including Power States - A method for performing refresh operations on a rank of memory devices is disclosed. After the completion of a memory operation, a determination is made whether or not a refresh backlog count value is less than a predetermined value and the rank of memory devices is being powered down. If the refresh backlog count value is less than the predetermined value and the rank of memory devices is being powered down, an Idle Count threshold value is set to a maximum value such that a refresh operation will be performed after a maximum delay time. If the refresh backlog count value is not less than the predetermined value or the rank of memory devices is not in a powered down state, the Idle Count threshold value is set based on the slope of an Idle Delay Function such that a refresh operation will be performed accordingly. | 2013-05-30 |
20130138879 | CIRCUIT FOR AND METHOD OF ENABLING THE TRANSFER OF DATA BY AN INTEGRATED CIRCUIT - A circuit for enabling the transfer of data by an integrated circuit device is described. The circuit comprises a non-volatile memory array coupled to receive a clock signal and having a plurality of memory elements storing data; and a control circuit coupled to the non-volatile memory array, the control circuit enabling uni-directional transfer of data on a plurality of signal lines between the non-volatile memory array and the control circuit in a first mode and bi-directional transfer of data in a second mode; wherein the control circuit controls the transfer of data on the plurality of signal lines between the non-volatile memory array and the control circuit in the first mode on both the rising and falling edges of the clock signal. A method of enabling the transfer of data by an integrated circuit device is also described. | 2013-05-30 |
20130138880 | STORAGE SYSTEM AND METHOD FOR CONTROLLING STORAGE SYSTEM - In a storage system | 2013-05-30 |
20130138881 | Parallel Reed-Solomon RAID (RS-RAID) Architecture, Device, and Method - The parallel RS-RAID data storage architecture can aggregate that data and checksums within each cluster into intermediate or partial sums that are transferred or distributed to other clusters. The use of intermediate data symbols, intermediate checksum symbols, cluster configuration information on the assignment of data storage devices to clusters and the operational status of data storage devices, and the like, can reduce the computational burden and latency for the error correction calculations while increasing the scalability and throughput of the parallel RS-RAID distributed data storage architecture. | 2013-05-30 |
20130138882 | VERIFY BEFORE PROGRAM RESUME FOR MEMORY DEVICES - A method of programming data into a memory device including an array of memory cells is disclosed. The method comprises receiving at least one program command that addresses a number of the memory cells for a programming operation to program data in the memory cells. The at least one program command is executed by iteratively carrying out at least one program/verify cycle to incrementally program the addressed memory cells with the program data. A secondary command may be selectively received after initiating but before completing the programming operation. The programming operation may be selectively resumed by first verifying the memory cells, then carrying out at least one program/verify cycle. | 2013-05-30 |
20130138883 | Optimizing Migration/Copy of De-Duplicated Data - A mechanism is provided for optimizing migration/copying of de-duplicated data from an internal storage system to a removable storage system. A preliminary number of clusters to be generated are determined for sets of data objects stored on the internal storage system based on a number of the sets of data objects. The preliminary number of clusters is generated based on shortest distances between the sets of data objects, each cluster comprising one or more sets of data objects and each set of data objects comprising one or more chunks of data. A chosen cluster is identified from a set of clusters by identifying a cluster having a greatest number of common chunks within as few sets of data objects. Responsive to an export-size of the chosen cluster failing to exceed the available storage capacity of the removable storage system, the chosen cluster is exported to the removable storage system. | 2013-05-30 |
20130138884 | LOAD DISTRIBUTION SYSTEM - Exemplary embodiments of the invention provide load distribution among storage systems using solid state memory (e.g., flash memory) as expanded cache area. In accordance with an aspect of the invention, a system comprises a first storage system and a second storage system. The first storage system changes a mode of operation from a first mode to a second mode based on load of process in the first storage system. The load of process in the first storage system in the first mode is executed by the first storage system. The load of process in the first storage system in the second mode is executed by the first storage system and the second storage system. | 2013-05-30 |
20130138885 | DYNAMIC PROCESS/OBJECT SCOPED MEMORY AFFINITY ADJUSTER - An apparatus, method, and program product for optimizing a multiprocessor computing system by sampling memory reference latencies and adjusting components of the system in response thereto. During execution of processes on the computing system, memory reference sampling of memory locations from shared memory of the computing system referenced in the executing processes is performed. Each sampled memory reference collected from sampling is associated with a latency and a physical memory location in the shared memory. Each sampled memory reference is analyzed to identify segments of memory locations in the shared memory corresponding to a sub-optimal latency, and based on the analyzed sampled memory references, the physical location of the one or more identified segments, the processor on which one or more processes referencing the identified segments execute, and/or a status associated with the one or more identified segments is dynamically adjusted to thereby optimize memory access for the multiprocessor computing system. | 2013-05-30 |
20130138886 | SCHEDULER, MULTI-CORE PROCESSOR SYSTEM, AND SCHEDULING METHOD - A scheduler that causes a given core in a multi-core processor to determine if a priority level of a process that is to be executed by a core of the multi-core processor is greater than or equal to a threshold; save to a cache memory of each core that executes a process having a priority level greater than or equal to the threshold, data that is accessed by the process upon execution; save to a memory area different from the cache memory and to which access is relatively slower, data that is accessed by a process having a priority level not greater than or equal to the threshold; and save the data saved in the memory area, to a cache memory of a requesting core, when the requesting core issues an access request for the data saved in the memory area. | 2013-05-30 |
20130138887 | SELECTIVELY DROPPING PREFETCH REQUESTS BASED ON PREFETCH ACCURACY INFORMATION - The disclosed embodiments relate to a system that selectively drops a prefetch request at a cache. During operation, the system receives the prefetch request at the cache. Next, the system identifies a prefetch source for the prefetch request, and then uses accuracy information for the identified prefetch source to determine whether to drop the prefetch request. In some embodiments, the accuracy information includes accuracy information for different prefetch sources. In this case, determining whether to drop the prefetch request involves first identifying a prefetch source for the prefetch request, and then using accuracy information for the identified prefetch source to determine whether to drop the prefetch request. | 2013-05-30 |
20130138888 | STORING A TARGET ADDRESS OF A CONTROL TRANSFER INSTRUCTION IN AN INSTRUCTION FIELD - A control transfer instruction (CTI), such as a branch, jump, etc., may have an offset value for a control transfer that is to be performed. The offset value may be usable to compute a target address for the CTI (e.g., the address of a next instruction to be executed for a thread or instruction stream). The offset may be specified relative to a program counter. In response to detecting a specified offset value, the CTI may be modified to include at least a portion of a computed target address. Information indicating this modification has been performed may be stored, for example, in a pre-decode bit. In some cases, CTI modification may be performed only when a target address is a “near” target, rather than a “far” target. Modifying CTIs as described herein may eliminate redundant address calculations and produce a savings of power and/or time in some embodiments. | 2013-05-30 |
20130138889 | Cache Optimization Via Predictive Cache Size Modification - Systems and methods for cache optimization, the method comprising monitoring cache access rate for one or more cache tenants in a computing environment, wherein a first cache tenant is allocated a first cache having a first cache size which may be adjusted; determining a cache profile for at least the first cache over one or more time intervals according to data collected during the monitoring, analyzing the cache profile for the first cache to determine an expected cache usage model for the first cache; and analyzing the cache usage model and factors related to cache efficiency for the one or more cache tenants to dictate one or more constraints that define boundaries for the first cache size. | 2013-05-30 |
20130138890 | METHOD AND APPARATUS FOR PERFORMING DYNAMIC CONFIGURATION - A method for performing dynamic configuration includes: freezing a bus between a dynamic configurable cache and a plurality of cores/processors by rejecting a request from any of the cores/processors during a bus freeze period, wherein the dynamic configurable cache is implemented with an on-chip memory; and adjusting a size of a portion of the dynamic configurable cache, wherein the portion of the dynamic configurable cache is capable of caching/storing information for one of the cores/processors. An associated apparatus is also provided. In particular, the apparatus includes the plurality of cores/processors, the dynamic configurable cache, and a dynamic configurable cache controller, and can operate according to the method. | 2013-05-30 |
20130138891 | ALLOCATION ENFORCEMENT IN A MULTI-TENANT CACHE MECHANISM - Systems and methods for cache optimization are provided. The method comprises monitoring cache access rate for a plurality of cache tenants sharing the same cache mechanism having an amount of data storage space, wherein a first cache tenant having a first cache size is allocated a first cache space within the data storage space, and wherein a second cache tenant having a second cache size is allocated a second cache space within the data storage space. The method further comprises determining cache profiles for at least the first cache tenant and the second cache tenant according to data collected during the monitoring; analyzing the cache profiles for the plurality of cache tenants to determine an expected cache usage model for the cache mechanism; and analyzing the cache usage model and factors related to cache efficiency or performance for the one or more cache tenants to dictate one or more occupancy constraints. | 2013-05-30 |
20130138892 | DRAM CACHE WITH TAGS AND DATA JOINTLY STORED IN PHYSICAL ROWS - A system and method for efficient cache data access in a large row-based memory of a computing system. A computing system includes a processing unit and an integrated three-dimensional (3D) dynamic random access memory (DRAM). The processing unit uses the 3D DRAM as a cache. Each row of the multiple rows in the memory array banks of the 3D DRAM stores at least multiple cache tags and multiple corresponding cache lines indicated by the multiple cache tags. In response to receiving a memory request from the processing unit, the 3D DRAM performs a memory access according to the received memory request on a given cache line indicated by a cache tag within the received memory request. Rather than utilizing multiple DRAM transactions, a single, complex DRAM transaction may be used to reduce latency and power consumption. | 2013-05-30 |
20130138893 | STORAGE DEVICE, COMPUTER-READABLE RECORDING MEDIUM, AND STORAGE CONTROL METHOD - A storage device being one of a plurality of storage devices storing data includes a memory and a processor coupled to the memory. The processor executes determining, when having received a new request and new priority information during a preparation for an execution of another update processing, whether a new priority indicated by the new priority information is higher than a priority of the update processing in the preparation. The process includes canceling the update processing in the preparation when having determined at the determining that the new priority is higher than the priority of the update processing in the preparation. The process includes forwarding the new request and the new priority information to another storage device when having determined at the determining that the new priority is higher than the priority of the update processing in the preparation. | 2013-05-30 |
20130138894 | HARDWARE FILTER FOR TRACKING BLOCK PRESENCE IN LARGE CACHES - A system and method for efficiently determining whether a requested memory location is in a large row-based memory of a computing system. A computing system includes a processing unit that generates memory requests on a first chip and a cache (LLC) on a second chip connected to the first chip. The processing unit includes an access filter that determines whether to access the cache. The cache is fabricated on top of the processing unit. The processing unit determines whether to access the access filter for a given memory request. The processing unit accesses the access filter to determine whether given data associated with a given memory request is stored within the cache. In response to determining the access filter indicates the given data is not stored within the cache, the processing unit generates a memory request to send to off-package memory. | 2013-05-30 |
20130138895 | METHOD AND SYSTEM FOR MAINTAINING A POINTER'S TYPE - A processing device implements a sandbox that provides an isolated execution environment and a memory structure. The processing device generates a pointer to a data item, the pointer having a type. The processing device generates a key for the pointer based on the type of the pointer. The processing device designates a name for the pointer based on the key. The processing device then inserts the pointer having the designated name into the memory structure, causing the pointer to become a private pointer. | 2013-05-30 |
20130138896 | Reader-Writer Synchronization With High-Performance Readers And Low-Latency Writers - Data writers desiring to update data without unduly impacting concurrent readers perform a synchronization operation with respect to plural processors or execution threads. The synchronization operation is parallelized using a hierarchical tree having a root node, one or more levels of internal nodes and as many leaf nodes as there are processors or threads. The tree is traversed from the root node to a lowest level of the internal nodes and the following node processing is performed for each node: (1) check the node's children, (2) if the children are leaf nodes, perform the synchronization operation relative to each leaf node's associated processor or thread, and (3) if the children are internal nodes, fan out and repeat the node processing with each internal node representing a new root node. The foregoing node processing is continued until all processors or threads associated with the leaf nodes have performed the synchronization operation. | 2013-05-30 |
20130138897 | METHOD AND APPARATUS FOR DYNAMICALLY CONTROLLING DEPTH AND POWER CONSUMPTION OF FIFO MEMORY - A method and apparatus are described for controlling depth and power consumption of a first-in first-out (FIFO) memory including a data storage, a FIFO top register, a FIFO bottom register and control logic. The data storage may be segmented into a plurality of data storage segments. The FIFO top register may be configured to generate a first value indicating where a first entry in the data storage is stored. The FIFO bottom register may be configured to generate a second value indicating where a last entry in the data storage is stored. The control logic may be configured to determine which of the data storage segments to activate or deactivate based at least in part on the first and second values, and to monitor an available capacity and a write/read rate of the FIFO memory as data is read from and written to the activated data storage segments. | 2013-05-30 |
20130138898 | MEMORY MODULE INCLUDING PLURAL MEMORY DEVICES AND COMMAND ADDRESS REGISTER BUFFER - Disclosed herein is a memory module that includes a plurality of command address connectors formed on the module substrate, a plurality of memory devices mounted on the module substrate, and a plurality of command address register buffers mounted on the module substrate. The command address connectors receive a command address signal from outside. The memory devices include a plurality of first memory devices and a plurality of second memory devices. The command address register buffers include a first command address register buffer that supplies the command address signal to the first memory devices and a second command address register buffer that supplies the command address signal to the second memory devices. | 2013-05-30 |
20130138899 | MANAGEMENT METHOD AND MANAGEMENT SYSTEM FOR COMPUTER SYSTEM - The present invention provides a technique to efficiently rearrange data in actual regions. A management system acquires load lumped region variation information (access distribution variation amount) indicative of a variation in the position of a virtual region in a virtual volume in a storage subsystem which region corresponds to a hot spot. Then, based on the load lumped region variation information, the management system determines a load position unvaried hour(s) indicative of the hour(s) in which the position of the virtual region corresponding to the hot spot is almost or perfectly unvaried (the hour(s) in which the position of the hot spot is stable). The management system then displays the load position unvaried hour(s) on a display device. | 2013-05-30 |
20130138900 | INFORMATION PROCESSING DEVICE AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an information processing device includes a first storage unit and a second storage unit having power consumption different from that of the first storage unit. The information processing device also includes a control unit configured to make a control to determine a priority of information that is to be stored in the first storage unit or the second storage unit. The control unit is configured to store the information into the first storage unit or into the second storage unit based on the determined priority. | 2013-05-30 |
20130138901 | IMPLEMENTING MEMORY PERFORMANCE MANAGEMENT AND ENHANCED MEMORY RELIABILITY ACCOUNTING FOR THERMAL CONDITIONS - A method, system and computer program product implement memory performance management and enhanced memory reliability of a computer system accounting for system thermal conditions. When a primary memory temperature reaches an initial temperature threshold, reads are suspended to the primary memory and reads are provided to a mirrored memory in a mirrored memory pair, and writes are provided to both the primary memory and the mirrored memory. If the primary memory temperature reaches a second temperature threshold, write operations to the primary memory are also stopped and the primary memory is turned off with DRAM power saving modes such as self timed refresh (STR), and the reads and writes are limited to the mirrored memory in the mirrored memory pair. When the primary memory temperature decreases to below the initial temperature threshold, coherency is recovered by writing a coherent copy from the mirrored memory to the primary memory. | 2013-05-30 |
20130138902 | Optimizing Migration/Copy of De-Duplicated Data - A mechanism is provided for optimizing migration/copying of de-duplicated data from an internal storage system to a removable storage system. A preliminary number of clusters to be generated are determined for sets of data objects stored on the internal storage system based on a number of the sets of data objects. The preliminary number of clusters is generated based on shortest distances between the sets of data objects, each cluster comprising one or more sets of data objects and each set of data objects comprising one or more chunks of data. A chosen cluster is identified from a set of clusters by identifying a cluster having a greatest number of common chunks within as few sets of data objects. Responsive to an export-size of the chosen cluster failing to exceed the available storage capacity of the removable storage system, the chosen cluster is exported to the removable storage system. | 2013-05-30 |
20130138903 | COMPUTER SYSTEM - In an aspect of the invention, a primary storage system (P system) manages write times of write data of one or more primary volumes (P volumes). The P system sequentially sends journals including write data of the P volumes and values indicating order of writing the write data of the P volumes to the secondary storage system (S system). The S system sequentially stores the write data in the journals from the P system to one or more secondary volumes (S volumes) according to the values. The S system sends identification information on the latest write data of the S volumes to the P system. The P system sends information to a management system to indicate the latency between the write time of the data identified with the identification information and the write time of the latest data of the P volumes at the time of receipt of the identification information. | 2013-05-30 |
20130138904 | STORAGE DEVICE, CONTROLLER, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN PROGRAM - A storage device including a copy processor that carries out a copying process of storing the copy of the data stored in the copy-source volume into the copy-destination volume; a copying manager that prepares copying related to the copying process and sets the copying process to a stand-by state; an activation manager that sets activation target data representing a target to be activated for the copying process in response to an activation instruction from a superior device; and a copy controller that cancels the stand-by state of the copying process, being set the activation target data for, and causes the copy processor to carry out the copying process. This configuration makes it possible to back up data of multiple copy sessions, ensuring integrity in timing of data to be backed up without lowering the capability of an I/O process. | 2013-05-30 |
20130138905 | MANAGING MEMORY DATA RECOVERY UPON POWER LOSS - A single segment data structure and method for storing data objects employing a single segment data object having a header and a data record. The header includes a segment length field describing the length of memory reserved for the data record and the data record contains at least one data instance object. Each of the data instance objects has a data instance header and data field. The header includes a data instance state field and a data instance length field. The data instance length field contains data representing the length of the data instance data field allowing for variable length “in place” updating. The data instance state field contains data representing an object state of the instance data. Only one of the data instance objects of the data record of the single segment data object has a valid object state. The state field facilitates a power loss recovery process. | 2013-05-30 |
20130138906 | ELECTRONIC DEVICE SYSTEM AND STORAGE DEVICE - When an SD card is connected to an SD socket of an electronic device, a control unit of the SD card obtains permission/inhibition information (an output signal) outputted from a setting unit of the electronic device. Based on the obtained permission/inhibition information, the control unit starts the operation of a DC-DC converter corresponding to a memory unit from which reading-out of data is permitted. In this way, data is read out from the memory unit in accordance with the permission/inhibition information. | 2013-05-30 |
20130138907 | USE OF TEST PROTECTION INSTRUCTION IN COMPUTING ENVIRONMENTS THAT SUPPORT PAGEABLE GUESTS - Management of storage used by pageable guests of a computing environment is facilitated. A query instruction is provided that details information regarding the storage location indicated in the query. It specifies whether the storage location, if protected, is protected by host-level protection or guest-level protection. | 2013-05-30 |
20130138908 | STORAGE SYSTEM AND POOL CAPACITY SCALE-DOWN CONTROL METHOD - The present invention provides a storage system, which efficiently uses a storage system internal storage area related to an internal volume in a case where an internal volume and an external volume are allocated to a pool. The storage system comprises a controller which manages a pool to which an internal volume and an external volume are allocated, and upon receiving a write request, provides a virtual volume to which one or more real pages inside the pool are allocated. In a case where the storage capacity of the pool is to be reduced, the allocation to the pool of the external volume is preferentially canceled over the internal volume. | 2013-05-30 |
20130138909 | STORAGE APPARATUS AND ITS CONTROL METHOD - A storage apparatus obtains a snapshot(s) of a first logical volume when needed and saves old data to a second logical volume when data is written to the first logical volume after obtaining the snapshot. A storage area of the second logical volume is allocated in chunk units to the first logical volume and a tendency of order to delete snapshots is judged when the chunks have been completely used up. If there is a tendency to delete the snapshots in their acquisition order, a new chunk is allocated to the first logical volume and a storage area in that chunk is determined to be a save location of the old data. If there is a tendency to delete the snapshots randomly, an unused storage area in a chunk already allocated to the first logical volume is determined to be the save location of the old data. | 2013-05-30 |
20130138910 | Information Processing Apparatus and Write Control Method - According to one exemplary embodiment, an information processing apparatus includes: a first memory including a first plurality of areas each having a first memory capacity; a second memory including a second plurality of areas each having a second memory capacity that is larger than the first memory capacity; a selector configured to select a first area of the first memory; and a writing module that is configured to (i) write data from area(s) of the first memory to a first area of the second plurality of areas of the second memory if the area(s) of the first plurality of areas is in a first data storage state, and (ii) refrain from writing data from a remaining area(s) of the first memory to the second memory if the remaining area(s) is in a second data storage state different from the first data storage state. | 2013-05-30 |
20130138911 | MEMORY CONTROLLER WITH RECONFIGURABLE HARDWARE - Memory controller concepts are disclosed in which hardware resources of a memory controller can be re-used or re-configured to accommodate various different memory configurations. The memory configuration may be stored in mode register bits. | 2013-05-30 |
20130138912 | SCHEDULING REQUESTS IN A SOLID STATE MEMORY DEVICE - An apparatus and method are provided for a memory controller that manages scheduling requests in a solid state memory device. The memory includes a set of units, each of which is erasable as a whole by a unit reclaiming process, yielding a free unit available for writing data. The memory controller includes a first queue for queuing user requests for reading and/or writing data from/to the memory, and a second queue for queuing unit reclaiming requests for executing the unit reclaiming process. A scheduler selects user requests from the first queue and unit reclaiming requests from the second queue for execution according to a defined ratio. The defined ratio is variable, depends on the current number of free units, and permits the memory controller to select requests from both the first queue and the second queue. | 2013-05-30 |
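The variable-ratio selection between the two queues can be sketched as below. The watermark policy in `reclaim_ratio` is an assumption (the abstract only says the ratio depends on the number of free units); all identifiers are illustrative.

```python
from collections import deque

def reclaim_ratio(free_units, low_watermark=10, high_watermark=40):
    # Hypothetical policy: the fewer free units remain, the larger the
    # share of scheduling slots given to unit-reclaiming requests.
    if free_units >= high_watermark:
        return 0.0            # no urgency: serve user requests only
    if free_units <= low_watermark:
        return 0.5            # heavy reclaiming: one reclaim per user request
    span = high_watermark - low_watermark
    return 0.5 * (high_watermark - free_units) / span

def schedule(user_q, reclaim_q, free_units, slots):
    """Select up to `slots` requests from both queues per the variable ratio."""
    ratio = reclaim_ratio(free_units)
    picked, debt = [], 0.0
    for _ in range(slots):
        debt += ratio
        if debt >= 1.0 and reclaim_q:     # reclaim share has accumulated
            picked.append(reclaim_q.popleft())
            debt -= 1.0
        elif user_q:
            picked.append(user_q.popleft())
        elif reclaim_q:                   # drain reclaims if users run dry
            picked.append(reclaim_q.popleft())
    return picked
```

At the low watermark the scheduler interleaves one reclaiming request per user request; with ample free units it serves user requests exclusively, which matches the claim that both queues remain selectable under one ratio.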
20130138913 | Reconfigurable Integrated Circuit Architecture With On-Chip Configuration and Reconfiguration - The exemplary embodiments provide a reconfigurable integrated circuit capable of on-chip configuration and reconfiguration, comprising: a plurality of configurable composite circuit elements; a configuration and control bus; a memory; and a sequential processor. Each composite circuit element comprises: a configurable circuit; and an element interface and control circuit, the element interface and control circuit comprising an element controller and at least one configuration and control register to store one or more configuration and control words. The configuration and control bus is coupled to the plurality of configurable composite circuit elements, and comprises a plurality of address and control lines and a plurality of data lines. The sequential processor can write configurations to the configuration and control registers of an addressed configurable composite circuit element to configure or reconfigure the configurable circuit. | 2013-05-30 |
20130138914 | INFORMATION PROCESSING APPARATUS AND PROGRAM AND METHOD FOR ADJUSTING INITIAL ARRAY SIZE - An adjustment apparatus includes a storage device, an execution target program, an execution unit, a first API, a second API, a profiler, and a dynamic compiler. The execution unit interprets the program, and calls and executes a function of an API in response to the API description. The first and second API are callable by the execution unit, to respectively allocate an array of a predetermined size, and extend the array. The first and second APIs are converted into code to store an array allocation call context of the pre-extension array into a profile information storage area of the allocated array. The profiler profiles access to arrays. The dynamic compiler inline-expands an array allocation call context included in a code part to be dynamically compiled and embeds an array size determined based on context based access information, as an allocation initial size of the array, into the code part. | 2013-05-30 |
20130138915 | DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND PROGRAM - A data processing system | 2013-05-30 |
20130138916 | STORAGE APPARATUS AND ITS CONTROL METHOD - A controller for the storage apparatus: creates a second logical volume in a storage area provided by one or more storage devices; stores management information of a snapshot of a first logical volume, which is to be provided to a host computer, in the second logical volume; and reads the management information of a necessary snapshot from the second logical volume to a memory when needed, executes processing using the read management information, and returns the management information, which becomes no longer necessary, from the memory to the second logical volume. When reading the management information of the necessary snapshot from the second logical volume to the memory when needed, the controller changes the number of generations and address range of the snapshot of the management information to be read to the memory according to a generation and address of the snapshot whose management information is required. | 2013-05-30 |
20130138917 | Bitstream Buffer Manipulation With A SIMD Merge Instruction - Method, apparatus, and program means for performing bitstream buffer manipulation with a SIMD merge instruction. The method of one embodiment comprises determining whether any unprocessed data bits for a partial variable length symbol exist in a first data block is made. A shift merge operation is performed to merge the unprocessed data bits from the first data block with a second data block. A merged data block is formed. A merged variable length symbol comprised of the unprocessed data bits and a plurality of data bits from the second data block is extracted from the merged data block. | 2013-05-30 |
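A scalar model of the shift-merge step helps make the abstract concrete: leftover bits from the first block are concatenated in front of the second block so a variable-length symbol that straddles the boundary can be read in one pass. This is an integer-arithmetic sketch of the idea, not the SIMD instruction itself; the 8-bit width and function names are assumptions.

```python
def shift_merge(first_block, second_block, unprocessed_bits, width=8):
    """Model of a shift-merge: the low `unprocessed_bits` bits left over in
    `first_block` are placed in front of `second_block`, forming one merged
    window from which a boundary-straddling symbol can be extracted."""
    mask = (1 << unprocessed_bits) - 1
    leftover = first_block & mask
    return (leftover << width) | second_block

def extract_symbol(merged, symbol_len, total_bits):
    """Take the top `symbol_len` bits of the merged window as the symbol."""
    return merged >> (total_bits - symbol_len)
```

For example, 3 unprocessed bits `101` merged ahead of the 8-bit block `11001100` yield the 11-bit window `10111001100`, from which a 5-bit symbol `10111` is extracted even though it spans the original block boundary.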
20130138918 | DIRECT INTERTHREAD COMMUNICATION DATAPORT PACK/UNPACK AND LOAD/SAVE - A circuit arrangement, method, and program product for compressing and decompressing data in a node of a system including a plurality of nodes interconnected via an on-chip network. Compressed data may be received and stored at an input buffer of a node, and in parallel with moving the compressed data to an execution register of the node, decompression logic of the node may decompress the data to generate uncompressed data, such that uncompressed data is stored in the execution register for utilization by an execution unit of the node. Uncompressed data may be output by the execution unit into the execution register, and in parallel with moving the uncompressed data to an output buffer of the node connected to the on-chip network, compression logic may compress the uncompressed data to generate compressed data, such that compressed data is stored at the output buffer. | 2013-05-30 |
20130138919 | HIERARCHICAL MULTI-CORE PROCESSOR AND METHOD OF PROGRAMMING FOR EFFICIENT DATA PROCESSING - A multi-core processor includes a tree-like structure having a plurality of computing cores arranged in hierarchical levels, the cores all having the same logical architecture. Each core can include computing, interconnecting, and/or storage elements. The functionality of an individual element can be supplied by an entire core in a lower level. A method for programming the processor includes hierarchically decomposing an application into interconnected sub-functions, mapping the sub-functions onto groups of cores at appropriate levels of the processor, and interconnecting the mapped sub-functions so as to hierarchically compose the complete application. Sub-functions can be sequential, concurrent, and/or pipelined. Interconnections can be static or dynamically switchable under program control. Interconnect elements can also be used to implement flow control as needed in pipelined operations to maintain data coherency. The decomposing and mapping process can be iterated on sub-functions so as to optimize load balancing, software performance, and hardware efficiency. | 2013-05-30 |
20130138920 | METHOD AND APPARATUS FOR PACKET PROCESSING AND A PREPROCESSOR - An apparatus for packet processing is provided. The apparatus is to be implemented in a server and includes: a preprocessor and at least two processors which are respectively connected with the preprocessor. The preprocessor is to classify packets received externally from the server, and to distribute the classified packets to the respective processors, wherein packets in a same flow are distributed to a same processor. Each of the processors is to receive and process a packet distributed by the preprocessor. | 2013-05-30 |
20130138921 | DE-COUPLED CO-PROCESSOR INTERFACE - A de-coupled co-processor interface (CPIF) is provided. The de-coupled CPIF transfers endian information along with the dispatching of co-processor (COP) instructions. The de-coupled CPIF divides the status report provided by a COP into an early status report and a late status report. The de-coupled CPIF may disable the late status report in order to improve the performance. The de-coupled CPIF further provides multiple early flush interfaces (EFIs) to transfer early flush events from a main processor (MP) to a corresponding COP. As a result, the de-coupled CPIF can improve the performance of the processing of data endian, status reports and early flush events between an MP and a COP. | 2013-05-30 |
20130138922 | REGISTER MANAGEMENT IN AN EXTENDED PROCESSOR ARCHITECTURE - Systems and methods are disclosed for enhancing the throughput of a processor by minimizing the number of transfers of data associated with data transfer between a register file and a memory stack. The register file used by a processor running an application is partitioned into a number of blocks. A subset of the blocks of the register file is defined in an application binary interface enabling the subset to be pre-allocated and exposed to the application binary interface. Optionally, blocks other than the subset are not exposed to the application binary interface so that the data relating to application function switch or a context switch is not transferred between the unexposed blocks and a memory stack. | 2013-05-30 |
20130138923 | MULTITHREADED DATA MERGING FOR MULTI-CORE PROCESSING UNIT - Described herein are methods, systems, apparatuses and products for multithreaded data merging for multi-core central and graphical processing units. An aspect provides for executing a plurality of threads on at least one central processing unit comprising a plurality of cores, each thread comprising an input data set (IDS) and being executed on one of the plurality of cores; initializing at least one local data set (LDS) comprising a size and a threshold; inserting IDS data elements into the at least one LDS such that each inserted IDS data element increases the size of the at least one LDS; and merging the at least one LDS into a global data set (GDS) responsive to the size of the at least one LDS being greater than the threshold. Other aspects are disclosed herein. | 2013-05-30 |
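The LDS/GDS scheme described above reduces contention by batching: each thread fills a local data set and takes the global lock only when the local set exceeds a threshold. A minimal threading sketch, with illustrative names and a hypothetical threshold of 4:

```python
import threading

GDS = set()                 # global data set shared by all threads
gds_lock = threading.Lock()
THRESHOLD = 4               # hypothetical LDS size that triggers a merge

def worker(ids):
    """Insert an input data set (IDS) into a thread-local data set (LDS),
    merging into the shared GDS whenever the LDS grows past THRESHOLD,
    so the lock is taken once per batch rather than once per element."""
    lds = set()
    for item in ids:
        lds.add(item)
        if len(lds) > THRESHOLD:
            with gds_lock:
                GDS.update(lds)
            lds.clear()
    if lds:                 # flush the final partial batch
        with gds_lock:
            GDS.update(lds)

threads = [threading.Thread(target=worker, args=(range(i * 10, i * 10 + 7),))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Four threads with disjoint 7-element inputs leave 28 distinct elements in the global set while each thread acquired the lock only twice.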
20130138924 | EFFICIENT MICROCODE INSTRUCTION DISPATCH - An apparatus and method for avoiding bubbles and maintaining a maximum instruction throughput rate when cracking microcode instructions. A lookahead pointer scans the newest entries of a dispatch queue for microcode instructions. A detected microcode instruction is conveyed to a microcode engine to be cracked into a sequence of micro-ops. Then, the sequence of micro-ops is placed in a queue, and when the original microcode instruction entry in the dispatch queue is selected for dispatch, the sequence of micro-ops is dispatched to the next stage of the processor pipeline. | 2013-05-30 |
20130138925 | PROCESSING CORE WITH SPECULATIVE REGISTER PREPROCESSING - A method and circuit arrangement speculatively preprocess data stored in a register file during otherwise unused cycles in an execution unit, e.g., to prenormalize denormal floating point values stored in a floating point register file, to decompress compressed values stored in a register file, to decrypt encrypted values stored in a register file, or to otherwise preprocess data that is stored in an unprocessed form in a register file. | 2013-05-30 |
20130138926 | INDIRECT FUNCTION CALL INSTRUCTIONS IN A SYNCHRONOUS PARALLEL THREAD PROCESSOR - An indirect branch instruction takes an address register as an argument in order to provide indirect function call capability for single-instruction multiple-thread (SIMT) processor architectures. The indirect branch instruction is used to implement indirect function calls, virtual function calls, and switch statements to improve processing performance compared with using sequential chains of tests and branches. | 2013-05-30 |
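The performance claim above (a table lookup instead of a sequential chain of tests and branches) can be modeled at a high level. This is a hypothetical illustration in plain Python, not SIMT hardware: the "address register" becomes a table holding callable targets.

```python
# Hypothetical jump-table model of an indirect branch: the target is
# resolved from a register/table at run time, replacing a chain of
# compare-and-branch tests such as a switch statement would compile to.
def op_add(a, b): return a + b
def op_sub(a, b): return a - b
def op_mul(a, b): return a * b

DISPATCH = {0: op_add, 1: op_sub, 2: op_mul}   # switch-statement jump table

def execute(opcode, a, b):
    target = DISPATCH[opcode]   # the "address register" holds the target
    return target(a, b)         # one indirect call, no sequential test chain
```

Resolution cost stays constant as cases are added, whereas an if/elif chain grows linearly, which is the advantage the abstract claims for indirect function calls, virtual calls, and switch statements.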
20130138927 | DATA PROCESSING APPARATUS ADDRESS RANGE DEPENDENT PARALLELIZATION OF INSTRUCTIONS - A data processing apparatus has an instruction memory system arranged to output an instruction word addressed by an instruction address. An instruction execution unit, processes a plurality of instructions from the instruction word in parallel. A detection unit, detects in which of a plurality of ranges the instruction address lies. The detection unit is coupled to the instruction execution unit and/or the instruction memory system, to control a way in which the instruction execution unit parallelizes processing of the instructions from the instruction word, dependent on a detected range. In an embodiment the instruction execution unit and/or the instruction memory system adjusts a width of the instruction word that determines a number of instructions from the instruction word that is processed in parallel, dependent on the detected range. | 2013-05-30 |
20130138928 | VLIW PROCESSOR, INSTRUCTION STRUCTURE, AND INSTRUCTION EXECUTION METHOD - A first operation unit | 2013-05-30 |
20130138929 | PROCESS MAPPING IN PARALLEL COMPUTING - A method of mapping processes to processors in a parallel computing environment where a parallel application is to be run on a cluster of nodes wherein at least one of the nodes has multiple processors sharing a common memory, the method comprising using compiler based communication analysis to map Message Passing Interface processes to processors on the nodes, whereby at least some more heavily communicating processes are mapped to processors within nodes. Other methods, apparatus, and computer readable media are also provided. | 2013-05-30 |
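The mapping idea (co-locate heavily communicating MPI processes on the same shared-memory node) admits a simple greedy sketch. The pair-volume input stands in for the compiler-based communication analysis; all names and the capacity model are assumptions for illustration.

```python
# Hypothetical greedy sketch: given per-pair communication volumes from
# compiler analysis, place the heaviest-communicating rank pairs on the
# same multi-processor node while node capacity allows.
def map_processes(comm_volume, ranks, node_capacity):
    """comm_volume: {(rank_a, rank_b): volume}; returns {rank: node_id}."""
    placement, load = {}, {}
    next_node = 0
    for (a, b), _vol in sorted(comm_volume.items(), key=lambda kv: -kv[1]):
        for r in (a, b):
            if r in placement:
                continue
            partner = b if r == a else a
            node = placement.get(partner)   # prefer the partner's node
            if node is None or load.get(node, 0) >= node_capacity:
                node = next_node            # otherwise open a new node
                next_node += 1
            placement[r] = node
            load[node] = load.get(node, 0) + 1
    for r in ranks:                         # ranks with no recorded traffic
        placement.setdefault(r, next_node)
    return placement
```

With volumes {(0,1): 100, (2,3): 90, (0,2): 5} and two processors per node, the two heavy pairs each land on their own node, so the dominant traffic goes through shared memory rather than the interconnect.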