Document | Title | Date |
20080215700 | FIREFIGHTING VEHICLE AND METHOD WITH NETWORK-ASSISTED SCENE MANAGEMENT - A method comprises acquiring information pertaining to a scene of a fire. The acquiring step is performed by a sensor connected to a first computer. The method further comprises transmitting the information from the first computer to a second computer by way of a wireless communication network. The second computer is mounted to a fire fighting vehicle and is connected to a display. The method further comprises displaying the information at the fire fighting vehicle using the display. | 09-04-2008 |
20080228896 | Pipelined buffer interconnect with trigger core controller - A method and system to transfer a data stream from a data source to a data sink are described herein. The system comprises a trigger core, a plurality of dedicated buffers, and a plurality of dedicated buses coupled to the plurality of buffers, the trigger core, the data source and the data sink. In response to receiving a request for a data transfer from a data source to a data sink, the trigger core assigns a first buffer and a first bus to the data source for writing data, locks the first buffer and the first bus, releases the first buffer and the first bus upon indication from the data source that the data transfer to the first buffer is complete, assigns the first buffer and the first bus to the data sink for reading data, and assigns a second buffer and a second bus to the data source for writing data, thereby pipelining the data transfer from the data source to the data sink. | 09-18-2008 |
20080263171 | Peripheral device that DMAS the same data to different locations in a computer - A method is disclosed comprising: receiving, by a network interface, data and a corresponding header; storing, by the network interface, the data in a first memory buffer of a computer that is coupled to the network interface; and storing, by the network interface, the data in a second memory buffer of the computer. For example, the network interface can first store the data in a part of the computer memory that is accessible by a device driver for the network interface. If the application provides to the driver a pointer to a location in memory for storing the data, the driver can pass this pointer to the network interface, which can write the data directly to that location without copying by the CPU. If, however, the application does not provide a pointer, the data controlled by the driver can be copied by the CPU into the application's memory space. | 10-23-2008 |
20080270563 | Message Communications of Particular Message Types Between Compute Nodes Using DMA Shadow Buffers - Message communications of particular message types between compute nodes using DMA shadow buffers includes: receiving a buffer identifier specifying an application buffer having a message of a particular type for transmission to a target compute node through a network; selecting one of a plurality of shadow buffers for a DMA engine on the compute node for storing the message, each shadow buffer corresponding to a slot of an injection FIFO buffer maintained by the DMA engine; storing the message in the selected shadow buffer; creating a data descriptor for the message stored in the selected shadow buffer; injecting the data descriptor into the slot of the injection FIFO buffer corresponding to the selected shadow buffer; selecting the data descriptor from the injection FIFO buffer; and transmitting the message specified by the selected data descriptor through the data communications network to the target compute node. | 10-30-2008 |
20080270564 | Virtual machine migration - Virtual machine migration is described. In embodiment(s), a virtual machine can be migrated from one host computer to another utilizing LUN (logical unit number) masking. A virtual drive of the virtual machine can be mapped to a LUN, and a LUN mask associates the LUN with a host computer. The LUN mask can be changed to unmask the LUN to a second computer to migrate the virtual machine from the host computer to the second computer. | 10-30-2008 |
20080301254 | METHOD AND SYSTEM FOR SPLICING REMOTE DIRECT MEMORY ACCESS (RDMA) TRANSACTIONS IN AN RDMA-AWARE SYSTEM - Aspects of a system for splicing RDMA transactions in an RDMA system may include a main processor within a main server that may receive read requests from a client device. The main processor may translate a data reference contained in each read request to generate a physical buffer list (PBL). | 12-04-2008 |
20090024714 | Method And Computer System For Providing Remote Direct Memory Access - A method for providing remote direct memory access (RDMA) between two computers, preferably between central processing units (CPUs) and a functional subsystem of a computer system as part of their network communication, e.g. using TCP/IP. Tasks of analyzing network protocol data and the actual RDMA operations can be offloaded to the functional subsystem with this method. Further, the functional subsystem cannot compromise the status of the first computer system as only access to certain allowed memory locations is granted by a memory protection unit during phases of actual data transfer between the functional subsystem and the CPUs. | 01-22-2009 |
20090031001 | Repeating Direct Memory Access Data Transfer Operations for Compute Nodes in a Parallel Computer - Methods, apparatus, and products are disclosed for repeating DMA data transfer operations for nodes in a parallel computer that include: receiving, by a DMA engine on an origin node, a RGET data descriptor that specifies a DMA transfer operation data descriptor and a second RGET data descriptor, the second RGET data descriptor also specifying the DMA transfer operation data descriptor; creating, in dependence upon the RGET data descriptor, an RGET packet that contains the DMA transfer operation data descriptor and the second RGET data descriptor; processing the DMA transfer operation data descriptor included in the RGET packet, including performing a DMA data transfer operation between the origin node and a target node in dependence upon the DMA transfer operation data descriptor; and processing the second RGET data descriptor included in the RGET packet, thereby performing again the DMA transfer operation in dependence upon the DMA transfer operation data descriptor. | 01-29-2009 |
20090031002 | Self-Pacing Direct Memory Access Data Transfer Operations for Compute Nodes in a Parallel Computer - Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, a RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor. | 01-29-2009 |
20090063651 | System And Method For Saving Dump Data Of A Client In A Network - A system and method for saving memory dump data from an operating system of a client in a network. The method includes configuring the client to allocate client system memory according to system memory classifications, configuring the client to transfer dump data to at least one dump server, saving said dump data periodically during client system run-time based on the system memory classifications, and saving dump data in the event of a client system crash to at least complement the dump data sent periodically during client system run-time. | 03-05-2009 |
20090083392 | SIMPLE, EFFICIENT RDMA MECHANISM - A server interconnect system for sending data includes a first server node and a second server node. Each server node is operable to send and receive data. The interconnect system also includes a first and second interface unit. The first interface unit is in communication with the first server node and has one or more RDMA doorbell registers. Similarly, the second interface unit is in communication with the second server node and has one or more RDMA doorbell registers. The system also includes a communication switch that is operable to receive and route data from the first or second server nodes using a RDMA read and/or an RDMA write when either of the first or second RDMA doorbell registers indicates that data is ready to be sent or received. | 03-26-2009 |
20090125604 | THIRD PARTY, BROADCAST, MULTICAST AND CONDITIONAL RDMA OPERATIONS - In a multinode data processing system in which nodes exchange information over a network or through a switch, the mechanism which enables out-of-order data transfer via Remote Direct Memory Access (RDMA) also provides a corresponding ability to carry out broadcast operations, multicast operations, third party operations and conditional RDMA operations. In a broadcast operation a source node transfers data packets in RDMA fashion to a plurality of destination nodes. Multicast operation works similarly except that distribution is selective. In third party operations a single central node in a cluster or network manages the transfer of data in RDMA fashion between other nodes or creates a mechanism for allowing a directed distribution of data between nodes. In conditional operation mode the transfer of data is conditioned upon one or more events occurring in either the source node or in the destination node. | 05-14-2009 |
20090177755 | SCRIPT SERVING APPARATUS AND METHOD - A computer system comprising a processor operably connected to a memory device. The memory device stores an application providing functionality and a plug-in augmenting that functionality. In selected embodiments, the plug-in includes a request module configured to generate a request for a script, a communication module configured to contact a server and submit the request thereto, an input module configured to receive the script from the server, and an execution module configured to load the script directly into application memory corresponding to the application. | 07-09-2009 |
20090198788 | FAST PATH MESSAGE TRANSFER AGENT - A method of providing a fast path message transfer agent is provided. The method includes receiving bytes of a message over a network connection and determining whether the number of bytes exceeds a predetermined threshold. If the number of bytes is less than the predetermined threshold, then the message is written only to memory. However, if the number of bytes exceeds the predetermined threshold, then some of the bytes (e.g. up to the predetermined threshold) are written to memory, while the remainder of the bytes is stored in non-volatile storage. If the message was received successfully by each destination, then the message is removed from the memory/non-volatile storage. If not, all failed destinations are identified and the message (with associated failed destinations) is stored in non-volatile storage for later sending. | 08-06-2009 |
20090248830 | REMOTE DIRECT MEMORY ACCESS FOR iSCSI - A storage networking device provides remote direct memory access to its buffer memory, configured to store storage networking data. The storage networking device may be particularly adapted to transmit and receive iSCSI data, such as iSCSI input/output operations. The storage networking device comprises a controller and a buffer memory. The controller manages the receipt of storage networking data and buffer locational data. The storage networking data advantageously includes at least one command for at least partially controlling a device attached to a storage network. Advantageously, the storage networking data may be transmitted using a protocol adapted for the transmission of storage networking data, such as, for example, the iSCSI protocol. The buffer memory advantageously is configured to at least temporarily store at least part of the storage networking data at a location within the buffer memory that is based at least in part on the locational data. | 10-01-2009 |
20090271491 | SYSTEM AND METHOD TO CONTROL WIRELESS COMMUNICATIONS - A method of controlling wireless communications is provided. A first call is received at a first distributed mobile architecture (DMA) server from a first mobile communication device. The first DMA server communicates with the first mobile communication device via a first wireless communication protocol. A second call is received at the first DMA server from a second mobile communication device. The first DMA server communicates with the second mobile communication device via a second wireless communication protocol. Voice information associated with the first call is converted to first packet data and voice information associated with the second call to second packet data. The first packet data and the second packet data are routed via a private Internet Protocol (IP) network to at least one second DMA device, where the first call is accessible to a first destination device and the second call is accessible to a second destination device via the at least one second DMA device. | 10-29-2009 |
20090271492 | REAL-TIME COMMUNICATIONS OVER DATA FORWARDING FRAMEWORK - Methods and apparatus, including computer program products, for real-time communications over a data forwarding framework. A framework includes a group of interconnected computer system nodes, each adapted to receive data and continuously forward the data from computer memory to computer memory, without storing it on any physical storage device, in response to a request from a client system to store data from a requesting system, and to retrieve data being continuously forwarded from computer memory to computer memory in response to a request to retrieve data from the requesting system; and at least two client systems linked to the group, each of the client systems executing a real-time communications client program. | 10-29-2009 |
20090287792 | METHOD OF PROVIDING SERVICE RELATING TO CONTENT STORED IN PORTABLE STORAGE DEVICE AND APPARATUS THEREFOR - Provided are a method of providing a service relating to content stored in a portable storage device to an external device, and an apparatus therefor. The method includes outputting a user interface to manage information relating to contents stored in the portable storage device through a display unit associated with the external device, receiving a command to select content from among the contents through the output user interface, executing a service corresponding to the content selected based on the command, and providing a result of executing the service to the external device. | 11-19-2009 |
20090307328 | REMOTE MANAGEMENT INTERFACE FOR A MEDICAL DEVICE - A method and system for remote management of a hand-held medical device of a type that does not include a physical keyboard or a large display screen. Connectable hardware provides a communications channel between the device and a remote computer system, giving a fully featured interface, with a full-sized screen and keyboard, for use when manipulating data from the medical device. | 12-10-2009 |
20090313347 | System and Method to Integrate Measurement Information within an Electronic Laboratory Notebook Environment - Capability to record relevant aggregated data via a test and measurement instrument interface through a software agent. The agent resides within the test and measurement instrument and gathers the information when activated. The information can be measurement data; measurement setup parameters; test system topology; user notes, brief descriptions, audio recordings or pen input; pictures; or attached documents. The agent can communicate directly to an electronic laboratory notebook server or can store the information on a portable computer readable media (CRM). A user can upload the information from the portable CRM to the server. The user can access the information via a PC workstation. | 12-17-2009 |
20090319634 | Mechanism for enabling memory transactions to be conducted across a lossy network - A network interface is disclosed for enabling remote programmed I/O to be carried out in a “lossy” network (one in which packets may be dropped). The network interface: (1) receives a plurality of memory transaction messages (MTM's); (2) determines that they are destined for a particular remote node; (3) determines a transaction type for each MTM; (4) composes, for each MTM, a network packet to encapsulate at least a portion of that MTM; (5) assigns a priority to each network packet based upon the transaction type of the MTM that it is encapsulating; (6) sends the network packets into a lossy network destined for the remote node; and (7) ensures that at least a subset of the network packets are received by the remote node in a proper sequence. By doing this, the network interface makes it possible to carry out remote programmed I/O, even across a lossy network. | 12-24-2009 |
20090327444 | Dynamic Network Link Selection For Transmitting A Message Between Compute Nodes Of A Parallel Computer - Methods, apparatus, and products are disclosed for dynamic network link selection for transmitting a message between nodes of a parallel computer. The nodes are connected using a data communications network. Each node connects to adjacent nodes in the data communications network through a plurality of network links. Each link provides a different data communication path through the network between the nodes of the parallel computer. Such dynamic link selection includes: identifying, by an origin node, a current message for transmission to a target node; determining, by the origin node, whether transmissions of previous messages to the target node have completed; selecting, by the origin node from the plurality of links for the origin node, a link in dependence upon the determination and link characteristics for the plurality of links for the origin node; and transmitting, by the origin node, the current message to the target node using the selected link. | 12-31-2009 |
20100005150 | IMAGE DISPLAY DEVICE, STORAGE DEVICE, IMAGE DISPLAY SYSTEM AND NETWORK SETUP METHOD - An image display system | 01-07-2010 |
20100011084 | ADVERTISEMENT FORWARDING STORAGE AND RETRIEVAL NETWORK - Methods and apparatus, including computer program products, for an advertisement forwarding storage and retrieval network. A method includes, in a network of interconnected computer system nodes, directing advertisements to a computer memory, directing data to a computer memory, continuously forwarding each of the unique data items, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, continuously forwarding each of the unique advertisements, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, and retrieving one of the advertisements in response to an activity. | 01-14-2010 |
20100017496 | METHOD AND SYSTEM FOR USING SHARED MEMORY WITH OPTIMIZED DATA FLOW TO IMPROVE INPUT/OUTPUT THROUGHOUT AND LATENCY - The data path in a network storage system is streamlined by sharing a memory among multiple functional modules (e.g., N-module and D-module) of a storage server that facilitates symmetric access to data from multiple clients. The shared memory stores data from clients or storage devices to facilitate communication of data between clients and storage devices and/or between functional modules, and reduces redundant copies necessary for data transport. It reduces latency and improves throughput efficiencies by minimizing data copies and using hardware assisted mechanisms such as DMA directly from host bus adapters over an interconnection, e.g. switched PCI-e “network”. This scheme is well suited for a “SAN array” architecture, but also can be applied to NAS protocols or in a unified protocol-agnostic storage system. The storage system can provide a range of configurations ranging from dual module to many modules with redundant switched fabrics for I/O, CPU, memory, and disk connectivity. | 01-21-2010 |
20100023595 | SYSTEM AND METHOD OF MULTI-PATH DATA COMMUNICATIONS - In a particular embodiment, a multi-path bridge circuit includes a backplane input/output (I/O) interface to couple to a local backplane having at least one communication path to a processing node and includes at least one host interface adapted to couple to a corresponding at least one processor. The multi-path bridge circuit further includes logic adapted to identify two or more communication paths through the backplane interface to a destination memory, to divide a data block stored at a source memory into data block portions, and to transfer the data block portions in parallel from the source memory to the destination node via the identified two or more communication paths. | 01-28-2010 |
20100030866 | METHOD AND SYSTEM FOR REAL-TIME CLOUD COMPUTING - A system for providing real-time cloud computing. The system includes a plurality of computing nodes, each node including a CPU, a memory, and a hard disk. The system includes a central intelligence manager for real-time assigning of tasks to the plurality of computing nodes. The central intelligence manager is configured to provide CPU scaling in parallel. The central intelligence manager is configured to provide a concurrent index. The central intelligence manager is configured to provide a multi-level cache. The central intelligence manager is configured to provide direct disk reads to the hard disks. The central intelligence manager is configured to utilize UDP for peer-to-peer communication between the computing nodes. | 02-04-2010 |
20100036930 | APPARATUS AND METHODS FOR EFFICIENT INSERTION AND REMOVAL OF MPA MARKERS AND RDMA CRC DIGEST - The invention relates to insertion and removal of MPA markers and RDMA CRCs in RDMA data streams, after determining the locations for these fields. An embodiment of the invention comprises a host interface, a transmit interface connected to the host interface, and a processor interface connected to both transmit and host interfaces. The host interface operates under the direction of commands received from the processor interface when processing inbound RDMA data. The host interface calculates the marker locations and removes the markers. The transmit interface operates under the direction of commands received from the processor interface when processing outbound RDMA data. The transmit interface calculates the positions in the outbound data where markers are to be inserted. The transmit interface then places the markers accordingly. | 02-11-2010 |
20100049821 | DEVICE, SYSTEM, AND METHOD OF DISTRIBUTING MESSAGES - Device, system, and method of distributing messages. For example, a data publisher capable of communication with a plurality of subscribers via a network fabric, the data publisher comprising: a memory allocator to allocate a memory area of a local memory unit of the data publisher to be accessible for Remote Direct Memory Access (RDMA) read operations by one or more of the subscribers; and a publisher application to create a message log in said memory area, to send a message to one or more of the subscribers using a multicast transport protocol, and to store in said memory area a copy of said message. A subscriber device handles recovery of lost messages by directly reading the lost messages from the message log of the data publisher using RDMA read operation(s). | 02-25-2010 |
20100094948 | WORKLOAD MIGRATION USING ON DEMAND REMOTE PAGING - In one embodiment a method for migrating a workload from one processing resource to a second processing resource of a computing platform is disclosed. The method can include a command to migrate a workload that is processing and the process can be interrupted and some memory processes can be frozen in response to the migration command. An index table can be created that identifies memory locations that determined where the process was when it is interrupted. Table data, pinned page data, and non-private process data can be sent to the second processing resource. Contained in this data can be restart type data. The second resource or target resource can utilize this data to restart the process without the requirement of bulk data transfers providing an efficient migration process. Other embodiments are also disclosed. | 04-15-2010 |
20100146068 | DEVICE, SYSTEM, AND METHOD OF ACCESSING STORAGE - Device, system, and method of accessing storage. For example, a server includes: a Solid-State Drive (SSD) to store data; a memory mapper to map at least a portion of a storage space of the SSD into a memory space of the server; and a network adapter to receive a Small Computer System Interface (SCSI) read command incoming from a client device, to map one or more parameters of the SCSI read command into an area of the memory space of the server from which data is requested to be read by the client device, said area corresponding to a storage area of the SSD, and to issue a Remote Direct Memory Access (RDMA) write command to copy data directly to the client device from said area of the memory space corresponding to the SSD. | 06-10-2010 |
20100146069 | METHOD AND SYSTEM FOR COMMUNICATING BETWEEN MEMORY REGIONS - A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. A RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether a size of the first buffer region exceeds a maximum transfer size of the networked system. Portions of the second buffer region may be associated with the first transfer operation based on the determination of the size of the first buffer region. The system subsequently performs the first transfer operation. | 06-10-2010 |
20100153513 | WIRELESS DATA CHANNELING DEVICE AND METHOD - A wireless data channeling device associated with a host computer is provided, the host computer comprising data that is deemed to be visible with respect to the data channeling device. The data channeling device comprises a connector to allow the device to be fitted to a device that is remote from the host computer, a device transceiver to enable the device to receive/transmit data from/to a host transceiver of the host computer, and a controller for functionally connecting the connector to the host transceiver. Thus, when in a data channeling mode, the visible data on the host computer can be wirelessly transmitted to the remote device via the data channeling device and/or data on the remote device can be transmitted to the host computer via the data channeling device. In an example embodiment, the connector is similar to a connector of a traditional USB flash memory disk so that, when connected to the remote device, the data channeling mimics a standard, off-the-shelf memory stick. | 06-17-2010 |
20100161750 | IP STORAGE PROCESSOR AND ENGINE THEREFOR USING RDMA - An IP Storage processor and processing engines for use in the IP storage processor is disclosed. The IP Storage processor uses an architecture that may provide capabilities to transport and process Internet Protocol (IP) packets from Layer 2 through transport protocol layer and may also perform packet inspection through Layer 7. The engines may perform pass-through packet classification, policy processing and/or security processing enabling packet streaming through the architecture at nearly the full line rate. A scheduler schedules packets to packet processors for processing. An internal memory or local session database cache may store a transport protocol session information database and/or store a storage information session database, for a certain number of active sessions. The session information that is not in the internal memory is stored and retrieved to/from an additional memory. An application running on an initiator or target can in certain instantiations register a region of memory, which is made available to its peer(s) for direct access without substantial host intervention through RDMA data transfer. | 06-24-2010 |
20100180004 | APPARATUS AND METHODS FOR NETWORK ANALYSIS - Embodiments of methods, systems and apparatus for analysis and capture of network data items are described herein. Some embodiments include a receiving module which may receive a network data item from a network and which may then duplicate the network data item into two network data items. A capture module may receive one of the network data items for storage in storage device. A statistics or analysis module may in parallel receive the other network data item and may then perform network analysis on that network data item. Other embodiments are described and claimed. | 07-15-2010 |
20100185743 | Encoding Method and Apparatus for Frame Synchronization Signal - An encoding method for a frame synchronization signal includes: encoding a predetermined intermediate variable corresponding to a cell ID or cell group ID to obtain short codes corresponding to the cell ID or cell group ID; and generating SCH codewords according to the said short codes, instead of directly encoding the cell ID or cell group ID, thereby ensuring that a first short code in each generated S-SCH codeword is larger than a second short code, or a first short code in each generated S-SCH codeword is smaller than a second short code, and a short code distance thereof is relatively small, so as to enhance the reliability of the frame synchronization. An encoding apparatus for a frame synchronization signal is further provided. | 07-22-2010 |
20100198936 | STREAMING MEMORY CONTROLLER - A memory controller (SMC) is provided for coupling a memory (MEM) to a network (N). The memory controller (SMC) comprises a first interface (PI), a streaming memory unit (SMU) and a second interface (MI). The first interface (PI) is used for connecting the memory controller (SMC) to the network (N) for receiving and transmitting data streams (ST). | 08-05-2010 |
20100241725 | ISCSI RECEIVER IMPLEMENTATION - A method for communication is disclosed and may include, in a network interface device, parsing a portion of a TCP segment into one or more portions of Internet Small Computer Systems Interface (iSCSI) Protocol Data Units (PDUs). A header and/or a payload for one or more of the parsed iSCSI PDUs may be recovered. Concurrent with parsing of a remaining portion of the TCP segment to recover a remaining portion of PDUs, the recovered header may be evaluated and/or the recovered payload may be routed external to the network interface device for processing. The evaluating and the routing may occur independently of the parsing within the network interface device. Respective separate physical processors may be used for the parsing and the recovering. The respective separate processors for recovering may be used for the evaluating and the routing. | 09-23-2010 |
20100274868 | Direct Memory Access In A Hybrid Computing Environment - DMA in a computing environment that includes several computers and DMA engines, the computers adapted to one another for data communications by a data communications fabric, each computer executing an application, where DMA includes pinning, by a first application, a memory region, including providing, to all applications, information describing the memory region; effecting, by a second application in dependence upon the information describing the memory region, DMA transfers related to the memory region, including issuing DMA requests to a particular DMA engine for processing; and unpinning, by the first application, the memory region, including ensuring, prior to unpinning, that no additional DMA requests related to the memory region are issued, that all outstanding DMA requests related to the memory region are provided to a DMA engine, and that processing of all outstanding DMA requests related to the memory region and provided to a DMA engine has been completed. | 10-28-2010 |
20100306336 | Communication System - The invention relates to a data communication method which is based on a layer model, the layer model ( | 12-02-2010 |
20100332611 | FILE SHARING SYSTEM - To realize efficient processing regarding accesses to files. A remote controlling processing apparatus | 12-30-2010 |
20110035459 | Network Direct Memory Access - In one embodiment, a system comprises at least a first node and a second node coupled to a network. The second node comprises a local memory and a direct memory access (DMA) controller coupled to the local memory. The first node is configured to transmit at least a first packet to the second node to access data in the local memory and at least one other packet that is not coded to access the local memory. The second node is configured to capture the packet from a data link layer of a protocol stack, and wherein the DMA controller is configured to perform one or more transfers with the local memory to access the data specified by the first packet responsive to the first packet received from the data link layer. The second node is configured to process the other packet to a top of the protocol stack. | 02-10-2011 |
20110055346 | DIRECT MEMORY ACCESS BUFFER MANAGEMENT - Disclosed are systems and methods for reclaiming posted buffers during a direct memory access (DMA) operation executed by an input/output device (I/O device) in connection with data transfer across a network. During the data transfer, the I/O device may cancel a buffer provided by a device driver thereby relinquishing ownership of the buffer. A condition for the I/O device relinquishing ownership of a buffer may be provided by a distance vector that may be associated with the buffer. The distance vector may specify a maximum allowable distance between the buffer and a buffer that is currently fetched by the I/O device. Alternatively, a condition for the I/O device relinquishing ownership of a buffer may be provided by a timer. The timer may specify a maximum time that the I/O device may maintain ownership of a particular buffer. In other implementations, a mechanism is provided to force the I/O device to relinquish some or all of the buffers that it controls. | 03-03-2011 |
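The distance-vector condition described in this abstract can be sketched as follows. This is an illustrative model only, not the patented implementation; the function name and parameters (`reclaim_by_distance`, `fetch_index`, `max_distance`) are assumed for the example:

```python
def reclaim_by_distance(posted, fetch_index, max_distance):
    """Partition posted buffers into those the I/O device keeps and those it
    cancels: a buffer is relinquished when its position lies more than
    `max_distance` ahead of the buffer currently being fetched."""
    kept, reclaimed = [], []
    for index, buf in posted:
        if index - fetch_index > max_distance:
            reclaimed.append(buf)        # ownership returned to the device driver
        else:
            kept.append((index, buf))
    return kept, reclaimed

# With a maximum allowable distance of 5 from the currently fetched buffer,
# a buffer posted at position 9 is reclaimed while nearer buffers are kept.
kept, reclaimed = reclaim_by_distance([(0, "a"), (3, "b"), (9, "c")], 0, 5)
```

The timer-based alternative in the abstract would replace the index comparison with an age check against the time the device took ownership of the buffer.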
20110099243 | APPARATUS AND METHOD FOR IN-LINE INSERTION AND REMOVAL OF MARKERS - An apparatus is provided, for performing a direct memory access (DMA) operation between a host memory in a first server and a network adapter. The apparatus includes a host frame parser and a protocol engine. The host frame parser is configured to receive data corresponding to the DMA operation from a host interface, and is configured to insert markers on-the-fly into the data at a prescribed interval and to provide marked data for transmission to a second server over a network fabric. The protocol engine is coupled to the host frame parser. The protocol engine is configured to direct the host frame parser to insert the markers, and is configured to specify a first marker value and an offset value, whereby the host frame parser is enabled to locate and insert a first marker into the data. | 04-28-2011 |
20110106905 | DIRECT SENDING AND ASYNCHRONOUS TRANSMISSION FOR RDMA SOFTWARE IMPLEMENTATIONS - Exemplary embodiments include RDMA methods and systems for sending application data to a computer memory destination in a direct but non-blocking fashion. The method can include posting a new work request for an RDMA connection or association, determining if there is a prior work request for the same connection or association enqueued for processing, in response to a determination that no prior work request is enqueued for processing, processing the new work request directly by sending RDMA frames containing application data referred to by the work request to the computer memory destination, performing direct sending while there is sufficient send space to process the new work request, and delegating the new work request to asynchronous transmission if a prior work request is already enqueued for processing or lack of send space would block a subsequent transmission operation. | 05-05-2011 |
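The direct-versus-asynchronous decision in this abstract can be sketched as a simple dispatch rule. This is a minimal sketch under assumed names (`post_work_request`, a `conn` dictionary with `tx_queue` and `send_space`), not the actual RDMA software stack:

```python
def post_work_request(conn, wr):
    """Process a new work request directly only if no prior request is
    enqueued and there is enough send space; otherwise delegate it to
    asynchronous transmission."""
    if conn["tx_queue"]:                  # a prior work request is already enqueued
        conn["tx_queue"].append(wr)
        return "async"
    if wr["bytes"] > conn["send_space"]:  # direct send would block on send space
        conn["tx_queue"].append(wr)
        return "async"
    conn["send_space"] -= wr["bytes"]     # send RDMA frames directly, non-blocking
    return "direct"

conn = {"tx_queue": [], "send_space": 10}
post_work_request(conn, {"bytes": 4})     # sent directly
post_work_request(conn, {"bytes": 8})     # too large for remaining space: queued
```

Once a request lands on the queue, every later request also goes through the queue, preserving ordering until the asynchronous path drains it.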
20110106906 | METHOD AND SYSTEM FOR OFFLINE DATA ACCESS ON COMPUTER SYSTEMS - While a computer system is in operational state, a network interface controller (NIC) in the computer system may be operable to copy select data to a secondary storage device. The secondary storage device is accessible by the NIC while the computer system is in an offline state or not operational. The NIC may be operable to provide remote accessibility to the copy of the select data stored in the secondary storage device over a network while the computer system is in the offline state and the NIC is supplied with electrical power and active. While the computer system is in the operational state and whenever a change is made to the select data, the NIC is operable to replace the copy of the select data stored in the secondary storage device with an updated copy of the select data based on the change. | 05-05-2011 |
20110125865 | METHOD FOR OPERATING AN ELECTRONIC CONTROL UNIT DURING A CALIBRATION PHASE - A method for operating an electronic control unit during a calibration phase; the method contemplating the steps of: dividing an area of a FLASH storage memory connected to a microprocessor in two pages between them identical and redundant, each of which is aimed at storing all the calibration parameters used by a control software; and using the two pages alternatively so that a first page contains the values of the calibration parameters and is queried by the microprocessor, while a second page is cleared and made available to store the updated values of the calibration parameters. | 05-26-2011 |
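The two-page scheme above is a classic ping-pong arrangement: one page is active and queried, the other is kept cleared and receives the next full parameter set before the roles swap. A minimal sketch, with assumed names (`CalibrationStore`, `ERASED`) and `None` standing in for an erased FLASH page:

```python
class CalibrationStore:
    """Two identical, redundant pages: page `active` holds the calibration
    parameters queried by the microprocessor; the other page is kept erased
    and receives each full set of updated values, after which the roles swap."""

    ERASED = None

    def __init__(self, params):
        self.pages = [dict(params), self.ERASED]  # page 0 active, page 1 cleared
        self.active = 0

    def read(self, name):
        return self.pages[self.active][name]

    def update(self, params):
        spare = 1 - self.active
        self.pages[spare] = dict(params)            # write updated values to the cleared page
        self.active = spare                         # switch the queried page
        self.pages[1 - self.active] = self.ERASED   # clear the old page for the next update
```

Because the old page stays valid until the switch, a failure mid-update leaves one complete, consistent parameter set available.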
20110138008 | Deterministic Communication Between Graphical Programs Executing on Different Computer Systems Using Variable Nodes - A system and method for enabling deterministic or time-triggered data exchange between a first graphical program and a second graphical program. A first variable is assigned to a first time slot in a network cycle. A first graphical program may be configured to write data to the first variable. A second graphical program may be configured to read data from the first variable. The first graphical program may be executed on a first computer system, where executing the first graphical program comprises writing data to the first variable. Writing data to the first variable may cause the data to be delivered over a network to a second computer system when the first time slot occurs. The second graphical program may be executed on the second computer system, where executing the second graphical program comprises reading from the first variable the data sent from the first computer system. | 06-09-2011 |
20110153771 | DIRECT MEMORY ACCESS WITH MINIMAL HOST INTERRUPTION - Data received over a shared network interface is directly placed by the shared network interface in a designated memory area of a host. In providing this direct memory access, the incoming data packets are split, such that the headers are separated from the data. The headers are placed in a designated area of a memory buffer of the host. Additionally, the data is stored in contiguous locations within the buffer. This receive and store is performed without interruption to the host. Then, at a defined time, the host is interrupted to indicate the receipt and direct storage of the data. | 06-23-2011 |
20110161456 | Apparatus and Method for Supporting Memory Management in an Offload of Network Protocol Processing - A number of improvements in network adapters that offload protocol processing from the host processor are provided. Specifically, mechanisms for handling memory management and optimization within a system utilizing an offload network adapter are provided. The memory management mechanism permits both buffered sending and receiving of data as well as zero-copy sending and receiving of data. In addition, the memory management mechanism permits grouping of DMA buffers that can be shared among specified connections based on any number of attributes. The memory management mechanism further permits partial send and receive buffer operation, delaying of DMA requests so that they may be communicated to the host system in bulk, and expedited transfer of data to the host system. | 06-30-2011 |
20110167127 | MEASUREMENT IN DATA FORWARDING STORAGE - Methods and apparatus, including computer program products, for measurement in data forwarding storage. A method includes, in a network of interconnected computer system nodes, receiving a request from a source system to store data, directing the data to a computer memory, storing information about the data in a store associated with a central server, and continuously forwarding the data and the store from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network. | 07-07-2011 |
20110173287 | PREVENTING MESSAGING QUEUE DEADLOCKS IN A DMA ENVIRONMENT - Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed. | 07-14-2011 |
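The enlarge-on-full variant described above can be modeled in a few lines. This is a toy sketch, not the DMA controller's actual behavior; the class and field names are assumed, and "interrupt handler" here is just a method call:

```python
from collections import deque

class DmaMessageQueue:
    """When the fixed-size messaging queue fills, an 'interrupt handler'
    stops injection, swaps the descriptors into a larger queue, and
    injection resumes instead of deadlocking."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.descriptors = deque()
        self.overflow_events = 0

    def inject(self, desc):
        if len(self.descriptors) == self.capacity:
            self._on_queue_full()          # models the DMA-generated interrupt
        self.descriptors.append(desc)

    def _on_queue_full(self):
        # Interrupt handler: double the capacity and move the existing
        # descriptors into the enlarged queue.
        self.overflow_events += 1
        self.capacity *= 2
        self.descriptors = deque(self.descriptors)  # "swap" into the larger queue
```

The alternative in the abstract, moving descriptors to a separate memory block drained by the messaging manager, differs only in where the overflowed descriptors wait.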
20110173288 | NETWORK STORAGE SYSTEM AND RELATED METHOD FOR NETWORK STORAGE - A network storage system includes a first data buffer, a second data buffer, a pre-allocating module and a control module. The first data buffer is utilized for storing storage data received from a network. The second data buffer is coupled to the first data buffer and includes a plurality of data buffering units. The pre-allocating module is coupled to the second data buffer and utilized for allocating the plurality of data buffering units to the second data buffer in advance. The control module controls the first data buffer to write the stored storage data into the plurality of data buffering units. | 07-14-2011 |
20110173289 | NETWORK SUPPORT FOR SYSTEM INITIATED CHECKPOINTS - A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. | 07-14-2011 |
20110173290 | ROTATING ENCRYPTION IN DATA FORWARDING STORAGE - A method includes receiving a request from a source system to store data, directing the data to a computer memory, the computer memory employing an encryption scheme, and continuously forwarding the data from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, each computer memory employing the encryption scheme. The continuously forwarding includes determining an address of a node available to receive the data based on one or more factors, sending a message to the source system with the address of a specific node for the requester to forward the data, detecting a presence of the data in memory of the specific node, and forwarding the data to another computer memory of a node in the network of interconnected computer system nodes without storing on any physical storage device. | 07-14-2011 |
20110179131 | MEDIA DELIVERY IN DATA FORWARDING STORAGE NETWORK - Methods and apparatus, including computer program products, for media delivery in data forwarding storage network. A method includes, in a network of interconnected computer system nodes, directing unique data items to a computer memory, and continuously forwarding each of the unique data items, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network. | 07-21-2011 |
20110185031 | Device and method for controlling dissemination of contents between peers having wireless communication capacities, depending on vote vectors - A method is intended for controlling dissemination of content in a peer-to-peer mode between peers that have wireless communication capacities and a cache memory for storing contents. Each peer maintains a group of variable values, each associated with a content it can store in its cache memory and representative of the utility that storing this content has for it and for other peers. Each time such a peer accesses a wireless network, or another peer offering access to these contents, it downloads the N contents having the N highest variable values in its group, N being a number that depends on the storage capacity the peer is ready to devote in its cache memory to contents to be downloaded. | 07-28-2011 |
20110185032 | COMMUNICATION APPARATUS, INFORMATION PROCESSING APPARATUS, AND METHOD FOR CONTROLLING COMMUNICATION APPARATUS - A communication apparatus including: a receiving portion that receives alignment specifying information, the alignment specifying information indicating in which of the main memories included in a first information processing apparatus and a second information processing apparatus the requested data is to be aligned; a division location calculating portion that calculates a divisional location of the requested data so that the divisional location becomes an alignment boundary on the main memory included in whichever of the first and second information processing apparatuses is specified by the received alignment specifying information, the alignment boundary being an integral multiple of a given data width; and a transmitting portion that divides the requested data stored in the main memory of the second information processing apparatus based on the calculated divisional location, and transmits the divided data to the first information processing apparatus. | 07-28-2011 |
20110213854 | Device, system, and method of accessing storage - Device, system, and method of accessing storage. For example, a server includes: a Solid-State Drive (SSD) to store data; a memory mapper to map at least a portion of a storage space of the SSD into a memory space of the server; and a network adapter to receive a Small Computer System Interface (SCSI) read command incoming from a client device, to map one or more parameters of the SCSI read command into an area of the memory space of the server from which data is requested to be read by the client device, said area corresponding to a storage area of the SSD, and to issue a Remote Direct Memory Access (RDMA) write command to copy data directly to the client device from said area of the memory space corresponding to the SSD. | 09-01-2011 |
20110246597 | REMOTE DIRECT STORAGE ACCESS - Embodiments of the present disclosure include systems, apparatuses, and methods that relate to remote, direct access of solid-state storage. In some embodiments, a network interface component (NIC) of a server may access a solid-state storage module of the server by a network storage access link that bypasses a central processing unit (CPU) and main memory of the server. Other embodiments may be described and claimed. | 10-06-2011 |
20110246598 | MAPPING RDMA SEMANTICS TO HIGH SPEED STORAGE - Embodiments described herein are directed to extending remote direct memory access (RDMA) semantics to enable implementation in a local storage system and to providing a management interface for initializing a local data store. A computer system extends RDMA semantics to provide local storage access using RDMA, where extending the RDMA semantics includes the following: mapping RDMA verbs of an RDMA verbs interface to a local data store and altering RDMA ordering semantics to allow out-of-order processing and/or out-of-order completions. The computer system also accesses various portions of the local data store using the extended RDMA semantics. | 10-06-2011 |
20110246599 | STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD - An apparatus includes a first storage unit for storing data received from the upper-layer apparatus in the first storage unit, a second storage unit, a data transmitting unit for transmitting the data stored in the first storage unit to the second storage apparatus based on an order that the data is stored in the first storage unit, a transferring unit for transferring and storing transfer data stored in the first storage unit into the second storage unit when an amount of the data stored in the first storage unit is larger than a predetermined amount, the transfer data being at least part of the data stored in the first storage unit; and, a staging unit for transferring the transfer data stored in the second storage unit into the first storage unit if an amount of the data stored in the first storage unit is smaller than a predetermined amount. | 10-06-2011 |
20110258282 | OPTIMIZED UTILIZATION OF DMA BUFFERS FOR INCOMING DATA PACKETS IN A NETWORK PROTOCOL - A method, system and computer program product for facilitating network data packet management. In one embodiment, a controller is configured to receive data packets. Incoming data packets are stored in DMA mapped packet buffers. A time stamp is associated with the packet buffers. When the associated time stamp exceeds a defined threshold, the controller is configured to copy the packet buffers stored in DMA memory to non-DMA memory. Once copied, the DMA memory previously used to store the packet buffers is available to receive new data packets. The controller is configured to continue copying aged packet buffers to non-DMA memory until an unallocated amount of DMA memory is reached. | 10-20-2011 |
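The aging policy in this abstract can be sketched as a small pool model. This is an illustrative toy, not the patented controller; the class name, `capacity`, `age_threshold`, and `target_free` are assumed parameters:

```python
class DmaBufferPool:
    """Each packet buffer carries a time stamp; buffers older than
    `age_threshold` are copied to non-DMA memory until at least
    `target_free` DMA slots are unallocated."""

    def __init__(self, capacity, age_threshold, target_free):
        self.capacity = capacity
        self.age_threshold = age_threshold
        self.target_free = target_free
        self.dma = []        # (timestamp, packet) pairs, oldest first
        self.non_dma = []

    def receive(self, packet, now):
        self.dma.append((now, packet))

    def age_out(self, now):
        """Copy aged buffers out of DMA memory until enough slots are free."""
        while self.dma and self.capacity - len(self.dma) < self.target_free:
            stamp, packet = self.dma[0]
            if now - stamp < self.age_threshold:
                break                    # the oldest buffer has not aged yet
            self.dma.pop(0)
            self.non_dma.append(packet)  # this DMA slot can now receive new packets
```

Because buffers are kept oldest-first, the loop can stop at the first buffer that is still below the threshold.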
20110264758 | USER-LEVEL STACK - A method for transmitting data by means of a data processing system, the system being capable of supporting an operating system and at least one application and having access to a memory and a network interface device capable of supporting a communication link over a network with another network interface device, the method comprising the steps of: forming by means of the application data to be transmitted; requesting by means of the application a non-operating-system functionality of the data processing system to send the data to be transmitted; responsive to that request: writing the data to be transmitted to an area of the memory; and initiating by means of direct communication between the non-operating-system functionality and the network interface device a transmission operation of at least some of the data over the network; and subsequently accessing the memory by means of the operating system and performing at least part of a transmission operation of at least some of the data over the network by means of the network interface device. | 10-27-2011 |
20110270942 | COMBINING MULTIPLE HARDWARE NETWORKS TO ACHIEVE LOW-LATENCY HIGH-BANDWIDTH POINT-TO-POINT COMMUNICATION - Systems, methods and articles of manufacture are disclosed for performing a collective operation on a parallel computing system that includes multiple compute nodes and multiple networks connecting the compute nodes. Each of the networks may have different characteristics. A source node may broadcast a DMA descriptor over a first network to a target node, to initialize the collective operation. The target node may perform the collective operation over a second network and using the broadcast DMA descriptor. | 11-03-2011 |
20110270943 | ZERO COPY DATA TRANSMISSION IN A SOFTWARE BASED RDMA NETWORK STACK - A method for data transmission on a device without intermediate buffering is provided. An application request is received to transmit data from the device to a second device over a network. The data from application memory is formatted for transmitting to the second device. The data are transmitted from the device to the second device without intermediate buffering. A send state is retrieved. The send state is compared to expected send state. If the send state meets the expected send state, a completion of the data transmit request is generated. | 11-03-2011 |
20110270944 | NETWORKING SYSTEM CALL DATA DIVISION FOR ZERO COPY OPERATIONS - A method for sending data over a network from a host computer. The host computer includes an operating system comprising at least a user space and a kernel space. The amount of data provided from the user space to the kernel space within one system call exceeds the size of an IP packet. A loop function in an application in the user space sends multiple packets to the kernel space within a single system call containing IO vectors which contain pointers to the data in the user space. A last data unit being processed may be designated using a flag included in the message header. In the kernel space a second loop function is used to reassemble the vector groups and pass them down the network stack. The data may then be passed to the network hardware using a direct memory access transfer directly from the user space to the network hardware. | 11-03-2011 |
20110289177 | Effecting Hardware Acceleration Of Broadcast Operations In A Parallel Computer - Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node. | 11-24-2011 |
20110295967 | Accelerator System For Remote Data Storage - Data processing and an accelerator system therefor are described. An embodiment relates generally to a data processing system. In such an embodiment, a bus and an accelerator are coupled to one another. The accelerator has an application function block. The application function block is to process data to provide processed data to storage. A network interface is coupled to obtain the processed data from the storage for transmission. | 12-01-2011 |
20120005300 | SELF CLOCKING INTERRUPT GENERATION IN A NETWORK INTERFACE CARD - A network interface card may issue interrupts to a host in which the determination of when to issue an interrupt to the host may be based on the incoming packet rate. In one implementation, an interrupt controller of the network interface card may issue interrupts that inform a host of the arrival of packets. The interrupt controller may issue the interrupts in response to arrival of a predetermined number of packets, where the interrupt controller re-calculates the predetermined number based on an arrival rate of the incoming packets. | 01-05-2012 |
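The rate-adaptive re-calculation described here can be sketched as follows. This is a hedged model, not the card's actual logic; the class name, `target_period_s`, and the batch-size formula are assumptions for illustration:

```python
class InterruptModerator:
    """Fire an interrupt every `batch` packets, then re-derive `batch` from
    the measured arrival rate so interrupts occur at roughly a fixed period."""

    def __init__(self, target_period_s, initial_batch=1):
        self.target_period_s = target_period_s
        self.batch = initial_batch   # the "predetermined number" of packets
        self.count = 0
        self.interrupts = 0

    def on_packet(self, arrival_rate_pps):
        self.count += 1
        if self.count >= self.batch:
            self.interrupts += 1
            self.count = 0
            # Re-calculate the batch size from the current packet rate: at
            # `arrival_rate_pps` packets/s, the next batch spans roughly
            # `target_period_s` seconds.
            self.batch = max(1, int(arrival_rate_pps * self.target_period_s))
```

At a low packet rate the batch shrinks toward one (low latency per packet); at a high rate it grows, capping the interrupt frequency.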
20120016949 | DISTRIBUTED PROCESSING SYSTEM, INTERFACE, STORAGE DEVICE, DISTRIBUTED PROCESSING METHOD, DISTRIBUTED PROCESSING PROGRAM - A distributed processing system which distributes a load of a request from a client without being restricted by a processing status and processing performance of transfer processing means is provided. | 01-19-2012 |
20120023186 | OUTPUTTING CONTENT FROM MULTIPLE DEVICES - Technologies are generally described for outputting content from multiple devices. In some examples, a method includes receiving content from a first content output device at a processor. In some examples, the method further includes recording at least a portion of the content by the processor. In some examples, the method further includes determining an identifier of the content by the processor based on the portion. In some examples, the method further includes determining a source of the content by the processor based on the identifier. In some examples, the method further includes requesting that the content be sent from the source to a second content output device. | 01-26-2012 |
20120059899 | Communications-Network Data Processing Methods, Communications-Network Data Processing Systems, Computer-Readable Storage Media, Communications-Network Data Presentation Methods, and Communications-Network Data Presentation Systems - Communications-network data processing methods include receiving a request to perform an action involving data associated with a configuration of a communications network or a behavior of the communications network and in response to the receiving of the request, performing the action. Communications-network data presentation methods include receiving information indicating a source of data characterizing a communications network and a desired presentation format of the data, accessing the source to obtain the data characterizing the communications network, and presenting the data according to the desired presentation format. | 03-08-2012 |
20120066333 | ABSTRACTING SPECIAL FILE INTERFACES TO CONCURRENTLY SUPPORT MULTIPLE OPERATING SYSTEM LEVELS - Some embodiments of the inventive subject matter are directed to detecting a request to access a symbol via a special file that accesses kernel memory directly. The request can come from an application from a first instance of an operating system (OS) running a first version of the OS. A second instance of the OS, which manages the first OS, receives the request. The second instance of the OS includes a kernel shared between the first and second instances of the OS. The second instance of the OS runs a second version of the OS. Some embodiments are further directed to detecting data associated with the symbol, where the data is in a first data format that is compatible with the second version of the OS but is incompatible with the first version of the OS. Some embodiments are further directed to reformatting the data from the first data format to a second data format compatible with the second version of the OS. | 03-15-2012 |
20120066334 | INFORMATION PROCESSING SYSTEM, STORAGE MEDIUM STORING AN INFORMATION PROCESSING PROGRAM AND INFORMATION PROCESSING METHOD - A game system includes game apparatuses and a server. The one game apparatus has a first memory storing a software program being made up of a program and save data, the other game apparatus has a second memory capable of additionally storing a software program, and the server has a third memory storing a program. The one game apparatus transmits the save data in the first memory to the other game apparatus by utilizing a local communication, and the server transmits the program in the third memory to the other game apparatus by utilizing Wi-Fi communications. The other game apparatus receives the save data and then the program, and additionally stores them in the second memory as a software program. | 03-15-2012 |
20120072523 | NETWORK DEVICES WITH MULTIPLE DIRECT MEMORY ACCESS CHANNELS AND METHODS THEREOF - A method, computer readable medium, and a system for communicating with networked clients and servers through a network device is disclosed. A first network data packet is received at a first port of a network device. The first network data packet is destined for a first executing application of a plurality of executing applications operating in the network device. The plurality of executing applications are associated with corresponding application drivers utilizing independent and unique direct memory access (DMA) channels. A first DMA channel is identified, wherein the first DMA channel is mapped to the first port and associated with a first application driver corresponding to the first executing application. The first network data packet is transmitted to the first executing application over the first identified DMA channel. | 03-22-2012 |
20120084380 | METHOD AND SYSTEM FOR COMMUNICATING BETWEEN MEMORY REGIONS - A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. A RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether a size of the first buffer region exceeds a maximum transfer size of the networked system. Portions of the second buffer region may be associated with the first transfer operation based on the determination of the size of the first buffer region. The system subsequently performs the first transfer operation. | 04-05-2012 |
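The size check in this abstract, splitting a buffer region that exceeds the maximum transfer size into sub-regions, can be sketched as follows. The function name and the `(offset, length)` representation are assumed for the example:

```python
def split_transfer(buffer_size, max_transfer, base_offset=0):
    """If the buffer region fits within the network's maximum transfer size,
    associate it with the transfer as a single region; otherwise associate
    successive (offset, length) sub-regions, each within the limit."""
    if buffer_size <= max_transfer:
        return [(base_offset, buffer_size)]
    regions = []
    offset, remaining = base_offset, buffer_size
    while remaining > 0:
        length = min(remaining, max_transfer)
        regions.append((offset, length))
        offset += length
        remaining -= length
    return regions

# A 10-unit region with a 4-unit maximum becomes three sub-regions.
split_transfer(10, 4)
```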
20120089694 | TCP/IP PROCESSOR AND ENGINE USING RDMA - A TCP/IP processor and data processing engines for use in the TCP/IP processor is disclosed. The TCP/IP processor can transport data payloads of Internet Protocol (IP) data packets using an architecture that provides capabilities to transport and process Internet Protocol (IP) packets from Layer 2 through transport protocol layer and may also provide packet inspection through Layer 7. The engines may perform pass-through packet classification, policy processing and/or security processing enabling packet streaming through the architecture at nearly the full line rate. An application running on an initiator or target can in certain instantiations register a region of memory, which is made available to its peer(s) for access directly without substantial host intervention through RDMA data transfer. | 04-12-2012 |
20120096104 | ELECTRONIC DEVICE WITH CUSTOMIZABLE EMBEDDED SOFTWARE AND METHODS THEREFOR - An electronic device comprising: a central processing unit; memory in data communication with the central processing unit; a network connector in data communication with the central processing unit; a firmware image stored in a compressed format within the memory, wherein the firmware image includes a plurality of software components; and an update agent stored within the memory and configured to provide a list of software components for communication out the network connector, wherein the electronic device is configured to communicate the list of software components out the network connector and receive a modified firmware image in a compressed format that includes at least one additional software component. | 04-19-2012 |
20120096105 | DEVICE, SYSTEM, AND METHOD OF DISTRIBUTING MESSAGES - Device, system, and method of distributing messages. For example, a data publisher capable of communication with a plurality of subscribers via a network fabric, the data publisher comprising: a memory allocator to allocate a memory area of a local memory unit of the data publisher to be accessible for Remote Direct Memory Access (RDMA) read operations by one or more of the subscribers; and a publisher application to create a message log in said memory area, to send a message to one or more of the subscribers using a multicast transport protocol, and to store in said memory area a copy of said message. A subscriber device handles recovery of lost messages by directly reading the lost messages from the message log of the data publisher using RDMA read operation(s). | 04-19-2012 |
20120110107 | DIRECT MEMORY ACCESS (DMA) TRANSFER OF NETWORK INTERFACE STATISTICS - In general, in one aspect, the disclosure describes a method that includes maintaining statistics, at a network interface, metering operation of the network interface. The statistics are transferred by direct memory access from the network interface to a memory accessed by at least one processor. | 05-03-2012 |
20120131124 | RDMA READ DESTINATION BUFFERS MAPPED ONTO A SINGLE REPRESENTATION - A computer-implemented method, system, and article of manufacture for data communication between a requester and a responder in a remote direct memory access (RDMA) network, where each of the requester and the responder is an RDMA-enabled host of the network. The method includes: sending a request for the responder to provide data, where the request includes a mapped steering tag that is obtained by mapping a set of memory buffers of the requester onto a single representation that allows for identifying each of the memory buffers of the set; and receiving the requested data together with the mapped steering tag and assigning the data being received to the memory buffers of the set consistently with the mapping. | 05-24-2012 |
20120131125 | METHODS AND SYSTEMS OF DYNAMICALLY MANAGING CONTENT FOR USE BY A MEDIA PLAYBACK DEVICE - Some embodiments provide systems and/or methods of managing content in providing a playback experience associated with a portable storage medium by detecting access to a first portable storage medium with multimedia content recorded on the first portable storage medium; evaluating content on the first portable storage medium; evaluating local memory of the multimedia playback device; determining, in response to the evaluation of the content on the first portable storage medium and the evaluation of the local memory, whether memory on the local memory needs to be freed up in implementing playback of multimedia content in association with the first portable storage medium; and moving one or more contents stored on the local memory of the multimedia playback device to a virtual storage accessible by the multimedia playback device over a distributed network in response to determining that memory on the local memory needs to be freed up. | 05-24-2012 |
20120136958 | METHOD FOR ANALYZING PROTOCOL DATA UNIT OF INTERNET SMALL COMPUTER SYSTEMS INTERFACE - A method for analyzing a Protocol Data Unit (PDU) of an internet Small Computer Systems Interface (iSCSI) is used for processing a data write request of the iSCSI. The method includes sending the data write request to a target; the target generating a Ready to Transfer (R2T) PDU according to the data write request, and transferring the R2T PDU to an initiator; the initiator generating multiple groups of Data Out PDUs, and writing a scatter/gather block in a target transfer tag of each Data Out PDU; the target finding the corresponding scatter/gather block according to the target transfer tag, and obtaining a host buffer from the scatter/gather block; the target executing a Direct Memory Access command, so as to directly write a payload content received by the target in the host buffer; and after the target completes the write request, the target sending out an RSP PDU to the initiator. | 05-31-2012 |
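The tag-based dispatch described above — the target transfer tag of each Data Out PDU referencing a scatter/gather block that locates host buffers — reduces to a table lookup on the target side. A minimal sketch, with hypothetical tag values and buffer addresses, and the Direct Memory Access write modeled as a dictionary store:

```python
# Hypothetical layout: the target transfer tag resolves to a
# scatter/gather block listing (host address, length) ranges.
sg_blocks = {
    0x10: [(0x1000, 512), (0x2000, 512)],
}

host_memory = {}  # stands in for host buffers addressed by the DMA


def data_out(target_transfer_tag, data):
    """Target side: find the scatter/gather block from the tag, then
    'DMA' each slice of the payload into the host buffer it maps to,
    with no intermediate copy held by the target."""
    offset = 0
    for addr, length in sg_blocks[target_transfer_tag]:
        host_memory[addr] = data[offset:offset + length]
        offset += length


payload = bytes(range(256)) * 4   # 1024-byte Data Out payload
data_out(0x10, payload)
print(len(host_memory[0x1000]), len(host_memory[0x2000]))
```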
20120150986 | Method and System for Extending Memory Capacity of a Mobile Device Using Proximate Devices and Unicasting - An improved download capability for mobile devices, without requiring increasing of the local memory of such devices, by providing a set of multimedia devices with the capability to create a cooperative download grid where multiple instrumented devices can be aggregated together according to predefined profiles. This capability is useful in at least two different scenarios. The first is when a SIP enabled device must download a large file having a capacity that is larger than the available memory of the SIP device. The second is when a SIP enabled device must download a file but cannot be connected for a long enough time to accomplish the download. If the SIP device is in proximity to other compatible devices such as Voice over Internet Protocol (VoIP) or Session Initiation Protocol (SIP), these devices are operable to be dynamically aggregated to provide a download grid with multiprotocol support that allows optimized downloading. | 06-14-2012 |
20120185554 | DATA TRANSFER DEVICE AND DATA TRANSFER METHOD - An object of the present invention is to efficiently perform a data transfer by using a plurality of data transfer devices. A storage apparatus | 07-19-2012 |
20120191800 | METHODS AND SYSTEMS FOR PROVIDING DIRECT DMA - A method and system for efficient direct DMA for processing connection state information or otherwise expediting data packets. One example is the use of a network interface controller to buffer TCP type data packets that may contain connection state information. The connection state information is extracted from a received packet. The connection state information is stored in a special DMA descriptor that is stored in a ring buffer area of a buffer memory that is accessible by a host processor when an interrupt signal is received. The packet is then discarded. The host processor accesses the ring buffer memory only to retrieve the stored connection state information from the DMA descriptor without having to access a packet buffer area in the memory. | 07-26-2012 |
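A minimal sketch of the descriptor path described above: the network interface extracts connection state into a special descriptor in a ring buffer and discards the packet, so the host polls descriptors only and never touches the packet buffer area. Class and field names are assumptions, not the patent's.

```python
from collections import deque

RING_SIZE = 8  # assumed ring capacity


class ConnStateRing:
    """Ring buffer of DMA descriptors carrying extracted connection
    state; oldest descriptors drop off when the ring is full."""

    def __init__(self):
        self.ring = deque(maxlen=RING_SIZE)

    def on_packet(self, packet):
        state = packet.get("conn_state")   # NIC extracts state...
        if state is not None:
            self.ring.append({"type": "CONN_STATE", "state": state})
        # ...and the packet itself is discarded either way

    def host_poll(self):
        """Host reads descriptors only, never packet payloads."""
        return [d["state"] for d in self.ring
                if d["type"] == "CONN_STATE"]


ring = ConnStateRing()
ring.on_packet({"conn_state": {"seq": 100, "ack": 42}})
ring.on_packet({"payload": b"no state here"})  # nothing retained
print(ring.host_poll())
```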
20120209939 | MEMORY SYSTEM CAPABLE OF ADDING TIME INFORMATION TO DATA OBTAINED VIA NETWORK - According to one embodiment, a memory system includes a non-volatile semiconductor memory device, a control unit, a memory, an extension register, and a timer. The control unit controls the non-volatile semiconductor memory device. The memory as a work area is connected to the control unit. The extension register is provided in the memory and time information is set therein. The timer updates the time information. When the control unit records a file obtained via a network in the non-volatile semiconductor memory device, the control unit adds the time information updated by the timer to the file. | 08-16-2012 |
20120209940 | METHOD FOR SWITCHING TRAFFIC BETWEEN VIRTUAL MACHINES - Methods for switching traffic include a physical machine running source and destination virtual machines (VMs). The source VM issues a data unit addressed to the destination VM. The physical machine has a physical network interface in communication with the VMs. The physical network interface transmits a sub-packet, which includes a partial portion of the data unit, over a network while a majority portion of the data unit remains at the physical machine. A network switch on the network receives the sub-packet transmitted by the physical network interface. The network switch performs one or more OSI Layer 2 through Layer 7 switching functions on the sub-packet and returns that sub-packet to the physical network interface. The physical network interface identifies the data unit stored in the memory in response to the sub-packet returned from the network switch and forwards the identified data unit to the destination VM. | 08-16-2012 |
20120209941 | COMMUNICATION APPARATUS, AND APPARATUS AND METHOD FOR CONTROLLING COLLECTION OF STATISTICAL DATA - In a communication apparatus, a collector collects a plurality of statistical data values describing communication activities, based on given user data frames, and produces statistics transmission data including the collected statistical data values and management data tags added thereto. A controller transfers, by using a direct memory access technique, the statistics transmission data produced by the collector. A memory stores the statistics transmission data transferred by the controller. | 08-16-2012 |
20120221668 | CLOUD STORAGE ACCESS DEVICE AND METHOD FOR USING THE SAME - A cloud storage access device includes a data fetching unit, a user management unit, and a data link unit. The data fetching unit collects private data of each user of the cloud storage access device. The user management unit creates a home directory corresponding to each user in the cloud storage access device. The data link unit connects each of the home directories to both the cloud and a local storage of a network terminal, such that the cloud storage access device communicates with both the cloud and the local storage. Each user of the cloud storage access device stores data to the cloud or the local storage and accesses data stored in the cloud or the local storage through the home directory corresponding to the user. | 08-30-2012 |
20120221669 | COMMUNICATION METHOD FOR PARALLEL COMPUTING, INFORMATION PROCESSING APPARATUS AND COMPUTER READABLE RECORDING MEDIUM - A communication method includes reporting information that indicates disposition of communication data in a communication buffer from a first node to second nodes by a multi-destination delivery using a barrier synchronization or a reduction to all nodes. The communication data is transferred between the first node and the second nodes by at least one of collective communication methods as a node-to-node communication method used in parallel computing. The communication method transfers the communication data by the second nodes using the information that indicates the disposition of the communication data in the communication buffer. | 08-30-2012 |
20120233282 | Method and System for Transferring a Virtual Machine - A virtual machine management system is used to instantiate, wake, move, sleep, and destroy individual operating environments in a cloud or cluster. In various embodiments, there is a method and system for transferring an operating environment from a first host to a second host. The first host contains an active environment, with a disk and memory. The disk is snapshotted while the operating environment on the first host is still live, and the snapshot is transferred to the second host. After the initial snapshot is transferred, a differential update using rsync or a similar mechanism can be used to transfer just the changes from the snapshot from the first to the second host. In a further embodiment, the contents of the memory are also transferred. This memory can be transferred as a snapshot after pausing the active environment, or by synchronizing the memory spaces between the two hosts. | 09-13-2012 |
20120233283 | DETERMINING SERVER WRITE ACTIVITY LEVELS TO USE TO ADJUST WRITE CACHE SIZE - Provided are a computer program product, system, and method for determining server write activity levels to use to adjust write cache size. Information on server write activity to the cache is gathered. The gathered information on write activity is processed to determine a server write activity level comprising one of multiple write activity levels indicating a level of write activity. The determined server write activity level is transmitted to a storage server having a write cache, wherein the storage server uses the determined server write activity level to determine whether to adjust a size of the storage server write cache. | 09-13-2012 |
20120246256 | Administering An Epoch Initiated For Remote Memory Access - Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed. | 09-27-2012 |
20120254339 | METHODS AND APPARATUS TO TRANSMIT DEVICE DESCRIPTION FILES TO A HOST - Example methods and apparatus to transmit device description files to a host are disclosed. A disclosed example method includes communicatively coupling a field device to the host to provision the field device within a process control system, receiving an indication that the host does not include a version of a device description file that corresponds to a version of the field device, accessing the device description file from a memory of the field device, and transmitting the device description file from the field device to the host. | 10-04-2012 |
20120259940 | RDMA (REMOTE DIRECT MEMORY ACCESS) DATA TRANSFER IN A VIRTUAL ENVIRONMENT - In an embodiment, a method is provided. The method includes determining that a message has been placed in a send buffer; and transferring the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message. | 10-11-2012 |
20120259941 | SERVER AND METHOD FOR THE SERVER TO ACCESS A VOLUME - Embodiments of the present technical solution relate to the technical field of storage, and disclose a server and a method for the server to access a volume. The method comprises: determining, from a first list, the block that needs to be accessed according to an access offset of the volume that needs to be accessed; determining, from a second list, the storage controller corresponding to the determined block; and sending a data read request or a data write request to that storage controller for processing. Embodiments of the present invention can reduce the delay before a data read request or a data write request of the server reaches the block that needs to be accessed. | 10-11-2012 |
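The two-list lookup described above can be sketched in a few lines. Block size, list contents, and all names are hypothetical; the point is that the server resolves offset → block → controller locally, so each request goes straight to the right storage controller without a metadata round trip.

```python
# Hypothetical layout: fixed-size blocks; the first list maps volume
# offsets to block IDs, the second maps block IDs to controllers.
BLOCK_SIZE = 1 << 20  # assume 1 MiB blocks

block_list = ["blk0", "blk1", "blk2", "blk3"]            # first list
controller_list = {"blk0": "ctrl-A", "blk1": "ctrl-B",
                   "blk2": "ctrl-A", "blk3": "ctrl-C"}   # second list


def route_request(offset):
    """Resolve the storage controller for a volume offset using only
    the two local lists."""
    block = block_list[offset // BLOCK_SIZE]
    return block, controller_list[block]


print(route_request(3 * BLOCK_SIZE + 17))
```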
20120265837 | REMOTE DIRECT MEMORY ACCESS OVER DATAGRAMS - A communication stack for providing remote direct memory access (RDMA) over a datagram network is disclosed. The communication stack has a user level interface configured to accept datagram related input and communicate with an RDMA enabled network interface card (NIC) via an NIC driver. The communication stack also has an RDMA protocol layer configured to supply one or more data transfer primitives for the datagram related input of the user level. The communication stack further has a direct data placement (DDP) layer configured to transfer the datagram related input from a user storage to a transport layer based on the one or more data transfer primitives by way of a lower layer protocol (LLP) over the datagram network. | 10-18-2012 |
20120265838 | PROGRAMMABLE LOGIC CONTROLLER - A PLC is enabled to determine whether device data has been completely transferred to an FTP server, with improved flexibility in setting a completed-transfer notice code. To this end, the PLC includes a logging section for logging device data and outputting a log file describing the results of logging of the device data to a memory card; and a file transfer section for transferring the log file delivered to the memory card to the FTP server. After having completely transferred all the data that constitutes the log file (Yes in step S | 10-18-2012 |
20120265839 | RESPONSE DEVICE, INTEGRATED CIRCUIT OF SAME, RESPONSE METHOD, AND RESPONSE SYSTEM - The present invention performs efficient data transfer between devices. In particular, the present invention can reduce processing loads and power consumption of a response device | 10-18-2012 |
20120265840 | Providing a Memory Region or Memory Window Access Notification on a System Area Network - A system and method for providing a memory region/memory window (MR/MW) access notification on a system area network are provided. Whenever a previously allocated MR/MW is accessed, such as via a remote direct memory access (RDMA) read/write operation, a notification of the access is generated and written to a queue data structure associated with the MR/MW. In one illustrative embodiment, this queue data structure may be a MR/MW event queue (EQ) data structure that is created and used for all consumer processes and all MR/MWs. In other illustrative embodiments, the EQ is associated with a protection domain. In yet another illustrative embodiment, an event record may be posted to an asynchronous event handler in response to the accessing of the MR/MW. In another illustrative embodiment, a previously posted queue element may be used to generate a completion queue element in response to the accessing of the MR/MW. | 10-18-2012 |
20120278422 | LIVE OBJECT PATTERN FOR USE WITH A DISTRIBUTED CACHE - A live object pattern is described that enables a distributed cache to store live objects as data entries thereon. A live object is a data entry stored in the distributed cache which represents a particular function or responsibility. When a live object arrives to the cache on a particular cluster server, a set of interfaces are called back which inform the live object that it has arrived at that server and that it should initiate to perform its functions. A live object is thus different from “dead” data entries because a live object performs a set of functions, can be started/stopped and can interact with other live objects in the distributed cache. Because live objects are backed up across the cluster just like normal data entries, the functional components of the system are more highly available and are easily transferred to another server's cache in case of failures. | 11-01-2012 |
20120284355 | METHOD AND SYSTEM FOR COMMUNICATING BETWEEN MEMORY REGIONS - A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. A RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether a size of the first buffer region exceeds a maximum transfer size of the networked system. Portions of the second buffer region may be associated with the first transfer operation based on the determination of the size of the first buffer region. The system subsequently performs the first transfer operation. | 11-08-2012 |
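The size check described above — splitting one logical transfer into operations that each respect the networked system's maximum transfer size — is essentially a chunking loop. A sketch with an assumed 4 KiB limit:

```python
MAX_TRANSFER = 4096  # hypothetical maximum transfer size, in bytes


def split_transfer(buffer_len):
    """Break one logical RDMA transfer over a buffer of buffer_len
    bytes into (offset, size) operations that each fit within the
    maximum transfer size."""
    ops = []
    offset = 0
    while offset < buffer_len:
        size = min(MAX_TRANSFER, buffer_len - offset)
        ops.append((offset, size))
        offset += size
    return ops


print(split_transfer(10000))
```

Only a buffer larger than the limit produces multiple operations; a buffer at or under the limit maps to a single transfer.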
20120290675 | SYSTEM AND METHOD FOR A MOBILE DEVICE TO USE PHYSICAL STORAGE OF ANOTHER DEVICE FOR CACHING - Systems and methods for a mobile device to use physical storage of another device for caching are disclosed. In one embodiment, a mobile device is able to receive over a cellular or IP network a response or content to be cached and wirelessly access the physical storage of the other device via a wireless network to cache the response or content for the mobile device. | 11-15-2012 |
20120303735 | DOMAIN NAME SERVICE RESOLVER - A domain name service (DNS) resolver returns Internet protocol (IP) addresses. A connection with an Internet application or device receives domain name resolution requests that originate outside of the Internet. A direct DNS resolver identifies IP addresses without referring to the Internet or using other DNS resolvers. An address store includes a predetermined list of domain names and corresponding IP addresses specified from a point remote to the DNS resolver. The DNS resolver processes the domain name resolutions for the predetermined list of domain names differently than domain name resolutions for other domain names not on the predetermined list of domain names. At least part of the predetermined list is pushed to a destination upon receiving a resolution request for a domain name in the predetermined list of domain names, the request being of a type other than an authoritative resolution request to be performed by the direct DNS resolver. | 11-29-2012 |
20120311062 | METHODS, CIRCUITS, DEVICES, SYSTEMS AND ASSOCIATED COMPUTER EXECUTABLE CODE FOR DISTRIBUTED CONTENT CACHING AND DELIVERY - Disclosed are methods, circuits, devices, systems and associated computer executable code for distributed content caching and delivery. An access or gateway network may include two or more gateway nodes integral or otherwise functionally associated with a caching unit. Each of the caching units may include: (a) a caching repository, (b) caching/delivery logic and (c) an inter-cache communication module. Caching logic of a given caching unit may include content characterization functionality for generating one or more characterization parameters associated with and/or derived from content entering a gateway node with which the given caching unit is integral or otherwise functionally associated. Content characterization parameters generated by a characterization module of a given caching unit may be compared with content characterization parameters of content already cached in: one or more cache repositories of the given caching unit, and one or more cache repositories of other caching units. | 12-06-2012 |
20120311063 | METHOD AND APPARATUS FOR USING A SINGLE MULTI-FUNCTION ADAPTER WITH DIFFERENT OPERATING SYSTEMS - A flexible arrangement allows a single arrangement of Ethernet channel adapter (ECA) hardware functions to appear as needed to conform to various operating system deployment models. A PCI interface presents a logical model of virtual devices appropriate to the relevant operating system. Mapping parameters and values are associated with the packet streams to allow the packet streams to be properly processed according to the presented logical model and needed operations. Mapping occurs at both the host side and at the network side to allow the multiple operations of the ECA to be performed while still allowing proper delivery at each interface. | 12-06-2012 |
20120324034 | PROVIDING ACCESS TO SHARED STATE DATA - Methods, systems, and computer-readable media for manipulating in-memory data entities are provided. Embodiments of the present invention use a Representational State Transfer (“REST”) web service to manipulate the in-memory data entities. In one embodiment, a “snap shot” is taken of the in-memory data entities at a point in time to create representations of the entities. A hierarchy of the representations is built. The hierarchy is used to make the entities addressable via a URI. Embodiments of the invention may then map the entity representations in the hierarchy to the entities. An embodiment of the invention uses handlers to process a REST style request addressed to an entity representation. The handler reads the command and determines whether the command is authorized for performance on the entity and performs that command, if appropriate. | 12-20-2012 |
20120331083 | RECEIVE QUEUE MODELS TO REDUCE I/O CACHE FOOTPRINT - A method according to one embodiment includes the operations of configuring a primary receive queue to designate a first plurality of buffers; configuring a secondary receive queue to designate a second plurality of buffers, wherein said primary receive queue is sized to accommodate a first network traffic data rate and said secondary receive queue is sized to provide additional accommodation for burst network traffic data rates; selecting a buffer from said primary receive queue, if said primary receive queue has buffers available, otherwise selecting a buffer from said secondary receive queue; transferring data from a network controller to said selected buffer; indicating that said transferring to said selected buffer is complete; reading said data from said selected buffer; and returning said selected buffer, after said reading is complete, to said primary receive queue if said primary receive queue has space available for the selected buffer, otherwise returning said selected buffer to said secondary receive queue. | 12-27-2012 |
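A small sketch of the two-queue model described above, with buffers represented as integers and both queue sizes assumed: the primary queue serves the steady-state traffic rate, the secondary absorbs bursts, and returned buffers refill the primary queue first so the working set of buffers stays small (and, on real hardware, cache-resident).

```python
from collections import deque


class ReceiveQueues:
    """Primary receive queue sized for steady-state traffic; the
    secondary queue provides extra buffers for bursts."""

    def __init__(self, primary_size, secondary_size):
        self.primary = deque(range(primary_size))
        self.secondary = deque(range(primary_size,
                                     primary_size + secondary_size))
        self.primary_cap = primary_size

    def get_buffer(self):
        # Prefer the primary queue; fall back to secondary on bursts.
        if self.primary:
            return self.primary.popleft()
        return self.secondary.popleft()

    def return_buffer(self, buf):
        # Return to the primary queue if it has space, else secondary.
        if len(self.primary) < self.primary_cap:
            self.primary.append(buf)
        else:
            self.secondary.append(buf)


q = ReceiveQueues(primary_size=2, secondary_size=2)
used = [q.get_buffer() for _ in range(3)]  # third draw hits secondary
print(used)
q.return_buffer(used[2])  # primary has room, so it goes back there
print(list(q.primary))
```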
20130007180 | TRANSPORTING OPERATIONS OF ARBITRARY SIZE OVER REMOTE DIRECT MEMORY ACCESS - The embodiments described herein generally relate to a protocol for implementing data operations, e.g., a version of SMB, atop RDMA transports. In embodiments, systems and methods use the protocol definition, which specifies new messages for negotiating an RDMA connection and for transferring SMB2 data using the negotiated communication. A new protocol message may include new header information to determine message size, number of messages, and other information for sending the SMB2 data over RDMA. The header information is used to accommodate differences in message size requirements between RDMA and SMB2. The SMB Direct protocol allows SMB2 data to be fragmented into multiple individual RDMA messages that a receiver may then logically concatenate into a single SMB2 request or SMB2 response. The SMB Direct protocol also may allow SMB2 to transfer application data via efficient RDMA direct placement and to signal the application data's availability when the transfer is complete. | 01-03-2013 |
20130007181 | METHOD AND SYSTEM FOR OFFLOADING COMPUTATION FLEXIBLY TO A COMMUNICATION ADAPTER - A method for offloading computation flexibly to a communication adapter includes receiving a message that includes a procedure image identifier associated with a procedure image of a host application, determining a procedure image and a communication adapter processor using the procedure image identifier, and forwarding the first message to the communication adapter processor configured to execute the procedure image. The method further includes executing, on the communication adapter processor independent of a host processor, the procedure image in communication adapter memory by acquiring a host memory latch for a memory block in host memory, reading the memory block in the host memory after acquiring the host memory latch, manipulating, by executing the procedure image, the memory block in the communication adapter memory to obtain a modified memory block, committing the modified memory block to the host memory, and releasing the host memory latch. | 01-03-2013 |
20130036185 | METHOD AND APPARATUS FOR MANAGING TRANSPORT OPERATIONS TO A CLUSTER WITHIN A PROCESSOR - A method and corresponding apparatus of managing transport operations between a first memory cluster and one or more other memory clusters, include receiving, in the first cluster, information related to one or more transport operations with related data buffered in an interface device, the interface device coupling the first cluster to the one or more other clusters, selecting at least one transport operation, from the one or more transport operations, based at least in part on the received information, and executing the selected at least one transport operation. | 02-07-2013 |
20130041969 | SYSTEM AND METHOD FOR PROVIDING A MESSAGING APPLICATION PROGRAM INTERFACE - A system and method for providing a message bus component or version thereof (referred to herein as an implementation), and a messaging application program interface, for use in an enterprise data center, middleware machine system, or similar environment that includes a plurality of processor nodes together with a high-performance communication fabric (or communication mechanism) such as InfiniBand. In accordance with an embodiment, the messaging application program interface enables features such as asynchronous messaging, low latency, and high data throughput, and supports the use of in-memory data grid, application server, and other middleware components. | 02-14-2013 |
20130046844 | ADMINISTERING CONNECTION IDENTIFIERS FOR COLLECTIVE OPERATIONS IN A PARALLEL COMPUTER - Administering connection identifiers for collective operations in a parallel computer, including prior to calling a collective operation, determining, by a first compute node of a communicator to receive an instruction to execute the collective operation, whether a value stored in a global connection identifier utilization buffer exceeds a predetermined threshold; if the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold: calling the collective operation with a next available ConnID including retrieving, from an element of a ConnID buffer, the next available ConnID and locking the element of the ConnID buffer from access by other compute nodes; and if the value stored in the global ConnID utilization buffer exceeds the predetermined threshold: repeatedly determining whether the value stored in the global ConnID utilization buffer exceeds the predetermined threshold until the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold. | 02-21-2013 |
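The threshold gate described above can be sketched as follows. The pool contents and threshold value are made up, and the patent's repeat-until-below-threshold loop is collapsed into returning None, so a real caller would simply retry:

```python
THRESHOLD = 3            # assumed utilization threshold
connid_pool = [101, 102, 103, 104]   # ConnID buffer (values made up)
utilization = 0          # global ConnID utilization counter


def acquire_connid():
    """Before calling a collective, check utilization against the
    threshold; only below it may the caller take (and lock) the next
    available ConnID. Returns None when the caller must retry."""
    global utilization
    if utilization >= THRESHOLD:
        return None
    connid = connid_pool.pop(0)  # retrieve and lock the element
    utilization += 1
    return connid


got = [acquire_connid() for _ in range(4)]
print(got)
```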
20130054726 | METHOD AND SYSTEM FOR CONDITIONAL REMOTE DIRECT MEMORY ACCESS WRITE - A method for conditional execution of a remote direct memory access (RDMA) write includes a host channel adapter receiving at least one message that includes an atomic operation and the RDMA write. The host channel adapter obtains a descriptor corresponding to the RDMA write, and determines, from the descriptor, that the RDMA write is a conditional RDMA write conditioned on a successful execution of the atomic operation. Based on determining that the RDMA write is the conditional RDMA write, the conditional RDMA write is queued to be conditionally executed based on a success indicator of the atomic operation. After queuing the RDMA write, the atomic operation is executed successfully. In response to the successful execution, the host channel adapter executes the conditional RDMA write to write to the memory location on the host. | 02-28-2013 |
20130054727 | STORAGE CONTROL METHOD AND INFORMATION PROCESSING APPARATUS - A control unit shifts a boundary between a range of hash values allocated to a first node and a range of hash values allocated to a second node from a first hash value to a second hash value to thereby expand the range of hash values allocated to the first node. The control unit moves data which is part of data stored in the second node and in which hash values calculated from associated keys belong to a range between the first hash value and the second hash value, from the second node to the first node. | 02-28-2013 |
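A sketch of the boundary shift described above, with nodes modeled as dictionaries and a toy hash function; the half-open range convention and all names are assumptions:

```python
def shift_boundary(node1, node2, old_boundary, new_boundary, hash_fn):
    """Expand node1's hash range up to new_boundary by moving every
    key in node2 whose hash falls in [old_boundary, new_boundary)
    over to node1. Returns the moved keys, sorted."""
    moving = [k for k in node2
              if old_boundary <= hash_fn(k) < new_boundary]
    for k in moving:
        node1[k] = node2.pop(k)
    return sorted(moving)


# Toy hash: the key's integer value mod 100.
h = lambda k: k % 100

node1 = {5: "a", 12: "b"}            # owns hashes [0, 20)
node2 = {25: "c", 33: "d", 47: "e"}  # owns hashes [20, 100)

moved = shift_boundary(node1, node2, 20, 40, h)  # boundary 20 -> 40
print(moved)
print(sorted(node1), sorted(node2))
```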
20130060880 | Hybrid Content-Distribution System and Method - The present invention discloses a hybrid content-distribution system. It uses two types of memory to distribute contents: re-writable memory (RWM) and three-dimensional mask-programmed read-only memory (3D-MPROM). During a publication period, new contents are transferred from a remote server to the RWM. At the end of the publication period, a user receives a 3D-MPROM, which stores a collection of the transferred contents. To make room for the contents to be released during the next publication period, the contents common to the 3D-MPROM and the RWM are deleted from the RWM afterwards. | 03-07-2013 |
20130067018 | METHODS AND COMPUTER PROGRAM PRODUCTS FOR MONITORING THE CONTENTS OF NETWORK TRAFFIC IN A NETWORK DEVICE - Provided are methods and computer program products monitoring the contents of network traffic in a network device. Methods may include collecting, using a kernel space driver interface, network traffic data sent by and/or received at the network device, parsing the collected network traffic data to extract transaction data corresponding to at least one logical transaction defined by a network protocol and storing an indicator of a quantity of the collected network traffic data that was parsed, and generating an event incorporating the extracted transaction data. | 03-14-2013 |
20130080560 | System and Method for Sharing Digital Data on a Presenter Device to a Plurality of Participant Devices - There is provided a system and method for sharing a plurality of data contents from a presenter device to a plurality of participant devices. There is provided a system comprising a processor configured to execute a data sharing application, wherein the data sharing application is configured to receive a selection of the plurality of data contents, connect to the plurality of participant devices using a hotspot service executing on the presenter device, establish a sharing session with the plurality of participant devices, and present the plurality of data contents onto the plurality of participant devices. Accordingly, the presenter device maintains full control over the plurality of data contents being shared and reduces the time for sharing and the bandwidth consumed for presenting the plurality of data contents. | 03-28-2013 |
20130080561 | USING TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP) TO SETUP HIGH SPEED OUT OF BAND DATA COMMUNICATION CONNECTIONS - A transport layer connection is established between a first system and a second system. The establishment of the transport layer connection includes identifying a remote direct memory access (RDMA) connection between the first system and the second system. After establishing the transport layer connection, the first and second systems exchange data using the RDMA connection identified in establishing the transport layer connection. | 03-28-2013 |
20130080562 | USING TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP) TO SETUP HIGH SPEED OUT OF BAND DATA COMMUNICATION CONNECTIONS - A method establishes a transport layer connection between a first system and a second system. The establishment of the transport layer connection includes identifying a remote direct memory access (RDMA) connection between the first system and the second system. After establishing the transport layer connection, the first and second systems exchange data using the RDMA connection identified in establishing the transport layer connection. | 03-28-2013 |
20130080563 | EFFECTING HARDWARE ACCELERATION OF BROADCAST OPERATIONS IN A PARALLEL COMPUTER - Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node. | 03-28-2013 |
20130080564 | MESSAGING IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Messaging in a parallel computer using remote direct memory access (‘RDMA’), including: receiving a send work request; responsive to the send work request: translating a local virtual address on the first node from which data is to be transferred to a physical address on the first node from which data is to be transferred from; creating a local RDMA object that includes a counter set to the size of a messaging acknowledgment field; sending, from a messaging unit in the first node to a messaging unit in a second node, a message that includes a RDMA read operation request, the physical address of the local RDMA object, and the physical address on the first node from which data is to be transferred from; and receiving, by the first node responsive to the second node's execution of the RDMA read operation request, acknowledgment data in the local RDMA object. | 03-28-2013 |
20130086196 | SYSTEM AND METHOD FOR SUPPORTING DIFFERENT MESSAGE QUEUES IN A TRANSACTIONAL MIDDLEWARE MACHINE ENVIRONMENT - A system and method can support different message queues in a transactional middleware machine environment. The transactional middleware machine environment includes an advertised table that comprises a first queue table and a second queue table, with the first queue table storing address information for a first message queue and the second queue table storing address information for a second message queue. The advertised table is further adapted to be used by a first transactional client to locate a transactional service provided by a transactional server. The first transactional client operates to look up the first queue table for a key that indicates the address information of the transactional service that is stored in the second queue table. | 04-04-2013 |
20130086197 | MANAGING CACHE AT A COMPUTER - A method and system for managing caching at a computer. A computer receives a file from a storage device on a network in response to a request by a first user. The computer may then determine if other users of the computer are likely to request the file, based upon a type of the file and a type of the network. If other users are likely to request the file, the computer may then cache the file at the computer. In one embodiment, the computer may determine if other users of the computer are likely to request the file based upon access permissions to the file at a source of the file. In another embodiment, the computer may determine if other users of the computer are likely to request the file based upon if the file has been previously cached at the computer. | 04-04-2013 |
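The caching decision described in the abstract above (cache a file when other users of the computer are likely to request it, judged from file type, network type, access permissions, and caching history) can be sketched as a simple predicate. This is an illustrative model only; the type and network categories below are invented examples, not drawn from the patent.

```python
# Hypothetical categories for illustration -- the patent does not enumerate
# which file types or network types trigger caching.
SHAREABLE_TYPES = {"software_update", "shared_document"}
SLOW_NETWORKS = {"wan", "cellular"}

def should_cache(file_type, network_type,
                 previously_cached=False, world_readable=False):
    # A file is "likely to be requested by other users" if its type is
    # broadly shareable, its source permissions allow wide access, or it
    # has been cached here before.
    likely_requested = (file_type in SHAREABLE_TYPES
                        or world_readable
                        or previously_cached)
    # Caching pays off mainly when re-fetching over the network is costly.
    return likely_requested and network_type in SLOW_NETWORKS
```

A real implementation would weigh these signals rather than treat them as a boolean conjunction, but the sketch captures the two-factor test (file type and network type) the abstract describes.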
20130091235 | DYNAMIC CONTENT INSTALLER FOR MOBILE DEVICES - Apparatus and methods for obtaining a content item in a mobile environment include receiving a content item of a first type and content management information that corresponds to the content item. The content management information specifies a destination storage location for content items of the first type, and the destination storage location is different from a default storage location for the content items of the first type. Further, these aspects include storing the content item on the communication device at the destination storage location based on the content management information, and executing an application on a computing platform of the communication device. The application interacts with the content item at the destination storage location based on the content management information. Additional apparatus and methods relating to distributing content are also disclosed. | 04-11-2013 |
20130091236 | REMOTE DIRECT MEMORY ACCESS ('RDMA') IN A PARALLEL COMPUTER - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes a DMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node. | 04-11-2013 |
20130103777 | NETWORK INTERFACE CONTROLLER WITH CIRCULAR RECEIVE BUFFER - A method for communication includes allocating in a memory of a host device a contiguous, cyclical set of buffers for use by a transport service instance on a network interface controller (NIC). First and second indices point respectively to a first buffer in the set to which the NIC is to write and a second buffer in the set from which a client process running on the host device is to read. Upon receiving at the NIC a message directed to the transport service instance and containing data to be pushed to the memory, the data are written to the first buffer that is pointed to by the first index, and the first index is advanced cyclically through the set. The second index is advanced cyclically through the set when the data in the second buffer have been read by the client process. | 04-25-2013 |
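The two-index scheme in the abstract above -- a NIC write index and a client read index, each advancing cyclically through a contiguous buffer set -- can be sketched in software. This is a minimal illustrative model, not the patented hardware implementation; the class and method names are invented.

```python
class CircularReceiveBuffers:
    """Cyclical buffer set shared by a NIC writer and a client reader."""

    def __init__(self, num_buffers):
        self.buffers = [None] * num_buffers  # contiguous, cyclical set
        self.write_idx = 0  # first index: next buffer the NIC writes to
        self.read_idx = 0   # second index: next buffer the client reads from

    def nic_write(self, data):
        # The NIC pushes message data into the buffer pointed to by the
        # first index, then advances that index cyclically through the set.
        if self.buffers[self.write_idx] is not None:
            raise BufferError("receive buffer overrun: client has not read")
        self.buffers[self.write_idx] = data
        self.write_idx = (self.write_idx + 1) % len(self.buffers)

    def client_read(self):
        # The second index advances only after the client has read the
        # data, which frees the slot for reuse by the NIC.
        data = self.buffers[self.read_idx]
        if data is None:
            return None  # nothing pending
        self.buffers[self.read_idx] = None
        self.read_idx = (self.read_idx + 1) % len(self.buffers)
        return data
```

The design point the abstract makes is that the buffers are contiguous and reused in order, so the NIC never needs per-message buffer addresses from the client, only the cyclically advancing index.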
20130110959 | REMOTE DIRECT MEMORY ACCESS ADAPTER STATE MIGRATION IN A VIRTUAL ENVIRONMENT | 05-02-2013 |
20130110960 | METHOD AND SYSTEM FOR ACCESSING STORAGE DEVICE | 05-02-2013 |
20130117403 | Managing Internode Data Communications For An Uninitialized Process In A Parallel Computer - A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes, MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory. | 05-09-2013 |
20130117404 | HARDWARE TASK MANAGER - A hardware task manager for managing operations in an adaptive computing system. The task manager indicates when input and output buffer resources are sufficient to allow a task to execute. The task can require an arbitrary number of input values from one or more other (or the same) tasks. Likewise, a number of output buffers must also be available before the task can start to execute and store results in the output buffers. The hardware task manager maintains a counter in association with each input and output buffer. For input buffers, a negative value for the counter means that there is no data in the buffer and, hence, the respective input buffer is not ready or available. Thus, the associated task cannot run. Predetermined numbers of bytes, or “units,” are stored into the input buffer and an associated counter is incremented. When the counter value transitions from a negative value to zero, the high-order bit of the counter is cleared, thereby indicating the input buffer has sufficient data and is available to be processed by a task. | 05-09-2013 |
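The input-buffer counter scheme above -- initialize negative, increment per stored unit, and treat the negative-to-zero transition (the sign bit clearing) as the readiness signal -- can be modeled in a few lines. This is a software sketch of the described hardware behavior; the names are invented.

```python
class InputBufferCounter:
    """Counter gating task readiness, per the hardware task manager scheme."""

    def __init__(self, units_required):
        # Start negative: the counter climbs toward zero as units arrive.
        # A negative value means the buffer lacks data, so the task cannot run.
        self.count = -units_required

    def store_unit(self):
        # Each predetermined number of bytes (a "unit") stored into the
        # input buffer increments the associated counter.
        self.count += 1

    def buffer_ready(self):
        # In hardware, readiness is signaled by the high-order (sign) bit
        # clearing on the negative-to-zero transition; in software the
        # equivalent check is simply a non-negative count.
        return self.count >= 0
```

Using the sign bit as the ready flag means the hardware needs no separate comparator: a single bit of the counter doubles as the task-eligibility signal.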
20130124665 | ADMINISTERING AN EPOCH INITIATED FOR REMOTE MEMORY ACCESS - Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed. | 05-16-2013 |
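The epoch-closing protocol above (initiate transfers, then open a closing stage that rejects new transfers, and close only once all outstanding transfers complete) can be sketched as a small state machine. This is an illustrative model under invented names, not the patented messaging-module implementation.

```python
class Epoch:
    """Epoch for remote memory access with an explicit closing stage."""

    def __init__(self):
        self.closing = False
        self.closed = False
        self.outstanding = set()  # ids of in-flight data transfers

    def initiate_transfer(self, transfer_id):
        # Any new data transfer after the closing stage begins is rejected.
        if self.closing:
            raise RuntimeError("epoch is closing; new transfers rejected")
        self.outstanding.add(transfer_id)

    def complete_transfer(self, transfer_id):
        self.outstanding.discard(transfer_id)
        self._maybe_close()

    def begin_closing(self):
        self.closing = True
        self._maybe_close()

    def _maybe_close(self):
        # The epoch closes only when the closing stage has begun AND every
        # previously initiated transfer has completed.
        if self.closing and not self.outstanding:
            self.closed = True
```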
20130124666 | MANAGING INTERNODE DATA COMMUNICATIONS FOR AN UNINITIALIZED PROCESS IN A PARALLEL COMPUTER - A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes, MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory. | 05-16-2013 |
20130138758 | Efficient data transfer between servers and remote peripherals - Methods and apparatus are provided for transferring data between servers and a remote entity having multiple peripherals. Multiple servers are connected to a remote entity over a Remote Direct Memory Access capable network. The remote entity includes peripherals such as network interface cards (NICs) and host bus adapters (HBAs). Server descriptor rings and descriptors are provided to allow efficient and effective communication between the servers and the remote entity. | 05-30-2013 |
20130138759 | NETWORK SUPPORT FOR SYSTEM INITIATED CHECKPOINTS - A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. | 05-30-2013 |
20130144966 | COORDINATING WRITE SEQUENCES IN A DATA STORAGE SYSTEM - According to one aspect of the present disclosure, a method and technique for coordinating write sequences in a data storage system is disclosed. The method includes: responsive to a primary device receiving a request to write to primary storage, receiving from the primary device a request for a sequence number; generating a current sequence number for the write; generating a first identifier indicating an identity of secondary devices writing to secondary storage based on the current sequence number; generating a second identifier indicating an identity of secondary devices writing to secondary storage based on the current sequence number and a previous sequence number; transmitting the current sequence number and the second identifier to the primary device; and transmitting the current sequence number and the first identifier to the secondary devices writing to secondary storage based on the previous sequence number. | 06-06-2013 |
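The sequence-number coordination above -- issue a current number per write, plus one identifier for secondaries writing under the current number and another covering the current and previous numbers -- can be sketched as follows. This is a loose illustrative model; the class, its bookkeeping, and the return shape are all invented.

```python
import itertools

class WriteSequencer:
    """Issues write sequence numbers and secondary-device identifiers."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._writers = {}  # sequence number -> set of secondary device ids

    def request_sequence(self, secondaries):
        # Generate a current sequence number for the write.
        seq = next(self._counter)
        self._writers[seq] = set(secondaries)
        # First identifier: secondaries writing under the current number.
        current_writers = self._writers[seq]
        # Second identifier: secondaries writing under the current OR the
        # previous sequence number (sent back to the primary device).
        prev_writers = self._writers.get(seq - 1, set())
        return seq, current_writers, current_writers | prev_writers
```

The second, wider identifier lets the primary wait for stragglers still writing under the previous number before it treats the new sequence as fully ordered.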
20130151644 | COPYING DATA ONTO AN EXPANDABLE MEMORY - This document describes a method for synchronizing files on an expandable memory card coupled to a first computing device with an application running on a second computing device, where downloading of files is performed wirelessly without user involvement. | 06-13-2013 |
20130159448 | Optimized Data Communications In A Parallel Computer - A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet—from a source direction—that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension. | 06-20-2013 |
20130159449 | Method and Apparatus for Low Latency Data Distribution - Various techniques are disclosed for distributing data, particularly real-time data such as financial market data, to data consumers at low latency. Exemplary embodiments include embodiments that employ adaptive data distribution techniques and embodiments that employ a multi-class distribution engine. | 06-20-2013 |
20130159450 | OPTIMIZED DATA COMMUNICATIONS IN A PARALLEL COMPUTER - A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet—from a source direction—that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension. | 06-20-2013 |
20130166669 | SYSTEM AND METHOD FOR A MOBILE DEVICE TO USE PHYSICAL STORAGE OF ANOTHER DEVICE FOR CACHING - Systems and methods for a mobile device to use physical storage of another device for caching are disclosed. In one embodiment, a mobile device is able to receive over a cellular or IP network a response or content to be cached and wirelessly access the physical storage of the other device via a wireless network to cache the response or content for the mobile device. | 06-27-2013 |
20130179527 | Application Engine Module, Modem Module, Wireless Device and Method - A wireless device has a modem module and an application engine module. A communication and memory sharing interface connects the modem module to the application engine module. The application engine module has an application layer component for providing application layer processing for the wireless device and a modem component for providing, in combination with the modem module, modem processing for the wireless device. The wireless device has a memory and a memory interface for connecting the application engine module directly to the memory. | 07-11-2013 |
20130185375 | CONFIGURING COMPUTE NODES IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node. | 07-18-2013 |
20130198310 | CONTROL SYSTEM AND LOG DELIVERY METHOD - Provided is a control system that is provided with a plurality of control devices each of which includes an arithmetic processing device and a storage unit for storing logs of the arithmetic processing device. The control system includes a first generation unit, a second generation unit, and a delivery unit. The first generation unit generates a first log file such that the plurality of logs of the arithmetic processing devices stored in the storage unit of each control device are stored within an upper limit of log capacity determined based on a total number of the arithmetic processing devices in the control system and in order based on priorities. The second generation unit generates a second log file including a plurality of the first log files of the arithmetic processing devices. The delivery unit delivers the second log file to an external device. | 08-01-2013 |
20130198311 | Techniques for Use of Vendor Defined Messages to Execute a Command to Access a Storage Device - Examples are disclosed for use of vendor defined messages to execute a command to access a storage device maintained at a server. In some examples, a network input/output device coupled to the server may receive the command from a client remote to the server for the client to access the storage device. For these examples, elements or components of the network input/output device may be capable of forwarding the command either directly to a Non-Volatile Memory Express (NVMe) controller that controls the storage device or to a manageability module coupled between the network input/out device and the NVMe controller. Vendor specific information may be forwarded with the command and used by either the NVMe controller or the manageability module to facilitate execution of the command. Other examples are described and claimed. | 08-01-2013 |
20130198312 | Techniques for Remote Client Access to a Storage Medium Coupled with a Server - Examples are disclosed for client access to a storage medium coupled with a server. A network input/output device for the server may receive a remote direct memory access (RDMA) command including a steering tag (S-Tag) from a client remote to the server. For these examples, the network input/output device may forward the RDMA command to a Non-Volatile Memory Express (NVMe) controller and access provided to a storage medium based on an allocation scheme that assigned the S-Tag to the storage medium. In some other examples, an NVMe controller may generate a memory mapping of one or more storage devices controlled by the NVMe controller to addresses for a base address register (BAR) on a Peripheral Component Interconnect Express (PCIe) bus. PCIe memory access commands received by the NVMe controller may be translated based on the memory mapping to provide access to the storage device. Other examples are described and claimed. | 08-01-2013 |
20130212206 | METHOD OF DISCOVERING IP ADDRESSES OF SERVERS - A method of discovering IP addresses of servers includes: (i1) beginning discovery processes of management modules and initialization processes of servers; (i2) the management modules sending network packages to one of the servers; (i3) the server responding with its IP address to the management modules, if the server receives the network packages of the management modules; (i4) ending the initialization processes of the server; (i5) each such management module storing the IP address in a database, if any of the management modules receive the IP addresses of the server; (i6) the management modules sending network packages to a next one of the servers with a next MAC address, if a next one of the servers with a next MAC address exists; (i7) repeating (i2) through (i5) if and as necessary in respect of the next one of the servers; and (i8) ending the discovery processes of the management modules. | 08-15-2013 |
20130219005 | RETRIEVING CONTENT FROM LOCAL CACHE - A network device transmits, to a cache located proximate to the network device, instructions to store content in the cache. The cache stores the content based on the instructions. The network device further receives a request for the content from a mobile communication device; determines, based on the request, that the content is stored in the local cache; and retrieves the content from the local cache. The network device also creates packets based on the retrieved content, and transmits the packets to the mobile communication device. | 08-22-2013 |
20130246552 | METHOD AND APPARATUS FOR MANAGING APPLICATION STATE IN A NETWORK INTERFACE CONTROLLER IN A HIGH PERFORMANCE COMPUTING SYSTEM - Methods related to communication between and within nodes in a high performance computing system are presented. Processing time for message exchange between a processing unit and a network controller interface in a node is reduced. Resources required to manage application state in the network interface controller are minimized. In the network interface controller, multiple contexts are multiplexed into one physical Direct Memory Access engine. Virtual to physical address translation in the network interface controller is accelerated by using a plurality of independent caches, with each level of the page table hierarchy cached in an independent cache. A memory management scheme for data structures distributed between the processing unit and the network controller interface is provided. The state required to implement end-to-end reliability is reduced by limiting the transmit sequence number space to the currently in-flight messages. | 09-19-2013 |
20130254320 | DETERMINING PRIORITIES FOR CACHED OBJECTS TO ORDER THE TRANSFER OF MODIFICATIONS OF CACHED OBJECTS BASED ON MEASURED NETWORK BANDWIDTH - Provided are a computer program product, system, and method for determining priorities for cached objects to order the transfer of modifications of cached objects based on measured network bandwidth. Objects are copied from a primary site to a secondary site to cache at the secondary site. The primary site includes a primary server and primary storage and the secondary site includes a secondary server and a secondary storage. Priorities are received from the secondary server for the objects at the secondary site based on determinations made by the secondary server with respect to the objects cached at the secondary storage. A determination is made of modifications to the objects at the primary storage that are cached at the secondary storage. The received priorities for the objects from the secondary server are used to control a transfer of the determined modifications to the objects to the secondary server. | 09-26-2013 |
20130254321 | SYSTEM AND METHOD FOR SUPPORTING LIVE MIGRATION OF VIRTUAL MACHINES IN A VIRTUALIZATION ENVIRONMENT - A system and method can support virtual machine live migration in a network. A virtual switch can be associated with a plurality of virtual functions (VFs), and wherein each said virtual function (VF) is associated with a separate virtual interface (VI) space. At least one virtual machine that is attached with a said virtual function (VF) can be associated with a virtual interface (VI), e.g. a queue pair (QP) in an Infiniband (IB) architecture. Furthermore, said at least one virtual machine operates to perform a live migration from a first host to a second host with said virtual function (VF) attached. | 09-26-2013 |
20130262613 | EFFICIENT DISTRIBUTION OF SUBNET ADMINISTRATION DATA OVER AN RDMA NETWORK - One embodiment provides a method for receiving subnet administration (SA) data using a remote direct memory access (RDMA) transfer. The method includes formatting, by a network node element, an SA data query with an RDMA-capable flag; configuring, by the network node element, a reliably-connected queue pair (RCQP) to receive an RDMA transfer from a subnet manager in communication with the network node element on an RDMA-capable network; and allocating, by the network node element, an RDMA write target buffer to receive the SA data using an RDMA transfer from the subnet manager in response to the SA data query. | 10-03-2013 |
20130262614 | WRITING MESSAGE TO CONTROLLER MEMORY SPACE - An embodiment may include circuitry that may write a message from a system memory in a host to a memory space in an input/output (I/O) controller in the host. A host operating system may reside, at least in part, in the system memory. The message may include both data and at least one descriptor associated with the data. The data may be included in the at least one descriptor. The circuitry also may signal the I/O controller that the writing has occurred. Many alternatives, variations, and modifications are possible. | 10-03-2013 |
20130297716 | UNIVERSAL WEBSITE PREFERENCE MANAGEMENT - Systems, apparatus, methods, and computer program products for universal user website preference management. The invention provides for a user to define website preferences that will be applied universally across multiple websites. The user preferences may be inputted and stored at a universal user preference website or the like. Such user preferences may include a preferred language, a preferred location, preferred billing information, preferred authentication credentials and the like. Through the use of tag parameters, the user preferences may be retrieved and applied at the onset of a user website session, such that the preferences become active when the user initiates website communication. | 11-07-2013 |
20130304841 | SERVER NODE INTERCONNECT DEVICES AND METHODS - Described are systems and methods for interconnecting devices. A switch fabric is in communication with a plurality of electronic devices. A rendezvous memory is in communication with the switch fabric. Data is transferred to the rendezvous memory from a first electronic device of the plurality of electronic devices in response to a determination that the data is ready for output from a memory at the first electronic device and in response to a location allocated in the rendezvous memory for the data. | 11-14-2013 |
20130325998 | System and Method for Providing Input/Output Functionality by an I/O Complex Switch - An input/output (I/O) device includes a management controller interface, a plurality of network switching interfaces, a storage interface, a component controller interface, and a plurality of multifunction modules. The multifunction modules further include a processing node interface, a first endpoint coupled to the management controller interface, a second endpoint coupled to one of the plurality of network switching interfaces, a third endpoint coupled to a remote direct memory access (RDMA) block, a fourth endpoint coupled to the storage interface, and a fifth endpoint coupled to the component controller interface. | 12-05-2013 |
20130332557 | REDUNDANCY AND LOAD BALANCING IN REMOTE DIRECT MEMORY ACCESS COMMUNICATIONS - A method for managing communications to add a first Remote Direct Memory Access (RDMA) link between a TCP server and a TCP client, where the first RDMA link references first remote memory buffer (RMB) and a second RMB, and further based on a first remote direct memory access network interface card (RNIC) associated with the TCP server and a second RNIC associated with the TCP client. The system determines whether a third RNIC is enabled. The system adds a second RDMA link, responsive to a determination that the third RNIC is enabled. The system detects a failure in the second RDMA link. The system reconfigures the first RDMA link to carry at least one TCP packet of a session formerly assigned to the second RDMA link, responsive to detecting the failure. The system communicates at least one packet of the at least one session on the first RDMA link. | 12-12-2013 |
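The failover step above -- on detecting a failure in the second RDMA link, reconfigure the first link to carry the sessions formerly assigned to the failed link -- can be sketched as simple session bookkeeping. This is an illustrative model only; it ignores RNICs, RMBs, and the actual RDMA plumbing, and every name is invented.

```python
class LinkManager:
    """Tracks which TCP sessions are assigned to which RDMA links."""

    def __init__(self):
        self.links = {"link1": set()}  # link name -> assigned session ids

    def add_link(self, name, third_rnic_enabled):
        # A second RDMA link is added only when a third RNIC is enabled.
        if third_rnic_enabled:
            self.links[name] = set()

    def assign(self, link, session):
        self.links[link].add(session)

    def handle_failure(self, failed_link, fallback="link1"):
        # Reconfigure the fallback link to carry every session formerly
        # assigned to the failed link, then retire the failed link.
        sessions = self.links.pop(failed_link, set())
        self.links[fallback] |= sessions
        return sessions
```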
20130339466 | DEVICES AND METHODS FOR INTERCONNECTING SERVER NODES - Described are aggregation devices and methods for interconnecting server nodes. The aggregation device can include an input region, an output region, and a memory switch. The input region includes a plurality of input ports. The memory switch has a shared through silicon via (TSV) memory coupled to the input ports for temporarily storing data received at the input ports from a plurality of source devices. The output region includes a plurality of output ports coupled to the TSV memory. The output ports provide the data to a plurality of destination devices. A memory allocation system coordinates a transfer of the data from the source devices to the TSV memory. The output ports receive and process the data from the TSV memory independently of a communication from the input ports. | 12-19-2013 |
20130339467 | Wireless Mobile Data Server With Removable Solid-State Memory - A battery operated, wireless mobile data server with an application processing environment and a relational database management system and removable solid-state memory for the distribution and recording of content accessible from clients consisting of multiple mobile devices such as smartphones, tablet computers, notebook computers and other mobile computing devices. The mobile data server can perform processing of application code and perform queries on relational databases and wirelessly provide access to large data sets independently to multiple mobile devices or in coordination to multiple mobile devices, irrespective of the mobile devices' operating systems within a defined geographic range without the use of a cellular infrastructure. | 12-19-2013 |
20130346531 | SYSTEMS AND METHODS FOR INPUT/OUTPUT VIRTUALIZATION - Described is an aggregation device comprising a plurality of virtual network interface cards (vNICs) and an input/output (I/O) processing complex. The vNICs are in communication with a plurality of processing devices. Each processing device has at least one virtual machine (VM). The I/O processing complex is between the vNICs and at least one physical NIC. The I/O processing complex includes at least one proxy NIC and a virtual switch. The virtual switch exchanges data with a processing device of the plurality of processing devices via a communication path established by a vNIC of the plurality of vNICs between the at least one VM and at least one proxy NIC. | 12-26-2013 |
20140006535 | Efficient Storage of ACL Frequent Ranges in a Ternary Memory | 01-02-2014 |
20140006536 | TECHNIQUES TO ACCELERATE LOSSLESS COMPRESSION | 01-02-2014 |
20140019570 | DATA BUFFER EXCHANGE - A method for transferring data between nodes includes receiving, in an input buffer of a first node, a direct memory access (DMA) thread that includes a first data element, the input buffer being associated with a second node; receiving a first message from the second node indicative of an address of the input buffer containing the first data element; and saving the address of the input buffer containing the first data element to a first list responsive to receiving the first message. | 01-16-2014 |
20140019571 | PROCESSING DATA PACKETS FROM A RECEIVE QUEUE IN A REMOTE DIRECT MEMORY ACCESS DEVICE - Processing data packets from a receive queue is provided. It is determined whether packets are saved in a pre-fetched queue. In response to determining that packets are not saved in the pre-fetched queue, a number of packets within the receive queue is determined. In response to determining the number of packets within the receive queue, it is determined whether the number of packets within the receive queue is greater than a number of packets called for by an application. In response to determining that the number of packets within the receive queue is greater than the number of packets called for by the application, an excess number of packets that is above the number of packets called for by the application is saved in the pre-fetched queue. An indication is sent to the application of the excess number of packets. The number of packets called for by the application is then transferred to the application. | 01-16-2014 |
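The queueing policy this abstract describes can be sketched as follows. This is an illustrative Python model, not the patent's implementation; the function and parameter names are assumptions.

```python
from collections import deque

def deliver_packets(receive_queue, prefetched, requested):
    """Serve 'requested' packets, stashing any excess in the pre-fetched queue.

    Packets already saved in the pre-fetched queue are delivered first;
    packets in the receive queue beyond the application's request are
    saved to the pre-fetched queue, and the excess count is reported.
    """
    out = []
    # Prefer packets already saved in the pre-fetched queue.
    while prefetched and len(out) < requested:
        out.append(prefetched.popleft())
    # Drain the receive queue; excess beyond the request is pre-fetched.
    while receive_queue:
        pkt = receive_queue.popleft()
        if len(out) < requested:
            out.append(pkt)
        else:
            prefetched.append(pkt)
    excess = len(prefetched)  # indicated to the application
    return out, excess
```

A subsequent call with an empty receive queue would then be served entirely from the pre-fetched packets.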
20140019572 | Remote Direct Memory Access Socket Aggregation - Byte utilization is improved in Remote Direct Memory Access (RDMA) communications by detecting a plurality of concurrent messages on a plurality of application sockets which are destined for the same application, client or computer, intercepting those messages and consolidating their payloads into larger payloads, and then transmitting those consolidated messages to the destination, thereby increasing the payload-to-overhead byte utilization of the RDMA transmissions. At the receiving end, multiplexing information is used to unpack the consolidated messages, and to put the original payloads into a plurality of messages which are then fed into the receiving sockets to the destination application, client or computer, thereby making the consolidation process transparent between the initiator and the target. | 01-16-2014 |
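The payload consolidation and demultiplexing described in this abstract can be sketched with a simple length-prefixed framing. The framing format is an assumption for illustration; the patent does not specify the multiplexing encoding.

```python
import struct

def consolidate(payloads):
    """Pack several small socket payloads into one larger message.

    Each payload is prefixed with a 4-byte big-endian length so the
    receiving end can recover the original payload boundaries.
    """
    return b"".join(struct.pack(">I", len(p)) + p for p in payloads)

def unpack(message):
    """Recover the original payloads from a consolidated message."""
    payloads, off = [], 0
    while off < len(message):
        (n,) = struct.unpack_from(">I", message, off)
        payloads.append(message[off + 4 : off + 4 + n])
        off += 4 + n
    return payloads
```

Because the receiver restores the original payloads before feeding them to the destination sockets, the consolidation is transparent to both endpoints, which is the point of the scheme.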
20140019573 | SOURCE REFERENCE REPLICATION IN A DATA STORAGE SUBSYSTEM - A method of data replication from a first data storage device to a second data storage device. According to the method, prior to replicating data from the first data storage device to the second data storage device, metadata relating to data to be replicated may be transmitted to the second data storage device, the metadata including information about the data to be replicated and a path identifier identifying a path through which the second data storage device can remotely access the data at the first data storage device until the data to be replicated is copied to the second data storage device. | 01-16-2014 |
20140019574 | Remote Direct Memory Access Socket Aggregation - Byte utilization is improved in Remote Direct Memory Access (RDMA) communications by detecting a plurality of concurrent messages on a plurality of application sockets which are destined for the same application, client or computer, intercepting those messages and consolidating their payloads into larger payloads, and then transmitting those consolidated messages to the destination, thereby increasing the payload-to-overhead byte utilization of the RDMA transmissions. At the receiving end, multiplexing information is used to unpack the consolidated messages, and to put the original payloads into a plurality of messages which are then fed into the receiving sockets to the destination application, client or computer, thereby making the consolidation process transparent between the initiator and the target. | 01-16-2014 |
20140032695 | NETWORK DEVICES WITH MULTIPLE DIRECT MEMORY ACCESS CHANNELS AND METHODS THEREOF - A method, non-transitory computer readable medium, and a system for communicating with networked clients and servers through a network device includes receiving a first network data packet destined for a first executing traffic management application of a plurality of executing traffic management applications operating in the network device. A first DMA channel is identified to allocate the received first network data packet. Further, the first network data packet is transmitted to the first executing traffic management application over the identified first DMA channel. | 01-30-2014 |
20140032696 | MAPPING RDMA SEMANTICS TO HIGH SPEED STORAGE - Embodiments described herein are directed to extending remote direct memory access (RDMA) semantics to enable implementation in a local storage system and to providing a management interface for initializing a local data store. A computer system extends RDMA semantics to provide local storage access using RDMA, where extending the RDMA semantics includes the following: mapping RDMA verbs of an RDMA verbs interface to a local data store and altering RDMA ordering semantics to allow out-of-order processing and/or out-of-order completions. The computer system also accesses various portions of the local data store using the extended RDMA semantics. | 01-30-2014 |
20140032697 | STORAGE SYSTEM WITH MULTICAST DMA AND UNIFIED ADDRESS SPACE - A system and method for clients, a control module, and storage modules to participate in a unified address space in order to read and write data efficiently using direct memory access. The method for writing data includes determining a first location in a first memory to write a first copy of the data and a second location in a second memory to write a second copy of the data, where the first memory is located in a first storage module including a first persistent storage and the second memory is located in a second storage module including a second persistent storage. The method further includes programming a direct memory access engine to read the data from the client memory and issue a first write request to a multicast address, where the first location, the second location, and a third location are associated with the multicast address. | 01-30-2014 |
20140040409 | Storage Array Reservation Forwarding - A method is provided for a destination storage system to join a storage area network with a source storage system. The method includes discovering a volume on the source storage system when the source storage system exports the volume to the destination storage system and exporting the volume to the host computer systems. When a command to reserve the volume for a host computer system is received, the method includes determining locally if the volume is already reserved. When the volume is not already reserved, the method includes reserving locally the volume for the host computer system and transmitting to the source storage system another command to reserve the volume for the destination storage system. | 02-06-2014 |
20140040410 | Storage Array Reservation Forwarding - A method is provided for a destination storage system to handle SCSI-3 reservations. The method includes discovering a volume on a source storage system when the source storage system exports the volume to the destination storage system, exporting the volume to host computer systems, locally registering keys for first paths to the destination storage system, and registering with the source storage system the keys for second paths to the source storage system. When one of the host computer systems requests to reserve the volume, the method includes locally reserving the volume for paths to the destination storage system with registered keys and performing reservation forwarding to request the source storage system to reserve the volume for paths to the source storage system with registered keys. | 02-06-2014 |
20140040411 | System and Method for Simple Scale-Out Storage Clusters - Systems and associated methods for flexible scalability of storage systems. In one aspect, a storage controller may include an interface to a fabric adapted to permit each storage controller coupled to the fabric to directly access memory mapped components of all other storage controllers coupled to the fabric. The CPU and other master device circuits within a storage controller may directly address memory and I/O devices directly coupled thereto within the same storage controller and may use RDMA features to directly address memory and I/O devices of other storage controllers through the fabric interface. | 02-06-2014 |
20140047058 | DIRECT MEMORY ACCESS OF REMOTE DATA - An apparatus and associated methodology providing a data storage system operably transferring data between a storage space and a remote device via a network. The data storage system includes a first storage controller having top-level control of a first data storage device and a second storage controller having top-level control of a second data storage device that is different than the first data storage device, the first and second data storage devices forming portions of the storage space. Data pathway logic resides in the first storage controller that performs a direct memory access (DMA) transfer to the second data storage device at a DMA data transfer rate in response to the first storage controller receiving, from the external device via the network, an access request for the second data storage device. | 02-13-2014 |
20140052808 | SPECULATION BASED APPROACH FOR RELIABLE MESSAGE COMMUNICATIONS - Described are a system and method for lossless message delivery between two processing devices. Each device includes a remote direct memory access (RDMA) messaging interface. The RDMA messaging interface at the first device generates one or more messages that are processed by the RDMA messaging interface of the second device. The RDMA messaging interface of the first device outputs a notification to the second device that a message of the one or more messages is available at the first device. A determination is made that the second device has resources to accommodate the message. The second device performs an operation in response to determining that it has the resources to accommodate the message. | 02-20-2014 |
20140059155 | Intelligent Network Interface System and Method for Protocol Processing - A system for protocol processing in a computer network has an intelligent network interface card (INIC) or communication processing device (CPD) associated with a host computer. The INIC or CPD provides a fast-path that avoids host protocol processing for most large multipacket messages, greatly accelerating data communication. The INIC or CPD also assists the host for those message packets that are chosen for processing by host software layers. A communication control block (CCB) for a message is defined that allows DMA controllers of the INIC to move data, free of headers, directly to or from a destination or source in the host. The CCB can be passed back to the host for message processing by the host. The INIC or CPD contains hardware circuits configured for protocol processing that can perform that specific task faster than the host CPU. One embodiment includes a processor providing transmit, receive and management processing, with full duplex communication for four fast Ethernet nodes. | 02-27-2014 |
20140082118 | Ultra Low Latency Network Buffer Storage - Buffer designs and write/read configurations for a buffer in a network device are provided. According to one aspect, a first portion of the packet is written into a first cell of a plurality of cells of a buffer in the network device. Each of the cells has a size that is less than a minimum size of packets received by the network device. The first portion of the packet can be read from the first cell while concurrently writing a second portion of the packet to a second cell. | 03-20-2014 |
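The cell layout behind this low-latency buffer can be sketched in a few lines. The sketch only models how a packet maps onto cells smaller than the minimum packet size; the cell size and function names are assumptions, and the hardware concurrency (reading cell i while cell i+1 is still being written) is not modeled.

```python
CELL_SIZE = 16  # assumed cell size, smaller than the minimum packet size

def write_cells(packet, buffer):
    """Split a packet into cell-sized slices and append them to the buffer.

    Because each cell is smaller than any packet, the first cell is
    complete (and readable) before the whole packet has arrived.
    """
    cells = [packet[i : i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    buffer.extend(cells)
    return len(cells)

def read_cells(buffer, n_cells):
    """Reassemble a packet from its first n_cells cells."""
    return b"".join(buffer[:n_cells])
```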
20140082119 | PROCESSING DATA PACKETS FROM A RECEIVE QUEUE IN A REMOTE DIRECT MEMORY ACCESS DEVICE - Processing data packets from a receive queue is provided. It is determined whether packets are saved in a pre-fetched queue. In response to determining that packets are not saved in the pre-fetched queue, a number of packets within the receive queue is determined. In response to determining the number of packets within the receive queue, it is determined whether the number of packets within the receive queue is greater than a number of packets called for by an application. In response to determining that the number of packets within the receive queue is greater than the number of packets called for by the application, an excess number of packets that is above the number of packets called for by the application is saved in the pre-fetched queue. An indication is sent to the application of the excess number of packets. The number of packets called for by the application is then transferred to the application. | 03-20-2014 |
20140089444 | METHODS, APPARATUS AND SYSTEMS FOR FACILITATING RDMA OPERATIONS WITH REDUCED DOORBELL RINGS - Methods, apparatus and systems for reducing usage of Doorbell Rings in connection with RDMA operations. A portion of system memory is employed as a Memory-Mapped Input/Output (MMIO) address space configured to be accessed via a hardware networking device. A Send Queue (SQ) is stored in MMIO and is used to facilitate processing of Work Requests (WRs) that are written to SQ entries by software and read from the SQ via the hardware networking device. The software and logic in the hardware networking device employ pointers identifying locations in the SQ corresponding to a next write WR entry slot and last read WR entry slot that are implemented to enable WRs to be written to and read from the SQ during ongoing operations under which the SQ is not emptied such that doorbell rings to notify the hardware networking device that new WRs have been written to the SQ are not required. | 03-27-2014 |
20140089445 | STORAGE APPARATUS, SETTING METHOD, AND COMPUTER PRODUCT - A first storage apparatus to which data stored in a second storage apparatus connected to the first storage apparatus is migrated via a network, includes a memory unit that stores first identification information of a first information processing apparatus connected to the second storage apparatus via the network; and a processor configured to issue to the second storage apparatus, a command that is issued from the first information processing apparatus to the second storage apparatus, using the first identification information stored in the memory unit, and acquire from the second storage apparatus, a response to the command; and associate based on the response and set in the first storage apparatus, the first identification information and parameter information of the first information processing apparatus corresponding to the command. | 03-27-2014 |
20140089446 | ADVANCED CLOUD COMPUTING DEVICE FOR THE CONTROL OF MEDIA, TELEVISION AND COMMUNICATIONS SERVICES - The present invention relates generally to a system and apparatus for video entertainment. More specifically, embodiments of the present invention provide a system and apparatus for an advanced cloud computing device for the control of media, television and communications services, as well as methods related thereto. | 03-27-2014 |
20140089447 | DATA TRANSFER DEVICE - When a checkpoint occurs, the control section selects some of a plurality of small areas which are transfer targets in the memory as small areas to be transferred to the outside of its own computer through the save area (indirect transfer small areas), and selects the others as small areas to be transferred to the outside of its own computer not through the save area (direct transfer small areas). Within a period in which updating from its own computer to the memory is suspended, the control section copies stored data in the small areas selected as the indirect transfer small areas from the memory to the save area with use of the copy section, and in parallel to the copying, transfers stored data in the small areas selected as the direct transfer small areas from the memory to the outside of its own computer with use of the communication section. | 03-27-2014 |
20140095643 | MICROCONTROLLER WITH INTEGRATED MONITORING CAPABILITIES FOR NETWORK APPLICATIONS - A microcontroller has integrated monitoring capabilities for network applications. The disclosed techniques can take advantage, for example, of an unused, duplicate network controller that is present in some microcontrollers by providing selection circuitry and configuration capabilities that allow the unused, duplicate network controller to be used for the purpose of monitoring frames that are transferred between network media and another network controller residing on the microcontroller. The monitored frames can then be used, for example, for debugging or other purposes, such as statistical analyses or security enhancements. | 04-03-2014 |
20140115087 | METHOD AND APPARATUS OF STORAGE VOLUME MIGRATION IN COOPERATION WITH TAKEOVER OF STORAGE AREA NETWORK CONFIGURATION - Systems and methods directed to the automation of Storage Area Network (SAN) configuration when storage volume migration or server virtual machine migration is conducted. Systems and methods described herein may involve the takeover of a SAN network attribute for the migration, and may involve translation of zone object formats to facilitate the migration and ensure compatibility when the takeover is conducted. | 04-24-2014 |
20140115088 | Communication Protocol Placement into Switch Memory - Direct memory transfer of data from the memory of a server to a memory of a switch. A server identifies a block of data in the memory of the server and a corresponding memory address space in the server. The server identifies a block of memory in the switch. The block of memory is at least the same size as the block of data. The switch comprises a network protocol. The server transfers the block of data into the block of memory. Based on the network protocol, the switch maps a network relationship. The mapping indicates a target server to which the transferred block of data is to be transmitted. | 04-24-2014 |
20140122633 | Method and Apparatus Pertaining to Sharing Information Via a Private Data Area - Information is shared between processing entities that each have a corresponding private data area by placing data corresponding to information for a first one of the private data areas for a first one of the processing entities directly into a second one of the private data areas for a second one of the processing entities without placing the data in an intervening shared data area and without directly invoking a system administrator-like entity. In addition, these private data areas can be pre-populated with a plurality of directories that each have a one-to-one correspondence to a particular predetermined information recipient and then providing a link to a given one of the recipients as corresponds to a given one of the directories when information is placed in that directory to provide the corresponding predetermined information recipient with at least read access to the information. | 05-01-2014 |
20140122634 | NUMA AWARE NETWORK INTERFACE - Methods, apparatus, and computer platforms and architectures employing node aware network interfaces are disclosed. The methods and apparatus may be implemented on computer platforms such as those employing a Non-uniform Memory Access (NUMA) architecture including a plurality of nodes, each node comprising a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a NUMA aware Network Interface Controller (NIC). Under one method, a packet is received from a network at a first NIC comprising a component of a first node, and a determination is made that packet data for the packet is to be forwarded to a second node including a second NIC. The packet data is then forwarded from the first NIC to the second NIC via a NIC-to-NIC interconnect link. Upon being received at the second NIC, processing of the packet (data) is handled as if the packet was received from the network at the second NIC. | 05-01-2014 |
20140129664 | TRANSFERRING DATA OVER A REMOTE DIRECT MEMORY ACCESS (RDMA) NETWORK - Embodiments that provide one-shot remote direct memory access (RDMA) are provided. In one embodiment, a single command for a completion process of a remote direct memory access (RDMA) operation is received in a computing device. The computing device executes the completion process of the RDMA operation in response to the single command being received. | 05-08-2014 |
20140129665 | DYNAMIC DATA PREFETCHING - Technology is disclosed for data prefetching on a computing device utilizing a cloud based file system. The technology can receive a current execution state and a data access pattern associated with an instance of an application executing on a computing device. The technology can further receive a data access pattern associated with another instance of the application executing on another computing device. The technology can utilize the received data access patterns to determine one or more future access requests for a subset of data associated with the application, where the one or more future access requests are a function of the current execution state of the application executing on the computing device. The technology can generate a prefetching profile utilizing the determined subset of data. | 05-08-2014 |
20140143364 | RDMA-OPTIMIZED HIGH-PERFORMANCE DISTRIBUTED CACHE - For remote direct memory access (RDMA) by a client to a data record stored in a cache on a server, a hash map is received by a client from a server. The hash map includes one or more entries associated with a key for the data record stored in the cache on the server that stores a server-side remote pointer referencing the data record stored in the cache on the server. The client, using the key, looks up the server-side remote pointer for the data record from the hash map, and then performs one or more RDMA operations using the server-side remote pointer that allow the client to directly access the data record stored in the cache on the server. | 05-22-2014 |
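The client-side lookup this abstract describes can be sketched as follows. The class and method names are illustrative assumptions, and `rdma_read` stands in for a real one-sided RDMA verbs read; the key point is that the key-to-pointer lookup happens locally, so the server CPU is not involved in the access.

```python
class RdmaCacheClient:
    """Client-side view of a server cache accessed via one-sided RDMA reads.

    hash_map: received once from the server; maps each key to a
    (remote_addr, length) server-side pointer for the cached record.
    rdma_read: callable performing a direct read of server memory.
    """
    def __init__(self, hash_map, rdma_read):
        self.hash_map = hash_map
        self.rdma_read = rdma_read

    def get(self, key):
        # Local lookup of the server-side remote pointer: no round trip.
        addr, length = self.hash_map[key]
        # Direct access to the record in the server's cache memory.
        return self.rdma_read(addr, length)
```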
20140143365 | RDMA-OPTIMIZED HIGH-PERFORMANCE DISTRIBUTED CACHE - For remote direct memory access (RDMA) by a client to a data record stored in a cache on a server, a hash map is received by a client from a server. The hash map includes one or more entries associated with a key for the data record stored in the cache on the server that stores a server-side remote pointer referencing the data record stored in the cache on the server. The client, using the key, looks up the server-side remote pointer for the data record from the hash map, and then performs one or more RDMA operations using the server-side remote pointer that allow the client to directly access the data record stored in the cache on the server. | 05-22-2014 |
20140143366 | System and Method for Reducing Communication Overhead Between Network Interface Controllers and Virtual Machines - Available buffers in the memory space of a guest operating system of a virtual machine are provided to a network interface controller (NIC) for use during direct memory access (DMA), and the guest operating system is notified accordingly when data is written into such available buffers. These capabilities obviate the need to use hypervisor memory as a staging area for determining the virtual machine to which incoming data should be forwarded. | 05-22-2014 |
20140164545 | EXPLICIT FLOW CONTROL FOR IMPLICIT MEMORY REGISTRATION - Methods, apparatus and systems for facilitating explicit flow control for RDMA transfers using implicit memory registration. To set up an RDMA data transfer, a source RNIC sends a request to allocate a destination buffer at a destination RNIC using implicit memory registration. Under implicit memory registration, the page or pages to be registered are not explicitly identified by the source RNIC, and may correspond to pages that are paged out to virtual memory. As a result, registration of such pages results in page faults, leading to a page fault delay before registration and pinning of the pages is completed. In response to detection of a page fault, the destination RNIC returns an acknowledgment indicating that a page fault delay is occurring. In response to receiving the acknowledgment, the source RNIC temporarily stops sending packets, and does not retransmit packets for which ACKs are not received prior to retransmission timeout expiration. | 06-12-2014 |
20140173014 | Communication Protocol Placement Into Switch Memory - Direct memory transfer of data from the memory of a server to a memory of a switch. A server identifies a block of data in the memory of the server and a corresponding memory address space in the server. The server identifies a block of memory in the switch. The block of memory is at least the same size as the block of data. The switch comprises a network protocol. The server transfers the block of data into the block of memory. Based on the network protocol, the switch maps a network relationship. The mapping indicates a target server to which the transferred block of data is to be transmitted. | 06-19-2014 |
20140173015 | PERFORMANCE ISOLATION FOR STORAGE CLOUDS - Embodiments of the present invention provide performance isolation for storage clouds. Under one embodiment, workloads across a storage cloud architecture are grouped into clusters based on administrator or system input. A performance isolation domain is then created for each of the clusters, with each of the performance isolation domains comprising a set of data stores associated with a set of storage subsystems and a set of data paths that connect the set of data stores to a set of clients. Thereafter, performance isolation is provided among a set of layers of the performance isolation domains. | 06-19-2014 |
20140173016 | Method and apparatus for implementing secure and selectively deniable file storage - A method for writing data to a memory device comprising a first and a second memory device, where the first memory device comprises data blocks numbered with block numbers and the second memory device comprises at least one reference calculated from a data block digest and its physical block number. The invention comprises at least the steps of: calculating the digest from at least part of the data block content; receiving at least one physical block number to which the data block contents in the first memory device are stored; encrypting the data block content; storing the data block content to the first memory device at the position pointed to by the physical block number; and storing, or issuing a command to save, the digest, or a number derived from it, and at least one said physical block number to the second memory device. A system, computer program, and server are also presented. | 06-19-2014 |
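The write path in this abstract (digest, encrypt, store, record reference) can be sketched as follows. The device representations and names are assumptions for illustration; SHA-256 is used as an example digest, and the cipher is caller-supplied because the abstract does not commit to one.

```python
import hashlib

def write_block(first_dev, second_dev, block_no, content, encrypt):
    """Store an encrypted data block and a digest reference for it.

    first_dev: list modeling the first memory device (indexed by
    physical block number). second_dev: dict modeling the second
    memory device, holding a reference derived from the block digest,
    keyed by the physical block number.
    """
    # Calculate the digest from the data block content.
    digest = hashlib.sha256(content).hexdigest()
    # Encrypt and store the block at the given physical block number.
    first_dev[block_no] = encrypt(content)
    # Save the digest reference to the second memory device.
    second_dev[block_no] = digest
```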
20140189031 | COMPUTING DEVICE AND METHOD OF CREATING VIRTUAL MACHINES IN HOSTS - A method creates one or more virtual machine (VM) templates and stores them in a storage device. When a host requests creation of a VM, the method selects a VM template from the storage device according to the hardware specification of the requested VM, copies the hardware configuration information recorded in the selected VM template, and creates the VM in the host according to the copied information. The method requests a dynamic host configuration protocol (DHCP) server to allocate an IP address to the VM via a DHCP agent, and assigns the allocated IP address to the VM via a host agent. | 07-03-2014 |
20140189032 | COMPUTER SYSTEM AND METHOD OF CONTROLLING COMPUTER SYSTEM - A computer system has: a plurality of servers; a shared storage system for storing data shared by the servers; and a management server, wherein each of the plurality of servers includes: one or more non-volatile memories for storing part of the data stored in the shared storage system; first access history information storing access status of data stored in the non-volatile memories; storage location information storing correspondence between the data stored in the non-volatile memories and the data stored in the shared storage system; and a first management unit for reading and writing data from and to the non-volatile memories, and wherein the management server includes: second access history information of an aggregation of the first access history information acquired from each of the servers; and a second management unit for determining data to be allocated to the non-volatile memories based on the second access history information. | 07-03-2014 |
20140189033 | COMMUNICATION SYSTEM, SEMICONDUCTOR DEVICE, AND DATA COMMUNICATION METHOD - System and method of data communication and a semiconductor device capable of transmitting information data through a communication interface with a specified response duration without degradation in efficiency of operation. In a first information processing unit, when a first controller issues request signals of predetermined data processing, a first communication interface sends a pseudo response to the first controller in response to one of the request signals and transmits the request signal to a second information processing unit which in turn performs predetermined information processing indicated by the request signal and sends back response data. The first communication interface stores the received response data in memory and reads the response data from memory in response to a request signal issued for the second time onward from the first controller after complete storing of the response data, and then supplies the data to the first controller. | 07-03-2014 |
20140195630 | DIRECT MEMORY ACCESS RATE LIMITING IN A COMMUNICATION DEVICE - Rate limiting operations can be implemented at an ingress DMA unit to minimize the probability of dropped packets because of differences between the communication rates of the ingress DMA unit and a packet processing engine. The communication rate associated with each of the software ports of a communication device can be determined and an aggregate software port ingress rate can be calculated by summing the communication rate associated with each of the software ports. The transfer rate associated with the ingress DMA unit can be limited so that packets are transmitted from the ingress DMA unit to the packet processing engine at a communication rate that is at least equal to the aggregate software port ingress rate. If each software port comprises a dedicated rate-limited ingress DMA queue, packets from a rate-limited ingress DMA queue can be transmitted at at least the communication rate of the corresponding software port. | 07-10-2014 |
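The aggregate-rate calculation in this abstract amounts to a simple sum, sketched below. The function name and the `headroom` safety multiplier are assumptions for illustration, not part of the patent.

```python
def min_ingress_dma_rate(port_rates_mbps, headroom=1.0):
    """Minimum DMA transfer rate that avoids drops, per the scheme above.

    The aggregate software port ingress rate is the sum of the per-port
    communication rates; the ingress DMA unit must transmit to the packet
    processing engine at a rate at least equal to this aggregate.
    """
    return sum(port_rates_mbps) * headroom
```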
20140195631 | ROCE PACKET SEQUENCE ACCELERATION - A method, network device and system for remote direct memory access (RDMA) over Converged Ethernet (RoCE) packet sequence acceleration are disclosed. The network device comprises one or more functionality components for communicating with a host system. The host system is configured for implementing a first set of functionalities of a network communication protocol, such as RoCE. The one or more functionality components are also operable to implement a second set of functionalities of the network communication protocol. | 07-10-2014 |
20140201302 | METHOD, APPARATUS AND COMPUTER PROGRAMS PROVIDING CLUSTER-WIDE PAGE MANAGEMENT - An exemplary method in accordance with embodiments of this invention includes, at a virtual machine that forms a part of a cluster of virtual machines, computing a key for an instance of a memory page that is to be swapped out to a shared memory cache that is accessible by all virtual machines of the cluster of virtual machines; determining if the computed key is already present in a global hash map that is accessible by all virtual machines of the cluster of virtual machines; and only if it is determined that the computed key is not already present in the global hash map, storing the computed key in the global hash map and the instance of the memory page in the shared memory cache. | 07-17-2014 |
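The swap-out path described in the entry above can be sketched in a few lines. The global hash map and shared cache are modelled here as plain dicts, and SHA-256 stands in for whatever key function the patent contemplates; both are assumptions for illustration only.

```python
import hashlib

global_hash_map = {}   # computed key -> presence marker
shared_cache = {}      # computed key -> page contents

def swap_out(page: bytes):
    """Store a page in the shared cache only if no identical page exists."""
    key = hashlib.sha256(page).hexdigest()   # compute a key for this instance
    if key in global_hash_map:
        return key, False                    # duplicate page: store nothing
    global_hash_map[key] = True              # record the key cluster-wide
    shared_cache[key] = page                 # store the page once
    return key, True
```

Two virtual machines swapping out identical pages would thus consume shared-cache space only once.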
20140201303 | Network Overlay System and Method Using Offload Processors - An input-output (IO) virtualization system connectable to a network is disclosed. The system can include a second virtual switch connected to a memory bus and configured to receive network packets from a first virtual switch, and an offload processor module supporting the second virtual switch, the offload processor module further comprising at least one offload processor configured to modify network packets and direct the modified network packets to the first virtual switch through the memory bus. | 07-17-2014 |
20140201304 | Network Overlay System and Method Using Offload Processors - A method for processing data is disclosed. The method can include transporting data to a second virtual switch from a first virtual switch using a memory bus having a defined memory transport protocol, writing the transported data to a target memory location, and processing the data written to the target memory location with at least one offload processor included on an offload processor module. | 07-17-2014 |
20140201305 | Network Overlay System and Method Using Offload Processors - A memory bus connected module, connectable to a first virtual switch for providing input-output (IO) virtualization services is disclosed. The module can include a second virtual switch coupled to the first virtual switch via a memory bus connection, a plurality of offload processors coupled to the memory bus connection, and at least one memory unit connected to, and separately addressable by, the multiple offload processors, and configured to receive data directed to a specific memory address space for processing by at least one of the offload processors. | 07-17-2014 |
20140201306 | REMOTE DIRECT MEMORY ACCESS WITH REDUCED LATENCY - The present disclosure provides systems and methods for remote direct memory access (RDMA) with reduced latency. RDMA allows information to be transferred directly between memory buffers in networked devices without the need for substantial processing. While RDMA requires registration/deregistration for buffers that are not already preregistered, RDMA with reduced latency transfers information to intermediate buffers during registration/deregistration, utilizing time that would have ordinarily been wasted waiting for these processes to complete, and reducing the amount of information to transfer while the source buffer is registered. In this way the RDMA transaction may be completed more quickly. RDMA with reduced latency may be employed to expedite various information transactions. For example, RDMA with reduced latency may be utilized to stream information within a device, or may be used to transfer information from an information source external to the device directly to an application buffer. | 07-17-2014 |
20140207896 | CONTINUOUS INFORMATION TRANSFER WITH REDUCED LATENCY - The present disclosure provides systems and methods for remote direct memory access (RDMA) with reduced latency. RDMA allows information to be transferred directly between memory buffers in networked devices without the need for substantial processing. While RDMA requires registration/deregistration for buffers that are not already preregistered, RDMA with reduced latency transfers information to intermediate buffers during registration/deregistration, utilizing time that would have ordinarily been wasted waiting for these processes to complete, and reducing the amount of information to transfer while the source buffer is registered. In this way the RDMA transaction may be completed more quickly. RDMA with reduced latency may be employed to expedite various information transactions. For example, RDMA with reduced latency may be utilized to stream information within a device, or may be used to transfer information from an information source external to the device directly to an application buffer. | 07-24-2014 |
20140214996 | Distributed Computing Architecture - A system includes a plurality of servers and a border server. The border server receives a request for a transaction that can be accomplished by performing tasks, identifies a first task of the tasks, identifies an initial server of the servers to perform the first task by consulting, based on a type of the first task, routing data stored in memory of the border server, and requests that the initial server perform the first task. Each of the servers will, in response to receiving a task from the border server, perform the received task using related data stored exclusively on the server, determine whether the received task requires an additional task, identify a next server to perform the additional task by consulting routing data stored in memory of the server, and request that the next server perform the additional task. | 07-31-2014 |
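The task-routing loop in the entry above can be sketched as follows. The task types, the routing table, and the follow-up rule are all hypothetical stand-ins; in the described system each server consults its own locally stored routing data.

```python
# Assumed routing data: task type -> server responsible for that type.
routing = {"auth": "server-a", "billing": "server-b"}

def handle(task_type, chain=None):
    """Route a task to its server; if it requires an additional task,
    identify the next server and forward it there."""
    chain = chain or []
    server = routing[task_type]          # consult the stored routing data
    chain.append((task_type, server))
    # Illustrative rule: an "auth" task requires a follow-up "billing" task.
    follow_up = {"auth": "billing"}.get(task_type)
    if follow_up:
        handle(follow_up, chain)
    return chain
```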
20140214997 | METHOD AND DEVICE FOR DATA TRANSMISSIONS USING RDMA - Method and device for data transmissions using RDMA. The present invention is implemented between a first entity using a first data structure type and a second entity using a second data structure type over a third entity. The third entity is coupled to a table caching fingerprints of first data structures of the first data structure type and second data structures of the second data structure type associated therewith. A certain first data structure and the second data structure associated therewith represent a certain, identical RDMA function call. A first data structure representing a certain RDMA function call is sent from the first entity to the third entity; the fingerprint for the sent first data structure is determined; a second data structure associated with the determined fingerprint is looked up in the table; and the looked up second data structure is sent to the second entity. | 07-31-2014 |
20140214998 | SYSTEM AND METHOD FOR A SHARED WRITE ADDRESS PROTOCOL OVER A REMOTE DIRECT MEMORY ACCESS CONNECTION - The present invention provides a system and method for a shared write address protocol (SWAP) that is implemented over a remote direct memory access (RDMA) connection. Each party to a connection establishes a flow control block that is accessible to its partner via a RDMA READ operation. The novel protocol operates so that each module only needs to have one outstanding RDMA READ operation at a time, i.e., to obtain the current flow control information from its partner. In operation, if data to be transmitted is less than or equal to a buffer size, an INLINE message data structure of the SWAP protocol is utilized to send the data to the target. However, if the data is greater than the buffer size, a second determination is made as to whether sufficient space exists in the message pool for the data. If insufficient space exists, the sender will wait until sufficient space exists before utilizing a novel WRITE operation of the SWAP protocol to transmit the data. | 07-31-2014 |
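The send-path decision in the SWAP entry above reduces to a small three-way choice, sketched below. The buffer size and the string return values are illustrative assumptions, not constants from the patent.

```python
BUFFER_SIZE = 4096   # assumed INLINE threshold, in bytes

def choose_operation(data_len: int, pool_free: int) -> str:
    """Pick how to send a payload of data_len bytes given pool_free bytes
    of space remaining in the message pool."""
    if data_len <= BUFFER_SIZE:
        return "INLINE"    # small payload: carry it in the message itself
    if data_len <= pool_free:
        return "WRITE"     # pool has room: use the SWAP WRITE operation
    return "WAIT"          # block until sufficient pool space frees up
```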
20140214999 | SYSTEM AND METHOD OF CACHING INFORMATION - A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache. | 07-31-2014 |
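The admission policy in the entry above can be sketched with explicit timestamps. The fixed cache size, the eviction of a single victim, and the dict-based bookkeeping are assumptions made for illustration; the patent describes only the duration comparison.

```python
last_request = {}   # item -> time of its most recent request
cache = {}          # item -> cached value
CACHE_SIZE = 2      # assumed capacity

def request(item, value, now):
    """Handle a request at time `now`; return True if the item is cached."""
    prev = last_request.get(item)
    last_request[item] = now
    if item in cache:
        cache[item] = value
        return True
    if prev is None:
        return False                         # never requested before: skip
    subject_age = now - prev                 # duration since previous request
    if len(cache) < CACHE_SIZE:
        cache[item] = value
        return True
    # Compare against the cached item whose previous request is oldest.
    victim = max(cache, key=lambda k: now - last_request[k])
    if subject_age < now - last_request[victim]:
        del cache[victim]                    # subject recurs more often: swap
        cache[item] = value
        return True
    return False
```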
20140222945 | Boosting Remote Direct Memory Access Performance Using Cryptographic Hash Based Approach - A mechanism is provided in a data processing system for performing a remote direct memory access operation. Responsive to receiving in a network interface controller a hash value of data to be copied from a source address in a source node to a destination address in a destination node in the remote direct memory access operation, the network interface controller performs a lookup operation in a translation protection table in the network interface controller to match the hash value to a hash value for data existing in memory of the destination node. Responsive to the network interface controller finding a match in the translation protection table, the network interface controller completes the remote direct memory access operation without transferring the data from the source node to the destination node. | 08-07-2014 |
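The destination-side short-circuit in the entry above can be sketched as below. The translation protection table is modelled as a dict of known hashes, SHA-256 stands in for the hash function, and `transfer` is a hypothetical callback representing the actual data movement.

```python
import hashlib

translation_protection_table = {}   # data hash -> destination address

def rdma_copy(data: bytes, dest_addr: int, transfer):
    """Complete the RDMA operation without moving data when the destination
    already holds a copy, as identified by its hash."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in translation_protection_table:
        return "completed-without-transfer"   # match found: skip the copy
    transfer(data, dest_addr)                 # no match: perform a real copy
    translation_protection_table[digest] = dest_addr
    return "transferred"
```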
20140244777 | DISK MIRRORING FOR PERSONAL STORAGE - Embodiments of the present invention provide a system for backing up personal data between two mated (i.e., paired) network attached storage (NAS) devices. The system includes a local storage device and a secondary storage device that communicate over a network (e.g., the Internet) via a network connection. Any data added or modified on the local storage device will be automatically mirrored (i.e., copied) to the secondary storage device, which may be located at a secure remote site, pursuant to a data mirroring technique. | 08-28-2014 |
20140250202 | PEER-TO-PEER INTERRUPT SIGNALING BETWEEN DEVICES COUPLED VIA INTERCONNECTS - Methods and apparatus to provide peer-to-peer interrupt signaling between devices coupled via one or more interconnects are described. In one embodiment, a NIC (Network Interface Card such as a Remote Direct Memory Access (RDMA) capable NIC) transfers data directly into or out of the memory of a peer device that is coupled to the NIC via one or more interconnects, bypassing a host computing/processing unit and/or main system memory. Other embodiments are also disclosed. | 09-04-2014 |
20140258438 | NETWORK INTERFACE CONTROLLER WITH COMPRESSION CAPABILITIES - A method for communication includes receiving in a network interface controller (NIC) from a host processor, which has a local host memory and is connected to the NIC by a local bus, a remote direct memory access (RDMA) compress-and-write command, specifying a source memory buffer in the local host memory and a target memory address. In response to the command, data are read from the specified buffer into the NIC, compressed in the NIC, and conveyed from the NIC to the target memory address. | 09-11-2014 |
20140280663 | Apparatus and Methods for Providing Performance Data of Nodes in a High Performance Computing System - In accordance with one embodiment of the invention, a method of providing performance data for nodes in a high performance computing system receives a request for performance data for a node in the high performance computing system. According to the method, a driver in kernel space causes the performance data for the node to be stored in kernel memory. The kernel memory is accessible in userspace via a first system file. | 09-18-2014 |
20140280664 | CACHING CONTENT ADDRESSABLE DATA CHUNKS FOR STORAGE VIRTUALIZATION - The subject disclosure is directed towards using primary data deduplication concepts for more efficient access of data via content addressable caches. Chunks of data, such as deduplicated data chunks, are maintained in a fast access client-side cache, such as containing chunks based upon access patterns. The chunked content is content addressable via a hash or other unique identifier of that content in the system. When a chunk is needed, the client-side cache (or caches) is checked for the chunk before going to a file server for the chunk. The file server may likewise maintain content addressable (chunk) caches. Also described are cache maintenance, management and organization, including pre-populating caches with chunks, as well as using RAM and/or solid-state storage device caches. | 09-18-2014 |
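The check-cache-before-server lookup in the entry above can be sketched as follows. The "file server" is stubbed with a dict and SHA-256 serves as the content identifier; both are assumptions for demonstration.

```python
import hashlib

client_cache = {}   # chunk id -> chunk bytes (client side)
server_store = {}   # chunk id -> chunk bytes (stand-in for the file server)

def chunk_id(data: bytes) -> str:
    """Content-address a chunk by the hash of its contents."""
    return hashlib.sha256(data).hexdigest()

def fetch_chunk(cid: str) -> bytes:
    if cid in client_cache:        # hit: no round trip to the file server
        return client_cache[cid]
    data = server_store[cid]       # miss: ask the file server for the chunk
    client_cache[cid] = data       # populate the client-side cache
    return data
```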
20140280665 | CELL FABRIC HARDWARE ACCELERATION - An aspect includes a method for providing direct communication between a server and a network switch in a cell-based fabric. A host channel adapter of a cell fabric hardware accelerator is configured to provide the server with direct access to memory within the network switch. A plurality of data packets having a fixed size is received at the host channel adapter from the server. The host channel adapter is coupled to a bus of the server. A direct transmission is performed from the cell fabric hardware accelerator to the memory within the network switch on an interconnect bus to write the data packets directly into the memory. | 09-18-2014 |
20140280666 | REMOTE DIRECT MEMORY ACCESS ACCELERATION VIA HARDWARE CONTEXT IN NON-NATIVE APPLICATIONS - Provided are techniques for generating a data structure, wherein the data structure specifies both a specified size of a memory space to allocate within an application and a virtual address within the application to locate a data path transmission queue; including within a verb for allocating the data path transmission queue the defined data structure; in response to a call of the verb, allocating, within the application, the data path transmission queue of the specified size and at the virtual location; in response to a request to transmit control data, employing a remote direct memory access (RDMA) transmission path; and, in response to a request to transmit data, employing the data path transmission queue rather than an RDMA transmission path. | 09-18-2014 |
20140289354 | Background Migration of Virtual Storage - Described is a technology by which a virtual hard disk is migrated from a source storage location to a target storage location without needing any shared physical storage, in which a machine may continue to use the virtual hard disk during migration. This facilitates use the virtual hard disk in conjunction with live-migrating a virtual machine. Virtual hard disk migration may occur fully before or after the virtual machine is migrated to the target host, or partially before and partially after virtual machine migration. Background copying, sending of write-through data, and/or servicing read requests may be used in the migration. Also described is throttling data writes and/or data communication to manage the migration of the virtual hard disk. | 09-25-2014 |
20140297775 | METHOD AND SYSTEM FOR PROVIDING REMOTE DIRECT MEMORY ACCESS TO VIRTUAL MACHINES - The current document is directed to methods and systems that provide remote direct memory access (“RDMA”) to applications running within execution environments provided by guest operating systems and virtual machines above a virtualization layer. In one implementation, RDMA is accessed by application programs within virtual machines through a paravirtual interface that includes a virtual RDMA driver that transmits RDMA requests through a communications interface to a virtual RDMA endpoint in the virtualization layer. | 10-02-2014 |
20140297776 | EFFICIENT STORAGE OF DATA IN A DISPERSED STORAGE NETWORK - A method begins by a dispersed storage (DS) processing module receiving data for storage and generating a dispersed storage network (DSN) source name for the data. The method continues with the DS processing module determining whether substantially identical data to the data has been previously stored in memory of the DSN. When the substantially identical data has been previously stored in the memory of the DSN, the method continues with the DS processing module generating an object linking file that links the data to the substantially identical data, dispersed storage error encoding the object linking file to produce a set of encoded link file slices, and outputting the set of encoded link file slices for storage in the memory of the DSN. | 10-02-2014 |
20140297777 | VARIABLE PAGE SIZING FOR IMPROVED PHYSICAL CLUSTERING - A data size characteristic of contents of a related unit of data to be written to a storage by an input/output module of a data storage application can be determined, and a storage page size consistent with the data size can be selected from a plurality of storage page sizes. The related unit of data can be assigned to a storage page having the selected storage page size, and the storage page can be passed to the input/output module so that the input/output module physically clusters the contents of the related unit of data when the input/output module writes the contents of the related unit of data to the storage. Related methods, systems, and articles of manufacture are also disclosed. | 10-02-2014 |
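The page-size selection step in the entry above amounts to picking the smallest configured page that holds the related unit of data. The candidate sizes below are illustrative assumptions, not values from the patent.

```python
PAGE_SIZES = [4096, 16384, 65536, 262144]   # bytes, ascending (assumed)

def select_page_size(data_size: int) -> int:
    """Select a storage page size consistent with the data size."""
    for size in PAGE_SIZES:
        if data_size <= size:
            return size
    return PAGE_SIZES[-1]   # oversized unit: fall back to the largest page
```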
20140304353 | RDMA TO STREAMING PROTOCOL DRIVER - Mechanisms for providing data streams are disclosed. A device accesses a mapping stored at the device that maps a desired media content item to a source address and to a range of addresses in a storage device of a remote content distribution server allocated to the desired media content item. The device performs a direct memory-to-memory transfer of the desired media content item from the remote content distribution server to a local memory of the device using the range of addresses. The device encapsulates the desired media content item from the local memory into a plurality of packets according to a streaming protocol. Encapsulating the desired media content item includes inserting into each of the plurality of packets the source address. The plurality of packets are streamed. | 10-09-2014 |
20140310370 | Network-Displaced Direct Storage - A network-displaced direct storage architecture transports storage commands over a network interface. In one implementation, the architecture maps, at hosts, block storage commands to remote direct memory access operations (e.g., over converged Ethernet). The mapped operations are communicated across the network to a network storage appliance. At the network storage appliance, network termination receives the mapped commands, extracts the operation and data, and passes the operation and data to a storage device that implements the operation on a memory. | 10-16-2014 |
20140317219 | LOCAL DIRECT STORAGE CLASS MEMORY ACCESS - A queued, byte-addressed system and method for accessing flash memory and other non-volatile storage class memory, and potentially other types of non-volatile memory (NVM) storage systems. In a host device, e.g., a standalone or networked computer, attached NVM device storage is integrated into a switching fabric wherein the NVM device appears as an industry-standard OFED™ RDMA verbs provider. The verbs provider enables communicating with a ‘local storage peer’ using the existing OpenFabrics RDMA host functionality. User applications issue RDMA Read/Write directives to the ‘local peer’ (seen as persistent storage) in NVM, enabling NVM memory access at byte granularity. The queued, byte-addressed system and method provides for zero-copy NVM access. The method enables operations that establish application-private Queue Pairs to provide asynchronous NVM memory access operations at byte-level granularity. | 10-23-2014 |
20140317220 | DEVICE FOR EFFICIENT USE OF PACKET BUFFERING AND BANDWIDTH RESOURCES AT THE NETWORK EDGE - The invention relates to a hybrid network device comprising a server interface enabling access to a server system memory, a network switch comprising a packet processing engine configured to process packets routed through the switch and a switch packet buffer configured to queue packets before transmission, at least one network interface, and at least one bus-mastering DMA controller configured to access the data of said server system memory via said at least one server interface and transfer said data to and from said hybrid network device. According to one aspect of the invention, a bus transfer arbiter is configured to control the data transfer from the server memory to the packet processing engine of said hybrid network device. | 10-23-2014 |
20140317221 | SYSTEM, COMPUTER-IMPLEMENTED METHOD AND COMPUTER PROGRAM PRODUCT FOR DIRECT COMMUNICATION BETWEEN HARDWARE ACCELERATORS IN A COMPUTER CLUSTER - Systems, methods and computer program products for direct communication between hardware accelerators in a computer cluster are disclosed. The system for direct communication between hardware accelerators in a computer cluster includes: a first hardware accelerator in a first computer of a computer cluster; and a second hardware accelerator in a second computer of the computer cluster. The first computer and the second computer differ from one another and are designed to be able to communicate remotely via a network, and the first accelerator is designed to request data from the second accelerator and/or to retrieve data by means of a direct memory access to a global address space on the second computer and/or to communicate data to the second computer. | 10-23-2014 |
20140325011 | RDMA-OPTIMIZED HIGH-PERFORMANCE DISTRIBUTED CACHE - A server and/or a client stores a metadata hash map that includes one or more entries associated with keys for data records stored in a cache on a server. Each of the entries stores metadata for a corresponding data record, wherein the metadata comprises a server-side remote pointer that references the corresponding data record stored in the cache, as well as a version identifier for the key. A selected data record is accessed using a provided key by: (1) identifying potentially matching entries in the metadata hash map using the provided key; (2) accessing data records stored in the cache using the server-side remote pointers from the potentially matching entries; and (3) determining whether the accessed data records match the selected data record using the provided key and the version identifiers from the potentially matching entries. | 10-30-2014 |
20140325012 | RDMA-OPTIMIZED HIGH-PERFORMANCE DISTRIBUTED CACHE - A server and/or a client stores a metadata hash map that includes one or more entries associated with keys for data records stored in a cache on a server. Each of the entries stores metadata for a corresponding data record, wherein the metadata comprises a server-side remote pointer that references the corresponding data record stored in the cache, as well as a version identifier for the key. A selected data record is accessed using a provided key by: (1) identifying potentially matching entries in the metadata hash map using the provided key; (2) accessing data records stored in the cache using the server-side remote pointers from the potentially matching entries; and (3) determining whether the accessed data records match the selected data record using the provided key and the version identifiers from the potentially matching entries. | 10-30-2014 |
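The three-step lookup shared by the two RDMA-optimized cache entries above can be sketched as below. The metadata hash map and server-side cache are modelled with plain dicts, and an integer index stands in for the server-side remote pointer; all of these are illustrative assumptions.

```python
metadata = {}   # hash(key) -> list of (remote pointer, version identifier)
cache = {}      # remote pointer -> (key, data record)

def put(key, value, version):
    ptr = len(cache)                 # stand-in for a server-side pointer
    cache[ptr] = (key, value)
    metadata.setdefault(hash(key), []).append((ptr, version))

def get(key, version):
    # (1) identify potentially matching entries via the hash of the key
    for ptr, ver in metadata.get(hash(key), []):
        # (2) follow the server-side remote pointer into the cache
        stored_key, value = cache[ptr]
        # (3) confirm the match using the key and its version identifier
        if stored_key == key and ver == version:
            return value
    return None
```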
20140325013 | Techniques for Command Validation for Access to a Storage Device by a Remote Client - Examples are disclosed for command validation for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may receive the command from a client remote to the server. For these examples, elements or modules of the network input/output device may be capable of validating the command and reporting the status of the received command to the client. Other examples are described and claimed. | 10-30-2014 |
20140337456 | SYSTEMS AND METHODS FOR ENABLING RDMA BETWEEN DIVERSE ENDPOINTS - In accordance with embodiments of the present disclosure, a method may include determining one or more characteristics of each of two endpoints of a data transfer, the one or more characteristics comprising whether the endpoint is Remote Direct Memory Access (RDMA)-capable. The method may also include establishing an RDMA termination between the two endpoints. The method may additionally include configuring a first path between the RDMA termination and a first endpoint of the two endpoints, wherein the first path is RDMA-capable, in response to determining that the first endpoint is RDMA-capable. The method may further include configuring a second path between the RDMA termination and a second endpoint of the two endpoints. | 11-13-2014 |
20140337457 | USING NETWORK ADDRESSABLE NON-VOLATILE MEMORY FOR HIGH-PERFORMANCE NODE-LOCAL INPUT/OUTPUT - Data storage systems and methods for storing data in computing nodes of a super computer or compute cluster are described herein. The super computer storage may be integrated with or coupled with a primary storage system. In addition to a CPU and memory, non-volatile memory is included with the computing nodes as local storage. A high speed interconnect remote direct memory access (HRI) unit is also included with each computing node. When data bursts occur, data may be stored by a first computing node on the local storage of a second computing node through the HRI units of the computing nodes, bypassing the CPU of the second computing node. Further, the local storage of other computing nodes may be used to store redundant copies of data from a first computing node to make the super computer data resilient while not interfering with the CPU of the other computing nodes. | 11-13-2014 |
20140351360 | COMPUTER SYSTEM, SERVER MODULE, AND STORAGE MODULE - An exemplary computer system includes a server module including a first processor and first memory, a storage module including a second processor, a second memory and a storage device, and a transfer module. The transfer module retrieves a first transfer list including an address of a first storage area, which is set on the first memory for a read command, from the server module. The transfer module retrieves a second transfer list including an address of a second storage area in the second memory, in which data corresponding to the read command read from the storage device is stored temporarily, from the storage module. The transfer module sends the data corresponding to the read command in the second storage area to the first storage area by controlling the data transfer between the second storage area and the first storage area based on the first and second transfer lists. | 11-27-2014 |
20140351361 | DEPLOYMENT OF AN UPGRADE TO A STORAGE SYSTEM BASED ON CORRELATION ANALYSIS OF MEASUREMENTS OF THE STORAGE SYSTEM - Described herein are methods, systems and machine-readable media that facilitate an analysis of the contributing factors of storage system latency. The variation over time of the storage system latency is measured, along with the variation over time of the activity of various processes and/or components, the various processes and/or components being potentially contributing factors to the storage system latency. The latency measurements are correlated with the process and/or component measurements. High correlation, while not providing direct evidence of the causation of latency, is nevertheless used to identify likely factors (i.e., processes, components) contributing to latency. The latency measurements are plotted over time, the plot including supplemental information indicating, at any time instant, likely factors contributing to the storage system latency. | 11-27-2014 |
20140359043 | HIGH PERFORMANCE, DISTRIBUTED, SHARED, DATA GRID FOR DISTRIBUTED JAVA VIRTUAL MACHINE RUNTIME ARTIFACTS - A server and/or a client stores a metadata hash map that includes one or more entries associated with keys for data records stored in a cache on a server, wherein the data records comprise Java Virtual Machine (JVM) artifacts or monitoring information. Each of the entries stores metadata for a corresponding data record, wherein the metadata comprises a server-side remote pointer that references the corresponding data record stored in the cache, as well as a version identifier for the key. A selected data record is accessed using a provided key by: (1) identifying potentially matching entries in the metadata hash map using the provided key; (2) accessing data records stored in the cache using the server-side remote pointers from the potentially matching entries; and (3) determining whether the accessed data records match the selected data record using the provided key and the version identifiers from the potentially matching entries. | 12-04-2014 |
20140365596 | USE OF RDMA TO ACCESS NON-VOLATILE SOLID-STATE MEMORY IN A NETWORK STORAGE SYSTEM - A network storage controller uses a non-volatile solid-state memory (NVSSM) subsystem which includes raw flash memory as stable storage for data, and uses remote direct memory access (RDMA) to access the NVSSM subsystem, including to access the flash memory. Storage of data in the NVSSM subsystem is controlled by an external storage operating system in the storage controller. The storage operating system uses scatter-gather lists to specify the RDMA read and write operations. Multiple client-initiated reads or writes can be combined in the storage controller into a single RDMA read or write, respectively, which can then be decomposed and executed as multiple reads or writes, respectively, in the NVSSM subsystem. Memory accesses generated by a single RDMA read or write may be directed to different memory devices in the NVSSM subsystem, which may include different forms of non-volatile solid-state memory. | 12-11-2014 |
20140379833 | Server, Electronic Apparatus, Control Method of Electronic Apparatus, and Control Program of Electronic Apparatus - One embodiment provides a server including: a retriever configured to access a memory of each of plural electronic apparatuses to thereby retrieve status information of a prescribed device of each of the plural electronic apparatuses, the status information having been determined by the electronic apparatus and stored into the memory; a detector configured to detect which of a first status, a second status, and a third status the retrieved status information indicates; and a processor configured to perform first backup processing for the prescribed device if the detector detects that the status information indicates the second status, and to perform second backup processing for the prescribed device if the detector detects that the status information indicates the third status, the second backup processing being heavier in a server load than the first backup processing. | 12-25-2014 |
20140379834 | USING STATUS INQUIRY AND STATUS RESPONSE MESSAGES TO EXCHANGE MANAGEMENT INFORMATION - A status inquiry message is generated at a first machine, wherein the status inquiry message is directed to one of a second machine or a service operating on the second machine. A status inquiry message is transmitted to the second machine. The first machine receives a status response message; the status response message indicating management information from the second machine. | 12-25-2014 |
20150012606 | System and Method to Trap Virtual Functions of a Network Interface Device for Remote Direct Memory Access - An information handling system includes a processor operable to instantiate a virtual machine on the information handling system, a converged network adapter (CNA) operable to provide a virtual function to the virtual machine, and a trapped virtual function module separate from the CNA. The trapped virtual function module is operable to receive data from the virtual machine, add a transport layer header and a network layer header to the data to provide a remote direct memory access (RDMA) packet, and send the RDMA packet to the CNA. The CNA is further operable to add an Ethernet header to the RDMA packet to provide an Ethernet packet, and send the Ethernet packet to a peer information handling system. | 01-08-2015 |
20150012607 | Techniques to Replicate Data between Storage Servers - Examples are disclosed for replicating data between storage servers. In some examples, a network input/output (I/O) device coupled to either a client device or to a storage server may exchange remote direct memory access (RDMA) commands or RDMA completion commands associated with replicating data received from the client device. The data may be replicated to a plurality of storage servers interconnected with each other and/or the client device via respective network communication links. Other examples are described and claimed. | 01-08-2015 |
20150019672 | Method and System for Record Access in a Distributed System - A method for record access in a distributed system includes receiving a request for a record, wherein the request comprises a transmitted key and a record identifier, extracting a location identifier and a transmitted pseudorandom portion from the transmitted key, obtaining a stored pseudorandom portion from a location in a key memory specified by the location identifier, and providing access to the record identified by the record identifier when the transmitted pseudorandom portion matches the stored pseudorandom portion. | 01-15-2015 |
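The key scheme in this abstract (a transmitted key packing a location identifier and a pseudorandom portion, checked against the pseudorandom portion stored at that location in key memory) can be modeled directly. The key layout and function names below are assumptions for illustration only:

```python
import secrets

# Hypothetical model: a key = 4-byte location id + 8-byte pseudorandom
# portion; access is granted only when the transmitted pseudorandom portion
# matches the one stored at that location in key memory.

KEY_MEMORY = {}  # location_id -> stored pseudorandom portion

def issue_key(location_id):
    pr = secrets.token_bytes(8)
    KEY_MEMORY[location_id] = pr
    return location_id.to_bytes(4, "big") + pr

def check_access(transmitted_key):
    location_id = int.from_bytes(transmitted_key[:4], "big")
    transmitted_pr = transmitted_key[4:]
    stored_pr = KEY_MEMORY.get(location_id, b"")
    # Constant-time comparison avoids leaking match length via timing.
    return secrets.compare_digest(transmitted_pr, stored_pr)
```

The constant-time comparison is a standard hardening choice, not something the abstract specifies.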
20150026286 | IWARP RDMA READ EXTENSIONS - Apparatus, method and system for supporting Remote Direct Memory Access (RDMA) Read V2 Request and Response messages using the Internet Wide Area RDMA Protocol (iWARP). iWARP logic in an RDMA Network Interface Controller (RNIC) is configured to generate a new RDMA Read V2 Request message and generate a new RDMA Read V2 Response message in response to a received RDMA Read V2 Request message, and send the messages to an RDMA remote peer using iWARP implemented over an Ethernet network. The iWARP logic is further configured to process RDMA Read V2 Response messages received from the RDMA remote peer, and to write data contained in the messages to appropriate locations using DMA transfers from buffers on the RNIC into system memory. In addition, the new semantics removes the need for extra operations to grant and revoke remote access rights. | 01-22-2015 |
20150026287 | NETWORK RESOURCE MANAGEMENT SYSTEM UTILIZING PHYSICAL NETWORK IDENTIFICATION FOR CONVERGING OPERATIONS - The disclosed network resource management system employs a hardware configuration management (HCM) information handling system (IHS) that may couple to a single administered IHS or to multiple administered IHSs via an administrative network. An HCM tool in the HCM IHS may generate, modify and store hardware configuration information, including physical network identifications (PNet IDs), in an HCM database and share the HCM database with the administered IHSs. The administered IHS may be a remote direct memory access (RDMA) enabled network interface controller (RNIC) converging IHS. An RNIC converging tool may extract hardware configuration information, including PNet IDs, from the HCM database. The RNIC converging tool may utilize the hardware configuration information, including PNet IDs, to enable the RNIC converging IHS to communicate over a network with RDMA protocols. | 01-22-2015 |
20150032835 | IWARP SEND WITH IMMEDIATE DATA OPERATIONS - Apparatus, methods and systems for supporting Send with Immediate Data messages using Remote Direct Memory Access (RDMA) and the Internet Wide Area RDMA Protocol (iWARP). iWARP logic in an RDMA Network Interface Controller (RNIC) is configured to generate different types of Send with Immediate Data messages, each including a header with a unique RDMA opcode identifying the type of Send with Immediate Data message, and send the message to an RDMA remote peer using iWARP implemented over an Ethernet network. The iWARP logic is further configured to process the Send with Immediate Data messages received from the RDMA remote peer. The Send with Immediate Data messages include a Send with Immediate Data message, a Send with Invalidate and Immediate Data message, a Send with Solicited Event (SE) and Immediate Data message, and a Send with Invalidate and SE and Immediate Data message. | 01-29-2015 |
20150032836 | METHOD AND SYSTEM FOR DETECTING VIRTUAL MACHINE MIGRATION - A method and system for detecting migration of a virtual machine are provided. The method detects that a first identifier identifying a virtual machine and a second identifier identifying a source computing system hosting the virtual machine that accesses a storage space via a logical object have changed, when the virtual machine is migrated from the source computing system to a destination computing system. Thereafter, a storage device at the destination computing system is initialized to operate as a caching device for the migrated virtual machine. | 01-29-2015 |
20150032837 | Hard Disk and Data Processing Method - An enhanced Ethernet interface is provided that is added to a hard disk and communicates with a network based on an enhanced Ethernet protocol. The enhanced Ethernet interface processes a received message packet according to physical layer and link layer protocols; a first processor processes the received message packet according to transport layer and network layer protocols; a second processor processes the received message packet according to application layer service logic; and a hard disk controller performs an operation on a hard disk drive according to an instruction in the received message packet. | 01-29-2015 |
20150039712 | DIRECT ACCESS PERSISTENT MEMORY SHARED STORAGE - Techniques are described for providing one or more remote nodes with direct access to persistent random access memory (PRAM). In an embodiment, registration information is generated for a remote direct access enabled network interface controller (RNIC). The registration information associates an access key with a target region in PRAM. The access key is sent to a remote node of the one or more nodes. The RNIC may subsequently receive a remote direct memory access (RDMA) message from the remote node that includes the access key. In response to the RDMA message, the RNIC performs a direct memory access within the target region of PRAM. | 02-05-2015 |
20150058434 | STRUCTURED NETWORK TRAFFIC DATA RETRIEVAL IN VAST VOLUME - A method and apparatus are disclosed herein for retrieving network traffic data. In one embodiment, a networking apparatus comprises a memory; a network device; and a processing unit coupled to the network device and the memory. The processing unit is operable to execute a data engine that performs bulk data transfers from the network device periodically into a data buffer in the memory and translates data received from the network device, based on a mapping definition, into a user defined format for export to one or more applications running on the networking apparatus. | 02-26-2015 |
20150067085 | AUTOMATIC PINNING AND UNPINNING OF VIRTUAL PAGES FOR REMOTE DIRECT MEMORY ACCESS - In one exemplary embodiment, a computer-implemented method includes receiving, at a remote direct memory access (RDMA) device, a plurality of RDMA requests referencing a plurality of virtual pages. Data transfers are scheduled for the plurality of virtual pages, wherein the scheduling occurs at the RDMA device. The number of the virtual pages that are currently pinned is limited for the RDMA requests based on a predetermined pinned page limit. | 03-05-2015 |
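One plausible reading of the scheduling rule in this abstract (dispatch RDMA transfers only while the count of currently pinned virtual pages stays within a predetermined limit) can be sketched as a batching scheduler. All names and the batching policy are assumptions, not the patent's actual algorithm:

```python
from collections import deque

# Hypothetical scheduler: each request references a set of virtual page
# numbers; requests are dispatched in batches such that the pages pinned
# for a batch never exceed the pinned page limit.

def schedule_transfers(requests, pinned_page_limit):
    """requests: list of sets of virtual page numbers. Returns a list of
    batches; pages are notionally unpinned when a batch completes."""
    pending = deque(requests)
    batches = []
    while pending:
        if len(pending[0]) > pinned_page_limit:
            raise ValueError("single request exceeds pinned page limit")
        pinned, batch = set(), []
        while pending and len(pinned | pending[0]) <= pinned_page_limit:
            pages = pending.popleft()
            pinned |= pages       # pin this request's pages
            batch.append(pages)
        batches.append(batch)     # batch completes; pages unpinned
    return batches
```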
20150067086 | Isolating Clients of Distributed Storage Systems - A distributed storage system that includes memory hosts. Each memory host includes non-transitory memory and a network interface controller in communication with the memory and servicing remote direct memory access requests from clients. The memory receives a data transfer rate from each client in communication with the memory host through remote direct memory access. Each memory host also includes a data processor in communication with the memory and the network interface controller. The data processor executes a host process that reads each received client data transfer rate, determines a throttle data transfer rate for each client, and writes each throttle data transfer rate to non-transitory memory accessible by the clients through remote direct memory access. | 03-05-2015 |
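The host process this abstract describes (read each client's reported data transfer rate, determine a throttle rate per client, write it back for clients to read over RDMA) can be sketched with a simple fair-share policy. The policy and names are illustrative assumptions; the patent does not specify this particular formula:

```python
# Hypothetical throttle computation: if the clients' aggregate reported
# rate exceeds the host's bandwidth, cap each client at an equal share.

def compute_throttle_rates(client_rates, host_bandwidth):
    """client_rates: {client_id: reported rate}. Returns {client_id: throttle}."""
    total = sum(client_rates.values())
    if total <= host_bandwidth:
        return dict(client_rates)  # no throttling needed
    fair_share = host_bandwidth / len(client_rates)
    return {c: min(r, fair_share) for c, r in client_rates.items()}
```

Note this simple policy can leave bandwidth unused when some clients run below their fair share; a production host process would redistribute the slack.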
20150067087 | AUTOMATIC PINNING AND UNPINNING OF VIRTUAL PAGES FOR REMOTE DIRECT MEMORY ACCESS - In one exemplary embodiment, a computer-implemented method includes receiving, at a remote direct memory access (RDMA) device, a plurality of RDMA requests referencing a plurality of virtual pages. Data transfers are scheduled for the plurality of virtual pages, wherein the scheduling occurs at the RDMA device. The number of the virtual pages that are currently pinned is limited for the RDMA requests based on a predetermined pinned page limit. | 03-05-2015 |
20150074217 | HIGH PERFORMANCE DATA STREAMING - Methods, systems and computer program products for high performance data streaming are provided. A computer-implemented method may include receiving a data mapping describing an association between one or more fields of a data storage location of a data source and one or more fields of a data storage location of a target destination, generating a data transfer execution plan from the data mapping to transfer data from the data source to the target destination where the data transfer execution plan comprises a determined degree of parallelism to use when transferring the data, and transferring the data from the storage location of the data source to the data storage location of the target destination using the generated data transfer execution plan. | 03-12-2015 |
20150081829 | OUT-OF-BAND REPLICATING BIOS SETTING DATA ACROSS COMPUTERS - Certain aspects of the present disclosure relate to a system for replicating BIOS setting data (BIOSSD) across computers. The system includes a plurality of computers, and each computer is connected to a service processor (SP). Each computer includes a BIOS chip, which stores a first BIOSSD collection. The SP stores a second BIOSSD collection. When the first BIOSSD collection is newer, the SP receives a copy of the first BIOSSD collection from the computer to replace the second BIOSSD collection. When the second BIOSSD collection is newer, the SP transmits a copy of the second BIOSSD collection to the computer to replace the first BIOSSD collection in the BIOS chip. A remote management computer may request and obtain from the SP the updated second BIOSSD collection and may then send a copy of the updated second BIOSSD collection to other SPs for update. | 03-19-2015 |
20150081830 | METHODS, CIRCUITS, DEVICES, SYSTEMS AND ASSOCIATED COMPUTER EXECUTABLE CODE FOR DISTRIBUTED CONTENT CACHING AND DELIVERY - Disclosed are methods, circuits, devices, systems and associated computer executable code for distributed content caching and delivery. An access or gateway network may include two or more gateway nodes integral or otherwise functionally associated with a caching unit. Each of the caching units may include: (a) a caching repository, (b) caching/delivery logic and (c) an inter-cache communication module. Caching logic of a given caching unit may include content characterization functionality for generating one or more characterization parameters associated with and/or derived from content entering a gateway node with which the given caching unit is integral or otherwise functionally associated. Content characterization parameters generated by a characterization module of a given caching unit may be compared with content characterization parameters of content already cached in: one or more cache repositories of the given caching unit, and one or more cache repositories of other caching units. | 03-19-2015 |
20150089009 | REMOTE DIRECT MEMORY ACCESS WITH COPY-ON-WRITE SUPPORT - Systems and methods for implementing remote direct memory access (RDMA) with copy-on-write support. An example method may comprise: registering, with an RDMA adapter, by a first computer system, a mapping of a first virtual address to a first physical address, for transmitting a memory page identified by the first virtual address to a second computer system; registering, with the RDMA adapter, a mapping of a second virtual address to the first physical address; detecting an attempt to modify the memory page; copying the memory page to a second physical address; and registering, with the RDMA adapter, a mapping of the first virtual address to the second physical address. | 03-26-2015 |
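The copy-on-write flow in this abstract (two virtual addresses mapped to one physical page; on a write attempt the page is copied and the writer's mapping re-registered to the copy) can be modeled with a small mapping table. The class and method names are hypothetical; a real adapter would do this through memory-registration verbs:

```python
# Minimal copy-on-write model: the "adapter" holds a virtual-to-physical
# mapping table plus page contents keyed by physical address.

class RdmaAdapter:
    def __init__(self):
        self.mappings = {}   # virtual address -> physical address
        self.physical = {}   # physical address -> page contents

    def register(self, vaddr, paddr):
        self.mappings[vaddr] = paddr

    def write(self, vaddr, data):
        paddr = self.mappings[vaddr]
        shared = sum(1 for p in self.mappings.values() if p == paddr) > 1
        if shared:
            new_paddr = max(self.physical) + 1            # allocate copy
            self.physical[new_paddr] = bytes(self.physical[paddr])
            self.register(vaddr, new_paddr)               # re-register mapping
            paddr = new_paddr
        self.physical[paddr] = data                       # apply the write
```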
20150089010 | RDMA-BASED STATE TRANSFER IN VIRTUAL MACHINE LIVE MIGRATION - Systems and methods for RDMA-based state transfer in virtual machine live migration. An example method may comprise: determining, by a first computer system, that a memory block has been modified by a virtual machine undergoing live migration from the first computer system to a second computer system; designating the modified memory block for transfer via a remote direct memory access (RDMA) adapter to the second computer system; selecting, asynchronously with respect to the designating, a memory block from a plurality of memory blocks designated for RDMA transfer to the second computer system, wherein a sum of an amount of pinned physical memory in the first computer system and a size of the selected memory block does not exceed a pre-defined value; registering the selected memory block with the RDMA adapter; and transmitting the selected memory block to the second computer system via the RDMA adapter. | 03-26-2015 |
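The asynchronous selection step in this abstract (pick a designated memory block only if the sum of currently pinned physical memory and the block's size stays within a pre-defined value) can be sketched as a one-pass scan. Names are hypothetical:

```python
# Hypothetical block selection for RDMA live migration: choose the first
# designated block whose size fits under the pinned-memory cap; the caller
# would then register and transmit it via the RDMA adapter.

def select_block(designated, pinned_bytes, pinned_cap):
    """designated: list of dirty memory blocks (bytes objects).
    Returns the selected block (removed from the list), or None."""
    for i, block in enumerate(designated):
        if pinned_bytes + len(block) <= pinned_cap:
            return designated.pop(i)
    return None  # nothing fits until some pinned memory is released
```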
20150089011 | Event Driven Remote Direct Memory Access Snapshots - Mechanisms are provided, in a data processing system, for generating a snapshot of a remote direct memory access (RDMA) resource. The mechanisms receive, from an Input/Output (IO) adapter associated with the data processing system, an error event notification and store, in response to the error event notification, a snapshot of a RDMA resource associated with the error event notification. The mechanisms tear down the RDMA resource in response to the error event notification and free memory associated with the RDMA resource in response to tearing down the RDMA resource. The snapshot stores contents of the RDMA resource. | 03-26-2015 |
20150089012 | MULTI-FABRIC SAN BASED DATA MIGRATION - In one embodiment, a network device in a network obtains information identifying first storage and second storage. The network device notifies one or more other network devices in the network that traffic that is received by the other network devices is to be routed to the network device. The network device performs data migration from the first storage to the second storage. When the data migration from the first storage to the second storage is completed, the network device notifies the other network devices in the network that traffic that is received by the other network devices is no longer to be routed to the network device. | 03-26-2015 |
20150095442 | NETWORK STORAGE SYSTEM AND METHOD FOR FILE CACHING - A network storage system and a method for file caching are provided. The network storage system includes a first electronic apparatus and a server. The first electronic apparatus has a first storage space. The server has a network storage space larger than the first storage space. When the first electronic apparatus sends an access request to the server for accessing a first file within the network storage space, the server broadcasts a cache list in response to the access request. The cache list includes the first file and a plurality of neighboring files that neighbor the first file. After receiving the cache list, the first electronic apparatus accesses the first file according to the cache list, and caches at least one of the neighboring files according to a first cache space size of the first storage space. | 04-02-2015 |
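The server/client exchange in this abstract can be sketched in two steps: the server builds a cache list (the requested file plus its neighbors), and the client caches as many neighbors as fit in its cache space. The neighbor window and names below are illustrative assumptions:

```python
# Hypothetical cache-list scheme: the server returns the requested file
# plus up to `window` files on each side; the client then caches the
# neighbors that fit into its cache space.

def build_cache_list(files, requested, window=2):
    i = files.index(requested)
    return files[max(0, i - window): i + window + 1]

def cache_neighbors(cache_list, requested, sizes, cache_space):
    cached, used = [], 0
    for name in cache_list:
        if name == requested:
            continue  # the requested file is accessed, not pre-cached
        if used + sizes[name] <= cache_space:
            cached.append(name)
            used += sizes[name]
    return cached
```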
20150095443 | METHOD FOR MIGRATING MEMORY DATA OF VIRTUAL MACHINE, AND RELATED APPARATUS AND CLUSTER SYSTEM - A method for migrating memory data of a virtual machine, and a related apparatus, and a cluster system are provided. The method includes: obtaining a data sending request for sending memory data of a first virtual machine, where the request includes an identity of the first virtual machine and a PFN of the memory data that is requested to be sent; querying a correspondence information base according to the identity of the first virtual machine to obtain a correspondence of the first virtual machine; querying the correspondence of the first virtual machine according to the PFN of the memory data that is requested to be sent, so as to obtain a physical memory page address of the memory data; and sending, to a destination physical host by using an RDMA network adapter, memory data stored at the physical memory page address of the memory data. | 04-02-2015 |
20150095444 | APPARATUS, SYSTEM, AND COMPUTER READABLE MEDIUM FOR BOOTING A SERVER FROM A SHARED STORAGE SYSTEM - An apparatus, system, and computer readable medium are disclosed for booting a server from a shared storage system. The present invention teaches at least one server having at least one processor, a storage system having a plurality of storage drives and at least one boot volume corresponding to the at least one server, and a switch fabric having at least one switch; the switch fabric isolates boot traffic from storage traffic and enables communication between the server and the boot volume of the storage system. In some embodiments the switch fabric includes one or more partitionable switches that isolate boot traffic from storage traffic. The boot volumes may be a redundant array of storage devices. In certain embodiments, the present invention also includes devices external to the server, switch fabric, and storage system. | 04-02-2015 |
20150106468 | STORAGE SYSTEM AND DATA ACCESS METHOD - A distributed storage system which achieves high access performance while maintaining the flexibility of allocation of data objects is disclosed. A client terminal includes an asynchronous cache that retains a correspondence relationship between an identifier of object data and an identifier of the storage node that is to handle an access request for the object data, and an access unit that determines the storage node that is to handle the access request on the basis of the correspondence relationship stored in the asynchronous cache, and that transmits the access request to the determined storage node, wherein the storage node includes a determination unit that determines, upon receiving the access request from the client terminal, whether the access request is to be handled by itself, and notifies the client terminal of the determined result, and an update unit that updates the storage node that is to handle the access request, and wherein the asynchronous cache changes the correspondence relationship in accordance with the update, the change being made asynchronously with the update by the storage nodes. | 04-16-2015 |
20150113088 | PERSISTENT CACHING FOR OPERATING A PERSISTENT CACHING SYSTEM - A persistent caching system is provided. The persistent caching system includes a storage system having a caching server for storing data, and a client for accessing the data through a network. The caching server is configured to store the data in a number of virtual memory blocks. The virtual memory blocks refer to an associated memory-mapped file in a file system of the caching server. The caching server is configured to export addresses of the virtual memory blocks to the client. The client is configured to access at least some of the virtual memory blocks through RDMA using the exported addresses. The caching server is configured to page virtual memory blocks being accessed by one or more clients through RDMA to and/or from the memory-mapped files associated with the accessed virtual memory blocks. | 04-23-2015 |
20150120855 | HYBRID REMOTE DIRECT MEMORY ACCESS - A method for hybrid RDMA, the method may include: (i) receiving, by a first computer, a packet that was sent over a network from a second computer; wherein the packet may include data and metadata; (ii) determining, in response to the metadata, whether the data should be (a) directly written to a first application memory of the first computer by a first hardware accelerator of the first computer; or (b) indirectly written to the first application memory; (iii) directly writing or indirectly writing the data in response to the determination. | 04-30-2015 |
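The dispatch decision in this abstract (metadata determines whether the payload goes directly into application memory or through an indirect path) can be sketched as a simple router. The metadata flag name and buffer model are hypothetical:

```python
# Hypothetical hybrid-RDMA delivery: the packet's metadata selects either
# direct placement into application memory (the accelerator path) or
# indirect delivery via a staging buffer.

def deliver(packet, app_memory, staging_buffer):
    meta, data = packet["meta"], packet["data"]
    if meta.get("direct_placement"):
        # Direct path: write payload straight into application memory.
        app_memory[meta["offset"]:meta["offset"] + len(data)] = data
        return "direct"
    # Indirect path: stage the payload for a later copy into place.
    staging_buffer.append((meta["offset"], data))
    return "indirect"
```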
20150127762 | SYSTEM AND METHOD FOR SUPPORTING OPTIMIZED BUFFER UTILIZATION FOR PACKET PROCESSING IN A NETWORKING DEVICE - A system and method can support efficient packet processing in a network environment. The system can comprise a direct memory access (DMA) resources pool that comprises one or more DMA resources. Furthermore, the system can use a plurality of packet buffers in a memory, wherein each said DMA resource can point to a chain of packet buffers in the memory. Here, the chain of packet buffers can be implemented based on a linked list data structure and/or a linear array data structure. Additionally, each said DMA resource allows a packet processing thread to access the chain of packet buffers using a pre-assigned thread key. | 05-07-2015 |
20150127763 | PROGRAMMED INPUT/OUTPUT MODE - A data processing system and method are provided. A host computing device comprises at least one processor. A network interface device is arranged to couple the host computing device to a network. The network interface device comprises a buffer for receiving data for transmission from the host computing device. The processor is configured to execute instructions to transfer the data for transmission to the buffer. The data processing system further comprises an indicator store configured to store an indication that at least some of the data for transmission has been transferred to the buffer wherein the indication is associated with a descriptor pointing to the buffer. | 05-07-2015 |
20150134765 | POINT-TO-POINT SHARED MEMORY PROTOCOL WITH FEATURE NEGOTIATION - A method for negotiating a feature on a multiprocessor system includes determining, at a local processor, whether a remote shared memory (SMEM) item of a remote processor exists; reading, in response to determining that the remote SMEM item exists, a remote version and a remote feature flags value of the remote SMEM item; setting a local version number for a local SMEM item based on the remote version number; setting a local feature flags value for the local SMEM item based on the remote feature flags value; and creating the local SMEM item, the local SMEM item populated with the set local version number and the set local feature flags value. | 05-14-2015 |
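The negotiation steps in this abstract (read the remote SMEM item's version and feature flags if it exists, then create the local item based on them) can be sketched with a common convention: take the minimum common version and the bitwise intersection of feature flags. That policy and the constants are assumptions for illustration; the abstract only says the local values are set "based on" the remote ones:

```python
# Hypothetical feature negotiation: version = min of both sides,
# features = bitwise AND of both sides' feature flags.

LOCAL_VERSION = 3
LOCAL_FEATURES = 0b1011  # feature bits this processor supports (assumed)

def create_local_smem_item(remote_item):
    if remote_item is None:
        # Remote SMEM item does not exist yet: advertise our own capabilities.
        return {"version": LOCAL_VERSION, "features": LOCAL_FEATURES}
    version = min(LOCAL_VERSION, remote_item["version"])
    features = LOCAL_FEATURES & remote_item["features"]  # common subset
    return {"version": version, "features": features}
```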
20150142907 | DETERMINING SERVER WRITE ACTIVITY LEVELS TO USE TO ADJUST WRITE CACHE SIZE - Provided are a computer program product, system, and method for determining server write activity levels to use to adjust write cache size. Server write activity information on server write activity to the cache is gathered. The server write activity information is processed to determine a server write activity level comprising one of multiple write activity levels indicating a level of write activity. The determined server write activity level is transmitted to a storage server having a write cache, wherein the storage server uses the determined server write activity level to determine whether to adjust a size of the storage server write cache. | 05-21-2015 |
20150293881 | NETWORK-ATTACHED MEMORY - A method for memory access is applied in a cluster of computers linked by a network. For a given computer, a respective physical memory range is defined including a local memory range within the local RAM of the given computer and a remote memory range allocated to the given computer within the local RAM of at least one other computer in the cluster, which is accessible via the network using the network interface controllers of the computers. When a memory operation is requested at a given address in the respective physical memory range, the operation is executed on the data in the local RAM of the given computer when the data at the given address are valid in the local memory range. Otherwise the data are fetched from the given address in the remote memory range to the local memory range before executing the operation on the data. | 10-15-2015 |
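The access rule in this abstract (execute on local RAM when the address is valid locally, otherwise fetch from the remote range first) is essentially a cache-fill on miss. The dictionary model below is an illustrative assumption, standing in for RAM and the NIC-mediated fetch:

```python
# Hypothetical read path for network-attached memory: local RAM acts as a
# cache over the remote memory range; a miss triggers a fetch over the
# network (modeled here as a dictionary copy) before the operation runs.

def read(addr, local_ram, remote_ram, valid):
    """valid: set of addresses whose data are valid in the local range."""
    if addr not in valid:
        local_ram[addr] = remote_ram[addr]  # fetch from remote range first
        valid.add(addr)
    return local_ram[addr]
```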
20150296042 | DATA COLLECTION AND TRANSFER APPARATUS - A data collection and transfer apparatus includes: a memory; a buffer having large capacity and low data storage speed; a collecting unit collecting data and storing it in the memory; a transferring unit transferring data stored in the memory to a server; a monitoring unit monitoring a network; and a setting unit retaining storages. When fault is not detected, the collecting unit collects data specified in collection-data-setting storage with period specified in collection-period-setting storage and stores it in the memory and the transferring unit transfers data stored in the memory to the server with period specified in transfer-period-setting storage, and when fault is detected, the collecting unit collects data specified in buffer-data-setting storage with period specified in buffer-period-setting storage and stores it in the memory and data stored in the memory is stored in the buffer with period specified in the buffer-period-setting storage. | 10-15-2015 |
20150301762 | METHOD AND SYSTEM FOR TRANSFORMATION OF LOGICAL DATA OBJECTS FOR STORAGE - Systems capable of transformation of logical data objects for storage and methods of operating thereof are provided. One method includes identifying among a plurality of requests addressed to the storage device two or more “write” requests addressed to the same logical data object in a distributed network, deriving data chunks corresponding to identified “write” requests and transforming the derived data chunks, grouping the transformed data chunks in accordance with predefined criteria, generating a grouped “write” request to the storage device, and providing mapping in a manner facilitating one-to-one relationship between the data in the obtained data chunks and the data to be read from the transformed logical object. The method further includes obtaining an acknowledging response from the storage device, multiplying the obtained acknowledging response, and sending respective acknowledgements to each source that initiated each respective “write” request. | 10-22-2015 |
20150319487 | RDMA BASED REAL-TIME VIDEO CLIENT PLAYBACK ARCHITECTURE - A client playback architecture for a media content distribution system is provided. In the preferred embodiment, the client playback architecture is a Remote Direct Memory Access (RDMA) based architecture. The RDMA based architecture enables the client playback device to obtain media content from a central server in real-time or in substantially real-time as the media content is needed for playback at the client playback device. More specifically, the playback device includes RDMA enabled playback circuitry operating to perform RDMA transfers for select media content, buffer the media content received as a result of the RDMA transfers, and provide the media content for presentation to one or more associated viewers via one or more audio/video interfaces. | 11-05-2015 |
20150326684 | SYSTEM AND METHOD OF ACCESSING AND CONTROLLING A CO-PROCESSOR AND/OR INPUT/OUTPUT DEVICE VIA REMOTE DIRECT MEMORY ACCESS - A method of controlling a remote computer device of a remote computer system over a remote direct memory access (RDMA) is disclosed. According to one embodiment, the method includes establishing a connection for remote direct memory access (RDMA) between a local memory device of a local computer system and a remote memory device of a remote computer system. A local command is sent from a local application that is running on the local computer system to the remote memory device of the remote computer system via the RDMA. The remote computer system executes the local command on the remote computer device. | 11-12-2015 |
20150326686 | Method, Apparatus and System for Processing User Generated Content - A method, apparatus and system for processing User Generated Content are provided. The method comprises sending a request for obtaining UGC information to a network device; receiving UGC information returned by the network device in response to the request for obtaining UGC information; determining whether UGC in a local cache is the latest UGC based on the UGC information; and downloading UGC from the network device if UGC in the local cache is not the latest UGC. | 11-12-2015 |
20150331831 | CIRCUIT SWITCH PRE-RESERVATION IN AN ON-CHIP NETWORK - Techniques described herein generally include methods and systems related to circuit switching in a network-on-chip. According to embodiments of the disclosure, a network-on-chip may include routers configured to pre-reserve circuit-switched connections between a source node and a destination node before requested data are available for transmission from the source node to the destination node. Because the circuit-switched connection is already established between the source node and the destination node when the requested data are available for transmission from the source node, the data can be transmitted without the delay or with reduced delay caused by setup overhead of the circuit-switched connection. A connection setup message may be transmitted together with a memory request from the destination node to facilitate pre-reservation of the circuit-switched connection. | 11-19-2015 |
20150347243 | MULTI-WAY, ZERO-COPY, PASSIVE TRANSACTION LOG COLLECTION IN DISTRIBUTED TRANSACTION SYSTEMS - In various embodiments a distributed computing node in a plurality of distributed computing nodes logs transactions in a distributed processing system. In one embodiment, a set of information associated with at least one transaction is recorded in a transaction log. At least a portion of memory in at least one information processing system involved in the transaction is accessed. The portion of memory is directly accessed without involving a processor of the at least one information processing system. The set of information from the transaction log is written to the portion of memory. The set of information is directly written to the portion of memory without involving a processor of the at least one information processing system. | 12-03-2015 |
20150350301 | DIRECT MEMORY ACCESS WITH CONVERSION OR REVERSE CONVERSION OF DATA - The amount of data transferred between a server and storage is effectively reduced, and a wider effective bandwidth between the server and the storage is realized. An interface device is located in a server module and, when receiving a read request issued by a server processor, transmits a read command based on the read request to a storage processor. In a case where a reverse-conversion instruction, which causes the interface device to perform reverse conversion of post-conversion object data acquired by converting the object data of the read request, is received from the storage processor, a DMA transfer is performed that moves the post-conversion object data stored at the transfer source address on a storage memory to the transfer destination address on the server memory while reverse-converting it. | 12-03-2015 |
20150350327 | SWITCH-BASED DATA TIERING - Embodiments include a method, system, and computer program product for allocating data to storage in a network. A data item accessed by a server in the network is identified. A controller classifies the identified data item based on at least one of: a frequency of access requests for the data item by the server and an access time associated with providing the data item to the server once the server requests the data item. A memory of a switch in the network is selected for storing the data item based on the classification of the data item. The controller causes the data item to be stored in the memory of the switch, from which the data item is accessed by the server upon request. | 12-03-2015 |
20150370734 | COMMUNICATION INTERFACE FOR INTERFACING A TRANSMISSION CIRCUIT WITH AN INTERCONNECTION NETWORK, AND CORRESPONDING SYSTEM AND INTEGRATED CIRCUIT - A communication interface couples a transmission circuit with an interconnection network. The transmission circuit requests transmission of a predetermined amount of data. The communication interface receives data segments from the transmission circuit, stores the data segments in a memory, and verifies whether the memory contains the predetermined amount of data. In the case where the memory contains the predetermined amount of data, the communication interface starts transmission of the data stored in the memory. Alternatively, in the case where the memory contains an amount of data less than the predetermined amount of data, the communication interface determines a parameter that identifies the time that has elapsed since the transmission request or the first datum was received from the aforesaid transmission circuit, and verifies whether the time elapsed exceeds a time threshold. In the case where the time elapsed exceeds the time threshold, the communication interface starts transmission of the data stored in the memory. | 12-24-2015 |
20150373120 | ADJUSTING A DISPERSAL PARAMETER OF DISPERSEDLY STORED DATA - A method includes storing a first subset of encoded data slices of a set of encoded data slices in one local memory, LAN memory, and/or WAN memory. The method further includes storing a second subset of encoded data slices in a different one of the local memory, the LAN memory, and the WAN memory. The method further includes determining to make a change in storage of the set of encoded data slices. The method further includes determining to make an adjustment to the pillar width number based on the determined storage change. The method further includes generating adjusted encoded data slices for the set of encoded data slices based on the adjustment to the pillar width number. The method further includes storing the updated set of encoded data slices in accordance with the determined change in the storage of the set of encoded data slices. | 12-24-2015 |
20150378961 | Extended Fast Memory Access in a Multiprocessor Computer System - A multiprocessor computer system comprises a first node operable to access memory local to a remote node by receiving a virtual memory address from a requesting entity in node logic in the first node. The first node creates a network address from the virtual address received in the node logic, where the network address is in a larger address space than the virtual memory address, and sends a fast memory access request from the first node to a network node identified in the network address. | 12-31-2015 |
20160020992 | MAC TABLE SYNC SCHEME WITH MULTIPLE PIPELINES - Embodiments provide techniques for synchronizing forwarding tables across forwarding pipelines. One embodiment includes receiving, in a network switch comprising a plurality of forwarding pipelines, a plurality of data packets. Each of the plurality of data packets corresponds to a respective one of the plurality of forwarding pipelines. Each of the plurality of forwarding pipelines maintains a respective forwarding table corresponding to a respective plurality of ports managed by the forwarding pipeline. A plurality of update operations to be performed on the forwarding tables are determined, based on the received plurality of data packets. Embodiments further include performing the plurality of update operations on the forwarding tables, such that the forwarding tables across all forwarding pipelines of the plurality of forwarding pipelines are synchronized. | 01-21-2016 |
20160026604 | DYNAMIC RDMA QUEUE ON-LOADING - A remote direct memory access (RDMA) host device having a host operating system and an RDMA network communication adapter device. Responsive to determination of an RDMA on-load event for an RDMA queue used in an RDMA connection, at least one of a user-mode module and the operating system of the host device is used to provide an RDMA on-load notification to the RDMA network communication adapter device. The on-load notification notifies the adapter device of the determination of the on-load event for the RDMA queue, and the determination is performed by at least one of the user-mode module and the operating system. During processing of an RDMA transaction of the RDMA queue in a case where the RDMA on-load event is determined, the operating system is used to perform at least one RDMA sub-process of the RDMA transaction. | 01-28-2016 |
20160026605 | REGISTRATIONLESS TRANSMIT ONLOAD RDMA - An RDMA transceiving system in which an operating system of the RDMA transceiving system performs a first sub-process of an RDMA transmission, and an RDMA network communication adapter device of the RDMA transceiving system performs a second sub-process of the RDMA transmission responsive to RDMA transmission information provided by the operating system. The operating system performs the first sub-process responsive to a request that includes a virtual address corresponding to a buffer to be used for the RDMA transmission, and the operating system translates the virtual address into a physical address. The RDMA network communication adapter device performs an RDMA access responsive to the physical address. | 01-28-2016 |
20160028819 | DATA PATH SELECTION FOR NETWORK TRANSFER USING HIGH SPEED RDMA OR NON-RDMA DATA PATHS - A method and apparatus for high-speed data path selection for network transfer using IP addresses is disclosed. The method may include configuring an IP address for a non-RDMA data transfer or an RDMA high speed data transfer. An application executing in an emulated environment may transfer data using the same function calls for both non-RDMA data transfer or an RDMA high speed data transfer. This reduces changes to the application to allow RDMA transfers. A non-emulated interface determines whether the IP address is flagged as an RDMA address. If so, the data is transferred via an RDMA data path. If the IP address is not flagged as an RDMA address, the data is transferred via a non-RDMA data path through a traditional network stack. | 01-28-2016 |
20160034418 | SCALABLE DATA USING RDMA AND MMIO - To improve upon some of the characteristics of current storage systems in general and block data storage systems in particular, exemplary embodiments combine state-of-the-art networking techniques with state-of-the-art data storage elements in a novel way. To accomplish this combination in a highly effective way, it is proposed to combine the networking remote direct memory access (RDMA) technique and the storage-oriented memory-mapped input output (MMIO) technique in a system to provide direct access from a remote storage client to a remote storage system with little to no central processing unit (CPU) intervention of the remote storage server. In some embodiments, this technique may reduce the required CPU intervention on the client side. These reductions of CPU intervention potentially reduce latency while providing performance improvements, and/or providing more data transfer bandwidth and/or throughput and/or more operations per second compared to other systems with equivalent hardware. | 02-04-2016 |
20160048476 | DATA MANAGING SYSTEM, DATA MANAGING METHOD, AND COMPUTER-READABLE, NON-TRANSITORY MEDIUM STORING A DATA MANAGING PROGRAM - A data managing system includes data managing apparatuses storing data using a first storage unit and a second storage unit with a higher access speed than the first storage unit. Each data managing apparatus includes an operation performing unit performing, upon reception of an operation request including a first identifier and a second identifier indicating an operation target performed before an operation target of the first identifier, an operation on first data corresponding to the first identifier; a prior-read request unit requesting a prior-read target data managing apparatus to store data corresponding to a third identifier in the second storage unit upon reception of the operation request; and a prior-read target registration request unit requesting the data managing apparatuses corresponding to the second identifier to store the first identifier as the prior-read target of the second identifier. | 02-18-2016 |
20160062944 | SUPPORTING RMA API OVER ACTIVE MESSAGE - Methods, apparatus, and software for implementing RMA application programming interfaces (APIs) over Active Message (AM). AM write and AM read requests are sent from a local node to a remote node to write data to or read data from memory on the remote node using Remote Memory Access (RMA) techniques. The AM requests are handled by corresponding AM handlers, which automatically perform operations associated with the requests. For example, for AM write requests an AM write request handler may write data contained in an AM write request to a remote address space in memory on the remote node, or generate a corresponding RMA write request that is enqueued into an RMA queue used in accordance with a tagged message scheme. Similar operations are performed by AM read request handlers. RMA reads and writes using AM are further facilitated through use of associated read, write, and RMA progress modules. | 03-03-2016 |
20160077997 | APPARATUS AND METHOD FOR DEADLOCK AVOIDANCE - An improved method for the prevention of deadlock in a massively parallel processor (MPP) system wherein, prior to a process sending messages to another process running on a remote processor, the process allocates space in a deadlock-avoidance FIFO. The allocated space provides a “landing zone” for requests that the software process (the application software) will subsequently issue using a remote-memory-access function. In some embodiments, the deadlock-avoidance (DLA) function provides two different deadlock-avoidance schemes: controlled discard and persistent reservation. In some embodiments, the software process determines which scheme will be used at the time the space is allocated. | 03-17-2016 |
20160080497 | INTERCONNECT DELIVERY PROCESS - A method for enforcing data integrity in an RDMA data storage system includes flushing data write requests to a data storage device before sending an acknowledgment that the data write requests have been executed. An RDMA data storage system includes a node configured to flush data write requests to a data storage device before sending an acknowledgment that a data write request has been executed. | 03-17-2016 |
20160088082 | TECHNIQUES FOR COORDINATING PARALLEL PERFORMANCE AND CANCELLATION OF COMMANDS IN A STORAGE CLUSTER SYSTEM - Various embodiments are directed to techniques for coordinating at least partially parallel performance and cancellation of data access commands between nodes of a storage cluster system. An apparatus may include a processor component of a first node coupled to a first storage device storing client device data; an access component to perform replica data access commands of replica command sets on the client device data, each replica command set assigned a set ID; a communications component to analyze a set ID included in a network packet to determine whether a portion of a replica command set in the network packet is redundant, and to reassemble the replica command set from the portion if the portion is not redundant; and an ordering component to provide the communications component with set IDs of replica command sets of which the access component has fully performed the set of replica data access commands. | 03-24-2016 |
20160092395 | MAPPING AND REDUCING - As disclosed herein, a method for conducting mapping and reducing operations includes receiving a plurality of data records and aggregating data records having a common value for a selected field within the data records to provide aggregated data records for each common value, storing the aggregated data records on a shared storage subsystem, and accessing the aggregated data records on the shared storage subsystem. The method further comprises accumulating information for the aggregated data records to provide accumulated information, and using the accumulated information. | 03-31-2016 |
20160094630 | MAPPING AND REDUCING - As disclosed herein, a system for conducting mapping and reducing operations includes a shared storage subsystem that is connected to one or more mapping servers and one or more reducing servers via a high-speed data link and communication protocol. Each mapping server receives a multitude of data records, aggregates the data records having a particular value, and sorts and stores the resulting aggregated data records on the shared storage subsystem. Each reducing server accesses the shared storage subsystem and accumulates information on the aggregated data records for a particular common value. In many instances, the access rates to the shared storage subsystem achieved by the mapping servers and the reducing servers approach that of accessing a local attached storage device. A computer program product and method corresponding to the system for conducting mapping and reducing operations are also disclosed herein. | 03-31-2016 |
20160098376 | SIDE CHANNEL COMMUNICATION HARDWARE DRIVER - A system and method for communicating data between a first software and a second software located on first and second devices, respectively, has a hardware driver and memory associated with each device. Each communication of data from the first software to the second software allocates memory to manage data to be communicated from the first software to the second software, provides memory allocation information to the hardware driver associated with the first software, and transmits the data from the first hardware driver to the second hardware driver for delivery to the second software via the memory associated with the second software. | 04-07-2016 |
20160100012 | METHOD OF STORING DATA - A method of sharing data in a subsea network comprising a plurality of nodes interconnected by a plurality of data connections arranged to carry data to and from equipment in subsea installations, the method comprising: storing data in a mass subsea data store provided across one or more nodes in the subsea network configured to act as a subsea data server; and on receiving, at the subsea data server, a request for access to data stored in the mass subsea data store, the subsea data server retrieving the requested data from the data store and causing the requested data to be sent over the subsea network. | 04-07-2016 |
20160103782 | METHODS AND SYSTEMS FOR SECURE TRANSMISSION AND RECEPTION OF DATA BETWEEN A MOBILE DEVICE AND A CENTRAL COMPUTER SYSTEM - Methods and systems provide secure data transmission from a mobile device to a central computer system over a communication network. The method includes executing a first computer program in the mobile device and allocating by the first computer program a volatile memory space in the mobile device for a defined session. The method includes storing data in the allocated volatile memory space. The method includes transmitting the stored data to the central computer using a secure transmission protocol over the communication network. The method includes de-allocating by the first computer program the volatile memory space at the termination of the session. The de-allocation erases the transmitted data from the volatile memory space. | 04-14-2016 |
20160103783 | LOW LATENCY DEVICE INTERCONNECT USING REMOTE MEMORY ACCESS WITH SEGMENTED QUEUES - A writing application on a computing device can reference a tail pointer to write messages to message buffers that a peer-to-peer data link replicates in memory of another computing device. The message buffers are divided into at least two queue segments, where each segment has several buffers. Messages are read from the buffers by a reading application on one of the computing devices using an advancing head pointer by reading a message from a next message buffer when determining that the next message buffer has been newly written. The tail pointer is advanced from one message buffer to another within a same queue segment after writing messages. The tail pointer is advanced from a message buffer of a current queue segment to a message buffer of a next queue segment when determining that the head pointer does not indicate any of the buffers of the next queue segment. | 04-14-2016 |
20160105511 | SCHEDULING AND EXECUTION OF DAG-STRUCTURED COMPUTATION ON RDMA-CONNECTED CLUSTERS - A server and/or a client stores a metadata hash map that includes one or more entries associated with keys for data records stored in a cache on a server, wherein the data records comprise a directed acyclic graph (DAG), and the directed acyclic graph is comprised of a collection of one or more nodes connected by one or more edges, each of the nodes representing one or more tasks ordered into a sequence, and each of the edges representing one or more constraints on the nodes connected by the edges. Each of the entries stores metadata for a corresponding data record, wherein the metadata comprises a server-side remote pointer that references the corresponding data record stored in the cache. A selected data record is accessed using a provided key by: (1) identifying potentially matching entries in the metadata hash map using the provided key; (2) accessing data records stored in the cache using the server-side remote pointers from the potentially matching entries; and (3) determining whether the accessed data records match the selected data record using the provided key. | 04-14-2016 |
20160112292 | METHOD AND SYSTEM FOR NON-TAGGED BASED LATENCY CALCULATION - A system and method for calculating latency including a latency calculation device configured to: receive an enqueue notification relating to a packet enqueue operation and including a queue identifier, increment an enqueue counter, and determine that a latency calculation flag is not set. Based on the determination that the latency calculation flag is not set, the latency calculation device is configured to: determine a first time corresponding to the enqueue notification, store the first time, store a latency start count, and set the latency calculation flag. The latency calculation device is also configured to: receive a dequeue notification relating to the packet dequeue operation and including the queue identifier, increment a dequeue counter, determine that the latency start count and the dequeue counter values match, determine a second time corresponding to the dequeue notification, and calculate latency as the difference between the first time and the second time. | 04-21-2016 |
20160112532 | IMAGE FORMATION APPARATUS, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM HAVING PROGRAM STORED THEREIN TO CONTROL IMAGE FORMATION APPARATUS, AND METHOD FOR CONTROLLING IMAGE FORMATION APPARATUS - An MFP includes a processor following a user instruction to execute a browser and cause a display device to display a designated web page. In doing so, the processor obtains user information and stores it to a memory. The processor determines that the user information stored in the memory is deleted at a point of time at which the processor determines that the MFP presents a predefined state allowing the MFP to end displaying a web page or at a point of time at which the processor determines that a user operation is done that is predefined to cause the MFP to end displaying the web page. At the earlier one of these points of time, the processor deletes the user information stored in the memory. | 04-21-2016 |
20160117283 | REMOTE DIRECT MEMORY ACCESS (RDMA) OPTIMIZED HIGH AVAILABILITY FOR IN-MEMORY DATA STORAGE - A method for RDMA optimized high availability for in-memory storing of data includes receiving RDMA key-value store write requests in a network adapter of a primary computing server directed to writing data to an in-memory key-value store of the primary computing server and performing RDMA write operations of the data by the network adapter of the primary computing server responsive to the RDMA key-value store write requests. The method also includes replicating the RDMA key-value store write requests to a network adapter of a secondary computing server, by the network adapter of the primary computing server. Finally, the method includes providing address translation data for the in-memory key-value store of the primary computing server from the network adapter of the primary computing server to the network adapter of the secondary computing server. | 04-28-2016 |
20160119422 | REMOTE DIRECT MEMORY ACCESS (RDMA) OPTIMIZED HIGH AVAILABILITY FOR IN-MEMORY DATA STORAGE - A method for RDMA optimized high availability for in-memory storing of data includes receiving RDMA key-value store write requests in a network adapter of a primary computing server directed to writing data to an in-memory key-value store of the primary computing server and performing RDMA write operations of the data by the network adapter of the primary computing server responsive to the RDMA key-value store write requests. The method also includes replicating the RDMA key-value store write requests to a network adapter of a secondary computing server, by the network adapter of the primary computing server. Finally, the method includes providing address translation data for the in-memory key-value store of the primary computing server from the network adapter of the primary computing server to the network adapter of the secondary computing server. | 04-28-2016 |
20160119423 | CODE-DIVISION-MULTIPLE-ACCESS (CDMA)-BASED NETWORK CODING FOR MASSIVE MEMORY SERVERS - Technologies are generally described for code-division-multiple-access (CDMA)-based network-coding for reading data from massive memory servers. According to some examples, data may be modulated by spreading sequences prior to being transmitted from one memory node to another. In addition, the received signals (modulated data) from multiple memory nodes may be combined by a receiver memory node before being forwarded to other memory nodes arranged in a grid of memory nodes. The memory nodes may be assigned communication bandwidths flexibly and rapidly by changing the respective spreading sequences, which may be orthogonal or near-orthogonal for different memory nodes to allow support of random-access burst or dynamic data traffic, and enhancing fault-tolerance. | 04-28-2016 |
20160124897 | CACHE MANAGEMENT FOR RDMA DATA STORES - Embodiments relate to methods, systems and computer program products for cache management in a Remote Direct Memory Access (RDMA) data store. Aspects include receiving a request from a remote computer to access a data item stored in the RDMA data store and creating a lease including a local expiration time for the data item. Aspects further include creating a remote pointer to the data item, wherein the remote pointer includes a remote expiration time and transmitting the remote pointer to the remote computer, wherein the lease is an agreement that the remote computer can perform RDMA reads on the data item until the remote expiration time. | 05-05-2016 |
20160124898 | CACHE MANAGEMENT FOR RDMA DATA STORES - Embodiments relate to methods, systems and computer program products for cache management in a Remote Direct Memory Access (RDMA) data store. Aspects include receiving a request from a remote computer to access a data item stored in the RDMA data store and creating a lease including a local expiration time for the data item. Aspects further include creating a remote pointer to the data item, wherein the remote pointer includes a remote expiration time and transmitting the remote pointer to the remote computer, wherein the lease is an agreement that the remote computer can perform RDMA reads on the data item until the remote expiration time. | 05-05-2016 |
20160127468 | VIRTUAL NON-VOLATILE MEMORY EXPRESS DRIVE - A processing device receives an input/output (I/O) command generated by a process, the I/O command directed to a virtual storage provided to a host executing the process, wherein the virtual storage comprises a virtual non-volatile memory express (NVMe) drive. The processing device generates a new I/O command based on the received I/O command and encapsulates the new I/O command into a message. The processing device sends the message to at least one of a remote NVMe storage device or a storage server comprising one or more remote NVMe storage devices. | 05-05-2016 |
20160127492 | NON-VOLATILE MEMORY EXPRESS OVER ETHERNET - A processing device receives a message encapsulating an input/output (I/O) command from a remote computing device. The processing device identifies one or more physical storage devices to be accessed to satisfy the I/O command. The processing device then sends, to each physical storage device of the one or more physical storage devices, one or more non-volatile memory express (NVMe) commands directed to that physical storage device. | 05-05-2016 |
20160127494 | REMOTE DIRECT NON-VOLATILE CACHE ACCESS - A system and method of providing direct data access between a non-volatile cache and a NIC in a computing system. A system is disclosed that includes a processing core embedded in a controller that controls a non-volatile cache; and a direct access manager for directing the processing core, wherein the direct access manager includes: a switch configuration system that includes logic to control a switch for either a remote direct access mode or a host access mode, wherein the switch couples each of the NIC, a local bus, and the non-volatile cache; a command processing system that includes logic to process data transfer commands; and a data transfer system that includes logic to manage the flow of data directly between the non-volatile cache and the NIC. | 05-05-2016 |
20160132460 | EXTERNAL MEMORY CONTROLLER NODE - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 05-12-2016 |
20160134704 | MANAGED P2P NETWORK WITH CONTENT-DELIVERY NETWORK - A content-acquisition request is sent to a centralized management service computer via a computer network. The content-acquisition request may query the centralized management service computer for a recommended content source to provide a first digital content item. If a response to the content-acquisition request is received via the computer network and identifies a recommended peer computer of a peer-to-peer network as the recommended content source, a request to download the first digital content item is sent to the recommended peer computer via the computer network. If a response to the content-acquisition request is not received, a fallback request to download the first digital content item is automatically sent to a content-delivery network computer via the computer network. | 05-12-2016 |
20160147709 | PROVIDING REMOTE, RELIANT AND HIGH PERFORMANCE PCI EXPRESS DEVICE IN CLOUD COMPUTING ENVIRONMENTS - A system architecture, a method, and a computer program product are disclosed for attaching remote physical devices. In one embodiment, the system architecture comprises a compute server and a device server. The compute server includes a system memory, and one or more remote device drivers; and the device server includes a system memory and one or more physical devices, and each of the physical devices includes an associated device memory. The compute server and the device server are connected through an existing network fabric that provides remote direct memory access (RDMA) services. A system mapping function logically connects one or more of the physical devices on the device server to the compute server, including mapping between the system memories and the device memories and keeping the system memories and the device memories in synchronization using the RDMA. | 05-26-2016 |
20160147710 | PROVIDING REMOTE, RELIANT AND HIGH PERFORMANCE PCI EXPRESS DEVICE IN CLOUD COMPUTING ENVIRONMENTS - A system architecture, a method, and a computer program product are disclosed for attaching remote physical devices. In one embodiment, the system architecture comprises a compute server and a device server. The compute server includes a system memory, and one or more remote device drivers; and the device server includes a system memory and one or more physical devices, and each of the physical devices includes an associated device memory. The compute server and the device server are connected through an existing network fabric that provides remote direct memory access (RDMA) services. A system mapping function logically connects one or more of the physical devices on the device server to the compute server, including mapping between the system memories and the device memories and keeping the system memories and the device memories in synchronization using the RDMA. | 05-26-2016 |
20160162438 | SYSTEMS AND METHODS FOR ENABLING ACCESS TO ELASTIC STORAGE OVER A NETWORK AS LOCAL STORAGE VIA A LOGICAL STORAGE CONTROLLER - A new approach is proposed that contemplates systems and methods to support elastic (extensible/flexible) storage access in real time by mapping a plurality of remote storage devices that are accessible over a network fabric as logical namespace(s) via a logical storage controller using a multitude of access mechanisms and storage network protocols. The logical storage controller exports and presents the remote storage devices to one or more VMs running on a host of the logical storage controller as the logical namespace(s), wherein these remote storage devices appear virtually as one or more logical volumes of a collection of logical blocks in the logical namespace(s) to the VMs. As a result, each of the VMs running on the host can access these remote storage devices to perform read/write operations as if they were local storage devices via the logical namespace(s). | 06-09-2016 |
20160173555 | WIRELESS BROADBAND NETWORK WITH INTEGRATED STREAMING MULTIMEDIA SERVICES | 06-16-2016 |
20160173600 | PROGRAMMABLE PROCESSING ENGINE FOR A VIRTUAL INTERFACE CONTROLLER | 06-16-2016 |
20160173601 | Command Message Generation and Execution Using a Machine Code-Instruction | 06-16-2016 |
20160179391 | Remote Memory Swapping Method, Apparatus and System | 06-23-2016 |
20160182637 | Isolating Clients of Distributed Storage Systems | 06-23-2016 |
20160188527 | METHODS AND SYSTEMS TO ACHIEVE MULTI-TENANCY IN RDMA OVER CONVERGED ETHERNET - A method for providing multi-tenancy support for RDMA in a system that includes a plurality of physical hosts. Each physical host hosts a set of data compute nodes (DCNs). The method, at an RDMA protocol stack of a first host, receives a packet that includes a request from a first DCN hosted on the first host for RDMA data transfer from a second DCN hosted on a second host. The method sends a set of parameters of an overlay network that are associated with the first DCN to an RDMA physical network interface controller (NIC) of the first host. The set of parameters is used by the RDMA physical NIC to encapsulate the packet with an RDMA data transfer header and an overlay network header, and to transfer the encapsulated packet to the second physical host using the overlay network. | 06-30-2016 |
20160188528 | ELECTRONIC SYSTEM WITH STORAGE CONTROL MECHANISM AND METHOD OF OPERATION THEREOF - An electronic system includes: a management server providing a management mechanism with an address structure having a unified address space; a communication block, coupled to the management server, configured to implement a communication transaction based on the management mechanism with the address structure having the unified address space; and a server, coupled to the communication block, providing the communication transaction with a storage device based on the management mechanism with the address structure having the unified address space. | 06-30-2016 |
20160197992 | FILE STORAGE PROTOCOLS HEADER TRANSFORMATION IN RDMA OPERATIONS | 07-07-2016 |
20160203102 | EFFICIENT REMOTE POINTER SHARING FOR ENHANCED ACCESS TO KEY-VALUE STORES | 07-14-2016 |
20160254942 | Instant Office Infrastructure Device | 09-01-2016 |
20160378713 | SYSTEM AND METHOD FOR PERSISTENCE OF APPLICATION DATA USING REPLICATION OVER REMOTE DIRECT MEMORY ACCESS - In accordance with an embodiment, described herein is a system and method for enabling persistence of application data, using replication over a remote direct memory access (RDMA) network. In an enterprise application server or other environment having a plurality of processing nodes, a replicated store enables application data to be written using remote direct memory access to the random access memory (RAM) of a set of nodes, which avoids single points of failure. Replicated store daemons allocate and expose memory to client applications via network endpoints, at which data operations such as reads and writes can be performed, in a manner similar to a block storage device. Resilvering can be used to copy data from one node to another, if it is determined that the number of data replicas within a particular set of nodes is not sufficient to meet the persistence requirements of a particular client application. | 12-29-2016 |
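The replicated-store abstract above combines two mechanisms: fan-out of a write to the RAM of several nodes, and "resilvering" to restore the replica count after a loss. A toy model, assuming illustrative names (`Node`, `write_replicated`, `resilver`) and plain dicts standing in for RDMA-exposed memory:

```python
# Sketch of replication plus resilvering: a write lands on `copies` nodes;
# if a node loses its copy, resilvering re-copies the data from a surviving
# node until the required replica count is restored.

class Node:
    def __init__(self, name):
        self.name = name
        self.mem = {}          # key -> bytes; stands in for exposed RAM

def write_replicated(nodes, key, data, copies):
    for node in nodes[:copies]:
        node.mem[key] = data   # an RDMA write in the real system

def resilver(nodes, key, copies):
    holders = [n for n in nodes if key in n.mem]
    if len(holders) >= copies or not holders:
        return
    data = holders[0].mem[key]
    for node in nodes:
        if key not in node.mem:
            node.mem[key] = data
            holders.append(node)
            if len(holders) == copies:
                return

cluster = [Node("a"), Node("b"), Node("c")]
write_replicated(cluster, "tx-log", b"rec1", copies=2)
cluster[0].mem.clear()                 # node "a" loses its RAM contents
resilver(cluster, "tx-log", copies=2)  # replica count restored
```

Keeping at least `copies` replicas in different nodes' RAM is what lets the store avoid a single point of failure without touching disk on the write path.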
20160378714 | COMMUNICATION APPARATUS AND CONTROL METHOD THEREOF - A communication apparatus includes a first memory unit, which stores data to be sent to another communication apparatus, and a second memory unit accessible at higher speed than the first memory unit; the data to be sent is transferred to the second memory unit concurrently with its transfer to the first memory unit. The communication apparatus sends the data from the second memory unit to the other communication apparatus and discards it from the second memory unit after sending, before receiving a response from the other communication apparatus. When the data must be resent, the resending is performed using the copy held in the first memory unit. | 12-29-2016 |
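The two-memory send path above can be sketched in a few lines: the fast copy serves the first transmission and is freed immediately, while the slow copy is retained for retransmission until the peer acknowledges. The `Sender` class and method names are illustrative assumptions.

```python
# Sketch of the dual-buffer send path: data is mirrored into slow (first)
# and fast (second) memory; the initial send drains the fast copy, which
# is discarded at once, and any retransmission uses the slow copy.

class Sender:
    def __init__(self):
        self.slow_mem = {}     # first memory unit: kept until ACKed
        self.fast_mem = {}     # second memory unit: staging for first send

    def stage(self, seq, data):
        self.slow_mem[seq] = data
        self.fast_mem[seq] = data      # concurrent transfer in the original

    def send(self, seq):
        return self.fast_mem.pop(seq)  # discard after sending, before ACK

    def resend(self, seq):
        return self.slow_mem[seq]      # resend path reads the slow copy

    def ack(self, seq):
        self.slow_mem.pop(seq, None)   # response received; slow copy freed

s = Sender()
s.stage(1, b"payload")
first = s.send(1)
retry = s.resend(1)                    # fast copy is gone, slow copy survives
s.ack(1)
```

The point of discarding the fast copy early is that the scarce high-speed memory is occupied only for the duration of one transmission, not for the whole round trip.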
20160381138 | Computing Erasure Metadata and Data Layout Prior to Storage Using A Processing Platform - Techniques are provided for computing data and metadata layout prior to storage in a storage system using a processing platform. An exemplary processing platform comprises one or more of a compute node and a burst buffer appliance. The processing platform communicates with a plurality of the compute nodes over a network, wherein a plurality of applications executing on the plurality of compute nodes generate a plurality of data objects; computes erasure metadata for one or more of the data objects on at least one of the compute nodes; and provides the erasure metadata with the corresponding one or more data objects to a storage system. The processing platform optionally determines a full set of the data objects to be stored and queries the storage system to determine an anticipated layout of the full set of the data objects to be stored. The anticipated layout allows special handling, for example, for small files and large files that are identified based on predefined criteria. | 12-29-2016 |
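The abstract above moves erasure-metadata computation onto the compute node before the object reaches storage. A minimal sketch of that idea, using single-fragment XOR parity as a deliberately simplified stand-in for a real erasure code such as Reed-Solomon; the function names are assumptions for illustration:

```python
# Client-side erasure coding sketch: the compute node splits an object
# into k equal fragments, computes an XOR parity fragment (the "erasure
# metadata"), and would hand fragments + parity to the storage system.

from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(obj, k):
    size = -(-len(obj) // k)                       # ceil(len(obj) / k)
    frags = [obj[i * size:(i + 1) * size].ljust(size, b"\x00")
             for i in range(k)]
    parity = reduce(xor_bytes, frags)              # single XOR parity frag
    return frags, parity

def recover(frags, parity, lost_index):
    # XOR of the surviving fragments with the parity rebuilds the lost one.
    others = [f for i, f in enumerate(frags) if i != lost_index]
    return reduce(xor_bytes, others + [parity])

frags, parity = split_with_parity(b"abcdefghij", k=3)
rebuilt = recover(frags, parity, lost_index=1)
```

XOR parity tolerates exactly one lost fragment; the anticipated-layout query in the abstract would then decide where fragments and parity land, e.g. handling small and large files differently.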
20170235702 | REMOTE DIRECT MEMORY ACCESS-BASED METHOD OF TRANSFERRING ARRAYS OF OBJECTS INCLUDING GARBAGE DATA | 08-17-2017 |
20180027074 | SYSTEM AND METHOD FOR STORAGE ACCESS INPUT/OUTPUT OPERATIONS IN A VIRTUALIZED ENVIRONMENT | 01-25-2018 |
20190146945 | CONVERGED MEMORY DEVICE AND METHOD THEREOF | 05-16-2019 |