Entries |
Document | Title | Date |
20080198855 | TRANSMISSION-RECEPTION APPARATUS - A transmission-reception apparatus configures ARQ control information not from a sequence number alone, but so that the ARQ control information comprises one sequence number identifying the first occurrence of a packet error, together with bit information indicating whether retransmission is required for the sequence numbers immediately following that one sequence number. | 08-21-2008 |
20080205406 | RECORDING MEDIUM HAVING RECEPTION PROGRAM RECORDED THEREIN, RECORDING MEDIUM HAVING TRANSMISSION PROGRAM RECORDED THEREIN, TRANSMISSION/RECEPTION SYSTEM AND TRANSMISSION/RECEPTION METHOD - The reception program causes a transmission-destination computer to control the processing of receiving packets having the same content, transmitted from a transmission-source computer through plural routes. | 08-28-2008 |
20080205407 | Network switch cross point - A switching fabric having cross points that process multiple stripes of serial data. Each cross point includes a plurality of port slices and ports. Each port includes a plurality of FIFOs, a FIFO read arbitrator, a multiplexer, a dispatcher, and an accumulator. In one embodiment, each cross point has eight ports and eight port slices. A method for processing a stripe of data at a cross point at one port slice includes storing data received from other port slices in a plurality of FIFOs and arbitrating the reading of the stored data. A step of writing data received from a port at the one port slice to an appropriate FIFO in a different port slice is also included. In one embodiment, a method for processing data in a port slice based on wide cell encoding and an external flow control command is provided. | 08-28-2008 |
20080212588 | SYSTEM AND METHOD FOR AVOIDING STALL USING TIMER FOR HIGH-SPEED DOWNLINK PACKET ACCESS SYSTEM - At least one timer is used to prevent a stall condition. If a timer is not active, the timer is started for a data block that is correctly received. The data block has a sequence number higher than a sequence number of another data block that was first expected to be received. When the timer is stopped or expires, all correctly received data blocks among data blocks up to and including a data block having a sequence number that is immediately before the sequence number of the data block for which the timer was started are delivered to a higher layer. Further, all correctly received data blocks up to a first missing data block, including the data block for which the timer was started, are delivered to the higher layer. | 09-04-2008 |
20080219267 | METHOD FOR TRANSMITTING NETWORK PACKETS - A method for transmitting network packets is provided. Reordered network packets received by a receiving end are put into a buffer queue. When a waiting time expires, or when a network packet with a sequence number equal to the current transmission sequence number is received, the receiving end picks and transmits appropriate network packets from the buffer queue. Therefore, reordered network packets are sorted and then sent out sequentially, thereby avoiding the waste of network bandwidth caused by retransmitting network packets and improving the transmission efficiency of the network. | 09-11-2008 |
20080225854 | PACKET UNSTOPPER SYSTEM FOR A PARALLEL PACKET SWITCH - A system for controlling egress buffer saturation includes, for each data packet flow, a comparator for comparing the number of data packets ‘WPC’ temporarily stored within an egress buffer to a predefined threshold value ‘WPCth’. The packet sequence number ‘PSNr’ of a last received in-sequence data packet and each highest packet sequence number ‘HPSNj’ received through respective ones of the plurality of switching planes is stored. By comparing the last received in-sequence packet sequence number ‘PSNr’ to each highest packet sequence number ‘HPSNj’ when the number of data packets ‘WPC’ exceeds the predefined threshold value ‘WPCth’, it can be determined which switching plane(s), among the plurality of switching planes, should have the flow of data packets unstopped. | 09-18-2008 |
20080240107 | SEQUENCE NUMBERING FOR DISTRIBUTED WIRELESS NETWORKS - Systems and methodologies are described that facilitate maintaining consistent radio-link layer protocol (RLP) sequence numbers in the event of an RLP sequence number reset. An offset can be adjusted upon occurrence of the event to reflect a subsequent expected sequence number. The offset can be added to the RLP sequence numbers such that receiving devices and/or higher layer applications can operate without realizing the sequence number reset. Additionally, the offset can be synchronized among base stations to facilitate operability following handoff of the receiving device. | 10-02-2008 |
20080240108 | Processing Encoded Real-Time Data - Processing packets of encoded real-time data to perform a gradual fade-out and fade-in of a signal, for example upon detecting a packet loss period. Upon detecting the packet loss period, a last correctly received packet is repeated with gradually increased attenuation, to fade out, for example, an audible output; similarly, after the end of a packet loss period, the reappearing signal can be slowly faded in by attenuating the first data packet, or a number of initial data packets, after the packet loss period. The attenuation operation can be performed at low complexity by decrementing segment numbers of samples of data packets, such as in a lookup operation. | 10-02-2008 |
20080253375 | DATA TRANSMISSION METHOD FOR HSDPA - In the data transmission method of an HSDPA system according to the present invention, a transmitter transmits Data Blocks each composed of one or more data units originated from a same logical channel, and a receiver receives the Data Block through a HS-DSCH and distributes the Data Block to a predetermined reordering buffer. Since each Data Block is composed of the MAC-d PDUs originated from the same logical channel, it is possible to monitor the in-sequence delivery of the data units, resulting in reduction of undesirable queuing delay caused by logical channel multiplexing. | 10-16-2008 |
20080259926 | Parsing Out of Order Data Packets at a Content Gateway of a Network - In one embodiment, a method includes receiving, at a local node of a network, a sequenced data packet of a flow made up of multiple sequenced data packets from a source node directed toward a destination node. The flow is to be parsed by the local node to describe the flow for administration of the network. Based on sequence data in the sequenced data packet, it is determined whether the sequenced data packet is out of order in the flow. If it is determined that the sequenced data packet is out of order, then the sequenced data packet is forwarded toward the destination node before parsing the sequenced data packet. The out of order sequenced data packet is also stored for subsequent parsing at the local node. | 10-23-2008 |
20080259927 | Information dissemination method and system having minimal network bandwidth utilization - An information disseminating apparatus that transmits information between nodes of a network while expending minimal or no network bandwidth for transmitting the information. The apparatus can include a message processor that generates or receives a message to be transmitted from a first node to a second node in the network, and a transmitter that transmits data packets in a sequence that represents the message from the first node to the second node. The apparatus may further include a plurality of queues, each of which is associated with a class and services one or more data packets each having a marker that corresponds to the class, and a queue processor that dequeues the data packets from the queues in accordance with the sequence and the class associated with each of the queues. | 10-23-2008 |
20080259928 | Method and Apparatus for Out-of-Order Processing of Packets using Linked Lists - A method and apparatus for out-of-order processing of packets using linked lists is described. In one embodiment, the method includes receiving packets in a global order, the packets being designated for different ones of a plurality of reorder contexts. The method also includes storing information regarding each of the packets in a shared reorder buffer. The method also includes, for each of the plurality of reorder contexts, maintaining a reorder context linked list that records the order in which those of the packets that were designated for that reorder context and that are currently stored in the shared reorder buffer were received relative to the global order. The method also includes completing processing of at least certain of the packets out of the global order and retiring at least certain of the packets from the shared reorder buffer out of the global order. | 10-23-2008 |
20080267190 | Method of, and System for, Communicating Data, and a Station for Transmitting Data - Data is transmitted from a first station. | 10-30-2008 |
20080273537 | CIPHERING SEQUENCE NUMBER FOR AN ADJACENT LAYER PROTOCOL IN DATA PACKET COMMUNICATIONS - A data packet communication system employs data encryption in a packet data convergence protocol (PDCP) and radio link control (RLC) in Layer 2 of transmission between a transmitter (TX) and a receiver (RX). A single sequence number is used for both the PDCP and RLC to reduce overhead by signaling a TX PDCP first ciphering sequence number to the RX prior to encrypted data packet communication. A sequence number accompanies each RLC PDU, which can encompass concatenated or segmented service data units (SDUs) from the higher layer PDCP. This sequence number is sufficient for the RLC to perform re-ordering, gap detection, retransmission, etc., while also allowing the RX upper layer PDCP to reconstruct a sequenced value used to encrypt content. | 11-06-2008 |
20080279189 | Method and System For Controlled Delay of Packet Processing With Multiple Loop Paths - A method and system for introducing controlled delay of packet processing at a network entity using multiple delay loop paths (DLPs). For each packet received at the network entity, a determination will be made as to whether or not processing should be delayed. If delay is necessary, one of a plurality of DLPs will be selected according to a desired delay for the packet and a path delay determined for each DLP. Upon completion of a DLP delay, a packet will be returned for processing, an additional delay, or some other action. Multiple DLPs may be enabled with packet queues, and may be used advantageously by security devices, such as Intrusion Prevention Systems (and other packet processing platforms) for which in-order processing of packets may be desired or required. | 11-13-2008 |
20080279190 | Maintaining End-to-End Packet Ordering - A network processor maintains end-to-end packet ordering by re-ordering packets that are processed in an order different from the order in which they are received. A first microblock stores a null value for a status flag corresponding to each packet, a second microblock modifies the null value to a first value or a second value respectively based on whether the packet is processed successfully, and a third microblock retrieves the values stored in the status flags of each packet and re-orders the packets. | 11-13-2008 |
20080279191 | Method and Apparatus of Delivering Protocol Data Units for a User Equipment in a Wireless Communications System - A method of delivering protocol data units (PDUs) for a user equipment in a wireless communications system includes receiving a reordering PDU from a protocol entity. The reordering PDU has a PDU sequence. A first PDU of the PDU sequence is a segment of a PDU having a segmented side at the front, and a last PDU of the PDU sequence is a complete PDU. The first PDU is concatenated with a previously stored PDU segment to form a complete PDU for being delivered to an upper layer entity of the protocol entity when the reordering PDU and the previously stored PDU segment are consecutive. All PDUs other than the first PDU in the reordering PDU are delivered to the upper layer entity. | 11-13-2008 |
20080279192 | Method and Apparatus of Delivering Protocol Data Units for a User Equipment in a Wireless Communications System - A method of delivering protocol data units (PDUs) for a user equipment in a wireless communications system includes receiving a reordering PDU from a protocol entity. The reordering PDU has a PDU sequence. A first PDU of the PDU sequence is a segment of a PDU having a segmented side at the front, and a last PDU of the PDU sequence is a segment of a PDU having a segmented side at the end. The first PDU and a previously stored PDU segment are discarded when the reordering PDU and the previously stored PDU segment are not consecutive. All PDUs other than the first PDU and the last PDU in the reordering PDU are delivered to an upper layer entity of the protocol entity and the last PDU is stored. | 11-13-2008 |
20080279193 | Method and Apparatus of Delivering Protocol Data Units for a User Equipment in a Wireless Communications System - A method of delivering packet data units (PDUs) for a user equipment in a wireless communications system includes receiving a reordering PDU having at least one upper layer PDU from a protocol entity, determining whether the at least one upper layer PDU is segmented for reassembling with a previously stored segment of an upper layer PDU according to a segmentation indication message corresponding to the reordering PDU, and delivering the at least one PDU to an upper layer protocol entity and discarding the previously stored segment when none of the at least one upper layer PDU is a segmented PDU. | 11-13-2008 |
20080279194 | Method and Apparatus of Improving Reset of Evolved Media Access Control Protocol Entity in a Wireless Communications System - A method of improving reset of an evolved media access control (MAC-ehs) protocol entity for a user equipment of a wireless communications system is disclosed. The MAC-ehs entity includes a plurality of reordering queues and a plurality of reassembly entities. The method includes receiving a reset request for resetting the MAC-ehs entity, delivering all reordering packet data units (PDUs) stored in the plurality of reordering queues to the corresponding reassembly entities for performing a reassembly process to deliver complete upper layer PDUs to an upper layer entity, and discarding all PDU segments still existing in the plurality of reassembly entities. | 11-13-2008 |
20080285565 | SYSTEMS AND METHODS FOR CONTENT INSERTION WITHIN A ROUTER - Systems and methods are provided for inserting content into messages being sent between systems or networks. A router according to one embodiment automatically inserts new content within a received data packet. Before insertion, the router may determine that the received data packet corresponds to a certain type of message such as a web page, an e-mail, or a text message. The router may also determine whether the packet includes a predetermined insertion point in the corresponding message. The predetermined insertion point may be, for example, an end of the web page, e-mail, or text message. The type of message and/or the subject matter of the inserted content may be based on user selectable preferences. In one embodiment, a plurality of packets are received before the new content is inserted into the message to improve reliability and/or allow message decoding. | 11-20-2008 |
20080285566 | METHOD AND APPARATUS FOR PROVIDING AND UTILIZING RADIO LINK CONTROL AND MEDIUM ACCESS CONTROL PACKET DELIVERY NOTIFICATION - A method and apparatus for providing and utilizing radio link control (RLC) and medium access control (MAC) packet delivery notification are disclosed. A MAC entity tracks a delivery status of a MAC protocol data unit (PDU) and sends MAC service data unit (SDU) delivery notification to an RLC entity and/or an upper layer upon occurrence of a triggering event. The triggering event may be expiration of a timer, a packet discard decision, MAC reset, MAC re-establishment, or a request from the upper layer. The RLC entity may also track a delivery status of an RLC service data unit (SDU) and send RLC delivery notification to the upper layer. An upper layer including a header compression entity may adjust a header compression parameter based on the RLC delivery notification or the MAC delivery notification. The RLC delivery notification may be used for lossless handover. Retransmission may be performed at the upper layer based on the RLC delivery notification or the MAC delivery notification. | 11-20-2008 |
20080298369 | SYSTEM AND METHOD FOR HANDLING OUT-OF-ORDER FRAMES - A system for handling out-of-order frames may include one or more processors that enable receiving of an out-of-order frame via a network subsystem. The one or more processors may enable placing data of the out-of-order frame in a host memory, and managing information relating to one or more holes resulting from the out-of-order frame in a receive window. The one or more processors may enable setting a programmable limit with respect to a number of holes allowed in the receive window. The out-of-order frame is received via a TCP offload engine (TOE) of the network subsystem or a TCP-enabled Ethernet controller (TEEC) of the network subsystem. The network subsystem may not store the out-of-order frame on an onboard memory, and may not store one or more missing frames relating to the out-of-order frame. The network subsystem may include a network interface card (NIC). | 12-04-2008 |
20080304488 | NON-STOP SWITCHING EQUIPMENT FOR PACKET BASED COMMUNICATION LINKS - When packets reach from the | 12-11-2008 |
20090010261 | Signal Transition Feature Based Coding For Serial Link - Signal transition feature based coding for serial links is described herein. According to one embodiment, in response to a data stream transmitted onto a serial communication link, one or more bits of the data stream are encoded according to a bit order determined based on a frequency of signal transitions of the data stream. As a result, an encoded data sequence is generated having fewer bit transitions than the data stream had prior to the encoding. Thereafter, the encoded data sequence is transmitted onto the serial communication link. Other methods and apparatuses are also described. | 01-08-2009 |
20090010262 | NETWORK ROUTING APPARATUS - A network routing apparatus in which packet forwarding units for performing a packet forwarding process are arranged in parallel with one another. A packet distribution unit for distributing packets to the parallel packet forwarding units and a packet rearrangement unit for rearranging the outputs of the packet forwarding units are provided in the network routing apparatus, and packet retrieving units for retrieving packet headers in the packet forwarding units are further arranged in parallel with one another. | 01-08-2009 |
20090016352 | MULTIPLEXING METHOD AND APPARATUS THEREOF FOR DATA SWITCHING - A multiplexing method for data switching is disclosed. In the method, continuous data is received; the continuous data includes a plurality of super frames, and each super frame includes a plurality of frames. These super frames are divided into a set of even super frames and a set of odd super frames. The frames included in the set of odd super frames are sorted in decreasing or increasing order of the required bit error rate of each frame. The frames included in the set of even super frames are sorted in increasing or decreasing order of the required bit error rate of each frame. An encoder is used to encode the sorted super frames. | 01-15-2009 |
20090022156 | Pacing a Data Transfer Operation Between Compute Nodes on a Parallel Computer - Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (‘DMA’) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine. | 01-22-2009 |
20090022157 | Error masking for data transmission using received data - A method and apparatus for error masking for data transmission using received data. An embodiment of a method includes receiving a first data packet, where the first data packet contains multiple data elements, the first data packet being a data packet in a data stream. The method further includes determining that one or more data packets are missing from the data stream, and generating one or more data packets to replace the one or more missing data packets based at least in part on the one or more data elements of the first data packet. | 01-22-2009 |
20090022158 | Method For Increasing Network Transmission Efficiency By Increasing A Data Updating Rate Of A Memory - A network interface circuit or card has a memory and a medium control module for transmitting data stored in the memory to a network. The method includes: when packet data has been transmitted (e.g., completely transmitted) from the memory to the medium control module, causing the memory to send an interrupt request so that new packet data can be read into the memory. This results in increased data transmission efficiency in the network interface circuit. | 01-22-2009 |
20090028156 | FRAME TRANSMISSION SYSTEM AND FRAME TRANSMISSION APPARATUS - A frame transmission apparatus collects a plurality of frames supplied via multiple cables forming a single logic path based on link aggregation setting and outputs the collected frames to a single output line. The frame transmission apparatus includes a delay information storage unit which stores delay information indicating a transmission delay of each of the multiple cables, a reception timing correcting unit which corrects reception timings of the plurality of frames supplied via the multiple cables for the delay information corresponding to the multiple cables through which the plurality of frames have been supplied, and a data recovery unit which collects the plurality of frames supplied via the multiple cables in an order of the reception timings corrected by the reception timing correcting unit and outputs the collected frames to the output line. | 01-29-2009 |
20090041020 | CLOCK MANAGEMENT BETWEEN TWO ENDPOINTS - Clock correlation can be achieved, for example, utilizing the RTP stream between a sender and receiver by determining a baseline at the start of, for example, a communication. This baseline is derived as a point in time from an arriving packet and represents a point from which subsequent packets deviate. Using this baseline, an early packet or a late packet can be detected. An early packet pushes the baseline down to that earlier point, while late arriving packets, if they are arriving late for a continuous period of time, represents a shift in the opposite direction from the baseline, resulting in a baseline moving to the “earliest” packet out of the sequence of the late arriving packets. | 02-12-2009 |
20090041021 | Method And Computer Program Product For Compressing Time-Multiplexed Data And For Estimating A Frame Structure Of Time-Multiplexed Data - A method and computer program product are provided for compressing and, in turn, for estimating the frame structure of time-multiplexed data. The time-multiplexed data may be received without an indication of the frame structure for the time-multiplexed data. As such, the frame structure of the time-multiplexed data may be estimated and the time-multiplexed data may be compressed at least partially in accordance with the estimation of the frame structure. The frame structure may be estimated by representing an estimation of frame structure with a tree structure. The tree structure may include a plurality of leaf nodes associated with a respective estimated signal sequence with a respective sampling rate and interleave location. The tree structure may include a plurality of tree branches with the estimation of the frame structure including at least one of splitting or merging tree branches. | 02-12-2009 |
20090046720 | GATHERING TRAFFIC PROFILES FOR ENDPOINT DEVICES THAT ARE OPERABLY COUPLED TO A NETWORK - Methods and computer program products for gathering a traffic profile for an endpoint device operably coupled to a network. Methods include simultaneously sending a set of numerically sequenced packets to a plurality of endpoint devices on the network including the endpoint device, wherein the plurality of endpoint devices are time synchronized with the numerically sequenced packets; receiving the numerically sequenced packets at the endpoint device; electronically storing packet data for the numerically sequenced packets at the endpoint device, wherein the packet data is stored for a user configurable time period; collecting the packet data within the user configurable time period; and using the collected packet data to determine packet loss information or packet sequence information for the endpoint device. | 02-19-2009 |
20090059928 | COMMUNICATION APPARATUS, COMMUNICATION SYSTEM, ABSENT PACKET DETECTING METHOD AND ABSENT PACKET DETECTING PROGRAM - Any packet loss is detected very quickly by means of only a series of sequence numbers in a multi-path environment in which a transmitter and a receiver are connected to each other by way of a plurality of networks and no inversion of sequence arises in any of the networks. A communication apparatus includes a plurality of sequence buffers, arranged one per network, to accumulate packets until a sequence acknowledgement, and an absence detecting section adapted to determine that a packet is absent when one or more packets are accumulated in all the sequence buffers. With this arrangement, the absence detecting section of the receiver monitors the packets staying in the sequence guaranteeing buffer arranged in each of the networks, exploiting the characteristic that packets are stored in the sequence buffers of all the networks when a packet loss takes place. | 03-05-2009 |
20090067430 | PARTIAL BUILD OF A FIBRE CHANNEL FABRIC - In one embodiment, a technique for performing partial build fabric operations when merging two or more Fibre Channel fabrics is provided. By maintaining a Principal Switch already assigned for one of two merging fabrics, a limited “partial build” may be performed for the other merging fabric. As a result, the time required for a Principal Switch selection phase may be greatly reduced. | 03-12-2009 |
20090067431 | HIGH PERFORMANCE NETWORK ADAPTER (HPNA) - A high performance network adapter is provided for forwarding traffic and providing adaptation between packetized memory fragment based processor links of multiple CPUs and multiple switch planes of a packet switching network. Low latency for short and long packets is provided by innovative packet reassembly, overlapping transmission, and reverse order transmission in the upstream direction, and cut through operation in the downstream direction. | 03-12-2009 |
20090073984 | PACKET GENERATOR FOR A COMMUNICATION NETWORK - The present disclosure provides systems and methods for distributing data packets on an aircraft data network having a plurality of end systems. The systems and methods include retrieving a configuration module, retrieving a payload description module, selecting a network interface component, selecting a port rate for the network interface component, selecting a payload module, generating payload data, generating a data packet from the generated payload data, and transmitting the generated data packet to at least one of the plurality of end systems via the network interface component and the aircraft data network. | 03-19-2009 |
20090080432 | SYSTEM AND METHOD FOR MESSAGE SEQUENCING IN A BROADBAND GATEWAY - A system and method for message sequencing in a broadband gateway comprising a receiver to receive two or more messages, a data storage system to store the two or more messages, an identifier to identify a processing sequence for the two or more messages, and a retriever to retrieve the two or more messages for processing based on the identified processing sequence for providing broadband DSL service to a customer. | 03-26-2009 |
20090080433 | Method and system for power consumption reduction by network devices as a function of network activity - A method and system may provide power consumption reduction by a network device. A device on a network and its components may each be capable of operating in a low-power state and a high-power state. The device may include a host to run an application and a network controller to interface with the network. The network controller may include host interface logic to interface with the host, a micro-engine or other logic to process network maintenance data packets, and a filter to classify data packets. The filter may classify a received data packet by its associated data type and destination. The data packet may be sent to the micro-engine if the data type and destination of the packet may be processed by the micro-engine. | 03-26-2009 |
20090086735 | Method of Skipping Nullified Packets During Mass Replay from Replay Buffer - In PCI-Express and similar network systems, back-up copies of recently sent packets are kept in a replay buffer for resending if the original packet is not well received by the intended destination device. A method for locating the back-up copy in the replay buffer comprises applying a less significant portion of the sequence number of a to-be-retrieved back-up copy to an index table to obtain a start address or other locator indicating where in the replay buffer the to-be-retrieved back-up copy resides. A method for skipping replay of late-nullified packets includes deleting from the index table references to late-nullified packets. | 04-02-2009 |
20090086736 | NOTIFICATION OF OUT OF ORDER PACKETS - Methods and apparatus relating to notification of out-of-order packets are described. In an embodiment, data such as a sequence number and a flow identifier may be extracted from a packet. The extracted data may be used to check the extracted sequence number against an expected sequence number and indicate that the packet is an out-of-order packet. Other embodiments are also disclosed. | 04-02-2009 |
20090086737 | System-on-chip communication manager - A Queue Manager (QM) system and method are provided for communicating control messages between processors. The method accepts control messages from a source processor addressed to a destination processor. The control messages are loaded in a first-in first-out (FIFO) queue associated with the destination processor. Then, the method serially supplies loaded control messages to the destination processor from the queue. The messages may be accepted from a plurality of source processors addressed to the same destination processor. The control messages are added to the queue in the order in which they are received. In one aspect, a plurality of parallel FIFO queues may be established that are associated with the same destination processor. Then, the method differentiates the control messages into the parallel FIFO queues and supplies control messages from the parallel FIFO queues in an order responsive to criteria such as queue ranking, weighting, or shaping. | 04-02-2009 |
20090103543 | Recovering From a Link Down Event - The present invention is directed to recovering from (e.g., surviving) a link down event in a link having a two-way serial-connection (e.g., a peripheral component interface express (“PCI Express”) link). Rather than resetting a sequence numbering and control flow credits of the two-way serial-connection link when a link down event occurs, a method in accordance with an embodiment of the present invention maintains the sequence numbering and control flow credits. In this way, packets that were transmitted are not lost. After the two-way serial-connection link comes back up, any packets that were “in flight” (i.e., transmitted but not acknowledged) are retransmitted (e.g., replayed) in accordance with the sequence numbering and the control flow credits. | 04-23-2009 |
20090116489 | METHOD AND APPARATUS TO REDUCE DATA LOSS WITHIN A LINK-AGGREGATING AND RESEQUENCING BROADBAND TRANSCEIVER - A rate shaper between a transmit queue and a transmitter regulates the flow rate of packets through a resequencing broadband transmission device, such as a cable modem, toward a user device, such as a computer. When the resequencing device receives packets out of order that belong to a program flow spread across multiple links, or channels, the rate shaper regulates the flow of packets by imposing a delay to the flow of resequenced packets by a factor that is inversely proportional to the number of out of sequence packets buffered in a message storage buffer. A setpoint signal may be input to the shaper to provide a target flow rate to the shaper. | 05-07-2009 |
20090135827 | SYNCHRONIZING SEQUENCE NUMBERS AMONG PEERS IN A NETWORK - A method and system are disclosed. In one embodiment the method includes a first device sending a stream of packets in a sequence across a network to a second device. In the sequence of packets there are a number of data packets and one or more synchronization packets. The synchronization packets are interspersed throughout the data packets. The method also includes the second device being capable of dropping any of the received data packets in the sequence that arrive more than a first time-delta threshold after the arrival of the most recent synchronization packet. | 05-28-2009 |
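The drop rule above can be sketched with a simulated packet stream; the `(kind, arrival_time)` packet representation and the function name are assumptions made for illustration.

```python
def filter_stream(packets, delta):
    """Drop data packets arriving more than `delta` after the last sync packet.

    packets: iterable of (kind, arrival_time), where kind is 'sync' or 'data'.
    """
    last_sync = None
    kept = []
    for kind, t in packets:
        if kind == "sync":
            last_sync = t  # remember the most recent synchronization packet
            continue
        if last_sync is not None and t - last_sync > delta:
            continue  # arrived too long after the last sync packet: drop it
        kept.append((kind, t))
    return kept
```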
20090135828 | INTERNET PROTOCOL TELEVISION (IPTV) BROADCASTING SYSTEM WITH REDUCED DISPLAY DELAY DUE TO CHANNEL CHANGING, AND METHOD OF GENERATING AND USING ACCELERATION STREAM - An Internet Protocol Television (IPTV) broadcasting system for reducing display delay when a channel is changed is provided. The system includes: a stream transmitting apparatus receiving a broadcast channel stream, generating an acceleration stream from the broadcast channel stream, and transmitting the broadcast channel stream and the acceleration stream; and a set-top box receiving the broadcast channel stream and the acceleration stream from the stream transmitting apparatus and reproducing a decoding stream according to the type of a received frame in the broadcast channel stream and the acceleration stream. Display delay when a channel is changed is thereby reduced. | 05-28-2009 |
20090141726 | NETWORK ADDRESS ASSIGNMENT METHOD AND ROUTING METHOD FOR A LONG THIN ZIGBEE NETWORK - A network address assignment method and a routing method for a long thin ZigBee network are provided. The routing method includes the following steps. A network address is assigned to each node of the ZigBee network, wherein each network address includes a cluster ID and a node ID. The cluster ID is used for identifying a plurality of clusters of the ZigBee network. The node ID is used for identifying a plurality of nodes within each cluster. When a packet is transmitted, it is checked whether the current node holding the packet and the destination node are in the same cluster. If they are in the same cluster, the packet is routed within the cluster according to the node ID of the destination node and a predetermined algorithm. If they are not, the packet is routed among the clusters according to the cluster ID of the destination node. | 06-04-2009 |
20090185565 | SYSTEM AND METHOD FOR USING SEQUENCE ORDERED SETS FOR ENERGY EFFICIENT ETHERNET COMMUNICATION - A system and method for using sequence ordered sets for energy efficient Ethernet communication. Sequence ordered sets can be generated by a first device for communication of parameter(s) to a second device, which parameters can be used in implementing an energy efficient Ethernet control policy. Sequence ordered sets can be used in communication between physical layer devices, or between a physical layer device and a media access control device. In one example, the sequence ordered set can identify a point at which a rate transition is to occur. | 07-23-2009 |
20090190593 | PACKET ANALYSIS METHOD, PACKET ANALYSIS APPARATUS, RECORDING MEDIUM STORING PACKET ANALYSIS PROGRAM - A packet analysis apparatus analyzes the content of communication obtained by monitoring or capturing packets passing through a network. The apparatus has a unit that acquires source or destination address information from a network layer packet header. The apparatus has a unit that acquires, from the network layer packet header, an identifier whose value is set to increase monotonically with each transmission for each source or destination address. The apparatus has a unit that searches a storage part, which holds the identifier of the previous packet for each source or destination address, and acquires the identifier corresponding to the address information in the current packet. The apparatus has a unit that compares the acquired identifier of the previous packet with the identifier of the current packet and determines that reordering has occurred when the identifier of the current packet is smaller. | 07-30-2009 |
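The comparison above reduces to a small per-address state machine; this sketch (function and parameter names are illustrative) keeps the previous packet's identifier for each address and flags reordering when the current identifier is smaller.

```python
def observe(last_ids, addr, ident):
    """Return True if reordering is detected for this source/destination address.

    last_ids: dict mapping address -> identifier seen in the previous packet.
    """
    prev = last_ids.get(addr)
    last_ids[addr] = ident  # remember this packet's identifier for next time
    # The identifier increases monotonically per address on the sender side,
    # so a smaller value than the previous packet's indicates reordering.
    return prev is not None and ident < prev
```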
20090196294 | PACKET TRANSMISSION VIA MULTIPLE LINKS IN A WIRELESS COMMUNICATION SYSTEM - Techniques for generating and transmitting packets on multiple links in a wireless communication system are described. In one aspect, a transmitter generates new packets for the multiple links based on the likelihood of each link being available. The transmitter determines the likelihood of each carrier being available based on whether or not there is a pending packet on that carrier and, if yes, the number of subpackets sent for the pending packet. The transmitter generates new packets such that packets for links progressively less likely to be available contain data units with progressively higher sequence numbers. The transmitter determines whether each link is available and sends a packet on each link that is available. In another aspect, the transmitter generates and sends new packets in a manner to ensure in-order transmission. In one design, the transmitter generates new packets for each possible combination of links that might be available. | 08-06-2009 |
20090213857 | METHOD, SYSTEM, COMPUTER PROGRAM PRODUCT, AND HARDWARE PRODUCT FOR ETHERNET VIRTUALIZATION USING AN ELASTIC FIFO MEMORY TO FACILITATE FLOW OF BROADCAST TRAFFIC TO VIRTUAL HOSTS - A packet that represents unknown traffic for a virtual host is received. A test is performed to ascertain whether or not a destination connection can be determined for the received packet wherein it is discovered the packet is a broadcast (or multicast) packet. Since such packets have multiple destinations in a virtualized environment, the broadcast (or multicast) packet requires special handling and is passed to a store engine. The store engine obtains a free packet buffer from an elastic FIFO memory, moves the packet into the free packet buffer, and submits the free packet buffer back to the elastic FIFO memory. An assist engine determines and assigns connections to packets submitted to the elastic FIFO without known connections, such as broadcast (or multicast) packets. The assist engine efficiently performs this task through the use of indirect buffers, which are also obtained from and submitted back to the elastic FIFO. A monitoring engine detects both an availability of connection-specific resources and a presence of one or more waiting packets, within the elastic FIFO, with a known destination connection. When both are detected, said monitoring engine removes a packet from the elastic FIFO and passes it to an allocating engine. The allocating engine allocates the one or more connection-specific resources required to send the packet to the virtual host memory corresponding to the connection destination, then passes the packet to a sending engine which writes the packet to the virtual host memory. | 08-27-2009 |
20090232140 | PACKET TRANSMISSION APPARATUS AND METHOD - A packet transmission apparatus includes a transmission unit dividing input data and transferring segments, which are obtained by adding sequence numbers to the respective pieces of the divided data; a switch unit transferring the segments to one of a plurality of reception units; and a reception unit reconstructing the original input packet from the plurality of segments that arrive from the switch unit, on the basis of the sequence numbers. The reception unit includes a packet buffer storing segments that arrive from the switch unit; a determination unit determining, on the basis of the sequence number, whether a segment stored in the packet buffer is to be discarded; and a discard part reading the segments stored in the packet buffer in order, starting from the segment having the oldest sequence number, and discarding any segment that is determined to be discarded. | 09-17-2009 |
20090252167 | QUEUE PROCESSING METHOD - A method of processing data packets, each data packet being associated with one of a plurality of entities. The method comprises storing a data packet associated with a respective one of said plurality of entities in a buffer, storing state parameter data associated with said stored data packet, the state parameter data being based upon a value of a state parameter associated with said respective one of said plurality of entities, and processing a data packet in said buffer based upon said associated state parameter data. | 10-08-2009 |
20090257435 | MODIFICATION OF LIVE STREAMS - Mechanisms are provided for generating and modifying live media streams. A device establishes a session and requests a media stream from a content server. The content server provides the media stream to the device. The content server also obtains an insertion stream for inclusion in the media stream. Packets are removed from the media stream to allow inclusion of the insertion stream. Timestamp information and sequence number information are maintained to allow uninterrupted delivery of the modified media stream. | 10-15-2009 |
20090262743 | METHODS AND APPARATUS FOR EVALUATING THE SEQUENCE OF PACKETS - Embodiments of the invention are directed to evaluating the sequence of packets in a received packet stream. In some embodiments, when a packet in the packet stream is received, its sequence number may be determined and compared to an expected sequence number indicative of the highest received sequence number in the packet stream. If the sequence number of the packet is greater than or equal to the expected sequence number, the packet may be considered an in-order packet and a counter that counts the number of received in-order packets in the packet stream may be incremented. | 10-22-2009 |
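The counting rule in this entry can be sketched directly; the function name is an illustrative assumption, but the rule matches the abstract: a packet counts as in-order when its sequence number is greater than or equal to the highest expected sequence number so far.

```python
def count_in_order(seqs):
    """Count packets whose sequence number is >= the current expectation."""
    expected = 0   # one past the highest in-order sequence number seen
    in_order = 0
    for seq in seqs:
        if seq >= expected:
            in_order += 1
            expected = seq + 1  # advance the expectation past this packet
        # else: the packet is behind the expectation and is not counted
    return in_order
```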
20090279548 | PIPELINE METHOD AND SYSTEM FOR SWITCHING PACKETS - A switching device comprising one or more processors coupled to a media access control (MAC) interface and a memory structure for switching packets rapidly between one or more source devices and one or more destination devices. Packets are pipelined through a series of first processing segments to perform a plurality of first sub-operations involving the initial processing of packets received from source devices to be buffered in the memory structure. Packets are pipelined through a series of second processing segments to perform a plurality of second sub-operations involved in retrieving packets from the memory structure and preparing packets for transmission. Packets are pipelined through a series of third processing segments to perform a plurality of third sub-operations involved in scheduling transmission of packets to the MAC interface for transmission to one or more destination devices. | 11-12-2009 |
20090285217 | STATISTICAL MULTIPLEXING OF COMPRESSED VIDEO STREAMS - Described are computer-based methods and apparatuses, including computer program products, for statistical multiplexing of compressed video streams. A deadline of a packet of a compressed video stream is computed based on a program clock reference value of the packet. A plurality of packets, which includes the packet, is sorted based on deadlines corresponding to the packets. A next packet from the sorted plurality of packets is selected, the next packet having a corresponding deadline nearest to a system clock time. The next packet is transmitted. | 11-19-2009 |
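The deadline-sorted selection above maps naturally onto a priority queue. This sketch assumes a simple deadline formula (PCR plus a fixed offset) purely for illustration; the patent does not specify one.

```python
import heapq

def schedule(packets, offset):
    """Return stream IDs in transmission order, nearest deadline first.

    packets: list of (pcr, stream_id); deadline = pcr + offset (assumed formula).
    """
    heap = [(pcr + offset, stream_id) for pcr, stream_id in packets]
    heapq.heapify(heap)  # sort packets by their computed deadlines
    order = []
    while heap:
        _, stream_id = heapq.heappop(heap)  # deadline nearest the clock
        order.append(stream_id)
    return order
```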
20090290586 | Shared Transport - A method and apparatus for processing messages are described. In one embodiment, messages are received over a plurality of channels from a plurality of applications in a virtual machine. An identifier is coupled to each message. The identifier refers to the application originating the corresponding message. A shared transport is formed and associated with the channels. The messages are processed with the shared transport with the identifier. | 11-26-2009 |
20090310611 | Method and device for communication between multiple sockets | 12-17-2009 |
20100002704 | System and Method for End-User Custom Parsing Definitions - Systems and methods for performing customizable analysis of a communication session between two entities include: loading predetermined first parser definitions stored as at least one binary file; receiving second parser definitions in a form other than binary; loading and compiling the second parser definitions after the first parser definitions are already operating; and applying the first and second parser definitions to a communication session, wherein the first parser definitions identify standard components of the communication session and the second parser definitions are customizable and identify non-standard components of the communication session. | 01-07-2010 |
20100020801 | System and Method for Filtering and Alteration of Digital Data Packets - A method comprises receiving data from a data source and converting the data, in approximately real time, into digital data packets, wherein the data packets have a common format. The method further comprises filtering the data packets using a user-defined metadata schema and storing the filtered data packets into a data storage medium. | 01-28-2010 |
20100027547 | Relay Device And Terminal Unit - In a relay device, a storage section stores data items having order information (e.g. sequence numbers) added thereto, received from a transmitting-end device. A transmission order controller reads out the data items from the storage section such that the data items are transmitted in an order corresponding to information to be notified to a receiving-end device, and transmits the data items. This enables a detecting section of the receiving-end device to detect that its receiver has received the data items with sequence numbers arranged not in the order 1, 2, 3 but in the order 3, 1, 2, for example, and to detect that information corresponding to this order has been transmitted from the relay device. A rearrangement controller rearranges the data items received by the receiver based on the sequence numbers and outputs them in sequence-number order. | 02-04-2010 |
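The order-based signalling in this entry can be sketched as follows: the relay transmits sequence-numbered items in a chosen permutation to convey a value, and the receiver both decodes that value from the arrival order and restores the original order from the sequence numbers. Mapping values to lexicographic permutation indices is an illustrative assumption.

```python
from itertools import permutations

def relay_send(items, value):
    """items: list of (seq, data) with seq = 0..n-1; reorder to encode `value`."""
    order = list(permutations(range(len(items))))[value]
    return [items[i] for i in order]

def receive(arrived):
    """Decode the signalled value from arrival order, then restore order."""
    n = len(arrived)
    perm = tuple(seq for seq, _ in arrived)
    value = list(permutations(range(n))).index(perm)
    restored = sorted(arrived)  # rearrange by sequence number
    return value, restored
```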
20100046519 | Re-Ordering Segments of a Large Number of Segmented Service Flows - A method and network device for re-ordering segments of a segmented data stream. The method includes receiving at least two segments of a segmented data stream. A descriptor for each of the at least two segments is obtained, and the at least two segments are re-ordered to generate re-ordered segments, where the re-ordered segments are in an original order. A set of re-ordered segments are processed to obtain at least one data packet, where at least one descriptor is utilized in the processing of the set of re-ordered segments. | 02-25-2010 |
20100046520 | PACKET RECOVERY METHOD, COMMUNICATION SYSTEM, INFORMATION PROCESSING DEVICE, AND PROGRAM - A packet recovery method of the present invention is a packet recovery method upon loss of a plurality of packets transmitted from a first node | 02-25-2010 |
20100046521 | System and Method for High Speed Packet Transmission - The present invention provides systems and methods for providing data transmission speeds at or in excess of 10 gigabits per second between one or more source devices and one or more destination devices. According to one embodiment, the system of the present invention comprises first and second media access control (MAC) interfaces to facilitate receipt and transmission of packets over an associated set of physical interfaces. The system also contemplates first and second field programmable gate arrays (FPGAs) coupled to the MAC interfaces and associated first and second memory structures; the first and second FPGAs are configured to perform initial processing of packets received from the first and second MAC interfaces and to schedule the transmission of packets to the first and second MAC interfaces for transmission to one or more destination devices. The first and second FPGAs are further operative to dispatch and retrieve packets to and from the first and second memory structures. A third FPGA, coupled to the first and second memory structures and a backplane, is operative to retrieve and dispatch packets to and from the first and second memory structures, compute appropriate destinations for packets and organize packets for transmission. The third FPGA is further operative to receive and dispatch packets to and from the backplane. | 02-25-2010 |
20100061374 | Credit based flow control in an asymmetric channel environment - A system and method are provided for controlling information flow from a channel service module (CSM) in an asymmetric channel environment. The method provides information for transmission to an OSI model PHY (physical) layer device with a channel buffer. The PHY device channel buffer current capacity is estimated. Information is sent to the channel buffer responsive to estimating the channel buffer capacity, prior to receiving a Polling Result message from the PHY device. Initially, Polling Request messages are sent to the PHY device, and Polling Result messages received from the PHY device, as is conventional. In response to analyzing the Polling messages, a transmission pattern is determined, which includes the amount of information to transmit and a period between transmissions. | 03-11-2010 |
20100080231 | Method and system for restoration of a packet arrival order by an information technology system - A system and method for restoring the arrival order of a plurality of packets after receipt of the packets and prior to a retransmission of the plurality of packets are provided. The invented system is configured to process a first number of packets through a high latency path, and then process all remaining packets through a lower latency path. The received packets are stored after processing in a queue memory until either (a.) all of the packets processed through the high latency path are fully processed through the high latency path, or (b.) a time period of packet processing has expired. The packets stored in the queue are transmitted from the system in the order in which the packets were received by the system, and the additional data packets are retransmitted without storage in the queue memory. A method for allocating system resources for memory queue use is further provided. | 04-01-2010 |
20100098087 | Traffic Generator Using Parallel Coherent Transmit Engines - There is disclosed a packet generator and method of generating a packet flow. The packet generator may include a plurality of parallel transmit engines to form packets for transmission and a multiplexer to coherently interleave packets formed by the plurality of transmit engines. | 04-22-2010 |
20100118876 | Link Layer Control Protocol Implementation - The present invention relates to a link layer control protocol implementation in a communication system. To improve the operative efficiency of the link layer control protocol implementation, it is suggested to delay issuance of a retransmission request for a missing data unit during a retransmission delay time period. Therefore, according to the present invention a retransmission request is not issued immediately upon detection of a missing data unit. The present invention thereby avoids issuance of a false alarm for the missing data unit when it is received during the retransmission delay period. | 05-13-2010 |
20100135303 | RETRANSMISSION-REQUEST TRANSMITTING METHOD AND RECEIVING SIDE APPARATUS - In a retransmission-request transmitting method, the receiving side apparatus activates a reordering timer, when receiving a first packet before receiving an unreceived packet with a sequence number smaller than a sequence number of the first packet; triggers transmission of a retransmission request for the unreceived packet, when having not received the unreceived packet by the time of expiration of the reordering timer activated in response to the receipt of the first packet; and stops and reactivates the reordering timer activated in response to the receipt of the first packet, when a value of the sequence number of the first packet falls out of a range of the receiving side window as a result of changing the upper limit value and the lower limit value in accordance with a sequence number of a second packet received from the transmitting side apparatus. | 06-03-2010 |
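The reordering-timer behaviour this entry describes can be sketched with a simulated clock. The class shape, the single-timer simplification, and the decision to request only the first missing sequence number are illustrative assumptions; the essence is that a packet arriving ahead of a gap arms the timer, filling the gap stops it, and expiry triggers a retransmission request.

```python
class Reorderer:
    def __init__(self, timeout):
        self.timeout = timeout
        self.expected = 0          # first sequence number not yet received
        self.received = set()
        self.timer_deadline = None # expiry time, or None when not running
        self.retx_requests = []

    def on_packet(self, seq, now):
        self.received.add(seq)
        while self.expected in self.received:
            self.expected += 1     # advance past contiguously received packets
        if any(s > self.expected for s in self.received):
            if self.timer_deadline is None:
                self.timer_deadline = now + self.timeout  # gap ahead: arm timer
        else:
            self.timer_deadline = None  # no gap remains: stop the timer

    def tick(self, now):
        if self.timer_deadline is not None and now >= self.timer_deadline:
            # Timer expired and the gap persists: request the first missing packet.
            self.retx_requests.append(self.expected)
            self.timer_deadline = None
```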
20100142534 | SYSTEM AND METHOD FOR HANDLING OUT-OF-ORDER FRAMES - A system for reordering frames may include at least one processor that enables receiving of an out-of-order frame via a network subsystem. The at least one processor may enable placing data of the out-of-order frame in a host memory, and managing information relating to one or more holes resulting from the out-of-order frame in a receive window. The at least one processor may enable setting a programmable limit with respect to the number of holes allowed in the receive window. The out-of-order frame is received via a TCP offload engine (TOE) of the network subsystem or a TCP-enabled Ethernet controller (TEEC) of the network subsystem. The network subsystem may not store the out-of-order frame on an onboard memory, and may not store one or more missing frames relating to the out-of-order frame. The network subsystem may include a network interface card (NIC). | 06-10-2010 |
20100172356 | PARSING OUT OF ORDER DATA PACKETS AT A CONTENT GATEWAY OF A NETWORK - In one embodiment, a method includes receiving, at a local node of a network, a sequenced data packet of a flow made up of multiple sequenced data packets from a source node directed toward a destination node. The flow is to be parsed by the local node to describe the flow for administration of the network. Based on sequence data in the sequenced data packet, it is determined whether the sequenced data packet is out of order in the flow. If it is determined that the sequenced data packet is out of order, then the sequenced data packet is forwarded toward the destination node before parsing the sequenced data packet. The out of order sequenced data packet is also stored for subsequent parsing at the local node. | 07-08-2010 |
20100177776 | RECOVERING FROM DROPPED FRAMES IN REAL-TIME TRANSMISSION OF VIDEO OVER IP NETWORKS - Technologies for recovering from dropped frames in the real-time transmission of video over an IP network are provided. A video streaming module receives a notification from a receiving module that a data packet has been lost. The video streaming module determines, based on the type of video frame conveyed in the lost packet and the timing of the lost packet in relation to the sequence of video frames transmitted to the receiving module, whether or not a replacement video frame should be sent to the receiving module. If the video streaming module determines a replacement video frame is warranted, then the video streaming module instructs a video encoding module to generate a replacement video frame and then transmits the replacement video frame to the receiving module. | 07-15-2010 |
20100177777 | PRESERVING THE ORDER OF PACKETS THROUGH A DEVICE - A network device includes one or more sprayers, multiple packet processors, and one or more desprayers. The sprayers receive packets on at least one incoming packet stream and distribute the packets according to a load balancing scheme that balances the number of bytes of packet data that is given to each of the packet processors. The packet processors receive the packets from the sprayers and process the packets to determine routing information for the packets. The desprayers receive the processed packets from the packet processors and transmit the packets on at least one outgoing packet stream based on the routing information. | 07-15-2010 |
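The byte-balancing spray in this entry can be sketched as a greedy assignment; the function shape is an illustrative assumption, but it captures the stated scheme of balancing bytes (not packet counts) across the packet processors.

```python
def spray(packet_sizes, num_processors):
    """Assign each packet to the processor that has received the fewest bytes.

    Returns (assignment, loads): the processor index chosen for each packet,
    and the total bytes given to each processor.
    """
    loads = [0] * num_processors
    assignment = []
    for size in packet_sizes:
        p = loads.index(min(loads))  # least-loaded processor, by bytes
        assignment.append(p)
        loads[p] += size
    return assignment, loads
```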
20100183013 | PACKET PROCESSING DEVICE AND METHOD - A packet processing device is provided, which is applied to a network equipment that transmits packets. The device includes: a control module for executing a control schedule; a capture module for capturing at least one packet according to the control schedule; and a disassembling module for disassembling the header of the packet according to the control schedule so as to obtain packet header information. The packet processing device of the present invention can be installed in any network equipment to disassemble and process packets before they are captured by CPUs or memories of back-end computers, thereby achieving rapid processing of packets and reducing usage of CPU resources and occupancy of memories. | 07-22-2010 |
20100202460 | MAINTAINING PACKET SEQUENCE USING CELL FLOW CONTROL - The packet out-of-sequence problem can be solved by using a window flow control scheme that can dispatch traffic at the cell level, in a round robin fashion, as evenly as possible. Each VOQ at the input port has a sequence head pointer that is used to assign sequence numbers (SN) to the cells. Also a sequence tail pointer is available at each VOQ that is used to acknowledge and limit the amount of cells that can be sent to the output ports based on the window size of the scheme. Each VIQ at the output port has a sequence pointer or sequence number (SN) pointer that indicates to the VIQ which cell to wait for. Once the VIQ receives the cell that the SN pointer indicated, the output port sends an ACK packet back to the input port. By using sequence numbers and the relevant pointers, the packet out-of-sequence problem is solved. | 08-12-2010 |
20100208735 | Apparatus and Method for Coding an Information Signal into a Data Stream, Converting the Data Stream and Decoding the Data Stream - More customization and adaptation of coded data streams may be achieved by processing the information signal such that the various syntax structures obtained by pre-coding the information signal are placed into logical data packets, each of which being associated with a specific data packet type of a predetermined set of data packet types, and by defining a predetermined order of data packet types within one access unit of data packets. The consecutive access units in the data stream may, for example, correspond to different time portions of the information signal. By defining the predetermined order among the data packet types it is possible, at decoder's side, to detect the borders between successive access units even when removable data packets are removed from the data stream on the way from the data stream source to the decoder without incorporation of any hints into the remainder of the data stream. Due to this, decoders reliably detect the beginnings and endings of access units and therefore are not liable to a buffer overflow despite a removal of data packets from the data stream before arrival at the decoder. | 08-19-2010 |
20100220728 | SYSTEM AND METHOD FOR ACHIEVING ACCELERATED THROUGHPUT - Systems and methods for transporting data between two endpoints over an encoded channel are disclosed. Data transmission units (data units) from the source network are received at an encoding component logically located between the endpoints. These first data units are subdivided into second data units and are transmitted to the destination network over the transport network. Also transmitted are encoded or extra second data units that allow the original first data units to be recreated even if some of the second data units are lost. These encoded second data units may be merely copies of the second data units transmitted, parity second data units, or second data units which have been encoded using erasure correcting coding. At the receiving endpoint, the second data units are received and are used to recreate the original first data units. | 09-02-2010 |
20100220729 | SYSTEM AND METHOD FOR IDENTIFYING UPPER LAYER PROTOCOL MESSAGE BOUNDARIES - Systems and methods that identify the Upper Layer Protocol (ULP) message boundaries are provided. In one example, a method that identifies ULP message boundaries is provided. The method may include one or more of the following steps: attaching a framing header of a frame to a data payload to form a packet, the framing header being placed immediately after the byte stream transport protocol header, the framing header comprising a length field comprising a length of a framing protocol data unit (PDU); and inserting a marker in the packet, the marker pointing backwards to the framing header and being inserted at a preset interval. | 09-02-2010 |
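The framing above can be sketched on a byte stream; the 2-byte length field carrying the payload length, the 2-byte marker encoding the distance back to the framing header, and the 8-byte marker interval are all illustrative assumptions, not the patent's actual field sizes.

```python
MARKER_INTERVAL = 8  # assumption: insert a marker every 8 bytes of PDU data

def frame(payload):
    """Prepend a framing header, then insert backward-pointing markers."""
    header = len(payload).to_bytes(2, "big")  # length field of the framing PDU
    pdu = header + payload
    out = bytearray()
    for i, b in enumerate(pdu):
        if i and i % MARKER_INTERVAL == 0:
            # Marker: number of PDU bytes back to the start of the framing
            # header, letting a receiver relocate the header mid-stream.
            out += i.to_bytes(2, "big")
        out.append(b)
    return bytes(out)
```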
20100238936 | DATA PROCESSING APPARATUS AND REDUNDANCY SWITCHING METHOD - A data processing apparatus includes a first frame processing unit that fragments a first input frame and identifies a head of the first input frame and outputs first head position information; a second frame processing unit that fragments a second input frame which is a redundant frame of the first input frame and is input asynchronously with the first input frame, identifies a head of the second frame, and outputs second head position information; a first and a second storage unit that receive and store the fragmented pieces of data output from the first and the second frame processing units respectively; and a fragmented data processing unit that reads the fragmented pieces of data out of one of the first and second storage units based on the first and second head position information and outputs the fragmented data. | 09-23-2010 |
20100238937 | HIGH SPEED PACKET FIFO INPUT BUFFERS FOR SWITCH FABRIC WITH SPEEDUP AND RETRANSMIT - Described embodiments provide a first-in, first-out (FIFO) buffer for packet switching in a crossbar switch with a speedup factor of m. The FIFO buffer comprises a plurality of registers configured to receive N-bit portions of data in packets and a plurality of one-port memories, each having width W segmented into S portions of width W/S. A first logic module is coupled to the registers and the one-port memories and receives the N-bit portions of data in and the outputs of the registers. A second logic module coupled to the one-port memories constructs data out read from the one-port memories. In a sequence of clock cycles, the N-bit data portions are alternately transferred from the first logic module to a segment of the one-port memories, and, for each clock cycle, the second logic module constructs the data out packet with output width based on the speedup factor of m. | 09-23-2010 |
20100238938 | HIGH SPEED PACKET FIFO OUTPUT BUFFERS FOR SWITCH FABRIC WITH SPEEDUP - Described embodiments provide a first-in, first-out (FIFO) buffer for packet switching in a crossbar switch with a speedup factor of m. The FIFO buffer comprises a first logic module that receives m N-bit data portions from a switch fabric, the m N-bit data portions comprising one or more N-bit data words of one or more data packets. A plurality of one-port memories store the received data portions. Each one-port memory has a width W segmented into S portions of width W/S, where W/S is related to N. A second logic module provides one or more N-bit data words, from the one-port memories, corresponding to the received m N-bit data portions. In a sequence of clock cycles, the data portions are alternately transferred from corresponding segments of the one-port memories in a round-robin fashion, and, for each clock cycle, the second logic module constructs data out read from the one-port memories. | 09-23-2010 |
20100246585 | Multiple Channel Digital Subscriber Line Framer/Deframer System and Method - The framer, also referred to as the scrambler/Reed-Solomon encoder (SRS), is a part of the transmitter and accepts user and control data in the form of one or more logical channels, partitions this data into frames, adds error correction codes, randomizes the data through a scrambler, and multiplexes logical channels into a single data stream. The multiplexed data is then passed to the constellation encoder as the next step in the formation of the VDSL symbol. The deframer, also referred to as the descrambler/Reed-Solomon decoder (DRS), is part of the receiver and performs the inverse function of the framer. Disclosed is a highly configurable hardware framer/deframer that includes a digital signal processor interface configured to provide high level control, a FIFO coupled to data interfaces, a scrambler and CRC generator, a Reed-Solomon encoder, an interleaver, a data interface coupled to a constellation encoder, a data interface coupled to a constellation decoder, a deinterleaver, a Reed-Solomon decoder, descrambler and CRC check, an interface to external data sync, methods for control of configuration of data paths between hardware blocks, and methods for control and configuration of the individual hardware blocks in a manner that provides compliance with VDSL and many related standards. | 09-30-2010 |
20100254388 | METHOD AND SYSTEM FOR APPLYING EXPRESSIONS ON MESSAGE PAYLOADS FOR A RESEQUENCER - Described is an improved method, system, and computer program product for implementing an improved resequencer, along with related mechanisms and processes. Expressions are applied to a message payload to perform message sequencing. | 10-07-2010 |
20100254389 | METHOD AND SYSTEM FOR IMPLEMENTING A BEST EFFORTS RESEQUENCER - Described is an improved method, system, and computer program product for implementing an improved resequencer, along with related mechanisms and processes. A best efforts resequencing approach is described for determining a set of messages to process in a computing system. | 10-07-2010 |
20100260186 | LARGE SEND SUPPORT IN LAYER 2 SWITCH TO ENHANCE EFFECTIVENESS OF LARGE RECEIVE ON NIC AND OVERALL NETWORK THROUGHPUT - The present disclosure is directed to a method for delivering a plurality of packets from a network switch to a receiving node. The method may comprise collecting a plurality of packets received at the network switch during a time window; arranging the plurality of packets based on a source address, a packet number, and a destination address for each one of the plurality of packets collected during the time window; and delivering the arranged plurality of packets to the receiving node. | 10-14-2010 |
20100265953 | METHOD AND APPARATUS FOR SIGNAL SPLITTING AND COMBINING - A method and apparatus for splitting an asynchronous signal are provided. The method includes: buffering, according to frame sequence, an asynchronous signal to be split; and sending n frames of data respectively on n channels in parallel whenever n frames of data have been buffered, where n is a ratio of a rate level of the asynchronous signal before split to that of the asynchronous signal after split. A method and apparatus for signal combination are provided. The method includes: buffering n channels of parallel signals to be combined simultaneously according to frame sequence; and sending n channels of frames serially after one frame is buffered for each of the n channels of the parallel signals; wherein n is a ratio of a rate level of the parallel signals after combined to a rate level of the parallel signals before combined. | 10-21-2010 |
20100265954 | Method, System, and Computer Program Product for High-Performance Bonding Resequencing - A method, system, and computer program product for receiving and resequencing a plurality of data segments received on a plurality of channels of a bonding channel set, comprising determining whether a sequence number of a received segment matches an expected sequence number. If so, the process includes forwarding the segment for further processing, incrementing the expected sequence number, and forwarding any queued packets corresponding to the expected sequence number and immediately succeeding sequence numbers less than a sequence number of a next missing segment. If the sequence number of the received segment does not match the expected sequence number, the received segment is queued at a memory location. The address of this location is converted to a segment index. The segment index is stored in a sparse array. | 10-21-2010 |
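The expected-sequence-number logic this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: class and field names are invented, and a plain dict stands in for the sparse array of segment indexes.

```python
class BondingResequencer:
    """Resequences segments arriving out of order on bonded channels.

    A segment whose sequence number matches the expected value is
    forwarded at once; others are queued until the gap is filled.
    """

    def __init__(self, first_seq=0):
        self.expected = first_seq
        self.queue = {}      # sequence number -> queued segment
        self.forwarded = []  # stands in for "further processing"

    def receive(self, seq, segment):
        if seq == self.expected:
            # In-order segment: forward it, bump the expected number,
            # then drain queued segments up to the next missing one.
            self.forwarded.append(segment)
            self.expected += 1
            while self.expected in self.queue:
                self.forwarded.append(self.queue.pop(self.expected))
                self.expected += 1
        else:
            # Out-of-order segment: queue it until its turn comes.
            self.queue[seq] = segment
```

Feeding segments 0, 2, 1 into this sketch forwards them as 0, 1, 2, with segment 2 held in the queue until segment 1 closes the gap.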
20100278182 | UPDATING NEXT-EXPECTED TSN AND RECEIVER WINDOW TO AVOID STALL CONDITIONS - Updating the next-expected transmission sequence number (NET) or the receiver window position to ensure that the NET always falls within the receiver window range to prevent unnecessary delays in delivering data blocks in order to avoid stall conditions and achieve high speed data transmission capabilities for a high-speed downlink packet access (HSDPA) system. | 11-04-2010 |
20100284408 | SYSTEM FOR RECEIVING TRANSPORT STREAMS - A system comprising first input means for receiving a transport stream from an external source, second input means for receiving an input from a memory, means for connecting the first and second input means to an interface which is arranged to provide an output stream to a decoder. The second input means is arranged to provide an output to the interface in such a form that the interface does not distinguish between the output from the first and second input means. | 11-11-2010 |
20100290470 | PACKET RECEIVING MANAGEMENT METHOD AND NETWORK CONTROL CIRCUIT WITH PACKET RECEIVING MANAGEMENT FUNCTIONALITY - A method of network packet receiving management includes: providing a buffer unit which includes a plurality of data blocks with a first packet number and a plurality of data blocks with a second packet number, wherein the data blocks with the first packet number are for storing a plurality of first network packets according to a first array data structure, respectively, the first array data structure has a plurality of first packet descriptors corresponding to the first packet number, and the data blocks with the second packet number do not correspond to any packet descriptor; and when a first data block corresponding to a first packet descriptor successively receives a first network packet, changing the first packet descriptor corresponding to the first data block to indicate a second data block which does not correspond to any packet descriptor. | 11-18-2010 |
20100303079 | METHOD AND APPARATUS FOR ENABLING ID BASED STREAMS OVER PCI EXPRESS - A method and apparatus for enabling ID based streams over Peripheral Component Interconnect Express (PCIe) is herein described. In this regard an apparatus is introduced including a memory ordering logic to order packets to be transmitted over a serial point-to-point interconnect, the memory ordering logic to bypass a stalled first packet with a second packet that arrived after the first packet if the second packet includes an attribute flag set to indicate that the second packet is order independent and if the second packet includes an ID that is different from an ID associated with the first packet. Other embodiments are also described and claimed. | 12-02-2010 |
20100309918 | METHOD AND SYSTEM FOR ORDERING POSTED PACKETS AND NON-POSTED PACKETS TRANSFER - A system for ordering packets. The system includes a first memory, e.g., FIFO, storing transition information for posted packets, e.g., 1 when a posted packet transitions from a non-posted packet and 0 otherwise. A second memory stores transition information for non-posted packets, e.g., 1 when a non-posted packet transitions from a posted packet and 0 otherwise. A counter increments responsive to detecting a transition in the first memory and decrements responsive to detecting a transition in the second memory. A controller orders a posted packet for transmission prior to a non-posted packet if a value of the counter is negative and when a transitional value associated with the non-posted packet is 1, and wherein the controller orders either a posted packet or a non-posted packet otherwise. The first and the second memory may be within a same memory component. | 12-09-2010 |
20100322248 | COMMUNICATIONS NETWORK - A communications network for reliable data transfer from a first node and a second node via two channels. A first unreliable channel transfers data according to an unreliable communications protocol such as RTP. A second reliable channel transfers the same data according to a reliable communications protocol, such as TCP. At the second node, data parts missing from the data received from the first node via the unreliable channel is detected and corresponding data parts received from the first node via the reliable channel used. The RTP channel may be operated over UDP over multicast or unicast. The TCP channel may be supplemented by a multicast group or a peer-to-peer network. | 12-23-2010 |
20110007745 | SYSTEM, METHOD AND APPARATUS FOR PAUSING MULTI-CHANNEL BROADCASTS - A system and method for providing a global pause function in a broadcast multimedia system during a pause mode including an input module having an incoming timestamp counter for providing a time-based marker value to mark when each incoming packet arrives from a tuner and an outgoing timestamp counter for providing a time-based marker value for each outgoing packet to a receiver(s), the outgoing counter being configured for controlling when to release each outgoing packet to the receiver(s). At least one global memory device is provided for storing each received packet. The input module is configured to stop the outgoing counter from incrementing in response to activation of a global pause signal for the duration of the pause mode. Data flow to all of the receiver(s) is simultaneously and automatically stopped when a pause mode is enabled. | 01-13-2011 |
20110013634 | Ipsec Encapsulation Mode - Described are embodiments directed to negotiating an encapsulation mode between an initiator and a responder. As part of the negotiation of the security association, an encapsulation mode is negotiated that allows packets to be sent between the initiator and responder without encapsulation. The ability to send packets without encapsulation allows intermediaries, such as a firewall, at the responder to easily inspect the packets and implement additional features such as security filtering. | 01-20-2011 |
20110013635 | METHOD AND DEVICE FOR SIGNAL PROCESSING ON MULTI-PROTOCOL SWITCHING NETWORK - A signal processing method and device of multi-protocol switching network are provided, which belong to the field of network communication. The method enables time division multiplexed (TDM) frames to be transmitted in a form of sub cell information through de-interleaving and slicing processes of the TDM frames. The method and device can switch TDM frames and packets uniformly with a flexible processing manner and low cost, so as to better support the network to evolve from TDM to All IP. | 01-20-2011 |
20110038376 | Data Unit Sender and Data Unit Relay Device - New methods and devices for implementing an ARQ mechanism over a multi-hop connection (sender-relay-receiver) are proposed. A communication protocol is described in accordance with which data units are arranged in a sequence and each sent data unit is identifiable by a sequence position identifier. The sender implements a sending peer, the relay a relay peer and the receiver a receiving peer. Feedback messages are exchanged, which using said sequence position identifiers, carry information on a receipt of sent data units. The communication protocol provides for at least a first type and a second type of receipt information, the first type (RACK) of receipt information being indicative of a correct receipt of a data unit at a relay peer of said communication protocol, and the second type (ACK) of receipt information being indicative of a correct receipt of a data unit at a final destination peer of said communication protocol. | 02-17-2011 |
20110069709 | INTELLIGENT ELECTRONIC DEVICE WITH SEGREGATED REAL-TIME ETHERNET - A system and method for optimizing the handling of data on a priority basis within an intelligent electronic device is disclosed. A FIFO receives messages and the messages are each associated with a subscription identifier. The messages are then routed to and stored in buffers, each associated with a subscription identifier. | 03-24-2011 |
20110080914 | ETHERNET TO SERIAL GATEWAY APPARATUS AND METHOD THEREOF - Provided is an Ethernet-to-serial gateway apparatus including: a buffer manager to verify a characteristic of a data packet received via an Ethernet interface, and to thereby select a buffer for storing the data packet, and to store the data packet in the selected buffer; a packet scheduler to extract a data packet from the buffer according to a predetermined policy; a packet distributor to verify a destination address of the extracted data packet, and to thereby obtain serial interface information associated with at least one target apparatus to which the extracted data packet is transmitted; and a serial communication unit to transmit the extracted data packet to the at least one target apparatus via a corresponding serial interface based on the serial interface information. | 04-07-2011 |
20110103388 | SYSTEM AND METHOD FOR ACHIEVING ACCELERATED THROUGHPUT - Systems and methods for transporting data between two endpoints over an encoded channel are disclosed. Data transmission units (data units) from the source network are received at an encoding component logically located between the endpoints. These first data units are subdivided into second data units and are transmitted to the destination network over the transport network. Also transmitted are encoded or extra second data units that allow the original first data units to be recreated even if some of the second data units are lost. These encoded second data units may be merely copies of the second data units transmitted, parity second data units, or second data units which have been encoded using erasure correcting coding. At the receiving endpoint, the second data units are received and are used to recreate the original first data units. | 05-05-2011 |
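The loss-recovery idea behind this abstract, in its simplest parity form, can be sketched as below. The patent also covers plain copies and erasure-correcting codes; this sketch only shows single-loss XOR parity, and all names are illustrative.

```python
def xor_parity(units):
    """Compute an XOR parity unit over equal-length data units (bytes)."""
    out = bytearray(len(units[0]))
    for unit in units:
        for i, b in enumerate(unit):
            out[i] ^= b
    return bytes(out)


def recover(received, parity):
    """Recover a single missing second data unit.

    `received` maps unit index -> unit, with exactly one index of the
    original set absent; XOR-ing the survivors with the parity unit
    reproduces the missing one.
    """
    return xor_parity(list(received.values()) + [parity])
```

With units `b"ab"`, `b"cd"`, `b"ef"`, losing the middle unit and XOR-ing the other two with the parity unit yields `b"cd"` again, which is how the receiving endpoint can recreate the original first data units despite loss.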
20110110376 | ADAPTIVE SCHEDULER FOR COMMUNICATION SYSTEMS APPARATUS, SYSTEM AND METHOD - An apparatus, system, and method may include adaptively scheduling packet processing modules by ordering the packet processing modules based on at least one of traffic composition and computational complexity of the packet processing modules. The apparatus, system and method may analyze at least one of traffic composition information derived from at least one packet data stream and computational complexity information pertaining to packet processing modules, determine an ordering of the packet processing modules based on the analyzing, wherein packets are passed through the packet processing modules until the packet meets criteria associated with a packet processing module or the packet has been passed through all of the packet processing modules, and dynamically rearrange the packet processing modules into the determined ordering. | 05-12-2011 |
20110149974 | DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - A data processing apparatus includes a receiving unit for receiving a packet, a determining unit for determining whether to process the packet data by a self-module, based on first information contained in the packet and indicating a processing order, a processing unit for processing the data if the data should be processed by the self-module, a generating unit for generating a packet containing the first information, and one of the processed data, and second information indicating that the data to be processed is stalled, and a transmitting unit for transmitting, according to the first information, the packet to a module expected to process the packet next. The transmitting unit performs the transmission at a transmission interval longer than a predetermined time, if the first and second information indicate that the packet contains data which should be processed by a module next to the self-module in processing order and is stalled. | 06-23-2011 |
20110170548 | APPARATUS AND METHOD FOR REORDERING DATA PACKETS IN COMMUNICATION SYSTEM - An apparatus and method for reordering data packets in a communication system are provided. The method includes detecting that a first time value of a timer used for reordering data packets needs to be set when a missing data packet occurs in receiving data packets, and, when the timer restarts, setting the first time value to a time value determined by compensating for a second time value which is used when the timer starts. The timer starts when a first Transmission Sequence Number (TSN) of a received data packet is greater than a TSN of a data packet which is expected to be received immediately after data packets received already and when the timer is not in an active state. The timer expires at a point of time when the second time value lapses. The timer stops when a data packet with the same TSN as the first TSN is sent as a reassembly entity before the expiring of the timer. The timer restarts when a received data packet which cannot be sent as the reassembly entity is buffered in a buffer after the stopping or the expiring of the timer. | 07-14-2011 |
20110182294 | IN-ORDER TRAFFIC AGGREGATION WITH REDUCED BUFFER USAGE - One embodiment provides a system that performs in-order traffic aggregation from a number of low-speed ports to a high-speed port. During operation, the system receives at a low-speed port a packet, stores it in a store-and-forward FIFO associated with the low-speed port, extracts a sequence number associated with the stored packet, and stores the extracted sequence number in a sequence-number FIFO associated with the low-speed port. The system further generates an expected sequence number, which maintains a linear order with respect to sequence numbers associated with previously forwarded packets, and determines whether a front end of the sequence-number FIFO matches the expected sequence number. If so, the system removes the front end of the sequence-number FIFO buffer, retrieves a packet associated with it, forwards the retrieved packet on the high-speed port, and updates the expected sequence number by adding 1 to the packet number of the retrieved packet. | 07-28-2011 |
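The arbitration step this abstract describes (forward a packet only when the head of some port's sequence-number FIFO matches the expected sequence number) can be sketched as follows; the function and data layout are illustrative, not the patented circuit.

```python
from collections import deque


def aggregate_in_order(port_streams, first_seq=0):
    """Aggregate low-speed ports onto one high-speed port, in order.

    port_streams: one deque per low-speed port of (seq, packet) pairs
    in arrival order, standing in for the per-port store-and-forward
    FIFO plus its sequence-number FIFO.
    """
    expected = first_seq
    out = []  # packets forwarded on the high-speed port
    while any(port_streams):
        for q in port_streams:
            # Forward only when a port's sequence-number FIFO head
            # matches the expected sequence number.
            if q and q[0][0] == expected:
                _, pkt = q.popleft()
                out.append(pkt)
                expected += 1
                break
        else:
            break  # the expected packet has not arrived at any port yet
    return out
```

Because a packet is released only when its sequence number is next in line, no reorder buffer is needed beyond the per-port FIFOs, which is the reduced-buffer-usage point of the abstract.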
20110188505 | System and Method for Handling Out-of-Order Frames - A system for reordering frames may include at least one processor that enable receiving of an out-of-order frame via a network subsystem. The at least one processor may enable placing data of the out-of-order frame in a host memory, and managing information relating to one or more holes resulting from the out-of-order frame in a receive window. The at least one processor may enable setting a programmable limit with respect to a number of holes allowed in the receive window. The out-of-order frame is received via a TCP offload engine (TOE) of the network subsystem or a TCP-enabled Ethernet controller (TEEC) of the network subsystem. The network subsystem may not store the out-of-order frame on an onboard memory, and may not store one or more missing frames relating to the out-of-order frame. The network subsystem may include a network interface card (NIC). | 08-04-2011 |
20110200049 | INDUSTRIAL NETWORK SYSTEM - A central communication unit | 08-18-2011 |
20110222545 | SYSTEM AND METHOD FOR RECOVERING THE DECODING ORDER OF LAYERED MEDIA IN PACKET-BASED COMMUNICATION - The present invention relates to the packet-based transmission of media that are coded using a layered representation. In particular, it relates to mechanisms for recovering the decoding order of the media packets when such media is transmitted with arbitrary ordering over one or more packet streams. | 09-15-2011 |
20110228783 | IMPLEMENTING ORDERED AND RELIABLE TRANSFER OF PACKETS WHILE SPRAYING PACKETS OVER MULTIPLE LINKS - A method and circuit for implementing ordered and reliable transfer of packets while spraying packets over multiple links, and a design structure on which the subject circuit resides are provided. Each source interconnect chip maintains a spray mask including multiple available links for each destination chip for spraying packets across multiple links of a local rack interconnect system. Each packet is assigned an End-to-End (ETE) sequence number in the source interconnect chip that represents the packet position in an ordered packet stream from the source device. The destination interconnect chip uses the ETE sequence numbers to reorder the received sprayed packets into the correct order before sending the packets to the destination device. | 09-22-2011 |
20110228784 | REORDER ENGINE WITH ERROR RECOVERY - A reorder engine classifies information relating to incoming data items as belonging to either a first, second, or third region. The information relating to the data items may arrive at the reorder engine out of order. The data items each include a sequence number through which the reorder engine may reconstruct the correct order of the data items. Based on the classification, the reorder engine may either process the data items normally or drop certain ones of the data items. The majority of incoming data items will fall in the first region and are processed normally. Data items arriving in the second region indicate that a previous data item is late or delayed. If this previous data item is delayed but does eventually arrive, it will arrive in the third region and is simply ignored. | 09-22-2011 |
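The three-region classification can be sketched as below. The region boundaries here are one plausible reading (a window of width `window` starting at the expected sequence number), chosen for illustration; the patent does not pin them to these exact formulas.

```python
def classify(seq, expected, window):
    """Classify a data item's sequence number relative to a reorder window.

    first  - in-window: process normally (the common case)
    second - far ahead: signals that an earlier data item is late
    third  - behind the window: a late duplicate, simply ignored
    """
    if expected <= seq < expected + window:
        return "first"
    if seq >= expected + window:
        return "second"
    return "third"
```

A delayed data item that finally arrives after the engine has moved past it lands in the third region and is dropped, which matches the error-recovery behavior the abstract describes.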
20110243139 | BAND CONTROL APPARATUS, BAND CONTROL METHOD, AND STORAGE MEDIUM - A band control apparatus including: a buffer memory configured to hold and output data units on a first-in first-out basis; a counter memory configured to hold a counter value; and a processor configured to add a value to the counter value on a basis of a rule, reduce, when a first one of the data units is output from the buffer memory, the counter value by a size of the first data unit, cause, when a first condition of a total size of the data units being smaller than a buffer threshold is satisfied, the first data unit to be output when a second condition that the counter value is larger than the size of the first data unit is satisfied and the counter value is larger than a counter threshold, and cause, when the first condition is not satisfied, the data units to be output in sequence with the first data unit first, until a third condition is satisfied. | 10-06-2011 |
20110255542 | ADVANCED PROCESSOR WITH MECHANISM FOR FAST PACKET QUEUING OPERATIONS - An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner. | 10-20-2011 |
20110255543 | METHOD AND SYSTEM FOR PROCESSING DATA - A data processing method and system and relevant devices are provided to improve the processing efficiency of cores. The method includes: storing received packets in a same stream sequentially; receiving a Get_packet command sent by each core; selecting, according to a preset scheduling rule, packets for being processed by each core among the stored packets; receiving a tag switching command sent by each core, where the tag switching command indicates that the core has finished a current processing stage; and performing tag switching for the packets in First In First Out (FIFO) order, and allocating the packets to a subsequent core according to the Get_packet command sent by the subsequent core after completion of the tag switching, so that the packet processing continues until all processing stages are finished. A data processing system and relevant devices are provided. With the present invention, the processing efficiency of cores may be improved. | 10-20-2011 |
20110261821 | IMPLEMENTING GHOST PACKET REMOVAL WITHIN A RELIABLE MESHED NETWORK - A method and circuit for implementing multiple active paths between source and destination devices in an interconnect system while removing ghost packets, and a design structure on which the subject circuit resides are provided. Each packet includes a generation ID and is assigned an End-to-End (ETE) sequence number in the source interconnect chip that represents the packet position in an ordered packet stream from the source device. The packets are transmitted from a source interconnect chip source to a destination interconnect chip on the multiple active paths. The generation ID of a received packet is compared with a current generation ID at a destination interconnect chip to validate packet acceptance. The destination interconnect chip uses the ETE sequence numbers to reorder the accepted received packets into the correct order before sending the packets to the destination device. | 10-27-2011 |
20110261822 | STEERING FRAGMENTED IP PACKETS USING 5-TUPLE BASED RULES - A method, system and/or computer program steer internet protocol (IP) packet fragments that are components of a series of IP packet fragments. A switch receives an IP packet fragment. In response to determining that the fragment is not a lead packet fragment in a series of IP packet fragments that make up an original IP packet, the IP packet fragment is pushed onto a data stack. The switch then receives an IP packet fragment which is determined to be the lead packet fragment in a series of IP packet fragments. The IP 5-tuple from the lead packet fragment is parsed to steer all fragments in the series to a destination port. | 10-27-2011 |
20110286461 | PACKET SORTING DEVICE, RECEIVING DEVICE AND PACKET SORTING METHOD - A packet sorting device includes: a buffer for storing packets belonging to a plurality of communication flows; and a control section which determines, when receiving one of a series of packets, whether the one of the received packets is a disorder packet by a determination process, and sorts the received packets in a correct order by storing the disorder packet and communication flow information thereof in the buffer so that the disorder packet and communication flow identification information are correlated. The disorder packet is one of the received packets which is received in an order different from a transmission order of the packets. The communication flow information identifies the plurality of communication flows. | 11-24-2011 |
20110292945 | Packet Receiving Device, Packet Communication System, and Packet Reordering Method - Under an environment where a packet retransmission is performed, a packet receiving device includes a reordering section, configured to perform a reordering of a receiving packet in a lower layer than a network protocol stack, and a buffer section. Out-of-Order packet among receiving packets is associated with a flow and stored in the buffer section. The reordering section determines whether the receiving packet is an In-Order packet or an Out-of-Order packet. In a case where the receiving packet is an In-Order packet of a flow and an Out-of-Order packet of the flow is stored in the buffer section, the reordering section transfers the receiving packet to the network protocol stack and then transfers all of Out-of-Order packet of the flow stored in the buffer section to the network protocol stack. | 12-01-2011 |
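The per-flow buffering rule this abstract describes (buffer Out-of-Order packets per flow; an In-Order packet is passed up and then the flow's buffered packets follow) can be sketched as below. Names are illustrative, a list stands in for the network protocol stack, and draining the buffer in sequence order is an assumption of this sketch.

```python
class Reorderer:
    """Per-flow packet reordering below the network protocol stack."""

    def __init__(self):
        self.expected = {}   # flow id -> next expected sequence number
        self.buffers = {}    # flow id -> buffered (seq, pkt) pairs
        self.delivered = []  # stands in for the network protocol stack

    def receive(self, flow, seq, pkt):
        nxt = self.expected.get(flow, 0)
        if seq == nxt:
            # In-Order packet: transfer it to the stack, then transfer
            # the flow's buffered Out-of-Order packets after it.
            self.delivered.append(pkt)
            self.expected[flow] = seq + 1
            for s, p in sorted(self.buffers.pop(flow, [])):
                self.delivered.append(p)
                self.expected[flow] = s + 1
        else:
            # Out-of-Order packet: associate it with its flow and buffer it.
            self.buffers.setdefault(flow, []).append((seq, pkt))
```

Keeping this in a layer below the protocol stack means the stack itself only ever sees each flow's packets in order, even when retransmissions arrive late.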
20120008631 | Processor with packet ordering device - An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner. | 01-12-2012 |
20120020360 | METHOD OF MANAGING A PACKET ADMINISTRATION MAP - A method of managing a packet administration map for data packets to be received via a network. A receiver in the network monitors sequence numbers and stores missing sequence numbers within an internal data structure, called a packet administration map. A reversed keying is used which means that the upper limit of the range of contiguous missing data packets is used as the key entry in the administration map. | 01-26-2012 |
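The reversed keying the abstract describes (the upper limit of each contiguous range of missing sequence numbers serves as the map key) can be sketched as below; only gap recording is shown, and all names are illustrative.

```python
class AdminMap:
    """Packet administration map with reversed keying.

    Each contiguous range of missing sequence numbers is stored under
    its UPPER limit; the value holds the range's lower limit.
    """

    def __init__(self):
        self.missing = {}  # upper limit -> lower limit of missing range
        self.highest = -1  # highest sequence number received so far

    def receive(self, seq):
        if seq > self.highest + 1:
            # A gap opened: record [highest+1 .. seq-1] under its upper limit.
            self.missing[seq - 1] = self.highest + 1
        if seq > self.highest:
            self.highest = seq
```

Receiving sequence numbers 0, 3, 7 leaves two entries, `{2: 1, 6: 4}`, i.e. ranges 1..2 and 4..6 are missing.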
20120027019 | MAINTAINING PACKET ORDER USING HASH-BASED LINKED-LIST QUEUES - Ordering logic ensures that data items being processed by a number of parallel processing units are unloaded from the processing units in the original per-flow order that the data items were loaded into the parallel processing units. The ordering logic includes a pointer memory, a tail vector, and a head vector. Through these three elements, the ordering logic keeps track of a number of “virtual queues” corresponding to the data flows. A round robin arbiter unloads data items from the processing units only when a data item is at the head of its virtual queue. | 02-02-2012 |
20120063462 | METHOD, APPARATUS AND SYSTEM FOR FORWARDING VIDEO DATA - The present disclosure relates to the field of video transmission, and discloses a method, an apparatus, and a system for forwarding video data. A network device receives and buffers a media stream, resolves the buffered media stream, obtains Transport Stream (TS) packets in the media stream, and evaluates and identifies a visual sensitivity priority of each TS packet; discards a TS packet of low visual sensitivity, and re-encapsulates a TS packet of high visual sensitivity into a new media stream; and sends the re-encapsulated new media stream to a user equipment. The network device is enabled to discard the TS packet of low visual sensitivity in the media stream, which reduces duration of a fast channel change. | 03-15-2012 |
20120063463 | Packet aligning apparatus and packet aligning method - A packet aligning apparatus includes a packet analyzing section configured to receive a reception packet, and extract a sequence number contained in the reception packet; a packet storage section; a write control section configured to determine, based on the extracted sequence number, whether the reception packet is stored in the packet storage section or transferred to a subsequent block; an expectation managing section configured to generate expectation data which indicates, as the expectation, the sequence number of the packet which should be transferred next to the subsequent block; and a read control section configured to read a group of storage packets stored in the packet storage section, to transfer to the subsequent block. The write control section compares the extracted sequence number with the expectation, and stores the reception packet in the packet storage section when the extracted sequence number is larger than the expectation. | 03-15-2012 |
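The write-control decision described above can be sketched as follows (a hypothetical simplification, not the patent's implementation): a packet whose sequence number equals the expectation is transferred immediately and any stored successors are drained in order, while a larger number means the packet waits in storage.

```python
def write_control(packet_sn, payload, expectation, storage):
    """Decide whether to transfer a packet or store it; returns
    (packets delivered to the subsequent block, new expectation)."""
    delivered = []
    if packet_sn == expectation:
        delivered.append(payload)
        expectation += 1
        # Drain stored packets that are now in sequence (read-control side).
        while expectation in storage:
            delivered.append(storage.pop(expectation))
            expectation += 1
    elif packet_sn > expectation:
        storage[packet_sn] = payload   # out of order: hold until its turn
    return delivered, expectation
```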
20120069849 | SYSTEM AND METHOD FOR DESCRAMBLING DATA - A method and system for negating a series of packed data bytes simultaneously based on conditional flags are used to descramble the data, as opposed to negating each byte with respect to each conditional flag bit. Sets of scrambled binary values are received. A descrambling code that corresponds to a flag bit sequence used to scramble the sets of binary values is generated. Then the sets of scrambled binary values are descrambled using the descrambling code. The method is particularly suitable for use in a wireless baseband receiver. | 03-22-2012 |
20120076148 | MODEM AND PACKET PROCESSING METHOD - A modem communicates with a data source over multiple signal channels. The modem receives multiple data packets originally sent by the data source and at least one correcting packet carrying reassembly information over the signal channels, and determines if one or more of the data packets originally sent by the data source are lost. The modem determines if the received data packets are enough to recover the one or more lost data packets, and recovers the one or more lost data packets using the received data packets and the reassembly information if the received data packets are enough to recover the one or more lost data packets. The modem removes the correcting packet carrying the reassembly information and sends out the received data packets and the one or more recovered data packets. | 03-29-2012 |
20120082164 | Cell-Based Link-Level Retry Scheme - A method for communication includes receiving a packet at a first node for transmission over a link to a second node. The data in the packet is divided into a sequence of cells of a predetermined data size. The cells have respective sequence numbers. The cells are transmitted in sequence over the link, while storing the transmitted cells in a buffer at the first node. The first node receives acknowledgments indicating the respective sequence numbers of the transmitted cells that were received at the second node. Upon receiving an indication at the first node that a transmitted cell having a given sequence number was not properly received at the second node, the stored cells are retransmitted from the buffer starting from the cell with the given sequence number. | 04-05-2012 |
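The retry scheme in the entry above reads like a go-back-N protocol over fixed-size cells. A minimal sketch under that reading (class and field names are assumptions): transmitted cells stay buffered until acknowledged, and a negative indication for a given sequence number retransmits from that cell onward.

```python
from collections import OrderedDict

class CellLink:
    """Go-back-N style cell-based link-level retry sketch."""

    def __init__(self):
        self.buffer = OrderedDict()   # sn -> cell payload, in transmit order
        self.next_sn = 0
        self.sent_log = []            # every transmission, for illustration

    def transmit(self, payload):
        sn = self.next_sn
        self.next_sn += 1
        self.buffer[sn] = payload     # keep a copy until it is acknowledged
        self.sent_log.append(sn)

    def ack(self, sn):
        self.buffer.pop(sn, None)     # acknowledged cells can be discarded

    def nack(self, sn):
        # Retransmit every stored cell starting from the indicated number.
        for stored_sn in self.buffer:
            if stored_sn >= sn:
                self.sent_log.append(stored_sn)
```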
20120087374 | CONTEXT-SWITCHED MULTI-STREAM PIPELINED REORDER ENGINE - A pipelined reorder engine reorders data items received over a network on a per-source basis. Context memories correspond to each of the possible sources. The pipeline includes a plurality of pipeline stages that together simultaneously operate on the data items. The context memories are operatively coupled to the pipeline stages and store information relating to a state of reordering for each of the sources. The pipeline stages read from and update the context memories based on the source of the data item being processed. | 04-12-2012 |
20120093162 | REORDERING PACKETS - There are disclosed processes and apparatus for reordering packets. The system includes a plurality of source processors that transmit the packets to a destination processor via multiple communication fabrics. The source processors and the destination processor are synchronized together. Time stamp logic at each source processor operates to include a time stamp parameter with each of the packets transmitted from the source processors. The system also includes a plurality of memory queues located at the destination processor. An enqueue processor operates to store a memory pointer and an associated time stamp parameter for each of the packets received at the destination processor in a selected memory queue. A dequeue processor determines a selected memory pointer associated with a selected time stamp parameter and operates to process the selected memory pointer to access a selected packet for output in a reordered packet stream. | 04-19-2012 |
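The reordering described above can be sketched with a priority heap standing in for the memory queues and dequeue processor, assuming synchronized source clocks as the abstract states (names are illustrative): packets are released strictly in time-stamp order regardless of which fabric delivered them first.

```python
import heapq

class TimestampReorder:
    """Release packets in the order of their source-side time stamps."""

    def __init__(self):
        self.heap = []   # (timestamp, packet) pairs

    def enqueue(self, timestamp, packet):
        # The enqueue processor stores the packet with its time stamp.
        heapq.heappush(self.heap, (timestamp, packet))

    def dequeue_all(self):
        # The dequeue processor always selects the smallest time stamp.
        out = []
        while self.heap:
            out.append(heapq.heappop(self.heap)[1])
        return out
```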
20120134362 | RECEIVER, RECEPTION METHOD, PROGRAM AND COMMUNICATION SYSTEM - Disclosed herein is a receiver including: a reception section adapted to sequentially receive a plurality of packets from a transmitter; a first processing section adapted to output a predetermined number of packets received by the reception section all together if the predetermined number of packets are received without any loss by the reception section; a second processing section adapted to accept the predetermined number of packets output all together from the first processing section; a detection section adapted to detect a predetermined request; and a control section adapted to control the first processing section to output a packet, received by the reception section, to the second processing section each time the reception section receives a packet if the detection section detects the predetermined request. | 05-31-2012 |
20120201248 | TRANSMISSION CONTROL METHOD FOR PACKET COMMUNICATION AND PACKET COMMUNICATION SYSTEM - The transmission of packets, in which data signals or control signals are stored in the payload of a packet comprising a payload and a header having a sequence number area for storing sequence numbers, is controlled. Data signal sequence numbers, which are added to a data packet storing data signals in a payload, are generated by a first communication device, are stored into each of the sequence number areas of the data packet, and are sent to a second communication device from the first communication device in order to control the transmission of the data packet on the basis of the data signal sequence numbers. Similarly, control signal sequence numbers, which are added to the control packet storing control signals in a payload, are generated by the first communication device independently from the data signal sequence numbers, are stored into each of the sequence number areas of the control packet, and are sent to the second communication device from the first communication device in order to control the transmission of the control packet on the basis of the control signal sequence numbers. | 08-09-2012 |
20120243543 | Network System and Communication Device - The system prevents a user from misrepresenting a time stamp, and sends out packets, in the sequence in which the packets are accepted, to all users who are geographically separated from each other. A unique time stamp impartation function is realized by the respective communication devices on a network inside the administrative responsibility range of a communication common carrier, which is hardly accessible by a user. If a packet with a time stamp imparted by a subscriber terminal is received, the respective communication devices positioned on the network inside the administrative responsibility range nullify the relevant time stamp, in order to prevent misrepresentation of the time stamp by a user outside the administrative responsibility range, and restore the packet from the subscriber together with the time stamp when transferring the packet to the outside of the administrative responsibility range. | 09-27-2012 |
20120250692 | METHOD AND APPARATUS FOR TEMPORAL-BASED FLOW DISTRIBUTION ACROSS MULTIPLE PACKET PROCESSORS - A method, apparatus and computer program product for temporal-based flow distribution across multiple packet processors is presented. A packet is received and a hash identifier (ID) is computed for the packet. The hash ID is used to index into a State Table and to retrieve a corresponding record. When a time credit field of the record is zero then the time credit field is set to a new value; a Packet Processing Engine (PE) whose First-In-First-Out buffer (FIFO) has the lowest fill level is selected; and a PE number field in the state table record is updated with the selected PE number. When the time credit field of the record is non-zero then the packet is sent to a PE based on the value stored in the record; and the time credit field in the record is decremented if the time credit field is greater than zero. | 10-04-2012 |
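The time-credit mechanism above can be sketched as follows (the refill value, class name, and record layout are assumptions): while a flow's credit lasts, its packets stick to the recorded engine; when the credit is exhausted, the flow may be re-pinned to the least-loaded engine.

```python
CREDIT = 3  # assumed refill value for the time credit field

class FlowDistributor:
    """State-table driven distribution of hashed flows across engines."""

    def __init__(self, num_pes):
        self.fifo_fill = [0] * num_pes   # current fill level of each PE FIFO
        self.state = {}                  # hash id -> [pe number, time credit]

    def dispatch(self, hash_id):
        rec = self.state.get(hash_id)
        if rec is None or rec[1] == 0:
            # Credit exhausted (or new flow): pick the least-loaded engine.
            pe = min(range(len(self.fifo_fill)), key=self.fifo_fill.__getitem__)
            rec = self.state[hash_id] = [pe, CREDIT]
        else:
            rec[1] -= 1                  # stick to the recorded engine
        self.fifo_fill[rec[0]] += 1
        return rec[0]
```

Keeping a flow pinned for the credit interval preserves per-flow packet order, while the periodic re-pinning rebalances load over time.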
20120263181 | SYSTEM AND METHOD FOR SPLIT RING FIRST IN FIRST OUT BUFFER MEMORY WITH PRIORITY - A system and method for allocating memory locations in a buffer memory system is described. The system includes a plurality of memory locations for storage and a controller. The controller controls the storage and retrieval of data from the plurality of memory locations and allocates a first portion of the memory locations to a first buffer, wherein the remaining portion of the memory locations defines a second portion. The controller allocates a portion of the second portion to a second buffer and the remaining portion of the second portion defines a third portion. The controller reserves a portion of the third portion for assignment to the second buffer, wherein the second buffer is assigned a higher priority over the first buffer. The controller selectively allocates one or more memory locations of the third portion to the first buffer or to the second buffer. | 10-18-2012 |
20120263182 | Communication Apparatus, Communication System, Absent Packet Detecting Method and Absent Packet Detecting Program - Any packet loss is detected very quickly by means of only a series of sequence numbers in a multi-path environment where a transmitter and a receiver are connected to each other by way of a plurality of networks, when no inversion of sequence arises in any of the networks. A communication apparatus includes a plurality of sequence buffers, arranged for each network, to accumulate packets until a sequence acknowledgement, and an absence detecting section adapted to determine that a packet is absent when one or more packets are accumulated in all the sequence buffers. With this arrangement, the absence detecting section of the receiver monitors the packets staying in the sequence-guaranteeing buffer arranged for each of the networks, exploiting the characteristic that packets are stored in the sequence buffers of all the networks when a packet loss takes place. | 10-18-2012 |
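The absence test above reduces to a single condition, sketched here under the abstract's assumption that no individual network reorders packets (the function name is illustrative): if every per-network sequence buffer holds at least one waiting packet, the packet they are all waiting for must be lost.

```python
def absence_detected(sequence_buffers):
    """sequence_buffers: one list of waiting packets per network.

    With in-order delivery inside each network, packets can only pile up
    in ALL buffers simultaneously if the next in-sequence packet was lost.
    """
    return all(len(buf) > 0 for buf in sequence_buffers)
```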
20120281703 | APPARATUS, AN ASSEMBLY AND A METHOD OF OPERATING A PLURALITY OF ANALYZING MEANS READING AND ORDERING DATA PACKETS - A system and a method of operating the system, the system having a plurality of data receiving elements each receiving data packets from a data connection and from another receiving element and forwarding the two data packets to another receiving element in a predetermined order. If, at a point in time, only one data packet has been received, a period of time is allowed to elapse, and if a second data packet is received, the two packets are output in the predetermined order. If not, the single received data packet is output. | 11-08-2012 |
20120327941 | METHOD AND APPARATUS FOR TRANSPORTING PACKETS WITH SPECIFIC TRAFFIC FLOWS HAVING STRICT PACKET ORDERING REQUIREMENTS OVER A NETWORK USING MULTIPATH TECHNIQUES - The method that is disclosed enables specific information network traffic flows to retain packet ordering in a packet network in which multipath techniques are used. In a common network usage a plurality of traffic flows may be aggregated into a larger traffic flow. In such a situation, a finest granularity of individual traffic flow is referred to as a microflow and an aggregation of traffic flows is referred to as a traffic aggregate. The traffic aggregate may take a path from an ordered set of nodes including a first network element referred to as an ingress node through zero or more intermediate network elements referred to as midpoint nodes, to a final network element known as the egress node. The ordered set of nodes traversed by such a traffic aggregate is referred to as the path taken by that traffic flow. At any node prior to the egress, the traffic aggregate may be split among multiple links or lower layer paths in reaching the next node in the path. In such a circumstance, the traffic aggregate is split among the available links or lower layer paths. Techniques for splitting traffic are collectively referred to as multipath techniques, or more briefly as multipath. Individual links or lower layer paths within a multipath are referred to as component links. Individual traffic flows may be identified by various existing multipath techniques. A set of existing multipath techniques are able to keep all packets within a given microflow on the same component link. The method disclosed allows specific traffic aggregates within a larger traffic aggregate to be carried on a single component link while allowing other traffic aggregates within the larger traffic aggregate to be spread among multiple component links. | 12-27-2012 |
20130003741 | Dynamic, Condition-Based Packet Redirection - In one embodiment, at a packet-forwarding engine for receiving packet flows and conditionally routing packets in the packet flows to one or more applications, a method includes receiving from a particular one of the applications a request that requests the packet-forwarding engine not to route the particular one of the packet flows to the particular one of the applications and identifies one or more conditions for routing particular ones of the packets in the particular one of the packet flows to the particular one of the applications. The method further includes, receiving a particular packet in the particular one of the packet flows, determining whether one or more of the conditions for routing the particular packet to the particular one of the applications are met, and routing or not routing the particular packet to the particular one of the applications based on the determination. | 01-03-2013 |
20130016725 | METHOD AND SYSTEM FOR INTRA-NODE HEADER COMPRESSION (Julien, Martin, Laval, CA; Brunner, Robert, Montreal, CA) - One aspect of the invention is directed to a network element (e.g., node/router/switch, etc.) which performs internal packet header compression. In particular, an aspect provides a network element comprising a plurality of ingress elements (e.g. line cards), a plurality of egress elements, and a system internal network (e.g. a backplane) for switching between the correct ingress element and egress element, and applying header compression for the purpose of reducing the bandwidth required between the elements. As such, internal “metadata” can be added to the compressed header without increasing, and preferably in some embodiments actually decreasing, the size of the packets. Typically the headers are uncompressed before exiting the egress element. | 01-17-2013 |
20130016726 | MEMORY CONTROL APPARATUS, INFORMATION PROCESSING APPARATUS, AND MEMORY CONTROL METHOD (NUMAKURA, Satoru; IKEDA, Junichi; SUZUKI, Mitsuru; TAKEO, Koji; SATOH, Tetsuya; TAKAHASHI, Hiroyuki, all of Miyagi, JP) - A memory control apparatus that controls writing and reading of data to/from a memory. The memory control apparatus includes: a sequence control unit that receives a packet sequence including a write packet including a write request of data and a read packet including a read request of the data, and changes an arrangement of the write packets and the read packets included in the packet sequence so that a first predetermined number of write packets are arranged successively and a second predetermined number of read packets are arranged successively; and a command output unit that receives the packet sequence from the sequence control unit, and outputs a write command according to the write packet and a read command according to the read packet to the memory, in accordance with the order of arrangement of the write packets and the read packets. | 01-17-2013 |
20130028263 | TWO TIER MULTIPLE SLIDING WINDOW MECHANISM FOR MULTIDESTINATION MEDIA APPLICATIONS - Some media applications use media that contains multiple types of media components in it and media sources with access to this media must send each type of media component to one or more media rendering destination devices. Furthermore there may be multiple destinations that can receive a particular type of media component and the media must be received at each destination without losses. This invention describes a two tier packet buffer structure at the media source with primary and virtual packet buffers that ensures minimal memory use at the media source and minimal network use. Furthermore the use of a sliding window with each virtual packet buffer associated with each destination, independently keeps control and track of destination state, ensuring the correct receipt of media data at each destination. | 01-31-2013 |
20130034102 | DATACASTING SYSTEM WITH INTERMITTENT LISTENER CAPABILITY - A server-client system or architecture that allows datacast applications to reliably transport data objects from a network server over a unidirectional packet network (“datacast network”) to one or more clients, each of which may be listening to the packet stream at different times. The invention allows the clients to listen intermittently to the datacast, yet still receive all of the data objects published by the server in a timely manner, and in a way that is more optimal in terms of client resource use. This ensures that listening clients can receive a complete set of the data objects broadcast while being able to conserve client processing and power resources by not requiring continuous listening by the client to the datacast. | 02-07-2013 |
20130044755 | Scalable Packet Scheduling Policy for Vast Number of Sessions - An apparatus comprising a plurality of queues configured to cache a plurality of packets that correspond to a plurality of sessions, a scheduler configured to schedule the packets from the different queues for forwarding based on a finish time for each packet at the egress of each corresponding queue, and an egress link coupled to the scheduler and configured to forward the scheduled packets from all the queues at a total bandwidth that is shared among the queues, wherein the finish time is calculated dynamically based on the amount of bandwidth allocated for the corresponding queue, and wherein the queues are assigned corresponding weights for sharing the total bandwidth. | 02-21-2013 |
20130058347 | DOWNLOAD METHOD AND SYSTEM BASED ON MANAGEMENT DATA INPUT/OUTPUT INTERFACE - The disclosure provides a download method and system based on a Management Data Input/Output (MDIO) interface, wherein the download method based on the MDIO interface comprises: a master device informing a slave device of using the MDIO interface to start downloading data packets in batches; the master device transmitting data packets in batches to the slave device by using an MDIO frame, wherein the MDIO frame comprises: a data packet address field and/or a data packet serial number field, wherein the data packet address field is used to indicate a relative address of one data packet in the slave device, and the data packet serial number field is used to indicate a location of said one data packet in multiple data packets; the slave device judging that a received data packet is a last data packet of a current batch transmission from the master device, and finishing a current batch download. The invention enables the master device and the slave device to perform the batch data download effectively, and solves the problem in the related art that large batch data transmission cannot be performed for download based on the MDIO interface. | 03-07-2013 |
20130064248 | HANDLING OUT-OF-SEQUENCE PACKETS IN A CIRCUIT EMULATION SERVICE - Various exemplary embodiments relate to a method and related network node having a playout buffer including one or more of the following: receiving, at the network device, a first packet belonging to a first flow, the first packet including a first sequence number (SN); receiving, at the network device, a second packet belonging to the first flow, the second packet including a second SN; determining that the second SN is not in sequence with the first SN; waiting to receive, at the network device, a third packet belonging to the first flow, the third packet including a third SN; and determining that a jump in SNs has occurred for the first flow between the first packet and the second packet based on determining that the third SN is in sequence with the second SN. | 03-14-2013 |
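The jump-detection rule above can be condensed into one predicate (a hypothetical simplification covering exactly three consecutive arrivals): a gap between the first two packets is only declared a deliberate jump in sequence numbers, rather than reordering or loss, once a third packet arrives in sequence with the second.

```python
def detect_jump(sn1, sn2, sn3):
    """Return True when sn2 starts a new sequence-number run (a jump),
    confirmed by sn3 following sn2 in sequence."""
    return sn2 != sn1 + 1 and sn3 == sn2 + 1
```

Waiting for the confirming third packet keeps the playout buffer from resynchronizing on a single stray out-of-order packet.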
20130077632 | BUFFER CONTROLLER CORRECTING PACKET ORDER FOR CODEC CONVERSION - A buffer controller has a buffer for holding plural sets of data contained in a packet entered from a telecommunications network, a codec converter and a controller. When receiving a packet, the buffer controller has the controller put data, in the packet, in a storage position in the buffer corresponding to the sequence number of the packet, and makes a decision as to whether or not the codec conversion is to be performed. If packets are out of sequence, lost or dropped during communication, the buffer controller can correct the packet order and compensate for the packet loss with the minimum delay. | 03-28-2013 |
20130083799 | TCP PROXY INSERTION AND UNINSTALL METHOD, AND SERVICE GATEWAY DEVICE - A TCP proxy insertion and uninstall method is provided, including: during establishment of a TCP connection, forwarding a TCP connection establishing packet between a TCP client and a TCP server through an L3, and recording option information and sequence number information of the TCP connection establishing packet; performing a determination on a packet according to a proxy policy; forwarding the received packet if it is determined that no proxy process is required for the packet, and updating the recorded sequence number information according to sequence number information of the received packet; and generating a client pseudo socket and a server pseudo socket according to the option information and sequence number information if it is determined that a proxy process is required for the packet, terminating the received packet by adopting the client pseudo socket and server pseudo socket, processing the terminated packet through an L7 and forwarding the processed packet. | 04-04-2013 |
20130089098 | Changing a Flow Identifier of a Packet in a Multi-Thread, Multi-Flow Network Processor - Described embodiments classify packets received by a network processor. A processing module of the network processor generates tasks corresponding to each received packet. A packet classification processor determines, independent of a flow identifier of the received task, control data corresponding to each task. A multi-thread instruction engine processes threads of instructions corresponding to received tasks, each task corresponding to a packet flow of the network processor and maintains a thread status table and a sequence counter for each flow. Active threads are tracked by the thread status table, and each status entry includes a sequence value and a flow value identifying the flow. Each sequence counter generates a sequence value for each thread by incrementing the sequence counter each time processing of a thread for the associated flow is started, and decrementing the sequence counter each time a thread for the associated flow is completed. | 04-11-2013 |
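The per-flow sequence counter above can be sketched as follows (class and method names are illustrative): the counter tracks the outstanding threads of a flow, incremented each time a thread starts and decremented each time one completes, and each status entry pairs the flow with its sequence value.

```python
from collections import defaultdict

class ThreadStatusTable:
    """Track active threads and per-flow sequence counters."""

    def __init__(self):
        self.seq = defaultdict(int)   # flow id -> outstanding-thread count
        self.active = []              # (flow, sequence value) status entries

    def start_thread(self, flow):
        self.seq[flow] += 1                  # incremented on thread start
        entry = (flow, self.seq[flow])
        self.active.append(entry)
        return entry

    def finish_thread(self, entry):
        self.active.remove(entry)
        self.seq[entry[0]] -= 1              # decremented on completion
```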
20130089099 | Modifying Data Streams without Reordering in a Multi-Thread, Multi-Flow Network Communications Processor Architecture - Described embodiments classify packets received by a network processor. A processing module of the network processor generates tasks corresponding to each received packet. A scheduler generates contexts corresponding to tasks received by the packet classification processor from corresponding processing modules, each context corresponding to a given flow, and stores each context in a corresponding per-flow first-in, first-out buffer of the scheduler. A packet modifier generates a modified packet based on threads of instructions, each thread of instructions corresponding to a context received from the scheduler. The modified packet is generated before queuing the packet for transmission as an output packet of the network processor, and the packet modifier processes instructions for generating the modified packet in the order in which the contexts were generated for each flow, without head-of-line blocking between flows. The modified packets are queued for transmission as an output packet of the network processor. | 04-11-2013 |
20130114605 | ARBITER CIRCUIT AND METHOD OF CARRYING OUT ARBITRATION - A method of carrying out arbitration in a packet exchanger including an input buffer temporarily storing a packet having arrived at an input port, and a packet switch which switches a packet between a specific input port and a specific output port, includes the steps of (a) concurrently carrying out a first plurality of sequences, in each of which basic processes for at least one of the input buffer and the output port are carried out in a predetermined order, and (b) making an allowance in each of the sequences for packets to be output through output ports at different times from one another. | 05-09-2013 |
20130128893 | METHOD AND SYSTEM FOR SLIDING WINDOW PROCESSING OF A DATAGRAM - A method for sliding window processing of a datagram split into packets, may include processing entire strings of adjacent consecutive packets of the datagram, regardless of the order of the packets, using parallel processors. The method may also include processing adjacent ends of the strings of the adjacent consecutive packets while maintaining the order of the adjacent ends to correspond to the order of the consecutive packets. | 05-23-2013 |
20130136135 | Method and Device for Securing Data Packets to be Transmitted via an Interface - A method for securing data packets to be transmitted via an interface includes determining a check number over at least a portion of a first data packet and at least one portion of a second data packet. For this purpose, the first data packet is arranged according to a transfer protocol in a first data frame and the second data packet is arranged according to the transfer protocol in a second data frame. | 05-30-2013 |
20130163598 | Encoding Watermarks In A Sequence Of Sent Packets, The Encoding Useful For Uniquely Identifying An Entity In Encrypted Networks - A method includes sending over the network from a source entity to a destination entity a sequence of a plurality of packets. Each packet in the sequence includes a same identifier corresponding to a network entity on the network. Sending includes modifying a property of the sequence of packets to uniquely identify the sequence of packets. The method includes receiving information indicating the identifier corresponds to the modification of the property. Another method includes examining a sequence of packets sent over a network from a source entity to a destination entity, each packet in the sequence comprising a same identifier corresponding to a network entity on the network. The method includes determining whether a property of the sequence of packets was modified when sent to uniquely identify the sequence of packets; and responsive to the determining, associating the identifier with the network identity. Apparatus and program products are also disclosed. | 06-27-2013 |
20130163599 | ALIGNMENT CIRCUIT AND RECEIVING APPARATUS - A control circuit generates a selection signal indicating a head area of an alignment buffer when the area is an unwritten area, and when the head area is a written area, successively performs comparison between a sequence number stored in the area and a sequence number of a target packet from the head to the tail to search for a boundary area, and generates a selection signal indicating the detected boundary area. When the boundary area cannot be detected even when the search reaches the last written area, the control circuit generates a selection signal indicating the next area after the last written area. The writing circuit shifts data stored in each area by one area, from the area indicated by the selection signal, in the direction of the tail of the alignment buffer, and writes packet information of the target packet into the area indicated by the selection signal. | 06-27-2013 |
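Because the written areas are kept in ascending sequence-number order, the boundary search plus shift described above amounts to a sorted insertion. A minimal sketch using Python's `bisect` (the list stands in for the alignment buffer; the function name is illustrative):

```python
import bisect

def align_insert(buffer, sn):
    """buffer: stored sequence numbers, kept in ascending order.

    bisect_left finds the boundary area (the first stored sequence number
    not smaller than sn); insert() performs the one-area shift toward the
    tail. If no boundary exists, pos equals len(buffer): append at the
    next area after the last written one.
    """
    pos = bisect.bisect_left(buffer, sn)
    buffer.insert(pos, sn)
    return buffer
```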
20130170496 | PDCP PACKET TRANSMISSION METHOD - Disclosed is a PDCP packet transmission method, which improves the transmission performance of UE and eNB by suggesting a method for preventing data loss at the application end, caused by an invalid deciphering result which is generated when 2048 or more PDCP SDUs with sequence numbers assigned thereto are discarded, and a wireless transmission/reception unit (WTRU) including a PDCP entity. The method for transmitting PDCP packet, includes the steps of: receiving a PDCP SDU from an upper layer; determining whether or not sequence numbers are assigned to less than a predetermined number of PDCP SDUs subsequent to the last PDCP PDU completely and successfully transmitted from a lower layer; and if sequence numbers are assigned to less than the predetermined number of PDCP SDUs, assigning a sequence number to the PDCP SDU received from the upper layer. | 07-04-2013 |
20130188648 | RELAY APPARATUS, RECEPTION APPARATUS, AND COMMUNICATION SYSTEM - A relay apparatus in a communication system including a transmission apparatus, a reception apparatus, and a plurality of relay apparatuses which include the relay apparatus, including: a receiver configured to receive distributed sequential packets from the transmission apparatus, which distributes sequential packets for the reception apparatus among the plurality of relay apparatuses; a memory configured to store, in sequential order, the distributed sequential packets of the sequential packets which are distributed and transmitted to the relay apparatus by the transmission apparatus; and a processor configured to select a discard packet from the distributed sequential packets based on a discard condition, and to add discard information to a previous packet and to transmit the previous packet, wherein the discard information indicates a sequence number of the discard packet to be discarded, and the previous packet is one of the distributed sequential packets before the discard packet in sequential order. | 07-25-2013 |
20130215899 | Distributed Credit FIFO Link of a Configurable Mesh Data Bus - An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs. In one direction, a link portion involves a FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO. | 08-22-2013 |
20130215900 | Software Upgrade Using Layer-2 Management Entity Messaging - Principles, apparatuses, systems, circuits, methods, and computer program products for performing a software upgrade in a MoCA network includes receiving an image of a software upgrade at a server and sending the image in the MoCA network using an L2ME message channel to a client that is enabled to receive the image and store the image in a client memory. The image may be broken up into packets, and a sequence number may be assigned to each packet to assist the client in assembling them. CRC information may also be appended to the packets to enable the client to verify their contents. | 08-22-2013 |
20130230051 | Processing Requests - Measures for operating a data link, including handling transfer of operation from a primary to a backup device in the link without taking down and re-establishing the link. | 09-05-2013 |
20130272311 | Communication Device and Related Packet Processing Method - The present invention discloses a communication device, including a first network interface, for receiving a plurality of packets composed of a plurality of first packets destined to a first communication device and a plurality of second packets, a first reordering engine, for reordering the plurality of first packets, outputting the plurality of reordered first packets, and outputting the plurality of second packets, a second reordering engine, for receiving the plurality of second packets from the first reordering engine, and reordering the plurality of second packets, a second network interface, for receiving the plurality of reordered first packets from the first reordering engine, and transmitting the plurality of reordered first packets to the first communication device, and a processing module, for processing the plurality of reordered second packets. | 10-17-2013 |
20130279509 | APPARATUS AND METHOD FOR RECEIVING AND FORWARDING DATA - A method and apparatus adapted to prevent Head-Of-Line blocking by forwarding dummy packets to queues which have not received data for a predetermined period of time. This prevention of HOL may be on an input where data is forwarded to each of a number of FIFOs or an output where data is de-queued from FIFOs. The dummy packets may be provided with a time stamp derived from a recently queued or de-queued packet. | 10-24-2013 |
20130287031 | METHOD, APPARATUS, AND SYSTEM FOR FORWARDING DATA IN COMMUNICATIONS SYSTEM - Embodiments of the present invention disclose a method, an apparatus, and a system for forwarding data in a communications system. The implementation of the method includes: A data forwarding device forwards a data packet from a source end to a destination end by using a low-speed channel; during a procedure for forwarding the data packet from the source end to the destination end by using the low-speed channel, the data forwarding device receives a control command sent by a service processing node, where the control command is used to indicate that the data packet of the source end does not need to be forwarded to the service processing node; and the data forwarding device forwards the data packet from the source end to the destination end according to the indication of the control command by using a high-speed channel. | 10-31-2013 |
20130301644 | System and Method for Message Sequencing in a Broadband Gateway - A system and method for message sequencing in a broadband gateway comprising a receiver to receive two or more messages, a data storage system to store the two or more messages, an identifier to identify a processing sequence for the two or more messages, and a retriever to retrieve the two or more messages for processing based on the identified processing sequence for providing broadband DSL service to a customer. | 11-14-2013 |
20130315251 | NETWORK SYSTEM AND COMMUNICATION DEVICE - A communication device on a time equivalence assurance network guarantees the receiving sequence of packets sent to all users, so that even if a user session is re-sent, a function operates to alleviate the resultant effects and make the send and receive timing approach the initial timing. An edge communication device on a provider network identifies the sessions of each user subscriber terminal, and stores and monitors the time information attached by a sequence information attachment function. Even if a packet is resent, the packet will at this time be resent with attached sequence information based on the past time originally attached to the packet, and not sequence information based on the current resending time. | 11-28-2013 |
20130336328 | METHOD AND SYSTEM FOR UPDATING REORDER DEPTH IN ROBUST HEADER COMPRESSION - The present document provides a method and system for updating a reorder depth in robust header compression. The method comprises: when determining that a reorder occurs in data packets, a decompressor estimating the reorder situation and determining, according to the reorder situation, whether a more robust reorder processing policy needs to be used; if it does not need to be used, maintaining the reorder depth value at the decompressor side; if it needs to be used, updating the reorder depth value at the decompressor side to a greater value, and transmitting a feedback packet carrying the updated reorder depth value to a compressor; and after receiving the feedback packet, the compressor updating the reorder depth value at the compressor side according to the reorder depth value in the feedback packet. | 12-19-2013 |
20140003438 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR UPDATING SEQUENCE AND ACKNOWLEDGMENT NUMBERS ASSOCIATED WITH REPLAY PACKETS | 01-02-2014 |
20140003439 | Method for Reducing Processing Latency in a Multi-Thread Packet Processor with at Least One Re-Order Queue | 01-02-2014 |
20140029622 | Reliably Transporting Packet Streams Using Packet Replication - In one embodiment, packet streams are reliably transported through a network using packet replication. A packet stream is received at a duplication point in a network, with two or more copies of each of the packet streams being transported, typically over divergent paths in the network, to a merge point from which a single copy of the packet stream is forwarded or consumed. In one embodiment, this merge point is a packet switching device that includes ingress card(s) and egress line card(s), wherein multiple copies of the packet stream are received by ingress line card(s), with only a single copy provided to an egress line card of the packet switching device. In this manner, neither a switching fabric or other communication mechanism communicatively coupling the ingress line card(s) to the egress line card, nor the egress line card itself, is taxed with the burden imposed by additional copies of the packet stream. | 01-30-2014 |
20140050221 | INTERCONNECT ARRANGEMENT - An interconnect arrangement includes a plurality of tag allocators. Each tag allocator is configured to receive at least one stream of a plurality of packet units and further configured to tag each packet unit. Each packet unit is tagged with one of a set of n tags where n is greater than two. At least one stream is tagged with a sequence of tags that is different from a sequence of tags used for at least one other of said streams. The interconnect arrangement also includes a router configured to receive a plurality of streams of tagged packet units and to arbitrate between the streams such that packet units having a same tag are output in a group. | 02-20-2014 |
20140056307 | Systems And Methods For Multicore Processing Of Data With In-Sequence Delivery - Methods and systems are described to allow for the parallel processing of packets and other subsets of data that are to be delivered in order after the completion of the parallel processing. The methods and systems may process packets and subsets of data that may vary in size by orders of magnitude. The packets may be transmitted and/or received over data transmission networks that may be orders of magnitude faster than the processing speeds of the parallel processors. Entire packets or subsets of data may be allocated to individual processing units without segmenting the packets between the processing units. A count value may be inserted as metadata to received packets in order to indicate a relative order of arrival. The metadata may be utilized by a multiplexor at the output of the parallel processing units in order to maintain in-sequence delivery of the processed packets. | 02-27-2014 |
20140112346 | SYSTEM AND METHOD PROVIDING FORWARD COMPATIBILITY BETWEEN A DRIVER MODULE AND A NETWORK INTERFACE - Generally, this disclosure provides systems and methods for providing forward compatibility between a driver module and one or more present or future versions of a network interface. The system may include a network interface configured to transfer data between a host system and a network; and a programmable circuit module associated with the network interface, the programmable circuit module configured to provide compatibility between the network interface and a driver module associated with the host system, wherein the driver module includes a first set of capabilities and the network interface includes a second set of capabilities. | 04-24-2014 |
20140133491 | PACKET RETRANSMISSION AND MEMORY SHARING - Through the identification of different packet-types, packets can be handled based on an assigned packet handling identifier. This identifier can, for example, enable forwarding of latency-sensitive packets without delay and allow error-sensitive packets to be stored for possible retransmission. In another embodiment, and optionally in conjunction with retransmission protocols including a packet handling identifier, a memory used for retransmission of packets can be shared with other transceiver functionality such as coding, decoding, interleaving, deinterleaving, error correction, and the like. | 05-15-2014 |
20140169378 | MAINTAINING PACKET ORDER IN A PARALLEL PROCESSING NETWORK DEVICE - A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet. | 06-19-2014 |
20140185620 | PACKET DECONSTRUCTION/RECONSTRUCTION AND LINK-CONTROL - The present disclosure includes methods, devices, and systems for packet processing. One method embodiment for packet flow control includes deconstructing a transport layer packet into a number of link-control layer packets, wherein each of the link-control layer packets has an associated sequence number, communicating the number of link-control layer packets via a common physical connection for a plurality of peripheral devices, and limiting a number of outstanding link-control layer packets during the communication. | 07-03-2014 |
20140192815 | MAINTAINING PACKET ORDER IN A PARALLEL PROCESSING NETWORK DEVICE - A plurality of packets that belong to a data flow are received and are distributed to two or more packet processing elements, wherein a packet is sent to a first packet processing element. A first instance of the packet is queued at a first packet processing element according to an order of the packet within the data flow. The first instance of the packet is caused to be transmitted when processing of the first instance is completed and the first instance of the packet is at a head of a queue at the first ordering unit. A second instance of the packet is queued at a second ordering unit. The second instance of the packet is caused to be transmitted when processing of the second instance is completed and the second instance of the packet is at a head of a queue at the second ordering unit. | 07-10-2014 |
20140198799 | Scheduling and Traffic Management with Offload Processors - A method for providing scheduling services for network packet processing using a memory bus connected module is disclosed. The method can include transferring network packets to the module through a memory bus connection, reordering network packets received from the memory bus connection with a scheduling circuit and placing the reordered network packets into multiple input/output queues, and modifying reordered network packets placed into multiple input/output queues using multiple offload processors connected to the memory bus. | 07-17-2014 |
20140219284 | METHOD AND SYSTEM FOR REDUCTION OF TIME VARIANCE OF PACKETS RECEIVED FROM BONDED COMMUNICATION LINKS - Method and system for reduction of time variance of packets received from bonded communication links. Embodiments of the present invention can be applied to bonded communication links, including wireless connections, Ethernet connections, Internet Protocol connections, asynchronous transfer mode, virtual private networks, WiFi, high-speed downlink packet access, GPRS, LTE, X.25, etc. The present invention presents methods comprising the steps of determining the latency difference among bonded communication links, receiving one or more packets from the bonded communication links, and delivering the one or more packets according to the latency difference. The present invention also presents systems comprising one or more network interfaces for receiving one or more packets from the bonded communication links communication network, and one or more control modules configured for determining the latency difference in a bonded communication links communication network and for delivering the one or more packets according to the latency difference. | 08-07-2014 |
20140233573 | OUT-OF-ORDER MESSAGE FILTERING WITH AGING - The present disclosure is directed to a system and method for performing out-of-order message filtering with aging. The system and method can be used in a destination device that can receive messages out-of-order (i.e., in a different order than they were transmitted) from a source device. The system and method can filter-out or flag for appropriate handling these messages received out-of-order. To perform the above noted message filtering functionality, the system and method uses a database constructed from a content-addressable memory (CAM) and a random-access memory (RAM) to respectively remember source identifiers and sequence identifiers associated with previously received messages. A source identifier identifies the source of a message, and a sequence identifier identifies the transmission order of the message among the messages transmitted from the particular source. Each message can include a source identifier and a corresponding sequence identifier. | 08-21-2014 |
20140233574 | SYSTEM AND METHOD FOR MANAGING OUT OF ORDER PACKETS IN A NETWORK ENVIRONMENT - A method is provided in one example embodiment and includes creating at a network element an entry designating an out of order (“OOO”) sequence number range associated with a flow and receiving at the network element a packet associated with the flow, wherein the packet corresponds to a first sequence number range, wherein the first sequence number range falls within the OOO sequence number range designated in the entry. The method may further include updating the entry to remove sequence numbers comprising the first sequence number range from the OOO sequence number range and forwarding the packet without awaiting receipt of any other packets associated with the flow. | 08-21-2014 |
20140241369 | System and Method for Data Flow Identification and Alignment in a 40/100 Gigabit Ethernet Gearbox - A system and method for data flow identification and alignment in a 40/100 gigabit Ethernet gearbox. Virtual lane (VL) identifiers can be identified to create an effective wiring diagram for data flows. This wiring diagram enables a multiplexer or de-multiplexer to align the VL identifiers to match physical lane identifiers. | 08-28-2014 |
20140269731 | RELIABLE LINK LAYER FOR CONTROL LINKS BETWEEN NETWORK CONTROLLERS AND SWITCHES - A method for transmission of control data between a network switch and a switch controller is provided. The method includes: configuring a plurality of control data packets by the switch controller, wherein configuring includes disposing a sequence number in each of the plurality of control data packets indicating an order of data packet transmission; storing the plurality of control data packets in a replay buffer in communication with the switch controller; transmitting the plurality of control data packets to the network switch over a secure link between the switch controller and the network switch; and responsive to determining that one or more control data packets were not received by the network switch, retrieving the one or more control data packets from the replay buffer and re-transmitting the one or more control data packets to the network switch. | 09-18-2014 |
20140301400 | TRAIN INFORMATION MANAGEMENT APPARATUS AND TRAIN INFORMATION MANAGEMENT METHOD - A train information management apparatus includes a first system central device that generates first train control information for a car-mounted device in every predetermined period and a second system central device that generates second train control information starting at the time when half of the predetermined period has elapsed from the time point when the first train control information is transmitted. The first system central device generates a first packet every time the first train control information is generated and alternately transmits a first packet to a first system trunk transmission line and a second system trunk transmission line. The second system central device generates and transmits a second packet almost in the same manner except that the second packet is transmitted to a trunk transmission line on the opposite side of the trunk transmission line to which the first packet was transmitted. | 10-09-2014 |
20140307740 | Traffic Manager with Programmable Queuing - A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip. | 10-16-2014 |
20140314094 | METHOD AND SYSTEM OF IMPLEMENTING CONVERSATION-SENSITIVE COLLECTION FOR A LINK AGGREGATION GROUP - A method is executed by a network device for implementing conversation-sensitive collection for frames received on a port of a link of a link aggregation group. The network device executes an aggregator to collect the frames for aggregator clients, where each frame is associated with a service identifier and a conversation identifier. The service identifier identifies a data flow at a link level for a service. The conversation identifier identifies the data flow at a link aggregation group level, where each conversation data flow consists of an ordered sequence of frames, and where the conversation-sensitive collection maintains the ordered sequence by discarding frames of conversations not allocated to the port. | 10-23-2014 |
20140314095 | METHOD AND SYSTEM OF UPDATING CONVERSATION ALLOCATION IN LINK AGGREGATION - A method of updating conversation allocation in link aggregation is disclosed. The method starts with verifying that an implementation of a conversation-sensitive link aggregation control protocol (LACP) is operational at a network device of a network for an aggregation port. Then it is determined that operations through enhanced link aggregation control protocol data units (LACPDUs) are possible. The enhanced LACPDUs can be used for updating conversation allocation information, and the determination is based at least partially on a compatibility check between a first set of operational parameters of the network device and a second set of operational parameters of a partner network device. Then a conversation allocation state of an aggregation port of the link aggregation group is updated based on a determination that the conversation allocation state is incorrect, where the conversation allocation state indicates a list of conversations transmitting through the aggregation port. | 10-23-2014 |
20140328349 | DISTRIBUTED SEQUENCE NUMBER CHECKING FOR NETWORK TESTING - A method of line speed sequence number checking of frames includes, in a first process, using a lowest order bit of a sequence number of a frame to assign the frame to an odd or even second process, and dispatching at least the sequence number for processing by the assigned second process. The method includes, in the first process, flagging as a first sequence error type occurrences of assigning consecutively processed frames to the same second process. The method includes, in the second processes, checking the sequence number of an incoming frame, flagging as a second sequence error type non-consecutive sequence numbers in consecutively processed incoming frames, and dispatching the frame for additional downstream processing. The method is applicable to a hierarchy of processes, and to multiplexed flows. The method can use a modulo N of the sequence number to assign the frame to one of N processes. | 11-06-2014 |
20140376554 | TRAIN-INFORMATION MANAGING APPARATUS - A train-information managing apparatus receives a series of control information attached to serial numbers, which are transmitted from a central apparatus, in order of the transmission and carries out control in the order, and even when the central apparatus is changed or the serial numbers are reset, prevents a blank period of control from occurring. Therefore, when a reception serial number n included in received data is a preferential serial number (n≦M) and an old number n | 12-25-2014 |
20150023355 | GENERATION DEVICE, REPRODUCTION DEVICE, DATA STRUCTURE, GENERATION METHOD, REPRODUCTION METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM - There is provided a transmitter by which a reception side easily detects packet loss of a transport packet. The transmitter ( | 01-22-2015 |
20150023356 | ALIGNMENT CIRCUIT AND RECEIVING APPARATUS - A control circuit generates a selection signal indicating a head area of an alignment buffer when the head area is an unwritten area; when the head area is a written area, the control circuit successively compares, from head to tail, a sequence number stored in each area with a sequence number of a target packet to search for a boundary area, and generates a selection signal indicating the detected boundary area. When the boundary area cannot be detected even when the search reaches the last written area, the control circuit generates a selection signal indicating the area next to the last written area. The writing circuit shifts data stored in each area by one area, from the area indicated by the selection signal, toward the tail of the alignment buffer, and writes packet information of the target packet into the area indicated by the selection signal. | 01-22-2015 |
20150063358 | RETRANSMISSION AND MEMORY CONSUMPTION TRACKING OF DATA PACKETS IN A NETWORK DEVICE - A method of handling retransmission and memory consumption tracking of data packets includes storing data packets from different data channels in respective transmitter ring buffers allocated to the data channels when the data packets are not marked for retransmission, and facilitating retransmission of data packets from a specified ring buffer corresponding to a retransmission sequence number. The method also may include storing received data packets out of sequence in respective receiver ring buffers, marking a descriptor indicating a tail location of the stored data packets, and reclaiming memory space in the ring buffer based on the marked descriptor. The method may include storing a payload address associated with received data packets, marking a descriptor associated with the payload address to indicate the stored data packets have been consumed for processing, and reclaiming memory space when a register contains an indication of the stored payload address based on the marked descriptor. | 03-05-2015 |
20150071293 | TWO TIER MULTIPLE SLIDING WINDOW MECHANISM FOR MULTIDESTINATION MEDIA APPLICATIONS - Some media applications use media that contains multiple types of media components, and media sources with access to this media must send each type of media component to one or more media rendering destination devices. Furthermore, there may be multiple destinations that can receive a particular type of media component, and the media must be received at each destination without losses. This invention describes a two-tier packet buffer structure at the media source, with primary and virtual packet buffers, that ensures minimal memory use at the media source and minimal network use. Furthermore, the use of a sliding window with each virtual packet buffer associated with each destination independently keeps control and track of destination state, ensuring the correct receipt of media data at each destination. | 03-12-2015 |
20150078388 | SEQUENCE NUMBER RETRIEVAL FOR VOICE DATA WITH REDUNDANCY - A sequence number is used to indicate where a payload of a voice data packet should fit in a data stream and a technique is described for retrieving the sequence number for redundant payloads. A receiver maintains a history of previously received timestamps and sequence numbers for previous payloads. A received packet is unpacked to obtain a primary payload and its associated sequence number and timestamp, and a redundant payload and its associated timestamp offset. The primary payload sequence number and timestamp are stored in the history. A time-span of the data stream covered by the packet is found using the timestamp offset, and a portion of the history selected based on the time-span. A timestamp parameter for the redundant payload is calculated using the primary payload timestamp and the timestamp offset, and is compared to timestamps in the selected portion of the history to derive the redundant payload sequence number. The history is updated to include the timestamp parameter and sequence number of the redundant payload. | 03-19-2015 |
20150110118 | Method of processing disordered frame portion data units - A method of encapsulating data units of at least one encoded video frame into a data stream, said data units representing frame portions of the video frame, wherein said data stream is associated with ordering information indicating the compliance of the order of the data units with a nominal data unit decoding order. Embodiments of the invention provide flexible transmission with robust and flexible decoders. | 04-23-2015 |
20150334056 | CELL PROCESSING METHOD AND APPARATUS - A cell processing method and apparatus are provided. The method includes: obtaining, by a first sending end, a first timestamp compensation time; adding, by the first sending end, the first timestamp compensation time to a first timestamp carried in a first cell, where the first timestamp is a sending time of the first cell; and sending, by the first sending end to a receiving end, the first cell that is added with the first timestamp compensation time, so that the receiving end forwards the first cell according to the first timestamp that is added with the first timestamp compensation time. In the present invention, a first timestamp compensation time is added to a first timestamp carried in a first cell, which improves cell forwarding efficiency of the receiving end and prevents the occurrence of cell accumulation in a link. | 11-19-2015 |
20160057258 | SOFTWARE UPGRADE USING LAYER-2 MANAGEMENT ENTITY MESSAGING - Principles, apparatuses, systems, circuits, methods, and computer program products for performing a software upgrade in a MoCA network includes receiving an image of a software upgrade at a server and sending the image in the MoCA network using an L2ME message channel to a client that is enabled to receive the image and store the image in a client memory. The image may be broken up into packets, and a sequence number may be assigned to each packet to assist the client in assembling them. CRC information may also be appended to the packets to enable the client to verify their contents. | 02-25-2016 |
20160127518 | SINGLE-PASS/SINGLE COPY NETWORK ABSTRACTION LAYER UNIT PARSER - Technologies for a single-pass/single copy network abstraction layer unit (“NALU”) parser. Such a NALU parser typically reuses source and/or destination buffers, optionally changes endianess of NALU data, optionally processes emulation prevention codes, and optionally processes parameters in slice NALUs, all as part of a single pass/single copy process. The disclosed NALU parser technologies are further suitable for hardware implementation, software implementation, or any combination of the two. | 05-05-2016 |
20160182390 | AIRCRAFT COMMUNICATION SYSTEMS AND METHODS | 06-23-2016 |
20160205047 | HIERARCHICAL CACHING SYSTEM FOR LOSSLESS NETWORK PACKET CAPTURE APPLICATIONS | 07-14-2016 |
20160380901 | METHODS AND APPARATUS FOR PREVENTING HEAD OF LINE BLOCKING FOR RTP OVER TCP - Methods and apparatus for processing and using TCP packets to communicate RTP packets are described. Head of line blocking is avoided by operating a TCP packet processing module to output RTP packet data to an application irrespective of whether or not a preceding TCP packet was received. Since output of packet data to an application using RTP packets is not delayed when there is a missing TCP packet, head of line blocking is avoided. RTP packet data is subjected to pattern matching in order to identify and process RTP packets in the case where RTP header information such as packet length information is missing due to the failure to receive a TCP packet. The methods are particularly well suited for the communication of audio and/or video by devices operating behind firewalls which block UDP or other types of packets other than TCP packets. | 12-29-2016 |
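A recurring mechanism in the entries above (e.g. 20130272311, 20150023356, 20140056307) is sequence-number-based reordering: buffering packets that arrive early and releasing them to a consumer in transmission order. The following is a hypothetical, minimal sketch of such a reorder buffer for illustration only; it is not drawn from any listed patent, and all names in it are the editor's own.

```python
# Minimal sketch of a sequence-number reorder buffer (hypothetical
# illustration; not an implementation of any patent listed above).

class ReorderBuffer:
    """Releases packet payloads in sequence order, holding early arrivals."""

    def __init__(self, first_expected=0):
        self.expected = first_expected  # next sequence number to deliver
        self.pending = {}               # early payloads keyed by sequence number

    def receive(self, seq, payload):
        """Accept one packet; return the payloads now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        # Drain the longest in-order run starting at the expected number.
        while self.expected in self.pending:
            delivered.append(self.pending.pop(self.expected))
            self.expected += 1
        return delivered


buf = ReorderBuffer()
assert buf.receive(1, "b") == []          # early arrival: held, nothing delivered
assert buf.receive(0, "a") == ["a", "b"]  # gap filled: both released in order
assert buf.receive(2, "c") == ["c"]       # already in sequence: passed through
```

Real designs differ mainly in how they bound the `pending` store (fixed windows, timers, or discard conditions, as several abstracts above describe) and in how sequence-number wraparound is handled.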