Entries |
Document | Title | Date |
20080198866 | Hybrid Method and Device for Transmitting Packets - A method for transmitting packets, the method includes receiving multiple packets at multiple queues. The method is characterized by dynamically defining fixed priority queues and weighted fair queuing queues, and scheduling a transmission of packets in response to a status of the multiple queues and in response to the definition. A device for transmitting packets, the device includes multiple queues adapted to receive multiple packets. The device includes a circuit that is adapted to dynamically define fixed priority queues and weighted fair queuing queues out of the multiple queues and to schedule a transmission of packets in response to a status of the multiple queues and in response to the definition. | 08-21-2008 |
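The hybrid arrangement in this abstract (some queues strict-priority, the rest weighted-fair) can be sketched in a few lines. This is an illustrative sketch only, not the patented method: the class and queue names are invented, and a simple weighted-credit round-robin stands in for true weighted fair queuing.

```python
from collections import deque

class HybridScheduler:
    """Sketch: fixed-priority queues are always served first, in order;
    remaining bandwidth is shared among weighted queues by credits."""

    def __init__(self, fp_queues, wfq_weights):
        # fp_queues: queue names in strict priority order
        # wfq_weights: {queue_name: weight} for the weighted-fair queues
        self.fp = {name: deque() for name in fp_queues}
        self.fp_order = fp_queues
        self.wfq = {name: deque() for name in wfq_weights}
        self.weights = dict(wfq_weights)
        self.credits = dict(wfq_weights)

    def enqueue(self, queue, packet):
        (self.fp if queue in self.fp else self.wfq)[queue].append(packet)

    def dequeue(self):
        # Fixed-priority queues first.
        for name in self.fp_order:
            if self.fp[name]:
                return self.fp[name].popleft()
        # Then weighted queues with remaining credit.
        for name, q in self.wfq.items():
            if q and self.credits[name] > 0:
                self.credits[name] -= 1
                return q.popleft()
        # All credits spent (or queues empty): refresh and try once more.
        self.credits = dict(self.weights)
        for name, q in self.wfq.items():
            if q:
                self.credits[name] -= 1
                return q.popleft()
        return None
```

Which queues are fixed-priority and which are weighted could be redefined at runtime by rebuilding the two maps, which is the "dynamically defining" aspect the abstract emphasizes.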
20080205422 | Method And Structure To Support System Resource Access Of A Serial Device Implementing A Lite-Weight Protocol - On-chip resources of a serial buffer are accessed using priority packets of a Lite-weight protocol. A priority packet path is provided on the serial buffer to support priority packets. Normal data packets are processed on a normal data packet path, which operates in parallel with the priority packet path. The system resources of the serial buffer can be accessed in response to the priority packets, without blocking the flow of normal data packets. Thus, normal data packets may flow through the serial buffer with the maximum bandwidth supported by the serial interface. The Lite-weight protocol also supports read accesses to queues of the serial buffer (which reside on the normal data packet path). The Lite-weight protocol also supports doorbell commands for status/error reporting. | 08-28-2008 |
20080205423 | Method and Apparatus for Communicating Variable-Sized Packets in a Communications Network - Methods and apparatus for managing a packet buffer memory are disclosed. One method includes providing a memory arranged as a plurality of cells identified by cell id, each cell having a granularity of k individual memory addresses. A cell list indexed by cell id is provided. The cell list includes a free cell list identifying cells available for storing data as a linked list, wherein a beginning of the free cell list identifies a starting cell id. Each portion of a packet is stored in cells indicated by, and in a sequence indicated by, traversing the cell list beginning with the starting cell id. | 08-28-2008 |
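The free-cell linked list described here is a classic allocator structure and can be sketched briefly. The sketch is an assumption-laden illustration, not the patented design: the names (`CellBuffer`, `next_cell`, `free_head`) and the cell size `k` are invented for the example.

```python
class CellBuffer:
    """Sketch: memory divided into k-byte cells; free cells form a linked
    list threaded through a cell list indexed by cell id."""

    def __init__(self, num_cells, k=64):
        self.k = k                                            # bytes per cell
        self.next_cell = list(range(1, num_cells)) + [None]   # linked free list
        self.free_head = 0                                    # starting cell id

    def alloc(self):
        cell = self.free_head
        if cell is None:
            raise MemoryError("no free cells")
        self.free_head = self.next_cell[cell]
        return cell

    def store_packet(self, data):
        """Split data into k-byte portions, store each in a cell, and link
        the cells so the packet can be read back by traversing the list."""
        cells = [self.alloc() for _ in range(0, len(data), self.k)]
        for a, b in zip(cells, cells[1:]):
            self.next_cell[a] = b
        if cells:
            self.next_cell[cells[-1]] = None
        return cells[0] if cells else None
```

Reading a packet back is then a walk of `next_cell` starting from the packet's first cell id, mirroring the traversal the abstract describes.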
20080212600 | Router and queue processing method thereof - A queue processing method and a router perform cache update and queue processing based upon whether or not the packet capacity stored in the queue exceeds a rising threshold, or whether the packet capacity stored in the queue is below a falling threshold after having exceeded the rising threshold. By using two caches, this queue processing method and router make it possible to eliminate the overhead associated with updating flow information while also removing the inequality among packet flows under RED queue management. | 09-04-2008 |
20080219279 | SCALABLE AND CONFIGURABLE QUEUE MANAGEMENT FOR NETWORK PACKET TRAFFIC QUALITY OF SERVICE - Various embodiments are directed to scalable and configurable queue management for network packet traffic Quality of Service (QoS). In one or more embodiments, the queue management may be implemented by a network processor comprising a queue manager to assert interrupts indicating that one or more queues require service, and a core processor to apply an interrupt mask to a status register value identifying the one or more queues that require service and to provide service during a particular service cycle to only those queues that are not masked out. Other embodiments are described and claimed. | 09-11-2008 |
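The mask-and-status mechanism in this abstract reduces to a couple of bitwise operations. The following is a minimal sketch under assumed conventions (bit i of the status register set means queue i requires service); the function name is invented.

```python
def queues_to_service(status_register, interrupt_mask):
    """Queues flagged in the status register, minus those masked out.
    Only unmasked queues receive service in the current service cycle."""
    pending = status_register & ~interrupt_mask
    return [i for i in range(pending.bit_length()) if (pending >> i) & 1]
```

A core processor could rotate or reprogram the mask between service cycles to control which queues are eligible, which is what makes the scheme configurable.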
20080225872 | DYNAMICALLY DEFINING QUEUES AND AGENTS IN A CONTACT CENTER - In one embodiment, an automatic call distributor apparatus is provided, including a network interface operable to receive a request for a service, and a processor operable to assign the request to a queue and to associate a number of resources with the queue based upon a determination of at least one dynamic parameter of the queue. Advantageously, resources may be allocated to queues in a flexible, efficient, and dynamic manner. | 09-18-2008 |
20080225873 | RELIABLE NETWORK PACKET DISPATCHER WITH INTERLEAVING MULTI-PORT CIRCULAR RETRY QUEUE - Disclosed is a method and apparatus for managing network data packet transmission. A retry buffer is maintained that includes a single first in, first out retransmission retry buffer. A first data packet is inserted into the retry buffer in response to transmitting the first data packet to a remote node. A determination that a second data packet is not able to be transmitted to the remote node causes the second data packet to be inserted into the retry buffer. A third data packet is retrieved from the retry buffer and a determination that it is not to be transmitted to the remote node causes the third data packet to be reinserted into the retry buffer. | 09-18-2008 |
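The insert/retrieve/reinsert behavior of the single FIFO retry buffer can be sketched as one pass over the buffer. This is an illustration only; the `can_send` predicate is a hypothetical stand-in for whatever link-state check the real dispatcher performs.

```python
from collections import deque

def run_retry_cycle(retry, can_send):
    """One pass over a FIFO retry buffer: packets that still cannot be
    transmitted are reinserted at the tail, preserving their relative
    order; the rest are treated as delivered."""
    delivered = []
    for _ in range(len(retry)):
        pkt = retry.popleft()
        if can_send(pkt):
            delivered.append(pkt)
        else:
            retry.append(pkt)    # retry in a later cycle
    return delivered
```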
20080225874 | Stateful packet filter and table management method thereof - A stateful packet filter and a table management method thereof are provided. The stateful packet filter includes an index buffer storing a session table index address from a session table, which is searched to determine the session of a received packet when a packet is received; and a table manager updating a state table by using the session table index address, stored in the index buffer, as a state table address value. | 09-18-2008 |
20080232386 | PRIORITY BASED BANDWIDTH ALLOCATION WITHIN REAL-TIME AND NON-REAL-TIME TRAFFIC STREAMS - A method and system for transmitting packets in a packet switching network. Packets received by a packet processor may be prioritized based on the urgency to process them. Packets that are urgent to be processed may be referred to as real-time packets. Packets that are not urgent to be processed may be referred to as non-real-time packets. Real-time packets have a higher priority to be processed than non-real-time packets. A real-time packet may either be discarded or transmitted into a real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time queue congestion conditions. A non-real-time packet may either be discarded or transmitted into a non-real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time and non-real-time queue congestion conditions. | 09-25-2008 |
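The admit-or-discard decision driven by value priority and queue congestion can be sketched as a threshold test. The mapping of priorities to occupancy thresholds below is an assumed scheme for illustration, not the patented rate-based mechanism.

```python
def admit(packet_priority, queue_len, max_len, thresholds):
    """Sketch of value-priority admission: higher-value packets tolerate
    more congestion before being discarded. `thresholds` maps a priority
    to the fraction of the queue that may be occupied before drops start."""
    if queue_len >= max_len:
        return False    # queue full: always discard
    return queue_len < thresholds[packet_priority] * max_len
```

Under this rule, low-value packets are shed first as a queue fills, leaving headroom for high-value real-time traffic.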
20080240139 | Method and Apparatus for Operating Fast Switches Using Slow Schedulers - The invention includes an apparatus and method for switching packets through a switching fabric. The apparatus includes a plurality of input ports and output ports for receiving arriving packets and transmitting departing packets, a switching fabric for switching packets from the input ports to the output ports, and a plurality of schedulers controlling switching of packets through the switching fabric. The switching fabric includes a plurality of virtual output queues associated with a respective plurality of input-output port pairs. One of the schedulers is active during each of a plurality of timeslots. The one of the schedulers active during a current timeslot provides a packet schedule to the switching fabric for switching packets through the switching fabric during the current timeslot. The packet schedule is computed by the one of the schedulers active during the current timeslot using packet departure information for packets departing during previous timeslots during which the one of the schedulers was active and packet arrival information for packets arriving during previous timeslots during which the one of the schedulers was active. | 10-02-2008 |
20080240140 | Network interface with receive classification - A network interface that provides improved processing of received packets in a networked computer by classifying packets as they are received. Further, both the characteristics used by the network interface to classify packets and the processing performed on those packets once classified may be programmed. The network interface contains multiple receive queues and one type of processing that may be performed is assigning packets to queues based on classification. A network stack within an operating system of the networked computer can route packets classified by the network interface to application level destinations with reduced processing. Additionally, the priority with which packets of certain classifications are processed may be used to allocate processing power to certain types of packets. As a specific example, a computer subjected to a particular type of denial of service attack sometimes called a “SYN attack” may lower the priority of processing SYN packets to reduce the effect of such an attack. | 10-02-2008 |
20080247409 | Queuing and Scheduling Architecture Using Both Internal and External Packet Memory for Network Appliances - Enhanced memory management schemes are presented to extend the flexibility of using either internal or external packet memory within the same network device. In the proposed schemes, the user can choose either static or dynamic schemes, both of which are capable of using both internal and external memory, depending on the deployment scenario and applications. This gives the user flexible choices when building unified wired and wireless networks that are either low-cost or feature-rich, or a combination of both. A method for buffering packets in a network device, and a network device including processing logic capable of performing the method are presented. The method includes initializing a plurality of output queues, determining to which of the plurality of output queues a packet arriving at the network device is destined, storing the packet in one or more buffers, where the one or more buffers are selected from a packet memory group including an internal packet memory and an external packet memory, and enqueuing the one or more buffers to the destined output queue. | 10-09-2008 |
20080247410 | Creating A Low Bandwidth Channel Within A High Bandwidth Packet Stream - Creating a low-bandwidth channel in a high-bandwidth channel. By taking advantage of extra bandwidth in a high-bandwidth channel, a low-bandwidth channel is created by inserting extra packets. When an inter-packet gap of the proper duration is detected, the extra packet is inserted and any incoming packets on the high-bandwidth channel are stored in an elastic buffer. By observing inter-packet gaps, minimal latency is introduced in the high-bandwidth channel when there is no extra packet in the process of being sent, and the effects of sending a packet on the low-bandwidth channel are absorbed and distributed among other passing traffic. | 10-09-2008 |
20080253387 | Method and apparatus for improving SIP server performance - A method and apparatus for improving SIP server performance is disclosed. The apparatus comprises an enqueuer for determining whether a request packet entering the server is a new request or a retransmitted request (and, if retransmitted, its retransmission count), and for enqueuing the request packet into different queues based on the results of that determination; and a dequeuer for dequeuing packets from the queues for processing based on a scheduling policy. The apparatus may further include a policy controller for communicating with the server, enqueuer, dequeuer, queues and user, to dynamically and automatically set, or set based on the user's instructions, the scheduling policy, the number of different queues, each queue's capacity, scheduling, etc., based on the network and/or server load and/or on different server applications. | 10-16-2008 |
20080259947 | Method and System for High-Concurrency and Reduced Latency Queue Processing in Networks - A method and a system for controlling a plurality of queues of an input port in a switching or routing system. The method supports the regular request-grant protocol along with speculative transmission requests in an integrated fashion. Each regular scheduling request or speculative transmission request is stored in request order using references to minimize memory usage and operation count. Data packet arrival and speculation event triggers can be processed concurrently to reduce operation count and latency. The method supports data packet priorities using a unified linked list for request storage. A descriptor cache is used to hide linked list processing latency and allow central scheduler response processing with reduced latency. | 10-23-2008 |
20080267203 | DYNAMIC MEMORY QUEUE DEPTH ALGORITHM - A method of modifying a priority queue configuration of a network switch is described. The method comprises configuring a priority queue configuration, monitoring a network parameter, and adjusting the priority queue configuration based on the monitored network parameter. | 10-30-2008 |
20080267204 | Compact Load Balanced Switching Structures for Packet Based Communication Networks - A switching node is disclosed for the routing of packetized data employing a multi-stage packet based routing fabric combined with a plurality of memory switches employing memory queues. The switching node allows reduced throughput delays, dynamic provisioning of bandwidth, and packet prioritization. | 10-30-2008 |
20080267205 | Traffic management device and method thereof - A traffic management device and the method thereof are disclosed. The traffic management device includes a control logic unit, a first counting unit, and a second counting unit. The traffic management method follows the dual leaky bucket mechanism. A first count value and a second count value are generated by the first counting unit and the second counting unit, respectively, such that the control logic unit controls the average rate by checking whether the first count value falls within the range of a first threshold and controls the peak rate by checking whether the second count value falls within the range of a second threshold. When both the conditions are satisfied, packets in the queue are transmitted. Thus, the network flow is controlled effectively. | 10-30-2008 |
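The dual leaky bucket mechanism (one counter bounding the average rate, the other the peak rate, with transmission only when both conform) can be sketched compactly. The class below is an illustrative model, not the patented circuit: counter limits, drain rates, and names are all assumptions.

```python
class DualLeakyBucket:
    """Sketch of dual-bucket policing: the 'avg' counter enforces the
    average rate, the 'peak' counter the peak rate; a packet is
    transmitted only when both counters stay within their thresholds."""

    def __init__(self, avg_limit, peak_limit, avg_drain, peak_drain):
        self.avg = 0.0
        self.peak = 0.0
        self.avg_limit, self.peak_limit = avg_limit, peak_limit
        self.avg_drain, self.peak_drain = avg_drain, peak_drain

    def tick(self, dt=1.0):
        # Both buckets leak at their configured rates as time passes.
        self.avg = max(0.0, self.avg - self.avg_drain * dt)
        self.peak = max(0.0, self.peak - self.peak_drain * dt)

    def offer(self, size):
        if self.avg + size <= self.avg_limit and self.peak + size <= self.peak_limit:
            self.avg += size
            self.peak += size
            return True     # both conditions satisfied: transmit
        return False        # non-conforming: hold the packet in the queue
```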
20080267206 | CAM BASED SYSTEM AND METHOD FOR RE-SEQUENCING DATA PACKETS - An embodiment of the system operates in a parallel packet switch architecture having at least one egress adapter arranged to receive data packets issued from a plurality of ingress adapters and switched through a plurality of independent switching planes. Each received data packet belongs to one sequence of data packets among a plurality of sequences where the data packets are numbered with a packet sequence number (PSN) assigned according to at least a priority level of the data packet. Each data packet received by the at least one egress adapter has a source identifier to identify the ingress adapter from which it is issued. The system for restoring the sequences of the received data packets operates within the egress adapter and comprises a buffer for temporarily storing each received data packet at an allocated packet buffer location, a controller, and a determination means coupled to a storing means and an extracting means. | 10-30-2008 |
20080273545 | Channel service manager with priority queuing - A system and method are provided for prioritizing network processor information flow in a channel service manager (CSM). The method receives a plurality of information streams on a plurality of input channels, and selectively links input channels to CSM channels. The information streams are stored, and the stored information streams are mapped to a processor queue in a group of processor queues. Information streams are supplied from the group of processor queues to a network processor in an order responsive to a ranking of the processor queues inside the group. More explicitly, selectively linking input channels to CSM channels includes creating a fixed linkage between each input port and an arbiter in a group of arbiters, and scheduling information streams in response to the ranking of the arbiter inside the group. Finally, a CSM channel is selected for each information stream scheduled by an arbiter. | 11-06-2008 |
20080279207 | Method and apparatus for improving performance in a network using a virtual queue and a switched poisson process traffic model - A method for improving network performance using a virtual queue is disclosed. The method includes measuring characteristics of a packet arrival process at a network element, establishing a virtual queue for packets arriving at the network element, and modeling the packet arrival process based on the measured characteristics and a computed performance of the virtual queue. | 11-13-2008 |
20080279208 | SYSTEM AND METHOD FOR BUFFERING DATA RECEIVED FROM A NETWORK - A system for buffering data received from a network comprises a network socket, a plurality of buffers, a buffer pointer pool, receive logic, and packet delivery logic. The buffer pointer pool has a plurality of entries respectively pointing to the buffers. The receive logic is configured to pull an entry from the pool and to perform a bulk read of the network socket. The entry points to one of the buffers, and the receive logic is further configured to store data from the bulk read to the one buffer based on the entry. The packet delivery logic is configured to read, based on the entry, the one buffer and to locate a missing packet sequence in response to a determination, by the packet delivery logic, that the one buffer is storing an incomplete packet sequence. The packet delivery logic is further configured to form a complete packet sequence based on the incomplete packet sequence and the missing packet sequence. | 11-13-2008 |
20080285578 | CONTENT-BASED ROUTING OF INFORMATION CONTENT - A system to route media information content may include a router that analyzes predetermined content of a plurality of data packets of the media information content and prioritizes forwarding the plurality of data packets from the router based on applying at least one rule to the predetermined content. | 11-20-2008 |
20080285579 | Digital Broadcast Network Best Effort Services - In accordance with an embodiment, a best-effort service is divided into packets for best-effort digital broadcast transmission. The packets are encapsulated with an encapsulation protocol that uses a packet order defining field. The encapsulated packets are inserted into an unused portion of a slot of a digital broadcast transmission frame. Then, the encapsulated packets are repeatedly inserted into the unused portion of the slot of the digital broadcast transmission frame in a packet-carousel fashion. And the transmission frame is digitally broadcast. In accordance with an embodiment, a digital broadcast transmission is received. Encapsulated packets that have been repeatedly broadcast in a packet-carousel fashion are accessed from a best-effort portion of a digital broadcast transmission frame slot. And a best-effort service is composed from the encapsulated packets by combining the encapsulated packets in an order based on a packet order defining field of the encapsulated packets. | 11-20-2008 |
20080285580 | ROUTER APPARATUS - A router apparatus allocates a queue in a storage device and transmits transmission-target data after temporarily storing the transmission-target data in the queue. The router apparatus determines whether the size of a usable data area assigned to the queue is equal to or greater than a threshold, and supplements, on the basis of the determination, the data area of the supplementation-target queue having the usable data area whose size is smaller than the threshold with a data area of a queue other than the supplementation-target queue. | 11-20-2008 |
20080291933 | METHOD AND APPARATUS FOR PROCESSING PACKETS - A computer implemented method, apparatus, and computer usable program code for processing packets for transmission. A set of interface specific network buffers is identified from a plurality of buffers containing data for a packet received for transmission. A data structure describing the set of interface specific network buffers within the plurality of buffers is created, wherein a section in the data structure for an interface specific network buffer in the set of interface specific network buffers includes information about a piece of data in the interface specific network buffer, and wherein the data structure is used to process the packet for transmission. | 11-27-2008 |
20080291934 | Variable Dynamic Throttling of Network Traffic for Intrusion Prevention - Methods, apparatus, and computer program products for variable dynamic throttling of network traffic for intrusion prevention are disclosed that include initializing, as throttling parameters, a predefined time interval, a packet count, a packet count threshold, a throttle rate, a keepers count, and a discards count; starting a timer, the timer remaining on no longer than the predefined time interval; maintaining, while the timer is on, statistics including the packet count, the keepers count, and the discards count; for each data communications packet received by the network host, determining, in dependence upon the statistics and the throttle rate, whether to discard the packet and determining whether the packet count exceeds the packet count threshold; and if the packet count exceeds the packet count threshold: resetting the statistics, incrementing the throttle rate, and restarting the timer. | 11-27-2008 |
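The interval-based throttling loop described here (count packets, discard a fraction at the current throttle rate, and tighten the rate when the interval's packet count exceeds its threshold) can be sketched as follows. The class name, the rate-as-percentage convention, and the fixed increment `step` are assumptions for illustration; the timer is elided.

```python
import random

class Throttle:
    """Sketch: within each interval, packets are discarded with
    probability throttle_rate percent; exceeding the packet-count
    threshold resets the statistics and increments the throttle rate."""

    def __init__(self, threshold, rate=0, step=10):
        self.threshold, self.rate, self.step = threshold, rate, step
        self.packets = self.keepers = self.discards = 0

    def on_packet(self, rng=random.random):
        self.packets += 1
        drop = rng() * 100 < self.rate
        if drop:
            self.discards += 1
        else:
            self.keepers += 1
        if self.packets > self.threshold:
            # Traffic spike detected: reset stats and throttle harder.
            self.packets = self.keepers = self.discards = 0
            self.rate = min(100, self.rate + self.step)
        return not drop
```

In the patented scheme the statistics also reset when the timer interval expires without the threshold being crossed; that path is omitted here for brevity.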
20080291935 | Methods, Systems, and Computer Program Products for Selectively Discarding Packets - A method, system, and computer program product are provided for selectively discarding packets in a network device. The method includes receiving an upstream bandwidth saturation indicator for a queue in the network device, and identifying one or more codecs employed in packets in the queue when the upstream bandwidth saturation indicator indicates saturation. The method further includes determining a packet discarding policy based on the one or more codecs, and discarding packets in accordance with the packet discarding policy. | 11-27-2008 |
20080291936 | TEMPORARY BLOCK FLOW CONTROL IN WIRELESS COMMUNICATION DEVICE - A wireless communication device ( | 11-27-2008 |
20080298380 | Transmit Scheduling - There are disclosed apparatus and methods for scheduling packet transmission. At least one scheduled traffic queue holds a plurality of scheduled packets, each scheduled packet having an associated scheduled transmit time. At least one unscheduled traffic queue holds a plurality of unscheduled packets. A packet selector causes transmission of scheduled packets from the scheduled traffic queue at the associated scheduled transmit time, while causing transmission of unscheduled packets from the unscheduled traffic queue during the time intervals between transmissions of scheduled packets. | 12-04-2008 |
20080298381 | APPARATUS FOR QUEUE MANAGEMENT OF A GLOBAL LINK CONTROL BYTE IN AN INPUT/OUTPUT SUBSYSTEM - Apparatus for communicating global link control words (LCW) between chips. A queue stores LCWs and has an input for receiving an LCW from a previous chip, and an output for outputting a stored LCW to a subsequent chip. A management circuit compares an incoming LCW with a previously stored LCW, and a combiner circuit combines the incoming LCW with a previously stored LCW and stores the combined LCW in the queue when the management circuit determines that the incoming LCW can be combined with the previously stored LCW. | 12-04-2008 |
20080304503 | TRAFFIC MANAGER AND METHOD FOR PERFORMING ACTIVE QUEUE MANAGEMENT OF DISCARD-ELIGIBLE TRAFFIC - A traffic manager and a method are described herein that are capable of performing an active queue management of discard-eligible traffic for a shared memory device (with a per-CoS switching fabric) that provides fair per-class backpressure indications. | 12-11-2008 |
20080310439 | COMMUNICATING PRIORITIZED MESSAGES TO A DESTINATION QUEUE FROM MULTIPLE SOURCE QUEUES USING SOURCE-QUEUE-SPECIFIC PRIORITY VALUES - There is disclosed a method, apparatus and computer program for communicating messages between a first messaging system and a second messaging system. The messaging system comprises a set of source queues with each source queue owning messages retrievable in priority order. It is determined that a message should be transferred from the first messaging system to the second messaging system. A source queue is selected which contains a message having at least an equal highest priority when compared with messages on the source queues. A message having the at least equal highest priority from the selected source queue of the first messaging system is then transferred to a target queue at the second messaging system. | 12-18-2008 |
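The selection step (pick the source queue whose head message has at least equal highest priority) can be sketched in one small function. This is an illustration under an assumed convention that a lower number means higher priority; the function name and queue representation are invented.

```python
def pick_source_queue(source_queues):
    """Choose the source queue whose head message carries the highest
    priority among all non-empty source queues. Each queue holds its
    messages in priority order, so only the head needs inspecting."""
    best = None
    for name, queue in source_queues.items():
        if queue and (best is None or queue[0] < source_queues[best][0]):
            best = name
    return best
```

Transferring then pops the head of the chosen queue and sends it to the target queue on the second messaging system; repeating the selection drains the source queues in global priority order.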
20080317057 | Methods for Processing Two Data Frames With Scalable Data Utilization - The present invention provides a framework for the processing of blocks between two data frames, with particular application to motion estimation calculations, in which a balance among the performance of a motion search algorithm, the size of on-chip memory to store the reference data, and the required data transfer bandwidth between on-chip and external memory can be struck in a scalable manner, so that the total system cost of a hierarchical embedded memory structure can be optimized flexibly. The scope of the present invention is not limited to digital video encoding, in which the motion vector is part of the information to be encoded, but is applicable to any other implementation in which the difference between any two data frames is to be computed. | 12-25-2008 |
20090003369 | Method and receiver for determining a jitter buffer level - The invention relates to a method and a receiver having control logic means for determining a target packet level of a jitter buffer adapted to receive packets with digitized signal samples, which packets are subject to delay jitter, from a packet data network. According to the invention, the jitter buffer is made adaptive to current network conditions, i.e., the nature and magnitude of the jitter observed by the receiver, by collecting statistical measures that describe these conditions. The target buffer level is determined with regard to the effect of packet losses in terms of duration of the discontinued playback of the true signal. This effect is derived from statistical measures of the network conditions as perceived by the receiving side and as reflected by a probability mass function which is continuously updated with packet inter-arrival times. The target buffer level is the result of minimization of a cost function which weights the internal buffer delay and an expected length of buffer underflow. | 01-01-2009 |
20090003370 | SYSTEM AND METHOD FOR IMPROVED PERFORMANCE BY A DVB-H RECEIVER - A system and method for improved performance by a DVB-H receiver is described that allows good Internet Protocol (IP) packets in a Multiprotocol Encapsulation-Forward Error Correction (MPE-FEC) frame to be salvaged even when there are other IP packets in the frame that may have bytes in error after the performance of MPE-FEC operations. To achieve this, the system and method provides a means for ascertaining where IP packets loaded into a memory begin and end in a manner that can be relied upon even when individual bytes of the IP packets, such as certain bytes of the IP packet header used to determine total packet length, may be in error. | 01-01-2009 |
20090003371 | METHOD FOR TRANSMITTING PACKET AND NETWORK SYSTEM THEREOF - A method for transmitting packets and a network system thereof are provided. In the present invention, each packet entering the network system has an assigning tag added to indicate the arrival time of the packet, and at least two queues in a node of the network system are used for respectively sorting the local packets of the node and the relayed packets of the preceding node. The order in which packets are transmitted can be decided by comparing the assigning tags of the two packets positioned at first order in the different queues. Therefore, a First-In First-Out (FIFO) condition is satisfied in the network system, and the sequence for transmitting packets is arbitrated fairly. | 01-01-2009 |
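The head-of-queue comparison between the local queue and the relay queue can be sketched directly. Packets are modeled here as `(arrival_tag, payload)` tuples; that representation and the function name are assumptions for the example.

```python
from collections import deque

def next_packet(local_q, relay_q):
    """FIFO arbitration between the local and relayed queues: transmit
    whichever head packet carries the earlier arrival tag."""
    if local_q and relay_q:
        src = local_q if local_q[0][0] <= relay_q[0][0] else relay_q
    else:
        src = local_q or relay_q
    return src.popleft() if src else None
```

Because each queue is already in arrival order, comparing only the two heads is enough to emit all packets in global arrival order.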
20090016370 | Creating a Telecommunications Channel from Multiple Channels that Have Differing Signal-Quality Guarantees - A technique is disclosed that enables the adaptive pooling of M transmission paths that offer a first signal-quality guarantee, or no guarantee at all, with N transmission paths that offer a second signal-quality guarantee. Through this adaptive pooling, a telecommunications channel is created that meets the quality of service or waveform quality required for a packet stream being transmitted, while not excessively exceeding the required quality. The technique adaptively recaptures any excess signal quality from one path and uses it to boost the quality of an inferior path. A node of the illustrative embodiment selects the paths to handle a current segment of source packets, based on one or more parameters that are disclosed herein. The node adapts to changing conditions by adjusting the transmission characteristics for each successive segment of packets from the source packet stream. | 01-15-2009 |
20090022171 | INTERRUPT COALESCING SCHEME FOR HIGH THROUGHPUT TCP OFFLOAD ENGINE - An interrupt coalescing scheme for a high-throughput TCP offload engine, and a method thereof, are disclosed. An interrupt event descriptor queue is used: the TCP offload engine saves TCP connection information and interrupt information in an interrupt event descriptor for each interrupt, while the software processes interrupts by reading interrupt event descriptors asynchronously. The software may process multiple interrupt event descriptors in one interrupt context. | 01-22-2009 |
20090028171 | FIFO BUFFER WITH ADAPTIVE THRESHOLD LEVEL - A system comprising a FIFO data buffer having a programmable threshold level, which is initially set to a worst case scenario level, so that the FIFO data buffer does not empty of data. The system also comprises a hardware device which is configured to adjust the threshold level in the FIFO data buffer to a level equal to the current threshold level minus the amount of remaining data in the FIFO data buffer at the time new data enters the FIFO data buffer. The hardware device is also configured to adjust the threshold level to the initial threshold level if the FIFO data buffer underflows. The hardware device may be coupled to the FIFO data buffer, implemented in the FIFO data buffer, or implemented in the display subsystem. The system may be implemented in a mobile communications device. | 01-29-2009 |
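The threshold rule stated in this abstract (new threshold = current threshold minus the data remaining in the FIFO when new data arrives; reset to the worst-case initial value on underflow) can be modeled in a few lines. The class and member names are invented for the sketch.

```python
class AdaptiveFifo:
    """Sketch of the adaptive threshold rule: the more data still
    buffered when a refill arrives, the lower the threshold can safely
    go; an underflow snaps it back to the worst-case initial value."""

    def __init__(self, initial_threshold):
        self.initial = initial_threshold
        self.threshold = initial_threshold
        self.level = 0    # amount of data currently in the FIFO

    def write(self, amount):
        # Adjust the threshold before accepting the new data.
        self.threshold = max(0, self.threshold - self.level)
        self.level += amount

    def read(self, amount):
        if amount > self.level:
            self.level = 0
            self.threshold = self.initial   # underflow: back to worst case
        else:
            self.level -= amount
```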
20090034548 | Hardware Queue Management with Distributed Linking Information - A network element including a processor with logic for managing packet queues by way of packet descriptor index values that are mapped to addresses in the memory space of the packet descriptors. A linking memory is implemented in the same integrated circuit as the processor, and has entries corresponding to the descriptor index values. Each entry can store the next descriptor index in a packet queue, to form a linked list of packet descriptors. Queue manager logic receives push and pop requests from host applications, and updates the linking memory to maintain the queue. The queue manager logic also maintains a queue control register for each queue, including head and tail descriptor index values. | 02-05-2009 |
20090034549 | Managing Free Packet Descriptors in Packet-Based Communications - A network element including a processor with logic for managing packet queues, including a queue of free packet descriptors. Upon the transmission of a packet by a host application, the packet descriptor for the transmitted packet is added to the free packet descriptor queue. If the new free packet descriptor resides in on-chip memory, relative to the queue manager logic, it is added to the head of the free packet descriptor queue; if it resides in external memory, it is added to the tail. When a host application requests a packet descriptor to be associated with valid data to be added to an active packet queue, the queue manager logic pops the packet descriptor currently at the head of the free descriptor queue. In this manner, packet descriptors in on-chip memory are preferentially used relative to packet descriptors in external memory, improving system performance. | 02-05-2009 |
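The two entries above describe the same linking-memory structure: a per-descriptor link array forms the queues, and the free-descriptor queue pushes on-chip descriptors at the head and external ones at the tail so head pops prefer on-chip memory. A minimal sketch under those assumptions (names are illustrative, not from the patents):

```python
class FreeDescriptorQueue:
    """Free-descriptor queue over a linking memory indexed by descriptor index.
    link[i] holds the next descriptor index in the list (None terminates)."""

    def __init__(self, num_descriptors, is_on_chip):
        self.link = [None] * num_descriptors
        self.is_on_chip = is_on_chip  # predicate: descriptor index -> bool
        self.head = None
        self.tail = None

    def free(self, idx):
        if self.head is None:          # queue empty
            self.link[idx] = None
            self.head = self.tail = idx
        elif self.is_on_chip(idx):     # on-chip descriptor: push at head
            self.link[idx] = self.head
            self.head = idx
        else:                          # external-memory descriptor: append at tail
            self.link[idx] = None
            self.link[self.tail] = idx
            self.tail = idx

    def allocate(self):
        # Pop from the head, so on-chip descriptors are handed out first.
        idx = self.head
        if idx is not None:
            self.head = self.link[idx]
            if self.head is None:
                self.tail = None
        return idx
```

With descriptors 0..3 designated on-chip, freeing 5, then 2, then 6 yields allocations in the order 2 (on-chip), 5, 6.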
20090034550 | METHOD AND SYSTEM FOR ROUTING FIBRE CHANNEL FRAMES - A method and system for transmitting frames using a fibre channel switch element is provided. The switch element includes a port having a receive segment and a transmit segment, wherein the fibre channel switch element determines if a port link has been reset; determines if a flush state has been enabled for the port; and removes frames from a buffer, if the flush state has been enabled for the port. For a flush state operation, frames are removed from a receive buffer of the fibre channel port as if it is a typical fibre channel frame transfer. The removed frames are sent to a processor for analysis. The method also includes, setting a control bit for activating frame removal from the transmit buffer; and diverting frames that are waiting in the transmit buffer and have not been able to move from the transmit buffer. | 02-05-2009 |
20090034551 | SYSTEM AND METHOD FOR RECEIVE QUEUE PROVISIONING - Systems and methods that provide receive queue provisioning are provided. In one embodiment, a communications system may include, for example, a first queue pair (QP), a second QP, a general pool and a resource manager. The first QP may be associated with a first connection and with at least one of a first limit value and an out-of-order threshold. The first QP may include, for example, a first send queue (SQ). The second QP may be associated with a second connection and with a second limit value. The second QP may include, for example, a second SQ. The general pool may include, for example, a shared receive queue (SRQ) that is shared by the first QP and the second QP. The resource manager may provide, for example, provisioning for the SRQ and may manage the first limit value and the second limit value. | 02-05-2009 |
20090046734 | Method for Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network - The invention provides an enhanced datagram packet switched computer network. The invention processes network datagram packets in network devices as separate flows, based on the source-destination address pair in the datagram packet. As a result, the network can control and manage each flow of datagrams in a segregated fashion. The processing steps that can be specified for each flow include traffic management, flow control, packet forwarding, access control, and other network management functions. The ability to control network traffic on a per flow basis allows for the efficient handling of a wide range and a large variety of network traffic, as is typical in large-scale computer networks, including video and multimedia traffic. The amount of buffer resources and bandwidth resources assigned to each flow can be individually controlled by network management. In the dynamic operation of the network, these resources can be varied—based on actual network traffic loading and congestion encountered. The invention also teaches an enhanced datagram packet switched computer network which can selectively control flows of datagram packets entering the network and traveling between network nodes. This new network access control method also interoperates with existing media access control protocols, such as used in the Ethernet or 802.3 local area network. An aspect of the invention is that it does not require any changes to existing network protocols or network applications. | 02-19-2009 |
20090046735 | METHOD FOR PROVIDING PRIORITIZED DATA MOVEMENT BETWEEN ENDPOINTS CONNECTED BY MULTIPLE LOGICAL CHANNELS - A data network and a method for providing prioritized data movement between endpoints connected by multiple logical channels. Such a data network may include a first node comprising a first plurality of first-in, first-out (FIFO) queues arranged for high priority to low priority data movement operations; and a second node operatively connected to the first node by multiple control and data channels, and comprising a second plurality of FIFO queues arranged in correspondence with the first plurality of FIFO queues for high priority to low priority data movement operations via the multiple control and data channels; wherein an I/O transaction is accomplished by one or more control channels and data channels created between the first node and the second node for moving commands and data for the I/O transaction during the data movement operations, in the order from high priority to low priority. | 02-19-2009 |
20090059941 | DYNAMIC DATA FILTERING - Networks, systems and methods for dynamically filtering data are disclosed. Streams of data may be buffered or stored in a queue when inbound rates exceed distribution or publication limitations. Inclusive messages in the queue may be removed, replaced or aggregated, reducing the number of messages to be published when distribution limitations are no longer exceeded. | 03-05-2009 |
20090073999 | Adaptive Low Latency Receive Queues - A receive queue provided in a computer system holds work completion information and message data together. An InfiniBand hardware adapter sends a single CQE+message data to the computer system that includes the completion information and data. This information is sufficient for the computer system to receive and process the data message, thereby providing a highly scalable low latency receiving mechanism. | 03-19-2009 |
20090080451 | PRIORITY SCHEDULING AND ADMISSION CONTROL IN A COMMUNICATION NETWORK - Techniques for performing priority scheduling and admission control in a communication network are described. In an aspect, data flows may be prioritized, and packets for data flows with progressively higher priority levels may be placed at points progressively closer to the head of a queue and may then experience progressively shorter queuing delays. In another aspect, a packet for a terminal may be transferred from a source cell to a target cell due to handoff and may be credited for the amount of time the packet has already waited in a queue at the source cell. In yet another aspect, all priority and non-priority data flows may be admitted if cell loading is light, only priority data flows may be admitted if the cell loading is heavy, and all priority data flows and certain non-priority data flows may be admitted if the cell loading is moderate. | 03-26-2009 |
20090086746 | DIRECT MESSAGING IN DISTRIBUTED MEMORY SYSTEMS - A system and method for sending a cache line of data in a single message is described. An instruction issued by a processor in a multiprocessor system includes an address of a message payload and an address of a destination. Each address is translated to a physical address and sent to a scalability interface associated with the processor and in communication with a system interconnect. Upon translation the payload of the instruction is written to the scalability interface and thereafter communicated to the destination. According to one embodiment, the translation of the payload address is accomplished by the processor while in another embodiment the translation occurs at the scalability interface. | 04-02-2009 |
20090086747 | Queuing Method - A method of queuing data packets, said data packets comprising data packets of a first packet type and data packets of a second packet type. The method comprises grouping received packets of said first and second packet types into an ordered series of groups, each group comprising at least one packet, maintaining a group counter indicating the number of groups at the beginning of the series of groups comprising only packets of the second packet type, and transmitting a packet. A packet of the second packet type is available for transmission if but only if the group counter is indicative that the number of groups at the beginning of the series of groups comprising only packets of the second packet type is greater than zero. | 04-02-2009 |
20090097493 | QUEUING MIXED MESSAGES FOR CONFIGURABLE SEARCHING - The present invention provides a method and an apparatus for forming a queue that enables a real time search of a first and a second plurality of messages which enter the queue in a linear order. The method comprises providing a sequential data structure to populate the queue with the first and second plurality of messages. The method comprises using the sequential data structure to selectively configure the queue for traversing in a search order different than the linear order in which the first and second plurality of messages reach the queue. | 04-16-2009 |
20090097494 | PACKET FORWARDING METHOD AND DEVICE - A packet forwarding mechanism using a packet map is disclosed. The method includes the packet map storing a packet forwarding information of each packet, where the packet map uses a single bit to indicate whether a packet is forwarding through a specific output port. In this way, the packet forwarding information can be stored in a very simple form such that less memory space is required for storing the packet forwarding information. | 04-16-2009 |
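The single-bit-per-output-port encoding described above is a plain bitmap; one word per packet records its forwarding set. A minimal sketch (the port count and helper names are assumptions):

```python
NUM_PORTS = 8

def make_packet_map(ports):
    """Encode the set of output ports a packet is forwarded through as a bitmap,
    one bit per port."""
    m = 0
    for p in ports:
        m |= 1 << p
    return m

def forwards_to(packet_map, port):
    """True if the bit for `port` is set, i.e. the packet goes out that port."""
    return bool(packet_map >> port & 1)

def output_ports(packet_map):
    return [p for p in range(NUM_PORTS) if forwards_to(packet_map, p)]
```

For ports {0, 3, 5} the map is the single byte 0b101001, illustrating how little memory the forwarding information needs.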
20090109988 | Video Decoder with an Adjustable Video Clock - A method, an apparatus, and logic encoded in a computer-readable medium to carry out a method. The method includes receiving packets containing compressed video information, storing the received packets in a buffer memory, timestamping the received packets according to an adjustable clock, and removing packets from the buffer for decoding and playout of the video information, the removing according to playback order and at a time determined by the adjustable clock. The method includes adjusting the adjustable clock from time to time according to a measure of the amount of time that the packets reside in the buffer memory, such that time latency caused by the buffer memory is limited and an overrun or an underrun of the buffer memory is unlikely. | 04-30-2009 |
20090116503 | METHODS AND SYSTEMS FOR PERFORMING TCP THROTTLE - The present invention relates to systems and methods of accelerating network traffic. The method includes receiving a plurality of network packets and setting a threshold for a buffer. The threshold indicates a low water mark for the buffer. The method further includes storing the plurality of network packets in the buffer at least until the buffer's capacity is full, removing packets from the buffer, and transmitting the removed packets via a downstream link to an associated destination. Furthermore, the method includes that in response to removing packets from the buffer such that the buffer's capacity falls below the threshold, receiving additional packets and storing the additional packets at least until the buffer's capacity is full. | 05-07-2009 |
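The buffering discipline above is a fill-then-drain hysteresis: accept packets until the buffer is full, stop, and resume accepting only once draining brings occupancy below the low-water mark. A minimal sketch of that behavior (class and method names are illustrative assumptions):

```python
class ThrottleBuffer:
    """Buffer with a low-water mark: fills to capacity, then refuses new
    packets until occupancy drops below the threshold."""

    def __init__(self, capacity, low_water):
        self.capacity = capacity
        self.low_water = low_water   # threshold below which receiving resumes
        self.packets = []
        self.accepting = True

    def offer(self, packet):
        if self.accepting and len(self.packets) < self.capacity:
            self.packets.append(packet)
            if len(self.packets) == self.capacity:
                self.accepting = False   # full: stop receiving
            return True
        return False

    def transmit_one(self):
        """Remove one packet for transmission on the downstream link."""
        if not self.packets:
            return None
        pkt = self.packets.pop(0)
        if len(self.packets) < self.low_water:
            self.accepting = True        # below low-water mark: receive again
        return pkt
```

With capacity 3 and a low-water mark of 1, the buffer refuses new packets after filling and only re-opens once it has drained below one packet.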
20090116504 | PACKET PROCESSING APPARATUS FOR REALIZING WIRE-SPEED, AND METHOD THEREOF - Provided are a packet processing apparatus for realizing wire-speed operation, and a method thereof. The packet processing apparatus realizes wire-speed operation by having an inputted packet processed by the packet processor of another packet processing apparatus instead of processing the inputted packet itself, diverting the packet around its own packet processor onto a detour path. The apparatus includes: a packet classifier for classifying and storing the inputted packet in a multi-queue based on priority; a queue manager, comprising the multi-queue, for determining a detour packet among the packets stored in the multi-queue and marking it as a detour packet; and a packet scheduler for transmitting the packet designated as the detour packet to the detour path. The apparatus is used in a packet communication system. | 05-07-2009 |
20090122804 | MEDIA ACCESS CONTROL APPARATUS AND METHOD FOR GUARANTEEING QUALITY OF SERVICE IN WIRELESS LAN - A media access control (MAC) apparatus and corresponding methods for guaranteeing quality-of-service in a wireless local area network (LAN) are presented. The MAC method includes the steps of extracting, performing, determining, a first transmitting step, and a second transmitting step. The extracting step includes extracting a user priority from a frame received from an upper layer and separately storing a voice frame and a non-voice frame according to an access category (AC). The performing step includes independently performing backoff operations for the voice frame and the non-voice frame. The determining step includes determining whether the backoff operations for the voice frame and the non-voice frame have simultaneously ended. The first transmitting step includes transmitting the voice frame having a higher priority first and performing the backoff operation for the non-voice frame if the backoff operations have simultaneously ended. The second transmitting step includes transmitting a frame whose backoff operation ends if the backoff operations have not simultaneously ended. | 05-14-2009 |
20090129400 | PARSING AND FLAGGING DATA ON A NETWORK - Described are computer-based methods and apparatuses, including computer program products, for parsing, flagging, and/or reconstructing data on a network. Data packets associated with user requests are distributed among a plurality of data centers for processing. The data packets are captured at the data centers for fraud detection. The captured data packets are preprocessed at the data center. The preprocessing includes disregarding data packets that are not applicable to fraud detection. The preprocessing includes indicating if data packets are applicable to fraud detection. The indicating of the applicable data packets includes parsing the data packets using particular rules optimized for fraud detection. The data packets are processed at each data center to reconstruct part of the data associated with a user. The processing of the data packets includes reconstructing the data packets based on customer information from network information and/or cookie information. The reconstructed data packets are transmitted to a central processing center (e.g., central data center). The central processing center receives reconstructed data packets from the plurality of data centers and unifies the reconstructed data packets into data associated with a user. | 05-21-2009 |
20090141731 | BANDWIDTH ADMISSION CONTROL ON LINK AGGREGATION GROUPS - A device may receive a bandwidth (B) available on each link of a link aggregation group (LAG) that includes a number (N) of links, assign a primary LAG link and a redundant LAG link to a virtual local area network (VLAN), set an available bandwidth for primary link booking to (B−B/N), and set an available bandwidth for redundant link booking to (B/N). | 06-04-2009 |
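The booking limits in the abstract above are simple arithmetic: with per-link bandwidth B on an N-link LAG, primary-link admission is capped at B − B/N and redundant-link admission at B/N, so a failed primary's traffic fits on its redundant link. A sketch of that calculation (the function name is an assumption):

```python
def lag_admission_limits(link_bandwidth_b, num_links_n):
    """Per the abstract: available bandwidth for primary link booking is
    B - B/N; available bandwidth for redundant link booking is B/N."""
    redundant = link_bandwidth_b / num_links_n
    primary = link_bandwidth_b - redundant
    return primary, redundant
```

For example, 1 Gb/s links in a 4-link LAG give 750 Mb/s of primary booking and 250 Mb/s of redundant booking per link.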
20090141732 | METHODS AND APPARATUS FOR DIFFERENTIATED SERVICES OVER A PACKET-BASED NETWORK - Methods and apparatus for the provision of differentiated services in a packet-based network may be provided in a communications device such as a switch or router having input ports and output ports. Each output port is associated with a set of configurable queues that store incoming data packets from one or more input ports. A scheduling mechanism retrieves data packets from individual queues in accord with a specified configuration, providing both pure priority and proportionate de-queuing to achieve a guaranteed QoS over a connectionless network. | 06-04-2009 |
20090154483 | A 3-LEVEL QUEUING SCHEDULER SUPPORTING FLEXIBLE CONFIGURATION AND ETHERCHANNEL - In one embodiment, a scheduler for a queue hierarchy only accesses sub-groups of bucket nodes in order to determine the best eligible queue bucket to transmit next. Etherchannel address hashing is performed after scheduling so that an Etherchannel queue including a single queue in the hierarchy is implemented to guarantee quality of service. | 06-18-2009 |
20090161684 | System and Method for Dynamically Allocating Buffers Based on Priority Levels - Methods and systems consistent with the present invention provide dynamic buffer allocation to a plurality of queues of differing priority levels. Each queue is allocated a fixed minimum number of buffers that will not be de-allocated during buffer reassignment. The rest of the buffers are intelligently and dynamically assigned to each queue depending on its current need. The system then monitors and learns the incoming traffic pattern and resulting drops in each queue due to traffic bursts. Based on this information, the system readjusts the allocation of buffers to each traffic class. If a higher priority queue does not need the buffers, it gradually relinquishes them. These buffers are then assigned to other queues based on the input traffic pattern and resultant drops. These buffers are aggressively reclaimed and reassigned to higher priority queues when needed. In this way, methods and systems consistent with the present invention dynamically balance the requirements of the higher priority queues against optimal allocation. | 06-25-2009 |
20090161685 | METHOD, SYSTEM AND NODE FOR BACKPRESSURE IN MULTISTAGE SWITCHING NETWORK - The present invention provides a backpressure method and system for a multistage switching network, and an intermediate stage switching node. The method includes: (i) the intermediate stage switching node receives first backpressure information; and (ii) the intermediate stage switching node sends at least part of the first backpressure information to an upper stage switching node, wherein the intermediate stage switching node sends no response to at least part of the first backpressure information. | 06-25-2009 |
20090168790 | Dynamically adjusted credit based round robin scheduler - A credit based queue scheduler dynamically adjusts credits depending upon at least a moving average of incoming packet size to alleviate the impact of traffic burstiness and packet size variation, and increase the performance of the scheduler by lowering latency and jitter. For the case when no service differentiation is required, the credit is adjusted by computing a weighted moving average of incoming packets for the entire scheduler. For the case when differentiation is required, the credit for each queue is determined by a product of a sum of credits given to all queues and priority levels of each queue. | 07-02-2009 |
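The credit adjustment above can be sketched directly: the scheduler tracks a weighted moving average of incoming packet sizes, and when differentiation is required each queue's credit is the product of the total credit and that queue's priority weight. A minimal sketch using an exponentially weighted moving average (the smoothing factor, names, and the exact weighting are assumptions, not from the patent):

```python
class AdaptiveCreditScheduler:
    """Credit-based round-robin credits that track incoming packet size."""

    def __init__(self, queue_weights, alpha=0.25):
        # queue_weights: priority share of the total credit per queue (sums to 1).
        self.queue_weights = queue_weights
        self.alpha = alpha              # EWMA smoothing factor (assumed)
        self.avg_packet_size = 0.0

    def on_packet_arrival(self, size):
        # Weighted moving average of incoming packet size for the whole scheduler.
        if self.avg_packet_size == 0.0:
            self.avg_packet_size = float(size)
        else:
            self.avg_packet_size = ((1 - self.alpha) * self.avg_packet_size
                                    + self.alpha * size)

    def credit_for(self, queue):
        # Per-queue credit = (sum of credits given to all queues) x queue weight.
        total = self.avg_packet_size * len(self.queue_weights)
        return total * self.queue_weights[queue]
```

As bursts of larger packets arrive, the average, and hence each queue's per-round credit, rises with them, which is the mechanism the abstract credits for reduced latency and jitter.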
20090168791 | TRANSMISSION DEVICE AND RECEPTION DEVICE - A transmission device ( | 07-02-2009 |
20090168792 | Method and Apparatus for Data Traffic Smoothing - A method and device for data traffic smoothing are provided. Arriving data packets are buffer-stored and passed on by taking account of an overhead of management information which is attached to the data packet in a protocol conversion process, which is carried out later. This protocol conversion process is carried out at a later time, for example by a DSL modem. The data transmission rate measured from the point of view of the network element carrying out the data traffic smoothing is not the criterion to be adjusted, but the data transmission rate after protocol conversion. A quality of service both for low and high data packet lengths is ensured, and the bandwidth of a DSL connection can therefore be exploited fully both for the VOIP and for data transmission. | 07-02-2009 |
20090168793 | Prioritising Data Transmission - Transmitting from a mobile terminal to a telecommunication network data stored in a plurality of queues, each queue having a respective transmission priority, includes setting the data in each of said queues to be either primary data or secondary data, or a combination of primary data and secondary data. The data may be transmitted from the queues in an order in dependence upon the priority of the queue and whether the data in that queue are primary data or secondary data. Resources for data transmission may be allocated such that the primary data of each of said queues are transmitted at a minimum predetermined rate and such that the secondary data of each of said queues are transmitted at a maximum predetermined rate, greater than said minimum predetermined rate. | 07-02-2009 |
20090175286 | SWITCHING METHOD - A method of switching data packets between an input and a plurality of outputs of a switching device. The switching device comprises a memory arranged to store a plurality of data structures, each data structure being associated with one of said outputs. The method comprises receiving a first data packet at said input, and storing said first data packet in a data structure associated with an output from which said data packet is to be transmitted. If said first data packet is intended to be transmitted from a plurality of said outputs, indication data is stored in each data structure associated with an output from which said first data packet is to be transmitted, but said first data packet is stored in only one of said data structures. The first data packet is transmitted from said data structure to the or each output from which the first data packet is to be transmitted. | 07-09-2009 |
20090190604 | Method and System for Dynamically Adjusting Acknowledgement Filtering for High-Latency Environments - A system and method for adjusting the filtering of acknowledgments (ACKS) in a TCP environment. State variables are used to keep track of, first, the number of times an ACK has been promoted (a variable which can be stored on a per-packet basis along with the session ID), and second, the number of times an ACK is allowed to be promoted (a limit which can be global, or can be stored per-session). | 07-30-2009 |
20090190605 | DYNAMIC COLOR THRESHOLD IN A QUEUE - A network device for dynamically allocating memory locations to a plurality of queues. The network device includes means for determining an amount of memory buffers that is associated with a port, for assigning a fixed allocation of memory buffers to each of a plurality of queues associated with the port and for sharing remaining memory buffers among the plurality of queues. The remaining memory buffers are used by each of the plurality of queues after the fixed allocation of memory buffers assigned to the queue is used. The network device also includes means for setting a limit threshold for each of the plurality of queues. The limit threshold determines how much of the remaining memory buffers may be used by each of the plurality of queues. The network device further includes means for defining at least one color threshold for packets including a specified color marking and for defining a virtual maximum threshold. When one of the plurality of queues requests access to the remaining memory buffers and the amount of remaining memory buffers is less than the limit threshold for the queue, the virtual maximum threshold is defined for the queue. The virtual maximum threshold replaces the limit threshold, and packets associated with the at least one color threshold are processed in proportion with other color thresholds based on the virtual maximum threshold ceiling. | 07-30-2009 |
20090201942 | METHOD AND APPARATUS FOR MARKING AND SCHEDULING PACKETS FOR TRANSMISSION - A method and system for profile-marking and scheduling of packets are disclosed. Using a dual-rate scheduler, the profile state of a packet being scheduled for transmission by a flow traffic descriptor is determined based on the traffic rate of the flow traffic descriptor, which is associated with the queue that the packet belongs to. The profile state of the packet is marked prior to the transmission of the packet. | 08-13-2009 |
20090207850 | System and method for data packet transmission and reception - A system transmits a data packet from a transmitting apparatus to a receiving apparatus. The receiving apparatus includes a receive buffer, and a size specifying information transmitting unit that transmits size specifying information to the transmitting apparatus. The transmitting apparatus includes a transmit buffer, a credit storage unit that stores, as a credit, a value corresponding to a total size of all data packets stored in the receive buffer, a credit adding unit that adds a credit to the stored credit on transmitting a data packet, a credit subtracting unit that specifies a size of a read-out data packet on receiving the size specifying information, subtracts a credit corresponding to the specified size from a stored credit, and a transmission controlling unit that controls data packet transmission based on a credit stored in the credit storage unit. | 08-20-2009 |
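The sender-side accounting above is the familiar credit scheme: the stored credit mirrors the total size of data resident in the receive buffer, growing on each transmission and shrinking when size-specifying information reports a read-out packet. A minimal sketch of the transmitting apparatus's bookkeeping (names are illustrative assumptions):

```python
class CreditSender:
    """Sender-side credit accounting: `credit` tracks the total size of all
    data packets believed to be stored in the receiver's buffer."""

    def __init__(self, receive_buffer_size):
        self.receive_buffer_size = receive_buffer_size
        self.credit = 0

    def can_send(self, packet_size):
        # Transmission is allowed only if the packet still fits in the
        # receiver's buffer according to the stored credit.
        return self.credit + packet_size <= self.receive_buffer_size

    def send(self, packet_size):
        if not self.can_send(packet_size):
            return False
        self.credit += packet_size   # credit added on transmitting a packet
        return True

    def on_size_specifying_info(self, read_out_size):
        # Receiver reported a packet of this size was read out of its buffer.
        self.credit -= read_out_size
```

With a 100-byte receive buffer, a 60-byte send blocks a subsequent 50-byte send until the receiver's size-specifying information releases the credit.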
20090213863 | ROUTER, NETWORK COMPRISING A ROUTER, METHOD FOR ROUTING DATA IN A NETWORK - A router for a network is arranged for guiding data traffic from one of a first plurality Ni of inputs (I) to one or more of a second plurality No of outputs (O). The inputs each have a third plurality m of input queues for buffering data. The third plurality m is greater than 1, but less than the second plurality No. The router comprises a first selection facility for writing data received at an input to a selected input queue of said input, and a second selection facility for providing data from an input queue to a selected output. Pairs of packets having mutually different destinations Oj and Ok are arranged in the same queue for a total number of Nj,k inputs, characterized in that Nj,k | 08-27-2009 |
20090213864 | INBOUND BLOCKING OF DATA RECEIVED FROM A LAN IN A MULTI-PROCESSOR VIRTUALIZATION ENVIRONMENT - An incoming LAN traffic management system comprising: an I/O adapter configured to receive incoming packets from an Ethernet; a plurality of hosts coupled to the I/O adapter and each having a host buffer; a data router configured to block information received by the I/O adapter into memory locations from an SBAL associated with at least one of the plurality of hosts and in accordance with blocking parameters for the at least one of the plurality of hosts, the data router including an expiration engine configured to expire the SBAL before it is full if at least one predetermined threshold is exceeded. | 08-27-2009 |
20090213865 | Techniques for channel access and transmit queue selection - Various embodiments are disclosed for techniques to perform channel access decisions and to select a transmit queue. These decisions may be performed, for example, based upon the age and number of packets in a queue. These techniques may allow a node to improve the length of data bursts transmitted by the node, although the invention is not limited thereto. | 08-27-2009 |
20090219942 | Transmission of Data Packets of Different Priority Levels Using Pre-Emption - A method for transmitting data packets of at least two different priority levels via one or more bearer channels is described. The method comprises the steps of fragmenting a data packet into a plurality of corresponding code words, each code word comprising a sync code, with the sync code being adapted for indicating a priority level of the corresponding data packet, and of transmitting the code words via the one or more bearer channels. In case high priority code words corresponding to a high priority data packet arrive during transmission of low priority code words corresponding to a low priority data packet, the following steps are performed: interrupting transmission of low priority code words, transmitting the high priority code words corresponding to the high priority data packet, and resuming the transmission of the low priority code words via the one or more bearer channels. | 09-03-2009 |
20090238197 | Ethernet Virtualization Using Assisted Frame Correction - A method for Ethernet virtualization using assisted frame correction. The method comprises receiving at a host adapter data packets from a network, storing the received data packets in host memory, storing the received data packets in a hardware queue located on the host adapter, checking the data packets, setting a status indicator reflecting the status of the data packets based on results of the checking, and sending the status indicator to the host memory. | 09-24-2009 |
20090238198 | Packing Switching System and Method - A packing switching system and method is disclosed. A pipelined processor processes image pixels to generate a number of bit streams. Subsequently, a packing unit packs the bit streams into packets in a way that the bit stream or streams with minimum pixel order number are packed before other bit stream or streams. | 09-24-2009 |
20090238199 | WIDEBAND UPSTREAM PROTOCOL - Some embodiments of the present invention may include a method to stream packets into a queue for an upstream transmission, send multiple requests for upstream bandwidth to transmit data from the queue and receiving multiple grants to transmit data, and transmit data from the queue to the upstream as grants are received. Another embodiment may provide a network comprising a cable modem termination system (CMTS), and a cable modem wherein the cable modem may transmit data to the CMTS with a streaming protocol that sends multiple requests for upstream bandwidth to transmit data and receives multiple grants to transmit data, and transmits data to the CMTS as grants are received. | 09-24-2009 |
20090245271 | SIGNAL PACKET RELAY DEVICE - A packet-signaling relay device selectively relays incoming signal packets, and includes a random number generation unit which generates a random number, a delete threshold generation unit which generates a delete threshold based on an objective delete probability, a comparison unit which compares the random number and the delete threshold to generate a comparison result, and a delete determination unit which generates a delete/storage determination result based on the comparison result. The packet-signaling relay device further includes a packet receiving-and-storing unit which is responsive to the comparison result to selectively delete or store incoming signal packets, and a sending unit for sending the signal packets stored in the packet receiving-and-storing unit. | 10-01-2009 |
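The delete decision above reduces to comparing a drawn random number against a threshold derived from the objective delete probability, so that the long-run delete rate matches that probability. A minimal sketch (the random range and function names are assumptions, not from the patent):

```python
import random

RANDOM_MAX = 1 << 16   # range of the random number generation unit (assumed)

def delete_threshold(delete_probability):
    """Map an objective delete probability onto the random-number range."""
    return int(delete_probability * RANDOM_MAX)

def should_delete(threshold, rng=random):
    # Delete when the drawn random number falls below the threshold; over many
    # packets, the fraction deleted approaches the objective probability.
    return rng.randrange(RANDOM_MAX) < threshold
```

A packet is then either deleted or passed to the receiving-and-storing unit according to this determination result.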
20090257441 | PACKET FORWARDING APPARATUS AND METHOD FOR DISCARDING PACKETS - The packet forwarding apparatus of the present invention includes a packet buffer for temporarily storing packets to be forwarded, a timer for measuring the time of every predetermined unit period, a plurality of first queues corresponding to each of a plurality of address groups that form the packet buffer, a plurality of second queues that are provided corresponding to the property of the packets, a first controller for executing the writing of the packets, and a second controller for executing the discarding of the packets. According to this invention, through managing the first queues and the second queues, packets in the packet buffer can be discarded without the packets being read from the packet buffer. | 10-15-2009 |
20090262747 | RELAYING APPARATUS AND PACKET RELAYING METHOD - A packet storing unit stores relay instructions for received packets in different queues depending on priority and a VLAN number. DRR schedulers take out relay instructions from respective queues through a DRR technique. A priority control transmission scheduler transmits the packets to another apparatus according to the relay instructions in a descending order of priority. | 10-22-2009 |
20090262748 | RELAYING APPARATUS AND PACKET RELAYING APPARATUS - Each transmission port module includes a plurality of queues in association with combinations of a priority and a VLAN number. An accumulated-amount storage unit stores a total size of packets accumulated in queues associated with the same priority. A threshold storage unit stores a threshold of a total packet accumulated amount for each queue. When a packet is received, whether to discard the packet is determined based on a total packet accumulated amount stored in the accumulated-amount storage unit in association with a priority set for the packet and the threshold stored in the threshold storage unit in association with a storage-destination queue of the packet. | 10-22-2009 |
20090262749 | ESTABLISHING OPTIMAL LATENCY IN STREAMING DATA APPLICATIONS THAT USE DATA PACKETS - Embodiments of an apparatus and method are provided that can build latency in streaming applications that use data packets. In an embodiment, a system has an under-run forecasting mechanism, a statistics monitoring mechanism, and a playback queuing mechanism. The under-run forecasting mechanism determines an estimate of when a supply of data packets to convert will be exhausted. The statistics monitoring mechanism measures the arrival fluctuations of the supply of data packets. The playback queuing mechanism can build the latency. | 10-22-2009 |
20090268747 | COMMUNICATION APPARATUS - To provide a communication apparatus which is capable of voluntarily controlling, according to its own reception capability, data transmission traffic, while reducing the burden for the control. The communication apparatus includes: a communication unit ( | 10-29-2009 |
20090274161 | NETWORK ROUTING METHOD AND SYSTEM UTILIZING LABEL-SWITCHING TRAFFIC ENGINEERING QUEUES - The present invention is directed to a scalable packet-switched network routing method and system that utilizes modified traffic engineering mechanisms to prioritize tunnel traffic and non-tunnel traffic. The method includes the steps of receiving a request to establish a traffic engineering tunnel across the packet-switched network. Then, at a router traversed by the traffic engineering tunnel, a queue for packets carried inside the traffic engineering tunnel is created. Subsequently, bandwidth for the queue is reserved in accordance with the request to establish the traffic engineering tunnel, wherein the queue created for packets carried inside the traffic engineering tunnel is given priority over other traffic at the router and the reserved bandwidth for the queue can only be used by packets carried inside the traffic engineering tunnel. | 11-05-2009 |
20090279558 | Network routing apparatus for enhanced efficiency and monitoring capability - According to an embodiment of the invention, a network device such as a router or switch provides efficient data packet handling capability. The network device includes one or more input ports for receiving data packets to be routed, as well as one or more output ports for transmitting data packets. The network device includes an integrated port controller integrated circuit for routing packets. The integrated circuit includes an interface circuit, a received packets circuit, and a buffer manager circuit for receiving data packets from the received packets circuit, storing data packets in one or more buffers, and reading data packets from the one or more buffers. The integrated circuit also includes a rate shaper counter for storing credit for a traffic class, so that the integrated circuit can support input and/or output rate shaping. The integrated circuit may be associated with an IRAM, a CAM, a parameter memory configured to hold routing and/or switching parameters, which may be implemented as a PRAM, and an aging RAM, which stores aging information. The aging information may be used by a CPU coupled to the integrated circuit via a system interface circuit to remove entries from the CAM and/or the PRAM when an age count exceeds an age limit threshold for the entries. | 11-12-2009 |
20090279559 | Method and apparatus for aggregating input data streams - A method and apparatus aggregate a plurality of input data streams from first processors into one data stream for a second processor, the circuit and the first and second processors being provided on an electronic circuit substrate. The aggregation circuit includes (a) a plurality of ingress data ports, each ingress data port adapted to receive an input data stream from a corresponding first processor, each input data stream formed of ingress data packets, each ingress data packet including priority factors coded therein, (b) an aggregation module coupled to the ingress data ports, adapted to analyze and combine the plurality of input data steams into one aggregated data stream in response to the priority factors, (c) a memory coupled to the aggregation module, adapted to store analyzed data packets, and (d) an output data port coupled to the aggregation module, adapted to output the aggregated data stream to the second processor. | 11-12-2009 |
20090285228 | MULTI-STAGE MULTI-CORE PROCESSING OF NETWORK PACKETS - Techniques for multi-stage multi-core processing of network packets are described herein. In one embodiment, work units are received within a network element, each work unit representing a packet of different flows to be processed in multiple processing stages. Each work unit is identified by a work unit identifier that uniquely identifies a flow in which the associated packet belongs and a processing stage that the associated packet is to be processed. The work units are then dispatched to multiple core logic, such that packets of different flows can be processed concurrently by multiple core logic and packets of an identical flow in different processing stages can be processed concurrently by multiple core logic, in order to determine whether the packets should be transmitted to one or more application servers of a datacenter. Other methods and apparatuses are also described. | 11-19-2009 |
20090285229 | METHOD FOR SCHEDULING OF PACKETS IN TDMA CHANNELS - The method of the invention is implemented in an ad hoc communications network employing at least two-hop routing, wherein each node in the network employs an omnidirectional send/receive capability. Each node keeps a near neighbour database (NND) that is updated through the reception of messages. Each Othernode in the network whose message was received by Mynode in a time period T is a candidate for becoming a relay for transmitting Mynode's messages. The probability of an Othernode becoming a relay for Mynode is higher the more candidates the Othernode has in its NND. The probability of the Othernode becoming a relay is also higher the larger its distance from Mynode. | 11-19-2009 |
20090285230 | DELAY VARIATION BUFFER CONTROL TECHNIQUE - A delay variation buffer controller allowing proper cell delay variation control reflecting an actual network operation status is disclosed. A detector detects an empty status of the data buffer when data is read out from the data buffer at intervals of a controllable time period. A counter counts the number of contiguous times the empty status was detected. A proper time period is calculated depending on a value of the counter at a time when the empty status is not detected and the value of the counter is not zero. A timing corrector corrects the controllable time period to match the proper time period, and sets the controllable time period to a predetermined value when the empty status is not detected and the value of the counter is zero. | 11-19-2009 |
20090285231 | PRIORITY SCHEDULING USING PER-PRIORITY MEMORY STRUCTURES - A system schedules traffic flows on an output port using circular memory structures. The circular memory structures may include rate wheels that include a group of sequentially arranged slots. The traffic flows may be assigned to different rate wheels on a per-priority basis. | 11-19-2009 |
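A circular rate wheel of sequentially arranged slots might look like the following minimal sketch, with one wheel instance per priority level as the abstract suggests. All names and the tick-based interface are illustrative assumptions.

```python
class RateWheel:
    """Illustrative circular memory structure: a ring of sequentially
    arranged slots, advanced one slot per scheduling tick."""

    def __init__(self, num_slots):
        self.slots = [[] for _ in range(num_slots)]
        self.current = 0

    def schedule(self, flow, slots_ahead):
        # Place the flow in the slot that comes due `slots_ahead` ticks
        # from now, wrapping around the wheel.
        self.slots[(self.current + slots_ahead) % len(self.slots)].append(flow)

    def tick(self):
        # Serve and empty the current slot, then advance the wheel.
        due = self.slots[self.current]
        self.slots[self.current] = []
        self.current = (self.current + 1) % len(self.slots)
        return due
```

A scheduler holding one such wheel per priority would drain the higher-priority wheel's due slot before the lower-priority one on each tick.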
20090285232 | Service Interface for QoS-Driven HPNA Networks - An in-band signaling model media control (MC) terminal for an HPNA network includes a frame classification entity (FCE) and a frame scheduling entity (FSE) and provides end-to-end Quality of Service (QoS) by passing the QoS requirements from higher layers to the lower layers of the HPNA network. The FCE is located at an LLC sublayer of the MC terminal, and receives a data frame from a higher layer of the MC terminal that is part of a QoS stream. The FCE classifies the received data frame for a MAC sublayer of the MC terminal based on QoS information contained in the received data frame, and associates the classified data frame with a QoS stream queue corresponding to a classification of the data frame. The FSE is located at the MAC sublayer of the MC terminal, and schedules transmission of the data frame to a destination for the data frame based on a QoS requirement associated with the QoS stream. | 11-19-2009 |
20090290592 | RING BUFFER OPERATION METHOD AND SWITCHING DEVICE - A buffer operation method, for use with a buffer organized as a plurality of sections, two or more continuous ones of the sections being defined as a monitor block, the method including: receiving a data packet and dividing the same into a plurality of divisions; storing the divisions in a given one of the sections; moving, in the case where the given section is behind the monitor block, the monitor block so that a tail end thereof corresponds to the given section; monitoring whether the plurality of divisions required for reassembly of the packet are stored in the monitor block; and transferring, once all the required plurality of divisions are collected in the monitor block, the same from the buffer for subsequent reassembly of the packet. | 11-26-2009 |
20090296729 | DATA OUTPUT APPARATUS, COMMUNICATION APPARATUS AND SWITCH APPARATUS - A data communication apparatus has a data retainer, a retain state manager, a guaranteed bandwidth manager, a surplus bandwidth manager managing outputting of output data having a destination retained in the data retainer to an output line on a per-destination basis when the output data is outputted to the output line with the use of a surplus bandwidth that is a surplus over a sum of guaranteed bandwidths, and a scheduler scheduling outputting of data retained in the data retainer to the output line, based on results of management by the guaranteed bandwidth manager and the surplus bandwidth manager and a retain state managed by the retain state manager. The apparatus manages the bandwidth with improved accuracy at the time of communication using a surplus bandwidth. | 12-03-2009 |
20090304014 | METHOD AND APPARATUS FOR LOCAL ADAPTIVE PROVISIONING AT A NODE | 12-10-2009 |
20090304015 | Method and devices for installing packet filters in a data transmission - A method is described for associating a data packet (DP) with a packet bearer (PB) in a user equipment (UE | 12-10-2009 |
20090304016 | METHOD AND SYSTEM FOR EFFICIENTLY USING BUFFER SPACE - A method and system for transferring iSCSI protocol data units (“PDUs”) to a host system is provided. The system includes a host bus adapter with a TCP/IP offload engine. The HBA includes a direct memory access engine operationally coupled to a pool of small buffers and a pool of large buffers, wherein an incoming PDU size is compared to the size of a small buffer and, if the PDU fits in the small buffer, the PDU is placed in the small buffer. Otherwise, the incoming PDU size is compared to the large buffer size and, if the incoming PDU size is less than the large buffer size, the incoming PDU is placed in a large buffer. If the incoming PDU size is greater than a large buffer, the incoming PDU is placed in more than one large buffer and a pointer to a list of the large buffers storing the incoming PDU is placed in a small buffer. | 12-10-2009 |
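The size-comparison placement rules above reduce to a short decision function. This is a sketch of the selection logic only (the function name and the `(pool, count)` return shape are assumptions); the actual DMA placement and pointer-list bookkeeping are not modeled.

```python
import math

def place_pdu(pdu_size, small_buffer_size, large_buffer_size):
    """Return (buffer pool, buffer count) for an incoming PDU, following
    the size-comparison rules described in the abstract."""
    if pdu_size <= small_buffer_size:
        return ("small", 1)
    if pdu_size <= large_buffer_size:
        return ("large", 1)
    # PDU exceeds one large buffer: split it across several large buffers;
    # a pointer list to those buffers would be kept in a small buffer.
    return ("large", math.ceil(pdu_size / large_buffer_size))
```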
20090304017 | APPARATUS AND METHOD FOR HIGH-SPEED PACKET ROUTING SYSTEM - An apparatus and method for packet routing in a high-speed packet routing system. The apparatus includes an input unit and a control unit. The input unit temporarily stores an input packet and outputs the temporarily stored input packet to an output port determined by a previous router. The control unit determines an output port of a next router for the input packet. | 12-10-2009 |
20090316711 | PACKET SWITCHING - In an embodiment, an apparatus is provided that may include an integrated circuit including switch circuitry to determine, at least in part, an action to be executed involving a packet. This determination may be based, at least in part, upon flow information determined, at least in part, from the packet, and packet processing policy information. The circuitry may examine the policy information to determine whether a previously-established packet processing policy has been established that corresponds, at least in part, to the flow information. If the circuitry determines, at least in part, that the policy has not been established and the packet is a first packet in a flow corresponding at least in part to the flow information, the switch circuitry may request that at least one switch control program module establish, at least in part, a new packet processing policy corresponding, at least in part, to the flow information. | 12-24-2009 |
20090316712 | Method and apparatus for minimizing clock drift in a VoIP communications network - A method and apparatus for minimizing clock drift between un-synchronized clocks which may occur at opposing ends of a communication link established in, for example, a Voice over Internet Protocol (VoIP) communications network, especially for use with, for example, a FAX or modem terminal device. The illustrative system employs two or more clocks, wherein at least one of these clocks operates at an intentionally higher frequency than the nominal clock frequency (e.g., 8 kHz) and wherein at least one of these clocks operates at an intentionally lower frequency than the nominal clock frequency. In operation, the illustrative system alternatively chooses one of the clocks, in order to attempt to match the clock of the far-end terminal device on average. The state and/or history of the receiving device's associated jitter buffer may be advantageously used to determine which clock to select. | 12-24-2009 |
20090316713 | COMMUNICATION APPARATUS IN LABEL SWITCHING NETWORK - In a label switching network using a plurality of labels, a communication apparatus receives signaling information for setting a first label switching tunnel. This signaling information includes one or more values of one or more labels representing one or more pseudowires accommodated in the first label switching tunnel, the bandwidth information of the pseudowires, and a bandwidth-sharing identifier. A bandwidth management table holding correspondence relationships between the values of the plurality of labels, the bandwidth information, and the bandwidth-sharing identifiers is generated. One or more second label switching tunnels may be accommodated instead of the one or more pseudowires. | 12-24-2009 |
20090316714 | PACKET RELAY APPARATUS - In a packet relay apparatus equipped with a hierarchical bandwidth control function, a queuing unit of a bandwidth controller for controlling a bandwidth of a packet to be transmitted recognizes user information for identifying a user from VLAN ID of a received Tag-VLAN packet, acquires queue information representative of a queue position by referring to a priority mapping table by using a user priority order in the packet, and queues the packet to the queue identified by the user information and queue information. Bandwidth control can therefore be performed without searching a QoS information management table. | 12-24-2009 |
20090323710 | METHOD FOR PROCESSING INFORMATION FRAGMENTS AND A DEVICE HAVING INFORMATION FRAGMENT PROCESSING CAPABILITIES - A device and method for processing information fragments, the method includes: receiving multiple information fragments from multiple communication paths; wherein each information fragment is associated with a cyclic serial number indicative of a generation time of the information fragment; storing the multiple information fragments in multiple input queues, each input queue being associated with a communication path out of the multiple communication paths; determining whether at least one serial number associated with at least one valid information fragment positioned at a head of one of the multiple input queues is located within a pre-rollout serial number range; mapping, in response to the determination, serial numbers associated with each of the valid information fragments positioned at the heads of the multiple input queues to at least one serial number range that differs from the pre-rollout serial number range; and sending to an output queue information fragment metadata associated with a minimal valued serial number out of the serial numbers associated with each of the valid information fragments positioned at the heads of the multiple input queues. | 12-31-2009 |
20100008376 | METHODS, SYSTEMS AND COMPUTER PROGRAM PRODUCTS FOR PACKET PRIORITIZATION BASED ON DELIVERY TIME EXPECTATION - Methods, systems and computer program products for packet prioritization based on delivery time expectation. Exemplary embodiments include receiving a packet for routing, estimating a TimeToDestination for the packet, the estimating performed by an Internet Control Message Protocol, reading a TimeToDeliver field from the Internet Protocol header of the packet to extract data on when the packet needs to be at the destination, determining a MaxQueueDelay for the packet, the MaxQueueDelay calculated by subtracting the TimeToDestination from the TimeToDeliver, passing a lower priority packet if the lower priority packet has a lower MaxQueueDelay, and decrementing the TimeToDeliver by the amount of time the network router has had the packet in the queue before passing the packet to a next router, thereby communicating to the next router how much time is left before the packet must be delivered. | 01-14-2010 |
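The queueing-slack arithmetic in the abstract above can be sketched as follows. This reads MaxQueueDelay as the remaining delivery budget minus the estimated transit time, i.e. the slack the router can spend queueing the packet (the abstract's wording leaves the direction of the subtraction ambiguous); field names are carried over from the abstract, the function names are invented.

```python
def max_queue_delay(time_to_deliver, time_to_destination):
    # Slack the router can afford to hold the packet: delivery budget
    # minus the estimated remaining transit time. Lower slack means the
    # packet is more urgent.
    return time_to_deliver - time_to_destination

def forward(packet, queued_time):
    # Before handing the packet to the next router, charge the time it
    # spent queued here against its remaining TimeToDeliver budget, so
    # the next router knows how much delivery time is left.
    packet["TimeToDeliver"] -= queued_time
    return packet
```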
20100008377 | QUEUE MANAGEMENT BASED ON MESSAGE AGE - A system for managing inbound messages in a server complex including a plurality of message consumers. The system includes a server configured to receive the inbound messages from a first peripheral device and to transmit messages to one or more of the plurality of message consumers. The system also includes an inbound message queue coupled to the server, the inbound message queue configured to store inbound messages and to discard at least one message when an age of the message exceeds an expiration time. | 01-14-2010 |
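Age-based expiry of the kind described above can be sketched with a queue that lazily discards stale entries on retrieval. The class name and the explicit `now` clock parameter are illustrative simplifications, not details from the patent.

```python
from collections import deque

class AgeLimitedQueue:
    """Illustrative sketch: FIFO queue that discards messages whose age
    exceeds a configured expiration time."""

    def __init__(self, expiration):
        self.expiration = expiration
        self.items = deque()          # (arrival_time, message) pairs

    def put(self, now, message):
        self.items.append((now, message))

    def get(self, now):
        # Discard any leading messages that have aged past expiration.
        while self.items and now - self.items[0][0] > self.expiration:
            self.items.popleft()
        return self.items.popleft()[1] if self.items else None
```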
20100014539 | Packet Relay Device And Queue Scheduling Method - Each of the plurality of queues stores packet data of a received packet. The read concession assignor assigns one of the plurality of queues with a read concession for a predefined time period. The overdraft storage stores an overdraft amount in connection with each of the plurality of queues. The read adequacy determiner determines, in accordance with an overdraft amount stored in connection with one queue out of the plurality of queues, whether to read packet data from the one queue. The overdraft updater updates at least one of a first overdraft amount stored in connection with a first queue and a second overdraft amount stored in connection with a second queue different from the first queue upon reading packet data from the first queue during a time period while the second queue is assigned with the read concession. | 01-21-2010 |
20100020814 | Ethernet Switching - A method in an Ethernet switch ( | 01-28-2010 |
20100020815 | DATA TRANSMISSION METHOD FOR HSDPA - In the data transmission method of an HSDPA system according to the present invention, a transmitter transmits Data Blocks each composed of one or more data units originated from a same logical channel, and a receiver receives the Data Block through a HS-DSCH and distributes the Data Block to a predetermined reordering buffer. Since each Data Block is composed of the MAC-d PDUs originated from the same logical channel, it is possible to monitor the in-sequence delivery of the data units, resulting in reduction of undesirable queuing delay caused by logical channel multiplexing. | 01-28-2010 |
20100020816 | Connectionless packet data transport over a connection-based point-to-point link - A multiple processor device generates a control packet for at least one connectionless-based packet in partial accordance with a control packet format of the connection-based point-to-point link and partially not in accordance with the control packet format. For instance, the multiple processor device generates the control packet to include, in noncompliance with the control packet format, one or more of an indication that at least one connectionless-based packet is being transported, an indication of a virtual channel of a plurality of virtual channels associated with the at least one connectionless-based packet, an indication of an amount of data included in the associated data packet, status of the at least one connectionless-based packet, and an error status indication. The multiple processor device then generates the associated data packet in accordance with a data packet format of the connection-based point-to-point link, wherein the data packet includes at least a portion of the at least one connectionless-based packet. | 01-28-2010 |
20100027556 | HYBRID COMMUNICATIONS LINK - A hybrid communications link includes a slow, reliable communications link and a fast, unreliable communications link. Communication via the hybrid communications link selectively uses both the slow, reliable communications link and the fast, unreliable communications link. | 02-04-2010 |
20100034212 | METHODS AND APPARATUS FOR PROVIDING MODIFIED TIMESTAMPS IN A COMMUNICATION SYSTEM - Methods and apparatus for providing modified timestamps in a communication system. In an aspect, a method includes receiving one or more packets associated with a selected destination, computing an average relative delay associated with each packet, determining a modified timestamp associated with each packet based on the average relative delay associated with each packet, and outputting the one or more packets and their associated modified timestamps. In an aspect, an apparatus is provided for generating modified timestamps. The apparatus includes a packet receiver configured to receive one or more packets associated with a selected destination and processing logic configured to compute an average relative delay associated with each packet, determine a modified timestamp associated with each packet based on the average relative delay associated with each packet, and output the one or more packets and their associated modified timestamps. | 02-11-2010 |
20100040076 | NETWORK DEVICE AND METHOD FOR PROCESSING DATA PACKETS - A network device for processing data packets receives data packets from networks connected to the network device, searches a rule table for data packet matching conditions corresponding to the data packets, and transmits the data packets to corresponding data packet targets. The network device further retrieves matching actions corresponding to the data packets, transmits the data packets and the corresponding matching actions to the user daemon thread module, and further transmits the data packets to corresponding daemon threads according to the corresponding matching actions. | 02-18-2010 |
20100040077 | METHOD, DEVICE AND SOFTWARE APPLICATION FOR SCHEDULING THE TRANSMISSION OF DATA STREAM PACKETS - The invention relates to a method for transmitting over a data communication network data packets of a data stream to a receiving device, characterized in that it comprises the steps of: selecting a data packet from a buffer memory containing data packets to be transmitted ( | 02-18-2010 |
20100046536 | METHODS AND SYSTEMS FOR AGGREGATING ETHERNET COMMUNICATIONS - Methods and systems for aggregating Ethernet communications are disclosed. A disclosed apparatus includes a first Ethernet port to communicate with a second Ethernet port of a first device, a third Ethernet port to communicate with a fourth Ethernet port of a second device, a fifth Ethernet port to receive Ethernet frames, and a switching portion to direct nth ones of the frames to a first queue associated with the second port, direct n−1 frames preceding each of the nth ones of the frames to a second queue associated with the fourth port, and select a value of n based on a ratio of a first non-zero data rate of the first device for a first communication link in a first direction and a second non-zero data rate of the second device for a second communication link in the first direction, and based on a remaining capacity of the first queue. | 02-25-2010 |
20100054268 | Method of Tracking Arrival Order of Packets into Plural Queues - In PCI-Express and alike communications systems, it is often desirable to keep track of order of arrival into different queues of packets that will later compete for servicing by a downstream resource of limited bandwidth. Use of time stamping to determine order of arrival can be a problem because time of arrival between different packets entering respective ones of plural queues can vary greatly and thus the number of bits consumed for accurately time stamping each packet can become significant. Disclosed are systems and methods for tracking the arrival orders of packets into plural queues by means of travel-along dynamic counts rather than by means of high precision time stamps. A machine system that keeps track of relative arrival orders of data blocks in different ones of plural queues comprises a first count associater that associates with a first data block in a first of the plural queues, a first count of how many earlier arrived and still pending data blocks await in a second of the plural queues; and a count updater that updates the first count in response to one or more of said earlier arrived data blocks departing from the second queue. | 03-04-2010 |
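The travel-along count idea above can be reduced to a two-queue sketch: each arriving block records how many earlier-arrived, still-pending blocks sit in the other queue, and departures decrement those counts. The patent covers plural queues and hardware counters, so this is an assumed simplification with invented names.

```python
from collections import deque

class OrderTracker:
    """Two-queue sketch of travel-along arrival counts."""

    def __init__(self):
        self.q = [deque(), deque()]   # items: [payload, earlier_in_other]

    def arrive(self, which, payload):
        # First count associater: record how many blocks already pend in
        # the other queue (all of them arrived earlier than this block).
        self.q[which].append([payload, len(self.q[1 - which])])

    def depart(self, which):
        payload, _ = self.q[which].popleft()
        # Count updater: the departed head was the earliest pending block
        # in this queue, so every nonzero count opposite included it.
        for item in self.q[1 - which]:
            if item[1] > 0:
                item[1] -= 1
        return payload

    def oldest_queue(self):
        # A head with count 0 arrived before everything pending opposite,
        # so it should be serviced first by the shared downstream resource.
        for which in (0, 1):
            if self.q[which] and self.q[which][0][1] == 0:
                return which
        return None
```

Note that the counts need only as many bits as the opposite queue's depth, which is the abstract's argument against wide per-packet timestamps.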
20100054269 | METHOD FOR TRANSFERRING DATA PACKETS TO A SHARED RESOURCE, AND RELATED DEVICE AND COMPUTER SOFTWARE - The invention relates to a method for transferring data packets to a shared resource ( | 03-04-2010 |
20100061389 | METHODS AND APPARATUS RELATED TO VIRTUALIZATION OF DATA CENTER RESOURCES - In one embodiment, an apparatus includes a switch core that has a multi-stage switch fabric. A first set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have a protocol. Each peripheral processing device from the first set of peripheral processing devices is a storage node that has virtualized resources. The virtualized resources of the first set of peripheral processing devices collectively define a virtual storage resource interconnected by the switch core. A second set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have the protocol. Each peripheral processing device from the second set of peripheral processing devices is a compute node that has virtualized resources. The virtualized resources of the second set of peripheral processing devices collectively define a virtual compute resource interconnected by the switch core. | 03-11-2010 |
20100061390 | METHODS AND APPARATUS FOR DEFINING A FLOW CONTROL SIGNAL RELATED TO A TRANSMIT QUEUE - In one embodiment, a processor-readable medium can store code representing instructions that when executed by a processor cause the processor to receive a value representing a congestion level of a receive queue and a value representing a state of a transmit queue. At least a portion of the transmit queue can be defined by a plurality of packets addressed to the receive queue. A rate value for the transmit queue can be defined based on the value representing the congestion level of the receive queue and the value representing the state of the transmit queue. The processor-readable medium can store code representing instructions that when executed by the processor cause the processor to define a suspension time value for the transmit queue based on the value representing the congestion level of the receive queue and the value representing the state of the transmit queue. | 03-11-2010 |
20100061391 | METHODS AND APPARATUS RELATED TO A LOW COST DATA CENTER ARCHITECTURE - In one embodiment, an apparatus can include a first edge device that can have a packet processing module. The first edge device can be configured to receive a packet. The packet processing module of the first edge device can be configured to produce cells based on the packet. A second edge device can have a packet processing module configured to reassemble the packet based on the cells. A multi-stage switch fabric can be coupled to the first edge device and the second edge device. The multi-stage switch fabric can define a single logical entity. The multi-stage switch fabric can have switch modules. Each switch module from the switch modules can have a shared memory device. The multi-stage switch fabric can be configured to switch the cells so that the cells are sent to the second edge device. | 03-11-2010 |
20100061392 | METHOD, DEVICE AND SYSTEM OF SCHEDULING DATA TRANSPORT OVER A FABRIC - Embodiments of the invention provide systems, devices and methods to schedule data transport across a fabric, e.g., prior to actual transmission of the data across the fabric. In some demonstrative embodiments, a packet switch may include an input controller to schedule transport of at least one data packet to an output controller over a fabric based on permission information received from the output controller. Other embodiments are described and claimed. | 03-11-2010 |
20100067538 | METHOD AND SYSTEM FOR FRAME CLASSIFICATION - The present invention provides a method and a device for classifying data frames. The method is typically carried out by a communication device in a wireless network with quality of service capability. It comprises the step of comparing data in a frame to data in a plurality of classifier entries, wherein the order of comparison of the classifier entries with a frame is a function of a quality of service priority level, and the step of classifying a frame for which a match is found as a function of a parameter associated with the matching classifier entry. | 03-18-2010 |
20100085980 | RECEIVER DEVICE, TRANSMISSION SYSTEM, AND PACKET TRANSMISSION METHOD - In a transmission system of transferring a packet input from a first device to a second device via a network, a receiver device comprises a storage module configured to successively accumulate received packets, which are transferred over a multiple transmission paths, in correlation to each of the multiple transmission paths, a packet selector configured to sequentially perform a packet selection process with respect to each of the received packets accumulated in the storage module, where after elapse of a predetermined time period since a receipt time of a first packet received by the receiver device, the packet selection process respectively reads out one packet for each of the multiple transmission paths among the received identical packets, which are accumulated in correlation to each of the multiple transmission paths, and selects one packet with higher reliability out of the read-out packets, and an output module configured to output the packet selected by the packet selector to the second device. | 04-08-2010 |
20100091782 | Method and System for Controlling a Delay of Packet Processing Using Loop Paths - A method and system for introducing a controlled delay of packet processing at a network device using one or more delay loop paths (DLPs). For each packet received at the network device, a determination is made as to whether or not packet processing should be delayed. If delay is chosen, a DLP is selected according to the desired delay for the packet. The desired delay value is used to determine a time value, and the time value is inserted in the DLP ahead of the packet. Upon completion of a DLP delay, a packet may be returned for processing, given an additional delay, or subjected to some other action. One or more DLPs may be enabled with packet queues, and may be used advantageously by devices for which in-order processing of packets is desired or required. | 04-15-2010 |
20100091783 | METHOD AND SYSTEM FOR WEIGHTED FAIR QUEUING - A system for scheduling data for transmission in a communication network includes a credit distributor and a transmit selector. The communication network includes a plurality of children. The transmit selector is communicatively coupled to the credit distributor. The credit distributor operates to grant credits to at least one of eligible children and children having a negative credit count. Each credit is redeemable for data transmission. The credit distributor further operates to affect fairness between children with ratios of granted credits, maintain a credit balance representing a total amount of undistributed credits available, and deduct the granted credits from the credit balance. The transmit selector operates to select at least one eligible and enabled child for dequeuing, bias selection of the eligible and enabled child to an eligible and enabled child with positive credits, and add credits to the credit balance corresponding to an amount of data selected for dequeuing. | 04-15-2010 |
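The credit mechanics in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the names `Child`, `grant_credits`, and `select_and_dequeue`, and the quantum value, are all our assumptions.

```python
class Child:
    def __init__(self, weight):
        self.weight = weight
        self.credits = 0          # may go negative after a large dequeue
        self.queue = []           # pending packet lengths, in bytes

def grant_credits(children, credit_balance, quantum=100):
    """Distribute credits from the shared balance in ratio of weights,
    to eligible children and to children with a negative credit count."""
    total_w = sum(c.weight for c in children
                  if c.queue or c.credits < 0) or 1
    for c in children:
        if c.queue or c.credits < 0:
            grant = min(credit_balance, quantum * c.weight // total_w)
            c.credits += grant
            credit_balance -= grant
    return credit_balance

def select_and_dequeue(children, credit_balance):
    """Bias selection toward eligible children with positive credits;
    credits for the dequeued bytes return to the shared balance."""
    eligible = [c for c in children if c.queue]
    if not eligible:
        return None, credit_balance
    positive = [c for c in eligible if c.credits > 0]
    winner = (positive or eligible)[0]
    length = winner.queue.pop(0)
    winner.credits -= length
    return winner, credit_balance + length
```

A 2:1 weight ratio then yields roughly a 2:1 split of granted credits between two backlogged children.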
20100091784 | FILTERING OF REDUNDANT FRAMES IN A NETWORK NODE - A method of filtering redundant frames including a MAC source address, a frame ID and a CRC value, in a network node with two ports each including a transmitting device and a receiving device, is provided. The transmitting device includes a transmission list in which frames to be transmitted are stored. The receiving device includes a receiving memory for storing a received frame. For filtering redundant frames in a network node, a first frame is received by one of the two ports. After reception of the MAC source address and the frame ID of the first frame in the transmission list of the port, a second frame with the same MAC source address and frame ID is sought. If the second frame is present, the first frame is neither forwarded to a local application nor forwarded to send to other ports, and the second frame is not sent. | 04-15-2010 |
20100091785 | PACKET PROCESSING APPARATUS - A packet processing apparatus includes a packet buffer with a queue for storing packets. An actual queue length/position discriminator acquires, at every sampling period, the latest actual queue length indicating the occupancy status of the queue, determines the positional relationship of the actual queue length to a random early detection interval, and outputs the positional relationship as position information. A discard probability computation processor calculates, at every sampling period, a packet discard probability based on the position information. A packet discard processor discards, at every sampling period and in accordance with the discard probability, packets that are not yet stored in the queue. If it is judged from the position information that the actual queue length is within the random early detection interval, the discard probability computation processor calculates an average queue length, and then calculates the discard probability from the ratio of a discard target to a reception target. | 04-15-2010 |
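The average-queue-length and discard-probability computation described above follows the classic random-early-detection shape, which can be sketched as below. The EWMA weight, thresholds, and `max_p` are illustrative defaults, not values from the patent.

```python
def avg_queue_len(prev_avg, actual, weight=0.2):
    """Exponentially weighted moving average of the sampled queue length."""
    return (1 - weight) * prev_avg + weight * actual

def drop_probability(avg, min_th, max_th, max_p=0.1):
    """Linear discard probability over the random early detection interval."""
    if avg < min_th:
        return 0.0                # below the interval: never drop
    if avg >= max_th:
        return 1.0                # above the interval: always drop
    # Inside the interval: ramp linearly from 0 up to max_p.
    return max_p * (avg - min_th) / (max_th - min_th)
```

At every sampling period the apparatus would recompute the average and apply the resulting probability to packets not yet stored in the queue.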
20100103946 | PACKET CAPTURING DEVICE - A next sequence number, which is a sequence number that a next packet should have, is compared with the sequence number of a current packet, an identifier of a previous packet is compared with the identifier of the current packet, and a delay is judged to be a simple delay when the next sequence number matches the sequence number of the current packet and the identifier of the previous packet is followed by the identifier of the current packet. Alternatively, the delay is judged to be caused by a retransmission when the identifier of the previous packet is not followed by the identifier of the current packet. | 04-29-2010 |
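The two comparison rules above can be condensed into a small classifier. The function name and the assumption that "followed by" means consecutive identifiers (as with IP ID fields) are ours, for illustration only.

```python
def classify_delay(next_seq, prev_id, cur_seq, cur_id):
    """Distinguish a simple delay from a retransmission-caused delay
    using the sequence-number and identifier comparisons above."""
    if cur_seq == next_seq and cur_id == prev_id + 1:
        return "simple delay"       # expected data, consecutive identifiers
    if cur_id != prev_id + 1:
        return "retransmission"     # identifier sequence was broken
    return "in order"
```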
20100118883 | SYSTEMS AND METHODS FOR QUEUE MANAGEMENT IN PACKET-SWITCHED NETWORKS - This disclosure relates to methods and systems for queuing traffic in packet-switched networks. In one of many possible embodiments, a queue management system includes a plurality of queues and a priority module configured to assign incoming packets to the queues based on priorities associated with the incoming packets. The priority module is further configured to drop at least one of the packets already contained in the queues. The priority module is configured to operate across multiple queues when determining which of the packets contained in the queues to drop. Some embodiments provide for hybrid queue management that considers both classes and priorities of packets. | 05-13-2010 |
20100118884 | Method for Resolving Mutex Contention in a Network System - A method of resolving mutex contention within a network interface unit which includes providing a plurality of memory access channels, and moving a thread via at least one of the plurality of memory access channels, the plurality of memory access channels allowing moving of the thread while avoiding mutex contention when moving the thread via the at least one of the plurality of memory access channels is disclosed. | 05-13-2010 |
20100124234 | Method for scheduling packets of a plurality of flows and system for carrying out the method - The invention concerns a method for scheduling packets belonging to a plurality of flows received at a router. It is also provided the system for carrying out the method. According to the invention, a single packet queue is used for storing said packets, said single packet queue being adapted to be divided into a variable number of successive sections which are created and updated dynamically as a function of each received packet, each section being of variable size and a section load threshold for each flow of said plurality of flows being allocated to each section. The method further comprises insertion (S | 05-20-2010 |
20100128735 | PROCESSING OF PARTIAL FRAMES AND PARTIAL SUPERFRAMES - A system determines when to send out a partial data unit or when to complete a data unit before sending it. The system may identify a data unit, determine whether the data unit is a partial data unit, increase a partial count when the data unit is the partial data unit, determine whether the partial count is greater than a threshold, and fill a subsequent data unit with data to form a complete data unit when the partial count is greater than the threshold. The system may, alternatively or additionally, determine a schedule of flush events for a queue, identify whether the queue includes information associated with a partial data unit, identify whether the queue should be flushed based on the schedule of flush events and whether the queue includes information associated with the partial data unit, wait for additional data when the queue should not be flushed, and send out the partial data unit when the queue should be flushed. | 05-27-2010 |
20100128736 | PACKET PROCESSING APPARATUS, NETWORK EQUIPMENT AND PACKET PROCESSING METHOD - A packet processing apparatus includes a static pattern matcher comparing pattern information defining a packet to be filtered with a value regarding at least a part of a received packet, the pattern information being stored by a pattern information manager. A frequency calculator calculates the frequency of matching by the static pattern matcher. A dynamic pattern matcher matches the frequency and a preset comparison value and a processing determiner determines a processing on the received packet based upon the dynamic pattern match. | 05-27-2010 |
20100135312 | Method for Scoring Queued Frames for Selective Transmission Through a Switch - A method includes determining a priority of each of a plurality of frames, wherein the priority is a function of an initial value dependent on content of each said frame and one or more adjustment values independent of content of each said frame, and selecting the frame with the highest determined priority for transmission through the device prior to transmission of any other of the frames. A system includes a receiving port configured to receive frames and assign an initial priority to each frame, a queue configured to insert queue entries associated with received frames on the queue, each queue entry being inserted at a queue position based on the initial priority assigned to the queue entry, the queue further configured to reorder queue entries based on readjusted priorities of the queue entries; and a transmitter switch configured to transmit the frame having the highest priority before transmitting any other frame. | 06-03-2010 |
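The scoring scheme above combines a content-dependent initial value with content-independent adjustments; one plausible adjustment is an aging bonus for time spent queued. The sketch below assumes exactly that, with invented names and a linear bonus.

```python
def frame_priority(initial, waited_ticks, age_bonus_per_tick=1):
    """Content-derived initial value plus a content-independent
    adjustment that grows with queuing time (an aging bonus)."""
    return initial + waited_ticks * age_bonus_per_tick

def select_frame(frames, now):
    """frames: list of (initial_priority, enqueue_time, payload).
    Return the payload with the highest adjusted priority."""
    return max(frames,
               key=lambda f: frame_priority(f[0], now - f[1]))[2]
```

With aging, an old low-initial-priority frame can overtake a fresh high-initial-priority one, which is the reordering behavior the queue in the abstract supports.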
20100150164 | FLOW-BASED QUEUING OF NETWORK TRAFFIC - A method is provided for queuing packets. A packet may be received and its flow identified. It may then be determined whether a flow queue has been assigned to the identified flow. The identified flow may be dynamically assigning to an available flow queue when it is determined that a flow queue has not been assigned to the identified flow. The packet may be enqueued into the available flow queue. | 06-17-2010 |
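The dynamic flow-to-queue assignment described above might look like the following sketch, where a flow receives a dedicated queue from a free pool the first time it is seen. The class and its behavior on pool exhaustion are assumptions for illustration.

```python
from collections import deque

class FlowQueues:
    def __init__(self, num_queues):
        self.free = list(range(num_queues))   # available flow queues
        self.assigned = {}                    # flow id -> queue index
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, flow_id, packet):
        q = self.assigned.get(flow_id)
        if q is None:                         # dynamically assign a queue
            if not self.free:
                raise RuntimeError("no flow queue available")
            q = self.free.pop()
            self.assigned[flow_id] = q
        self.queues[q].append(packet)         # enqueue into the flow queue
        return q
```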
20100158031 | METHODS AND APPARATUS FOR TRANSMISSION OF GROUPS OF CELLS VIA A SWITCH FABRIC - In one embodiment, a method can include receiving at an egress schedule module a request to schedule transmission of a group of cells from an ingress queue through a switch fabric of a multi-stage switch. The ingress queue can be associated with an ingress stage of the multi-stage switch. The egress schedule module can be associated with an egress stage of the multi-stage switch. The method can also include determining, in response to the request, that an egress port at the egress stage of the multi-stage switch is available to transmit the group of cells from the multi-stage switch. | 06-24-2010 |
20100158032 | SCHEDULING AND QUEUE MANAGEMENT WITH ADAPTIVE QUEUE LATENCY - The invention relates to a scheduler for a TCP/IP based data communication system and a method for the scheduler. The communication system comprises a TCP/IP transmitter and a receiving unit (UE). The scheduler is associated with a Node comprising a rate measuring device for measuring a TCP/IP data rate from the TCP/IP transmitter and a queue buffer device for buffering data segments from the TCP/IP transmitter. The scheduler is arranged to receive information from the rate measuring device regarding the TCP/IP data rate and is arranged to adapt the permitted queue latency to a minimum value when the TCP/IP transmitter is in a slow start mode and to increase the permitted queue latency when the TCP/IP rate has reached a threshold value. | 06-24-2010 |
20100158033 | COMMUNICATION APPARATUS IN LABEL SWITCHING NETWORK - In a label switching network using a plurality of labels including first and second labels, a communication apparatus receives a packet having the plurality of labels, and determines an output destination of the packet in accordance with the first label of the plurality of labels. Additionally, the communication apparatus sorts the packet to one of a plurality of packet queues in accordance with a combination of the first and the second labels of the plurality of labels, and reads and multiplexes packets from the plurality of packet queues. | 06-24-2010 |
20100172363 | SYSTEMS AND METHODS FOR CONGESTION CONTROL USING RANDOM EARLY DROP AT HEAD OF BUFFER - A system selectively drops data from a queue. The system includes queues that temporarily store data, a dequeue engine that dequeues data from the queues, and a drop engine that operates independently from the dequeue engine. The drop engine selects one of the queues to examine, determines whether to drop data from a head of the examined queue, and marks the data based on a result of the determination. | 07-08-2010 |
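A drop engine that runs independently of dequeuing and examines queue heads could be sketched as below; the probability source, the per-pass loop, and the injectable `rng` hook are simplifying assumptions.

```python
import random
from collections import deque   # queues here are deques, so heads pop in O(1)

def drop_engine_pass(queues, drop_prob, rng=random.random):
    """One pass of a drop engine: examine each queue in turn and,
    independently of the dequeue engine, probabilistically drop the
    packet at the head."""
    dropped = []
    for q in queues:
        if q and rng() < drop_prob:
            dropped.append(q.popleft())      # drop from the head
    return dropped
```

Dropping from the head rather than the tail lets congestion feedback reach end hosts sooner, since the oldest queued data is discarded first.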
20100172364 | FLEXIBLE QUEUE AND STREAM MAPPING SYSTEMS AND METHODS - A system processes data corresponding to multiple data streams. The system includes multiple queues that store the data, stream-to-queue logic, dequeue logic, and queue-to-stream logic. Each of the queues is assigned to one of the streams based on a predefined queue-to-stream assignment. The stream-to-queue logic identifies which of the queues has data to be processed. The dequeue logic processes data in the identified queues. The queue-to-stream logic identifies which of the streams correspond to the identified queues. | 07-08-2010 |
20100183021 | Method and Apparatus for Queuing Data Flows - In a data system, such as a cable modem termination system, different-priority flows are scheduled to be routed to their logical destinations by factoring both the priority level and the time spent in queue. The time that each packet of each flow spends waiting for transmission is normalized such that the waiting times of all flows are equalized with respect to each other. A latency scaling parameter is calculated. | 07-22-2010 |
20100189122 | EFFICIENTLY STORING TRANSPORT STREAMS - Described are computer-based methods and apparatuses, including computer program products, for efficiently storing transport streams. A first sequence of one or more packets associated with the first transport stream is received, the first sequence comprising one or more data packets. A storage packet is generated by selecting one or more packets from the first sequence, the storage packet comprising a packet header and the one or more data packets. One or more null packet insertion locations are identified in a second sequence of one or more packets associated with a second transport stream. Null packet insertion information is generated based on the one or more null packet insertion locations, the information including data indicative of a reconstruction parameter related to reconstructing the second sequence from the storage packet by inserting one or more null packets that are not stored in the storage packet, wherein the packet header includes the null packet insertion information. The storage packet is stored. | 07-29-2010 |
20100189123 | Reordering Packets - There are disclosed processes and apparatus for reordering packets. The system includes a plurality of source processors that transmit the packets to a destination processor via multiple communication fabrics. The source processors and the destination processor are synchronized together. Time stamp logic at each source processor operates to include a time stamp parameter with each of the packets transmitted from the source processors. The system also includes a plurality of memory queues located at the destination processor. An enqueue processor operates to store a memory pointer and an associated time stamp parameter for each of the packets received at the destination processor in a selected memory queue. A dequeue processor determines a selected memory pointer associated with a selected time stamp parameter and operates to process the selected memory pointer to access a selected packet for output in a reordered packet stream. | 07-29-2010 |
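The time-stamp-driven dequeue step above amounts to always emitting the packet with the oldest stamp across the per-source queues, which a heap does naturally. This sketch assumes synchronized stamps and per-source in-order arrival, as the abstract states; the function name is ours.

```python
import heapq

def reorder(streams):
    """streams: list of per-source packet lists, each already in
    per-source order, packets given as (timestamp, payload) tuples.
    Returns one merged stream in global timestamp order."""
    # Seed the heap with the head of each non-empty per-source queue.
    heads = [(pkts[0][0], i, 0) for i, pkts in enumerate(streams) if pkts]
    heapq.heapify(heads)
    out = []
    while heads:
        ts, i, j = heapq.heappop(heads)      # oldest stamp wins
        out.append(streams[i][j])
        if j + 1 < len(streams[i]):          # advance that source's queue
            heapq.heappush(heads, (streams[i][j + 1][0], i, j + 1))
    return out
```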
20100195662 | METHOD FOR SUB-PACKET GENERATION WITH ADAPTIVE BIT INDEX - A method of generating a sub-packet in consideration of an offset is disclosed. A method of generating a sub-packet in consideration of an offset, for re-transmission of a packet from systematic bits and parity bits stored in a circular buffer includes turbo-coding an input bitstream at a predetermined code rate and generating and storing the systematic bits and the parity bits in the circular buffer, and deciding a starting position of the sub-packet in the circular buffer in consideration of the offset for puncturing at least a portion of the systematic bits of the circular buffer. Accordingly, it is possible to efficiently decide the starting position of the sub-packet to be transmitted adaptively with respect to a variable packet length, improving a coding gain, reducing complexity and reducing a calculation amount. | 08-05-2010 |
20100202469 | QUEUE MANAGEMENT SYSTEM AND METHODS - A system and method are provided for managing a queue of packets transmitted from a sender to a receiver across a communications network. The sender has a plurality of sender states and a queue manager situated in between the sender and receiver may have a corresponding plurality of queue manager states. The queue manager has one or more queue management parameters which may have distinct predetermined values for each of the queue manager states. When the queue manager detects an event that is indicative of a change in the sender's state, the queue manager may change its state correspondingly. | 08-12-2010 |
20100202470 | Dynamic Queue Memory Allocation With Flow Control - A method in an Ethernet controller for allocating memory space in a buffer memory between a transmit queue (TXQ) and a receive queue (RXQ) includes allocating initial memory space in the buffer memory to the RXQ and the TXQ; defining a RXQ high watermark and a RXQ low watermark; receiving an ingress data frame; determining if a memory usage in the RXQ exceeds the RXQ high watermark; if the RXQ high watermark is not exceeded, storing the ingress data frame in the RXQ; if the RXQ high watermark is exceeded, determining if there is unused memory space in the TXQ; if there is no unused memory space in the TXQ, transmitting a pause frame to halt further ingress data frames; if there is unused memory space in the TXQ, allocating unused memory space in the TXQ to the RXQ; and storing the ingress data frame in the RXQ. | 08-12-2010 |
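The decision chain above (store, borrow TXQ space, or pause) can be traced in a few lines. Only the high-watermark path is sketched; the class name, byte units, and the way borrowed space raises the watermark are our illustrative assumptions.

```python
class RxTxBuffer:
    def __init__(self, txq_free, rxq_high_wm):
        self.txq_free = txq_free        # unused TXQ space, in bytes
        self.rxq_high_wm = rxq_high_wm  # RXQ high watermark, in bytes
        self.rxq_used = 0

    def on_ingress(self, frame_len):
        if self.rxq_used + frame_len <= self.rxq_high_wm:
            self.rxq_used += frame_len          # normal case: store in RXQ
            return "stored"
        if self.txq_free >= frame_len:          # borrow unused TXQ memory
            self.txq_free -= frame_len
            self.rxq_high_wm += frame_len       # RXQ grows by the loan
            self.rxq_used += frame_len
            return "stored-in-borrowed-space"
        return "pause-frame"                    # flow-control the sender
```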
20100220742 | SYSTEM AND METHOD FOR ROUTER QUEUE AND CONGESTION MANAGEMENT - In a multi-QOS level queuing structure, packet payload pointers are stored in multiple queues and packet payloads in a common memory pool. Algorithms control the drop probability of packets entering the queuing structure. Instantaneous drop probabilities are obtained by comparing measured instantaneous queue size with calculated minimum and maximum queue sizes. Non-utilized common memory space is allocated simultaneously to all queues. Time averaged drop probabilities follow a traditional Weighted Random Early Discard mechanism. Algorithms are adapted to a multi-level QOS structure, floating point format, and hardware implementation. Packet flow from a router egress queuing structure into a single egress port tributary is controlled by an arbitration algorithm using a rate metering mechanism. The queuing structure is replicated for each egress tributary in the router system. | 09-02-2010 |
20100226384 | Method for reliable transport in data networks - Rapid and reliable network data delivery uses state sharing to combine multiple flows into one meta-flow at an intermediate network stack meta-layer, or shim layer. Copies of all packets of the meta-flow are buffered using a common wait queue having an associated retransmit timer, or set of timers. The timers may have fixed or dynamic timeout values. The meta-flow may combine multiple distinct data flows to multiple distinct destinations and/or from multiple distinct sources. In some cases, only a subset of all packets of the meta-flow are buffered. | 09-09-2010 |
20100232446 | QUEUE SHARING WITH FAIR RATE GUARANTEE - In one embodiment, separate rate meters are maintained for each flow and increased at a target rate while a packet from the flow occupies the head of a shared transmit queue. The meter value is decreased by the packet length when a packet is enqueued or dropped. The next packet that occupies the head of the shared transmit queue is dropped if the meter value corresponding to the flow is greater than a threshold. | 09-16-2010 |
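The metering rules above can be written out directly; the function names, the tick-based crediting, and the unit choices are illustrative assumptions rather than details from the patent.

```python
def credit_head(meters, flow, target_rate, dt):
    """While a packet of `flow` occupies the head of the shared queue,
    its meter grows at the flow's target rate."""
    meters[flow] = meters.get(flow, 0) + target_rate * dt

def process_head(meters, flow, pkt_len, threshold):
    """Decide the fate of the packet now at the head: drop it if its
    flow's meter exceeds the threshold.  Either way the meter is
    decreased by the packet length."""
    over = meters.get(flow, 0) > threshold
    meters[flow] = meters.get(flow, 0) - pkt_len
    return "drop" if over else "transmit"
```

A flow that hogs the head of the shared queue accumulates meter value quickly and starts losing packets, which is what yields the fair-rate guarantee.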
20100232447 | QUALITY OF SERVICE QUEUE MANAGEMENT IN HOME MESH NETWORK - An embodiment is a technique to perform queue management. A packet is received from an upper layer or a classifier in a multi-hop mesh network. The packet has a packet type classified by the classifier. The received packet is enqueued into one of a plurality of buffers organized according to the packet type. | 09-16-2010 |
20100232448 | Scalable Interface for Connecting Multiple Computer Systems Which Performs Parallel MPI Header Matching - An interface device for a compute node in a computer cluster which performs Message Passing Interface (MPI) header matching using parallel matching units. The interface device comprises a memory that stores posted receive queues and unexpected queues. The posted receive queues store receive requests from a process executing on the compute node. The unexpected queues store headers of send requests (e.g., from other compute nodes) that do not have a matching receive request in the posted receive queues. The interface device also comprises a plurality of hardware pipelined matcher units. The matcher units perform header matching to determine if a header in the send request matches any headers in any of the plurality of posted receive queues. Matcher units perform the header matching in parallel. In other words, the plural matching units are configured to search the memory concurrently to perform header matching. | 09-16-2010 |
20100238946 | APPARATUS FOR PROCESSING PACKETS AND SYSTEM FOR USING THE SAME - An apparatus processes a packet and classifies the packet as a processed fast path packet or a slow path packet, wherein the processed fast path packet is forwarded to a fast path forwarding queue directly or is forwarded to a fast path output queue through a packet direct memory access controller. The apparatus not only improves the packet processing performance but also guarantees the quality of service. | 09-23-2010 |
20100238947 | DATA TRANSFER SYSTEM AND METHOD - A transmission source bridge collects packets sent from nodes connected to a serial bus in accordance with the IEEE1394 Standards, into one packet in the order they are to be transmitted and then sends them onto an ATM network, so that a transmission destination bridge receives this packet and divides it into a plurality of smaller packets and transfers them, in the order they were sent, to nodes connected to the serial bus in accordance with the IEEE1394 Standards. | 09-23-2010 |
20100246590 | DYNAMIC ASSIGNMENT OF DATA TO SWITCH-INGRESS BUFFERS - Embodiments of a system that includes a switch and a buffer-management technique for storing signals in the system are described. In this system, data cells are dynamically assigned from a host buffer to at least a subset of switch-ingress buffers in the switch based at least in part on the occupancy of the switch-ingress buffers. This buffer-management technique may reduce the number of switch-ingress buffers relative to the number of input and output ports to the switch, which in turn may overcome the limitations posed by the amount of memory available on chips, thereby facilitating large switches. | 09-30-2010 |
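An occupancy-based assignment policy like the one described could be as simple as steering each cell to the least-occupied ingress buffer; this one-liner, including the tie-break by index, is an assumption for illustration.

```python
def assign_cell(ingress_buffers):
    """Pick the switch-ingress buffer with the lowest occupancy for the
    next data cell from the host buffer (ties go to the lowest index)."""
    return min(range(len(ingress_buffers)),
               key=lambda i: len(ingress_buffers[i]))
```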
20100246591 | ENABLING LONG-TERM COMMUNICATION IDLENESS FOR ENERGY EFFICIENCY - A network adapter comprises a controller to change to a first mode from a second mode based on a number of transmit packets, sizes of received packets, and intervals between arrivals of the received packets. In one embodiment, the network controller further comprises a memory to buffer received packets, where the received packets are buffered for a longer period in the first mode than in the second mode. | 09-30-2010 |
20100246592 | LOAD BALANCING METHOD FOR NETWORK INTRUSION DETECTION - A load balancing method for network intrusion detection includes the following steps. Data packets are received from a client. The data packets include a protocol type and a protocol property. An intrusion detection procedure is loaded on a receiving end. A corresponding request queue is set for each intrusion detection procedure. The request queue is used for storing the data packets. The data packets are processed by a separation procedure, and are categorized into data packets of a chain type and data packets of a non-chain type according to the protocol type. The data packets of the chain type are processed by a first distribution procedure. The data packets of the non-chain type are processed by a second distribution procedure. The distribution procedures distribute the data packets to the corresponding request queues according to the protocol property. The corresponding intrusion detection procedure is performed on the data packets in each request queue. | 09-30-2010 |
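The two-stage distribution might be sketched as below. The reading of "chain" as connection-oriented traffic that must stick to one queue, and the hashes used, are our assumptions, not details from the patent.

```python
def distribute(packets, num_queues):
    """packets: iterable of (proto_type, proto_property, payload).
    Chain-type packets of the same protocol property always land in the
    same request queue; non-chain packets are spread more freely."""
    queues = [[] for _ in range(num_queues)]
    for proto_type, proto_prop, payload in packets:
        if proto_type == "chain":
            # First distribution procedure: keep a connection on one queue.
            idx = hash(proto_prop) % num_queues
        else:
            # Second distribution procedure: spread stateless packets.
            idx = hash((proto_prop, payload)) % num_queues
        queues[idx].append(payload)
    return queues
```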
20100278189 | Methods and Apparatus for Providing Dynamic Data Flow Queues - A network system and method capable of creating separate output queues on demand to improve overall network routing performance are disclosed. The network system, in one embodiment, includes a classifier, an egress queuing device and a processor. The classifier provides a result of classification for an incoming data flow in accordance with a set of predefined application policies. The egress queuing device is an egress per flow queue (“PFQ”) wherein a separately dedicated queue can be dynamically allocated within the egress PFQ in accordance with the result of classification. The processor is configured to establish a temporary circuit connection between the classifier and the egress queuing device for facilitating routing process. | 11-04-2010 |
20100278190 | HIERARCHICAL PIPELINED DISTRIBUTED SCHEDULING TRAFFIC MANAGER - A hierarchical pipelined distributed scheduling traffic manager includes multiple hierarchical levels to perform hierarchical winner selection and propagation in a pipeline including selecting and propagating winner queues of a lower level to subsequent levels to determine one final winning queue. The winner selection and propagation is performed in parallel between the levels to reduce the time required in selecting the final winning queue. In some embodiments, the hierarchical traffic manager is separated into multiple separate sliced hierarchical traffic managers to distributively process the traffic. | 11-04-2010 |
20100296518 | Single DMA Transfers from Device Drivers to Network Adapters - Methods and arrangements of data communications are discussed. Embodiments include transformations, code, state machines or other logic to provide data communications. An embodiment may involve receiving from a protocol stack a request for a buffer to hold data. The data may consist of all or part of a payload of a packet. The embodiment may also involve allocating space in a buffer for the data and for a header of a packet. The protocol stack may store the data in a portion of the buffer and hand down the buffer to a network device driver. The embodiment may also involve the network device driver transferring the entire packet from the buffer to a communications adapter in a single direct memory access (DMA) operation. | 11-25-2010 |
20100309928 | ASYNCHRONOUS COMMUNICATION IN AN UNSTABLE NETWORK - Embodiments are directed to promptly reestablishing communication between nodes in a dynamic computer network and dynamically maintaining an address list in an unstable network. A computer system sends a message to other message queuing nodes in a network, where each node in the message queuing network includes a corresponding persistent unique global identifier. The computer system maintains a list of unique global identifiers and the current network addresses of those network nodes from which the message queuing node has received a message or to which the message queuing node has sent a message. The computer system goes offline for a period of time and upon coming back online, sends an announcement message to each node maintained in the list indicating that the message queuing node is ready for communication in the message queuing network, where each message includes the destination node's globally unique identifier and the node's current network address. | 12-09-2010 |
20100322264 | METHOD AND APPARATUS FOR MESSAGE ROUTING TO SERVICES - An approach is provided for message routing to services. A publish request associated with a service is received from a user equipment. A query is generated to determine a plurality of locations of the service. Each location corresponds respectively to a plurality of clusters. Transmission of the query is initiated to a home locator. The locations from the home locator are received. One of the locations is selected. Transmission of the publish request to the selected location is initiated. | 12-23-2010 |
20100329275 | Multiple Processes Sharing a Single Infiniband Connection - A compute node with multiple transfer processes that share an Infiniband connection to send and receive messages across a network. Transfer processes are first associated with an Infiniband queue pair (QP) connection. Then send message commands associated with a transfer process are issued. This causes an Infiniband message to be generated and sent, via the QP connection, to a remote compute node corresponding to the QP. Send message commands associated with another process are also issued. This causes another Infiniband message to be generated and sent, via the same QP connection, to the same remote compute node. As mentioned, multiple processes may receive network messages received via a shared QP connection. A transfer process on a receiving compute node receives a network message through a QP connection using a receive queue. A second transfer process receives another message through the same QP connection using another receive queue. | 12-30-2010 |
20110019685 | METHOD AND SYSTEM FOR PACKET PREEMPTION FOR LOW LATENCY - Latency requirements for Ethernet link partners comprising PHY devices and memory buffers may be determined for packets pending transmission. Transmission may be interrupted for a first packet having greater latency than a second packet, and the second packet may be transmitted. The second packet may be interrupted for transmission of a third or more packets. Packets are inspected for marks and/or for OSI layer | 01-27-2011 |
20110026539 | Forwarding Cells of Partitioned Data Through a Three-Stage Clos-Network Packet Switch with Memory at Each Stage - Examples are disclosed for forwarding cells of partitioned data through a three-stage memory-memory-memory (MMM) input-queued Clos-network (IQC) packet switch. In some examples, each module of the three-stage MMM IQC packet switch includes a virtual queue and a manager that are configured in cooperation with one another to forward a cell from among cells of partitioned data through at least a portion of the switch. The cells of partitioned data may have been partitioned and stored at an input port for the switch and have a destination of an output port for the switch. | 02-03-2011 |
20110032947 | RESOURCE ARBITRATION - A circuit includes queue buffers, a bid masking circuit, and a priority selection circuit. Each of the queue buffers carries packets of a respective message class selected from a set of message classes and asserts a respective bid signal indicating that the queue buffer carries a packet that is available for transmission. The bid masking circuit produces a masked vector of bid signals by selectively masking one or more of the bid signals asserted by the queue buffers based on credit available to transmit the packets and on cyclical masking of one or more of the bid signals asserted by ones of the queue buffers selected for packet transmission. The priority selection circuit selects respective ones of the queue buffers from which packets are transmitted based on the masked vector of bid signals produced by the bid masking circuit. | 02-10-2011 |
20110058568 | PACKET-BASED COMMUNICATION SYSTEM AND METHOD - A system and method for facilitating communication of packets between one or more applications residing on a first computing device and at least one second computing device. The system comprises a connection manager adapted to receive packets from the at least one second computing device, and a packet cache for storing packets received by the connection manager. The connection manager, upon receiving a packet from a second computing device, transmits the packet to the packet cache for storage and notifies each of the applications of receipt of the packet. Subsequently, the packet is retrievable from the packet cache by a notified application, and verification that the packet is intended for communication to the notified application is made. | 03-10-2011 |
20110058569 | NETWORK ON CHIP INPUT/OUTPUT NODES - The present invention relates to a torus network comprising a matrix of infrastructure routers, each of which is connected to two other routers belonging to the same row and to two other routers belonging to the same column; and input/output routers, each of which is connected by two internal inputs to two other routers belonging either to the same row, or to the same column, and comprising an external input for supplying the network with data. Each input/output router is devoid of queues for its internal inputs and comprises queues assigned to its external input managed by an arbiter which is configured to also manage the queues of an infrastructure router connected to the input/output router. | 03-10-2011 |
20110069716 | METHOD AND APPARATUS FOR QUEUING VARIABLE SIZE DATA PACKETS IN A COMMUNICATION SYSTEM - Variable size data packets are queued in a communication system by generating from each data packet a record portion of predetermined fixed size containing information about each packet and storing only data portions of the packets in independent memory locations in a first memory. The record portions are only stored in one or more managed queues in a second memory having fixed size memory locations equal in size to the size of the record portions. The first memory is larger than the second memory; and the memory locations in the first memory are arranged in blocks having a plurality of different sizes. The memory locations are allocated to the data portions according to the size of the data portions. | 03-24-2011 |
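The split this abstract describes, fixed-size record portions in a small managed-queue memory and payloads in a larger memory of variably sized blocks, can be illustrated with a minimal Python sketch. The block sizes, the record fields, and the class names here are assumptions for illustration; the abstract only requires fixed-size records and payload blocks in a plurality of sizes.

```python
from dataclasses import dataclass
from collections import deque

# Hypothetical block sizes for the first (payload) memory; the abstract
# only says blocks come in "a plurality of different sizes".
BLOCK_SIZES = (64, 256, 1024)

@dataclass
class Record:
    """Fixed-size record: metadata about one packet, never the payload."""
    block_size: int   # which block pool holds the payload
    block_index: int  # slot within that pool
    length: int       # actual payload length

class PacketQueue:
    def __init__(self):
        # Second memory: one managed queue of fixed-size records.
        self.records = deque()
        # First memory: per-size block pools holding payloads only.
        self.pools = {size: [] for size in BLOCK_SIZES}

    def enqueue(self, payload: bytes) -> Record:
        # Allocate the smallest block size that fits the payload.
        size = next(s for s in BLOCK_SIZES if len(payload) <= s)
        pool = self.pools[size]
        pool.append(payload)
        rec = Record(size, len(pool) - 1, len(payload))
        self.records.append(rec)
        return rec

    def dequeue(self) -> bytes:
        rec = self.records.popleft()
        return self.pools[rec.block_size][rec.block_index]
```

Because only the fixed-size records pass through the queue, queue memory stays uniform regardless of packet size; block reclamation is omitted from this sketch.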
20110075678 | NETWORK INTERFACE SYSTEM WITH FILTERING FUNCTION - A network interface system for transferring a data packet between a host system and a network includes multiple matchers and multiple queues. The matchers match the data packet with multiple rules from the host system to generate multiple matching results and allocate a transferring priority to the data packet according to the rules. Each queue corresponds to a respective matcher. One of the queues stores information indicating the transferring priority for the data packet according to the matching results and the priorities of the matchers. | 03-31-2011 |
20110075679 | PACKET TRANSMISSION DEVICE AND PACKET TRANSMISSION METHOD - Provided are a packet transmission device and a packet transmission method which can effectively use a radio band while suppressing a processing overhead. A packet transmission device ( | 03-31-2011 |
20110085566 | METHOD FOR COMMUNICATING IN A NETWORK AND RADIO STATIONS ASSOCIATED - The present invention relates to a method for communicating in a network comprising a primary station and at least one secondary station, said secondary station comprising a buffer containing data packets to be transmitted to the primary station, the method comprising the step of the secondary station transmitting an indication of the buffer status to the primary station, said indication comprising information about history of said buffer. | 04-14-2011 |
20110085567 | METHOD OF DATA DELIVERY ACROSS A NETWORK - The present invention relates to a method of sorting data packets in a multi-path network having a plurality of ports; a plurality of network links; and a plurality of network elements, each network element having at least first and second separately addressable buffers in communication with a network link and the network links interconnecting the network elements and connecting the network elements to the ports, the method comprising: sorting data packets with respect to their egress port or ports such that at a network element a first set of data packets intended for the same egress port are queued in said first buffer and at least one other data packet intended for an egress port other than the egress port of the first set of data packets is queued separately in said second buffer whereby said at least one other data packet is separated from any congestion associated with the first set of data packets. The present invention further relates to a method of data delivery in a multi-path network comprising the sorting of data packets in accordance with a first aspect of the present invention. The present invention further relates to a multi-path network operable to sort data packets and operable to deliver data in a multi-path network. | 04-14-2011 |
20110096790 | SIGNAL PROCESSING CIRCUIT, INTERFACE UNIT, FRAME TRANSMISSION APPARATUS, AND SEGMENT DATA READING METHOD - A signal processing circuit for controlling reading of segment data from a buffer in which a plurality of segment data generated by dividing a frame and received via a plurality of switches which direct each of the segment data to a designated destination are stored, comprises: a start detecting unit which detects a starting segment representing the first transmitted segment data to the switch among the segment data received after the buffer has emptied; a transmission time acquiring unit which acquires a transmission time at which the starting segment was transmitted to the switch; and a read timing control unit which determines, based on the transmission time, a read timing for reading the segment data from the buffer. | 04-28-2011 |
20110103395 | COMPUTING THE BURST SIZE FOR A HIGH SPEED PACKET DATA NETWORK WITH MULTIPLE QUEUES - A communications method is provided. The method includes processing multiple packet queues for a high speed packet data network and associating one or more arrays with the multiple packet queues. The method also includes generating an index for the arrays, where the index is associated with a time stamp in order to determine a burst size for the high speed packet data network. | 05-05-2011 |
20110110380 | Hiding System Latencies in Throughput Networking Systems - A method for addressing system latency within a network system is disclosed. The method includes providing a network interface that includes a plurality of memory access channels, and moving data within each of the memory access channels independently and in parallel to and from a memory system, so that one or more of the memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests. | 05-12-2011 |
20110116511 | Directly Providing Data Messages To A Protocol Layer - In one embodiment, the present invention provides for a layered communication protocol for a serial link, in which a link layer is to receive and forward a message to a protocol layer coupled to the link layer with a minimal amount of buffering and without maintenance of a single resource buffer for adaptive credit pools where all message classes are able to consume credits. By performing a message decode, the link layer is able to steer non-data messages and data messages to separate structures within the protocol layer. Credit accounting for each message type can be handled independently where the link layer is able to return credits immediately for non-data messages. In turn, the protocol layer includes a shared buffer to store all data messages received from the link layer and return credits to the link layer for these messages when the data is removed from the shared buffer. Other embodiments are described and claimed. | 05-19-2011 |
20110122883 | SETTING AND CHANGING QUEUE SIZES IN LINE CARDS - A device may include a first line card and a second line card. The first line card may include a memory including queues. In addition, the first line card may include a processor. The processor may identify, among the queues, a queue whose size is to be modified, change the size of the identified queue, receive a packet, insert a header cell associated with the packet in the identified queue, identify a second line card from which the packet is to be sent to another device in a network, remove the header cell from the identified queue, and forward the header cell to the second line card. The second line card may receive the header cell from the first line card, and send the packet to the other device in the network. | 05-26-2011 |
20110122884 | ZERO COPY TRANSMISSION WITH RAW PACKETS - A system for providing a zero copy transmission with raw packets includes an operating system that receives an application request pertaining to a data packet to be transmitted over a network, where the data packet has already gone through networking stack processing invoked by the application. The operating system queries a driver of a network device on whether the network device has a zero copy capability. Based on the query response of the driver, the operating system determines whether a zero copy transmission should be used for the data packet. If not, the operating system copies the data packet from the application memory to a kernel buffer, and notifies the driver about the data packet in the kernel buffer. If so, the operating system refrains from copying the data packet to the kernel buffer, and notifies the driver about the data packet in the application memory. | 05-26-2011 |
20110122885 | Controlling Packet Filter Installation in a User Equipment - A communication system includes a user equipment (UE) | 05-26-2011 |
20110122886 | METHOD AND DEVICES FOR INSTALLING PACKET FILTERS IN A DATA TRANSMISSION - A method is described for associating a data packet (DP) with a packet bearer (PB) in a user equipment (UE | 05-26-2011 |
20110134933 | CLASSES OF SERVICE FOR NETWORK ON CHIPS - A method includes a local switch receiving a first set of upstream packets and a first set of local packets, each assigned a first class of service. The local switch inserts, according to a first insertion rate, a local packet between subsets of the first set of upstream packets to obtain an ordered set of first class packets. The local switch also receives a second set of upstream packets and a second set of local packets, each assigned a second class. The local switch inserts, according to a second insertion rate, a local packet between subsets of the second set of upstream packets to obtain an ordered set of second class packets. The method includes for each timeslot, selecting a class, and forwarding a packet from the selected class of service to a downstream switch. The switches are interconnected in a daisy chain topology on a single chip. | 06-09-2011 |
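The insertion of local packets between subsets of upstream packets, per class of service, might look like the sketch below. Reading the insertion rate as "one local packet after every N upstream packets" is an assumed interpretation; the abstract does not define the rate's units, and the function name is hypothetical.

```python
def merge_with_insertion_rate(upstream, local, rate):
    """Insert one local packet after every `rate` upstream packets,
    producing the ordered per-class packet stream for one class of
    service. Leftover local packets are appended at the end."""
    out, li = [], 0
    for i, pkt in enumerate(upstream, 1):
        out.append(pkt)
        if i % rate == 0 and li < len(local):
            out.append(local[li])
            li += 1
    out.extend(local[li:])  # drain any local packets not yet inserted
    return out
```

Per the abstract, a separate ordered set would be built per class (each with its own rate), with a per-timeslot class selector choosing which ordered set feeds the downstream switch.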
20110142064 | DYNAMIC RECEIVE QUEUE BALANCING - A method according to one embodiment includes the operations of assigning a network application to at least one first core processing unit, from among a plurality of core processing units. The method of this embodiment also includes the operations of assigning a first receive queue to said first core processing unit, wherein the first receive queue is configured to receive packet flow associated with the network application; defining a high threshold for the first receive queue; and monitoring the packet flow in the first receive queue and comparing a packet flow level in the first receive queue to the high threshold; wherein if the packet flow level exceeds the high threshold based on the comparing, generating a queue status message indicating that the packet flow level in the first queue has exceeded the high threshold. | 06-16-2011 |
20110149988 | COMMUNICATION CONTROLLER - A switch includes: a packet buffer including a plurality of segments for temporarily storing a packet to be relayed, using a segment of a fixed size as a unit of management of a storage area; an input processor configured to receive a packet to be relayed from an external source, refer to an offset, indicating a location in a segment where a free area starts, store a first packet, starting at the location in the segment indicated by the offset, update the location of start indicated by the offset in accordance with the packet size, and store a second packet, starting at the location of start in the segment thus updated; and an output processor configured to read the first and second packets and send the packets to a communication node. | 06-23-2011 |
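The offset-based packing of consecutive packets into a fixed-size segment can be sketched as follows. The segment size and the per-packet bounds list are assumptions for illustration; the abstract specifies only that an offset marks where the free area starts and is advanced by each stored packet's size.

```python
class Segment:
    """Fixed-size segment of a packet buffer; packets are packed
    back-to-back using an offset that marks the start of the free area."""
    def __init__(self, size=2048):
        self.buf = bytearray(size)
        self.offset = 0   # location in the segment where the free area starts
        self.bounds = []  # (start, length) per stored packet, for reading back

    def store(self, packet: bytes) -> bool:
        if self.offset + len(packet) > len(self.buf):
            return False  # no room left in this segment
        start = self.offset
        self.buf[start:start + len(packet)] = packet
        self.bounds.append((start, len(packet)))
        self.offset += len(packet)  # advance per the packet size
        return True

    def read(self, i: int) -> bytes:
        start, length = self.bounds[i]
        return bytes(self.buf[start:start + length])
```

Packing a second packet immediately after the first, rather than starting each packet in a fresh segment, is what lets a fixed-size-segment buffer avoid wasting the tail of every segment.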
20110149989 | INSTRUCTION SET FOR PROGRAMMABLE QUEUING - A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip. | 06-23-2011 |
20110158248 | DYNAMIC PRIORITIZED FAIR SHARE SCHEDULING SCHEME IN OVER-SUBSCRIBED PORT SCENARIO - A network device receives initial policer limits for a plurality of over-subscribing ingress ports, where the initial policer limits are based on existing bandwidth limits for an over-subscribed egress port associated with the over-subscribing ingress ports. The network device receives a high threshold watermark and a low threshold watermark for bandwidth usage of the over-subscribed egress port, and identifies a queue, associated with the over-subscribed egress port, with values outside the high threshold watermark or the low threshold watermark. The network device reduces the initial policer limits for the plurality of over-subscribing ingress ports when the queue has values above the high threshold watermark, and increases the initial policer limits for the plurality of over-subscribing ingress ports when the queue has values below the low threshold watermark. | 06-30-2011 |
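The watermark-driven adjustment of ingress policer limits might be sketched like this. The 10% adjustment step and the function signature are assumptions; the abstract says only that limits are reduced above the high watermark and increased below the low watermark.

```python
def adjust_policer_limits(limits, queue_depth, high_wm, low_wm, step=0.1):
    """Scale the per-ingress-port policer limits down when the
    over-subscribed egress queue is above the high watermark, up when
    it is below the low watermark, and leave them unchanged otherwise.
    `step` is an assumed adjustment factor."""
    if queue_depth > high_wm:
        factor = 1.0 - step   # egress congested: throttle ingress ports
    elif queue_depth < low_wm:
        factor = 1.0 + step   # egress underused: let ingress ports send more
    else:
        return limits
    return {port: rate * factor for port, rate in limits.items()}
```

Applied each monitoring interval, this converges the aggregate ingress rate toward the egress port's bandwidth limit while keeping the per-port shares proportional to the initial limits.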
20110158249 | Assignment Constraint Matrix for Assigning Work From Multiple Sources to Multiple Sinks - An assignment constraint matrix method and apparatus used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate simultaneously in parallel. Each of the plurality of qualifier matrixes is adapted to determine sources in a subset of supported sources that are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information that may be provided for a set of sinks on a single chip or distributed on multiple chips. | 06-30-2011 |
20110158250 | Assigning Work From Multiple Sources to Multiple Sinks Given Assignment Constraints - A method and apparatus for assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. In a given processing period, sinks that are available to receive work are identified and sources qualified to send work to the available sinks are determined taking into account any assignment constraints. A single source is selected from an overlap of the qualified sources and sources having work available. This selection may be made using a hierarchical source scheduler for processing subsets of supported sources simultaneously in parallel. A sink to which work from the selected source may be assigned is selected from available sinks qualified to receive work from the selected source. | 06-30-2011 |
20110158251 | CONTENT DISTRIBUTION METHOD AND CONTENT RECEPTION DEVICE - Both a first method, in which a packet used to transmit distribution content is divided into two or more containers and the divided containers are transmitted at the same time, and a second method, in which the divided containers are transmitted at different times, are executed in a content distribution system. A download terminal selectively executes high-speed downloading by the first method and low-speed downloading by the second method, thus allowing the user to use a download service according to the terminal performance. | 06-30-2011 |
20110176553 | SYSTEM AND METHOD FOR SEAMLESS SWITCHING THROUGH BUFFERING - A method of preparing data streams to facilitate seamless switching between such streams by a switching device to produce an output data stream without any switching artifacts. Bi-directional switching between any plurality of data streams is supported. The data streams are divided into segments, wherein the segments include synchronized starting points and end points. The data rate is increased before an end point of a segment, to create switch gaps between the segments. Increasing the data rate can include increasing a bandwidth of the plurality of data streams, for example by multiplexing, or compressing the data. | 07-21-2011 |
20110176554 | PACKET RELAY APPARATUS AND METHOD OF RELAYING PACKET - A packet relay apparatus is provided. The packet relay apparatus includes a receiver that receives a packet; and a determiner that determines to drop the received packet without storing the received packet into any queue of a multi-stage queue. The determiner determines to drop the received packet at a latter stage, based on former-stage queue information representing a state of a queue at any former stage to which the received packet belongs and latter-stage queue information representing a state of a queue at the latter stage to which the received packet belongs. | 07-21-2011 |
20110182299 | LIMITING TRANSMISSION RATE OF DATA - An improved solution for limiting the transmission rate of data over a network is provided according to an aspect of the invention. In particular, the transmission rate for a port is limited by rate limiting one of a plurality of queues (e.g., class/quality of service queues) for the port, and directing all data (e.g., packets) for transmission through the port to the single rate limited queue. In this manner, the transmission rate for the port can be effectively limited to accommodate, for example, a lower transmission rate for a port on a destination node. | 07-28-2011 |
20110188510 | DATA CONVERSION DEVICE AND DATA CONVERSION METHOD - A data conversion device includes a receiving unit that receives first data and second data, transmitted after a start of the first data, from the first device to the second device; a transmitting unit that transmits the received first data and second data to a third device; and a control unit that controls the time point of transmitting the second data from the transmitting unit so that the time interval between transmission of the first data and the second data from the transmitting unit becomes longer than a first time interval between transmission of the first data from the transmitting unit and reception of response data to the first data by the receiving unit, when the first time interval is longer than the time interval between the transmission of the first data and the second data from the first device to the second device. | 08-04-2011 |
20110222552 | THREAD SYNCHRONIZATION IN A MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes threads of instructions, each thread of instructions corresponding to a context received from the scheduler. A thread status manager maintains a thread status table having N entries to track up to N active threads. Each status entry includes a valid status indicator, a sequence value, and a thread indicator. A sequence counter generates a sequence value for each thread; the multi-thread instruction engine increments the counter when processing of a thread is started and decrements it when a thread is completed. Instructions are processed by the multi-thread instruction engine in the order in which the threads were started. | 09-15-2011 |
20110222553 | THREAD SYNCHRONIZATION IN A MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate a thread of contexts for each task received by the packet classifier from a plurality of processing modules of the network processor. The scheduler includes one or more output queues to temporarily store contexts. Each thread corresponds to an order of instructions applied to the corresponding packet, and includes an identifier of a corresponding one of the output queues. The scheduler sends the contexts to a multi-thread instruction engine that processes the threads. An arbiter selects one of the output queues in order to provide output packets to the multi-thread instruction engine, the output packets associated with a corresponding thread of contexts. Each output queue transmits output packets corresponding to a given thread contiguously in the order in which the threads started. | 09-15-2011 |
20110228793 | CUSTOMIZED CLASSIFICATION OF HOST BOUND TRAFFIC - A network device component receives traffic, determines whether the traffic is host bound traffic or non-host bound traffic, and classifies, based on a user-defined classification scheme, the traffic when the traffic is host bound traffic. The network device component also assigns, based on the classification, the classified host bound traffic to a queue associated with the network device component for forwarding the classified host bound traffic to a host component of the network device. | 09-22-2011 |
20110228794 | System and Method for Pseudowire Packet Cache and Re-Transmission - Disclosed is an apparatus that includes an ingress node configured to couple to an egress node and transmit a plurality of packets to one or more egress nodes, wherein at least some of the plurality of packets are cached before transmission and wherein the ingress node is further configured to retransmit a packet from the cached packets based on a request from one of the one or more egress nodes. | 09-22-2011 |
20110243150 | Facilitating Communication Of Routing Information - In certain embodiments, facilitating communication of routing information includes receiving, at a shim, incoming messages communicating routing information from a first protocol point of one or more protocol points operating according to a routing protocol. The shim belongs to an internal region separate from an external region, and a transport layer is disposed between the shim and the protocol points. The incoming messages are processed and sent to siblings that belong to the internal region. Each sibling implements a state machine for the routing protocol. Outgoing messages are received from a first sibling. The outgoing messages are processed and sent to a second protocol point of the one or more protocol points. | 10-06-2011 |
20110255551 | METHOD AND SYSTEM FOR WEIGHTED FAIR QUEUING - A system for scheduling data for transmission in a communication network includes a credit distributor and a transmit selector. The communication network includes a plurality of children. The transmit selector is communicatively coupled to the credit distributor. The credit distributor operates to grant credits to at least one of eligible children and children having a negative credit count. Each credit is redeemable for data transmission. The credit distributor further operates to affect fairness between children with ratios of granted credits, maintain a credit balance representing a total amount of undistributed credits available, and deduct the granted credits from the credit balance. The transmit selector operates to select at least one eligible and enabled child for dequeuing, bias selection of the eligible and enabled child to an eligible and enabled child with positive credits, and add credits to the credit balance corresponding to an amount of data selected for dequeuing. | 10-20-2011 |
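The credit distributor and transmit selector described above can be sketched in a few lines of Python. The weights, the per-round credit budget, and the tie-breaking rule ("most credits first") are assumptions for illustration; the abstract requires only that credit ratios set the fairness between children and that selection is biased toward eligible children with positive credits.

```python
class WeightedFairScheduler:
    """Credit-based WFQ sketch: credits are granted in proportion to
    per-child weights, and dequeuing a packet spends credits equal to
    its size (credits may go negative, as the abstract permits)."""
    def __init__(self, weights):
        self.weights = weights                    # child -> weight
        self.credits = {c: 0 for c in weights}    # child -> credit count
        self.queues = {c: [] for c in weights}    # child -> queued packet sizes

    def enqueue(self, child, size):
        self.queues[child].append(size)

    def grant(self, budget):
        """Credit distributor: split `budget` in the ratio of the weights."""
        total = sum(self.weights.values())
        for c, w in self.weights.items():
            self.credits[c] += budget * w // total

    def select(self):
        """Transmit selector: pick an eligible child, biased toward
        children with positive credits; deduct the dequeued size."""
        eligible = [c for c in self.queues if self.queues[c]]
        if not eligible:
            return None
        positive = [c for c in eligible if self.credits[c] > 0]
        pool = positive or eligible
        child = max(pool, key=lambda c: self.credits[c])
        self.credits[child] -= self.queues[child].pop(0)
        return child
```

A fuller implementation would also maintain the undistributed credit balance the abstract describes, returning credits to it as data is selected for dequeuing.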
20110261831 | Dynamic Priority Queue Level Assignment for a Network Flow - Forwarding a flow in a network includes receiving the flow at a switch, determining an optimized priority queue level of the flow at the switch, and forwarding the flow via the switch using an optimized priority queue level of the flow at the switch. The flow passes through a plurality of switches, including the switch, in the network, and the optimized priority queue level of the flow at the switch is different from a priority queue level of the flow at a second switch of the plurality of switches. The second switch routes the flow at the second switch using the different priority queue level for the flow. | 10-27-2011 |
20110286468 | PACKET BUFFERING DEVICE AND PACKET DISCARDING METHOD - A packet buffering device includes: a queue for temporarily holding an arriving packet; a residence time predicting unit which predicts a length of time during which the arriving packet will reside in the queue; and a packet discarding unit which discards the arriving packet when the length of time predicted by the residence time predicting unit exceeds a first reference value. | 11-24-2011 |
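The residence-time admission test reduces to one comparison. Predicting residence time as current backlog divided by drain rate is an assumed model; the abstract calls only for a residence-time prediction compared against a first reference value.

```python
def should_discard(queue_bytes, drain_rate, max_residence):
    """Discard an arriving packet when its predicted residence time,
    here modeled as queue backlog (bytes) divided by drain rate
    (bytes/second), exceeds the reference value (seconds)."""
    predicted = queue_bytes / drain_rate  # seconds the packet would wait
    return predicted > max_residence
```

Dropping on predicted delay rather than on queue length alone lets the same threshold express a latency budget regardless of the link speed behind the queue.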
20110286469 | Packet retransmission control system, method and program - A lower layer retransmission control unit performs the following processing. When transmitting a transmission packet, giving a sequence number indicating a transmission order to the transmission packet. Receiving, from a receiving device that receives the transmission packet as a reception packet, an ACK packet indicating the sequence number of the reception packet. Referring to the sequence number of the received ACK packet to determine whether or not the ACK packet is received in an order of the sequence number. Transmitting first to third transmission packets and, if receiving a first ACK packet and receiving a third ACK packet following the first ACK packet without receiving a second ACK packet, performing fast retransmission control processing. Specifically, determining whether or not the second ACK packet is received before a fast retransmission determination period passes after the reception time of the third ACK packet. If failing to receive the second ACK packet within the fast retransmission determination period, retransmitting the second transmission packet. | 11-24-2011 |
20110310909 | PACKET SWITCHING - In an embodiment, an apparatus is provided that may include an integrated circuit including switch circuitry to determine, at least in part, an action to be executed involving a packet. This determination may be based, at least in part, upon flow information determined, at least in part, from the packet, and packet processing policy information. The circuitry may examine the policy information to determine whether a previously-established packet processing policy has been established that corresponds, at least in part, to the flow information. If the circuitry determines, at least in part, that the policy has not been established and the packet is a first packet in a flow corresponding at least in part to the flow information, the switch circuitry may request that at least one switch control program module establish, at least in part, a new packet processing policy corresponding, at least in part, to the flow information. | 12-22-2011 |
20110317712 | Recovering Data From A Plurality of Packets - A method includes receiving a plurality of packets at an integrated processor block of a network on a chip device. The plurality of packets includes a first packet that includes an indication of a start of data associated with a pixel shader application. The method includes recovering the data from the plurality of packets. The method also includes storing the recovered data in a dedicated packet collection memory within the network on the chip device. The method further includes retaining the data stored in the dedicated packet collection memory during an interruption event. Upon completion of the interruption event, the method includes copying packets stored in the dedicated packet collection memory prior to the interruption event to an inbox of the network on the chip device for processing. | 12-29-2011 |
20110317713 | Control Plane Packet Processing and Latency Control - A switch resource receives control plane packets and data packets. The control plane packets indicate how to configure the network in which the switch resource resides. The switch resource includes a classifier. The classifier classifies the control plane packets based on priority and stores the control plane packets into different packet priority queues. The switch resource also includes a forwarding manager. The forwarding manager selectively forwards the control plane packets stored in the control plane packet priority queues to a control plane packet processing environment depending on a completion status of processing previously forwarded control plane packets by a packet processing thread. The control plane packet processing environment includes a monitor resource that generates one or more interrupts to an operating system to ensure further forwarding of the packets downstream to the packet processing thread for timely processing. | 12-29-2011 |
20120002677 | Arbitration method, arbiter circuit, and apparatus provided with arbiter circuit - An arbitration method includes a first process to perform a path control to transfer data from physically plural input ports logically having plural virtual channels to an arbitrary one of the plural output ports, wherein only one channel is selectable at one input port at an arbitrary point in time, by performing an arbitration among the channels of each of the plural input ports according to an arbitrary arbitration algorithm other than a time-division algorithm, and a second process to perform an arbitration among the plural input ports according to the arbitrary arbitration algorithm. The arbitrary arbitration algorithm used in the first and second processes is switched to the time-division algorithm for a predetermined time in response to a trigger. | 01-05-2012 |
20120002678 | PRIORITIZATION OF DATA PACKETS - A method of operating a telecommunications node ( | 01-05-2012 |
20120008636 | DYNAMICALLY ADJUSTED CREDIT BASED ROUND ROBIN SCHEDULER - A credit based queue scheduler dynamically adjusts credits depending upon at least a moving average of incoming packet size to alleviate the impact of traffic burstiness and packet size variation, and increase the performance of the scheduler by lowering latency and jitter. For the case when no service differentiation is required, the credit is adjusted by computing a weighted moving average of incoming packet sizes for the entire scheduler. For the case when differentiation is required, the credit for each queue is determined by the product of the sum of credits given to all queues and the priority level of each queue. | 01-12-2012 |
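The non-differentiated case, credit tracking a weighted moving average of incoming packet size, can be sketched as below. Using an exponentially weighted moving average with smoothing factor `alpha`, and granting one average-packet's worth of credit per round, are assumptions; the abstract specifies only a weighted moving average.

```python
class AdaptiveCreditScheduler:
    """Round-robin credit sketch: the per-round credit follows an
    exponentially weighted moving average (EWMA) of observed packet
    sizes, so bursts of large packets raise the credit and small-packet
    traffic lowers it."""
    def __init__(self, alpha=0.25, initial_credit=512.0):
        self.alpha = alpha
        self.avg_size = initial_credit  # EWMA of incoming packet sizes

    def observe(self, packet_size):
        # Standard EWMA update: new sample weighted by alpha.
        self.avg_size = (self.alpha * packet_size
                         + (1 - self.alpha) * self.avg_size)

    def round_credit(self):
        # Grant each queue roughly one average packet's worth per round.
        return self.avg_size
```

Tying the credit to the average packet size keeps the number of dequeue rounds per packet near one, which is the latency/jitter benefit the abstract claims.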
20120008637 | DIFFERENTIAL FRAME BASED SCHEDULING FOR INPUT QUEUED SWITCHES - A differential frame-based scheduling scheme is employed for input queued (IQ) switches with virtual output queues (VOQ). Differential scheduling adjusts previous scheduling based on a traffic difference in two consecutive frames. To guarantee quality of service (QoS) with low complexity, the adjustment first reserves some slots for each port pair in each frame, then releases surplus allocations and supplements deficit allocations according to a dichotomy order, designed for high throughput, low jitter, fairness, and low computational complexity. | 01-12-2012 |
20120020366 | PACKET DRAINING FROM A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for restructuring a scheduling hierarchy of a network processor having a plurality of processing modules and a shared memory. The scheduling hierarchy schedules packets for transmission. The network processor generates tasks corresponding to each received packet associated with a data flow. A traffic manager receives tasks provided by one of the processing modules and determines a queue of the scheduling hierarchy corresponding to the task. The queue has a parent scheduler at each of one or more next levels of the scheduling hierarchy up to a root scheduler, forming a branch of the hierarchy. The traffic manager determines if the queue and one or more of the parent schedulers of the branch should be restructured. If so, the traffic manager drops subsequently received tasks for the branch, drains all tasks of the branch, and removes the corresponding nodes of the branch from the scheduling hierarchy. | 01-26-2012 |
20120020367 | SPECULATIVE TASK READING IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for scheduling packets for transmission by a network processor. The network processor generates tasks corresponding to received packets associated with a data flow. A traffic manager of the network processor receives tasks provided by a processing module of the network processor and generates a tree scheduling hierarchy having one or more scheduling levels. Each received task is queued in a queue of the scheduling hierarchy associated with the received task, the queue having a corresponding parent scheduler in each level of the scheduling hierarchy, forming a branch of the scheduling hierarchy. A parent scheduler selects a child node to transmit a task. A task read module determines a thread corresponding to the selected child node to read corresponding packet data from a shared memory. The traffic manager forms one or more output tasks for transmission based on the packet data corresponding to the thread. | 01-26-2012 |
20120020368 | DYNAMIC UPDATING OF SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for dynamically controlling a scheduling rate of each node in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager enqueues received tasks in a queue of the scheduling hierarchy associated with a data flow. The queue has a parent scheduler at each level of the hierarchy up to the root scheduler. The traffic manager maintains one or more scheduling data structures for each node in the scheduling hierarchy. If the traffic manager receives a rate reduction request corresponding to a given node of the scheduling hierarchy, the traffic manager updates one or more indicators in the scheduling data structure corresponding to the given node and removes the given node from the scheduling hierarchy, thereby reducing the scheduling rate of the node. | 01-26-2012 |
20120020369 | SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for dynamically constructing a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in the associated queue, the queue having a corresponding parent scheduler at each of one or more next levels of the scheduling hierarchy up to the root scheduler. A parent scheduler selects, starting at the root scheduler and iteratively repeating at each of the corresponding N scheduling levels until a queue is selected, a child node to transmit at least one task. The traffic manager forms output packets for transmission based on the at least one task from the selected queue. | 01-26-2012 |
20120020370 | ROOT SCHEDULING ALGORITHM IN A NETWORK PROCESSOR - Described embodiments provide for arbitrating between nodes of scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in an associated queue of the scheduling hierarchy. The root scheduler performs smooth deficit weighted round robin (SDWRR) arbitration between each child node of the root scheduler. The SDWRR arbitration includes checking one or more status indicators of each child node of the given scheduler and selecting, based on the status indicators, a first active child node of the scheduler and updating the one or more status indicators corresponding to the selected child node. Thus, a task is scheduled for transmission by the traffic manager every cycle of the network processor. | 01-26-2012 |
20120020371 | MULTITHREADED, SUPERSCALAR SCHEDULING IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments schedule packets for transmission by a network processor. A traffic manager generates a scheduling hierarchy having a root scheduler and N levels. The network processor generates tasks corresponding to received packets. The traffic manager enqueues tasks in an associated queue. The queue has a corresponding level M, with a corresponding parent scheduler at each of M−1 levels in the scheduling hierarchy, where M is less than or equal to N. In a single scheduling cycle, a parent scheduler selects a child node to transmit one or more tasks, and the child node responds whether the scheduling is accepted, and if so, with a number of tasks for scheduling. Starting at the parent scheduler and iteratively repeating at each level until reaching the root scheduler, statistics corresponding to the selected node are updated. Output packets corresponding to the scheduled tasks are transmitted, thereby achieving a superscalar task scheduling throughput. | 01-26-2012 |
20120027024 | Zero-Setting Network Quality Service System - A zero-setting QoS system designed with priority-session and bandwidth technologies for undifferentiated networks, so that packets on universal or dedicated networks can obtain priority transmission service. As a priority-level QoS system, it receives network packets at its inlet and loads 802.1q and 802.1p tags onto them so that they can be transmitted by priority level; the 802.1q and 802.1p tags are then removed at the system's outlet, enabling easy operation in an undifferentiated network environment. The system's rapid transmission capability therefore allows it to allocate and transmit network packets within a shorter response time and at better priority levels. | 02-02-2012 |
20120033680 | SYSTEMS AND METHODS FOR RECEIVE AND TRANSMISSION QUEUE PROCESSING IN A MULTI-CORE ARCHITECTURE - Described herein is a method and system for directing outgoing data packets from packet engines to a transmit queue of a NIC in a multi-core system, and a method and system for directing incoming data packets from a receive queue of the NIC to the packet engines. Packet engines store outgoing traffic in logical transmit queues in the packet engines. An interface module obtains the outgoing traffic and stores it in a transmit queue of the NIC, after which the NIC transmits the traffic from the multi-core system over a network. The NIC receives incoming traffic and stores it in a NIC receive queue. The interface module obtains the incoming traffic and applies a hash to a tuple of each obtained data packet. The interface module then stores each data packet in the logical receive queue of a packet engine on the core identified by the result of the hash. | 02-09-2012 |
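The core-selection step in the abstract above — hashing a tuple of each received packet to pick a per-core logical receive queue — can be illustrated as follows. The hash function and tuple fields here are stand-ins (CRC32 over the flow 4-tuple), not the product's actual hash.

```python
import zlib

def select_core(src_ip, dst_ip, src_port, dst_port, num_cores):
    """Steer a packet to a core by hashing its flow tuple (illustrative)."""
    tuple_bytes = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    # All packets of the same flow hash to the same core, preserving
    # per-flow ordering while spreading flows across cores.
    return zlib.crc32(tuple_bytes) % num_cores

core = select_core("10.0.0.1", "10.0.0.2", 12345, 80, num_cores=4)
print(core)
```

Because the mapping is deterministic per flow, packets of one connection always land in the same packet engine's logical receive queue.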
20120063467 | HIERARCHICAL PACKET SCHEDULING - A packet scheduler may include logic configured to receive packet information. The packet scheduler may include logic to receive an operating parameter associated with a downstream device that operates with cell-based traffic. The packet scheduler may include logic to perform a packet-to-cell transformation to produce an output based on the operating parameter. The packet scheduler may include logic to use the output to compensate for the downstream device. | 03-15-2012 |
20120099603 | METHOD AND APPARATUS FOR SCHEDULING IN A PACKET BUFFERING NETWORK - A system and method that can be deployed to schedule links in a switch fabric. The operation uses two functional elements: one to update a priority link list, and one to select a link using that list. | 04-26-2012 |
20120106567 | MLPPP OCCUPANCY BASED ROUND ROBIN - Embodiments of the invention are directed to providing a method for selecting a link for transmitting a data packet, from links of a Multi-Link Point-to-Point Protocol (MLPPP) bundle, by compiling a list of links having a minimum queue depth and selecting the link in a round robin manner from the list. Some embodiments of the invention further provide for a flag to indicate if the selected link has been assigned to a transmitter so that an appropriate link will be selected even if link queue depth status is not current. | 05-03-2012 |
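The selection rule in the abstract above — compile the links with minimum queue depth, then choose among them round-robin — can be sketched as below. This is an assumption-laden illustration (names, tie-breaking order) rather than the claimed method, and it omits the flag for links already assigned to a transmitter.

```python
class OccupancyRoundRobin:
    """Pick the next MLPPP link: least-occupied first, round robin on ties."""

    def __init__(self, num_links):
        self.depths = [0] * num_links   # per-link queue depths
        self.last = -1                  # index of the last link selected

    def select_link(self):
        min_depth = min(self.depths)
        candidates = [i for i, d in enumerate(self.depths) if d == min_depth]
        # Round robin among the minimum-depth links: first candidate after
        # the previously selected index, wrapping around if necessary.
        for i in candidates:
            if i > self.last:
                self.last = i
                break
        else:
            self.last = candidates[0]
        self.depths[self.last] += 1     # packet is now queued on that link
        return self.last

rr = OccupancyRoundRobin(3)
order = [rr.select_link() for _ in range(4)]
print(order)
```

With equal depths the selection degenerates to plain round robin, which is the expected behavior when no link is backlogged.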
20120120965 | LOCK-LESS AND ZERO COPY MESSAGING SCHEME FOR TELECOMMUNICATION NETWORK APPLICATIONS - A computer-implemented system and method for a lock-less, zero data copy messaging mechanism in a multi-core processor for use on a modem in a telecommunications network are described herein. The method includes, for each of a plurality of processing cores, acquiring a kernel to user-space (K-U) mapped buffer and corresponding buffer descriptor, inserting a data packet into the buffer; and inserting the buffer descriptor into a circular buffer. The method further includes creating a frame descriptor containing the K-U mapped buffer pointer, inserting the frame descriptor onto a frame queue specified by a dynamic PCD rule mapping IP addresses to frame queues, and creating a buffer descriptor from the frame descriptor. | 05-17-2012 |
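The circular buffer of descriptors in the abstract above can be illustrated with a single-producer/single-consumer ring: with one writer advancing the head and one reader advancing the tail, no lock is needed. This Python sketch shows only the index discipline, not real memory ordering or the kernel/user-space mapping; names and the ring size are assumptions.

```python
class DescriptorRing:
    """Minimal SPSC ring of buffer descriptors (illustrative sketch)."""

    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0   # producer writes here
        self.tail = 0   # consumer reads here

    def push(self, desc):
        nxt = (self.head + 1) % len(self.slots)
        if nxt == self.tail:
            return False            # ring full; producer must back off
        self.slots[self.head] = desc
        self.head = nxt             # publish only after the slot is written
        return True

    def pop(self):
        if self.tail == self.head:
            return None             # ring empty
        desc = self.slots[self.tail]
        self.tail = (self.tail + 1) % len(self.slots)
        return desc

ring = DescriptorRing(4)            # one slot stays empty to detect "full"
for d in ("d0", "d1", "d2"):
    ring.push(d)
first, second = ring.pop(), ring.pop()
print(first, second)
```

In the zero-copy scheme the descriptors would carry pointers into kernel-to-user-space mapped buffers, so the payload itself is never copied between the cores.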
20120120966 | Method and Apparatus for Allocating and Prioritizing Data Transmission - The subject matter disclosed herein describes a method to allocate and prioritize data communications on an industrial control network. A transmission schedule including multiple priority windows and multiple queues is established. Each queue is assigned to at least one priority window, and each priority window may have multiple queues assigned thereto. A control device communicating on the control network transmits data packets according to the transmission schedule. Within each priority window, data packets corresponding to one of the queues assigned to the priority window may be transmitted. The data packets may be transmitted at any point during the priority window, but will only be transmitted if no data packet from a higher-priority queue is waiting to be transmitted. | 05-17-2012 |
20120128007 | DISTRIBUTED SCHEDULING FOR VARIABLE-SIZE PACKET SWITCHING SYSTEM - Scheduling methods and apparatus are provided for an input-queued switch. The exemplary distributed scheduling process achieves 100% throughput for any admissible Bernoulli arrival traffic. The exemplary distributed scheduling process includes scheduling variable-size packets. The exemplary distributed scheduling process may be implemented easily with low-rate control traffic, or by sacrificing a small amount of throughput. Simulation results also showed that this distributed scheduling process can provide very good delay performance for different traffic patterns. The exemplary distributed scheduling process may therefore be a good candidate for large-scale high-speed switching systems. | 05-24-2012 |
20120134369 | Programmable Queuing Instruction Set - A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip. | 05-31-2012 |
20120134370 | ASYNCHRONOUS COMMUNICATION IN AN UNSTABLE NETWORK - Embodiments are directed to promptly reestablishing communication between nodes in a dynamic computer network and dynamically maintaining an address list in an unstable network. A computer system sends a message to other message queuing nodes in a network, where each node in the message queuing network includes a corresponding persistent unique global identifier. The computer system maintains a list of unique global identifiers and the current network addresses of those network nodes from which the message queuing node has received a message or to which the message queuing node has sent a message. The computer system goes offline for a period of time and upon coming back online, sends an announcement message to each node maintained in the list indicating that the message queuing node is ready for communication in the message queuing network, where each message includes the destination node's globally unique identifier and the node's current network address. | 05-31-2012 |
20120134371 | QUEUE SCHEDULING METHOD AND APPARATUS - A queue scheduling method and apparatus are disclosed in the embodiments of the present invention. The method comprises: indexing one or more queues using a first circular link list; accessing each queue via the front pointer of the first circular link list, and treating the value obtained by subtracting the size of the unit to be scheduled at the head of the queue from the queue's weight middle value as the queue's residual weight middle value; when the weight middle value of a queue in the first circular link list is less than the unit to be scheduled at its head, deleting the queue from the first circular link list and updating its weight middle value to the sum of a set weight value and the queue's residual weight middle value; and linking the queue deleted from the first circular link list into a second circular link list. The present invention enables the scheduling to support any number of queues, and supports expanding the number of queues without changing the hardware implementation logic core. | 05-31-2012 |
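One reading of the two-list scheme in the abstract above can be sketched with deques standing in for the circular link lists: an active queue is served while its residual weight covers its head unit, and otherwise is replenished with the set weight and moved to the second list. The data-structure choice and the per-round framing are assumptions for illustration.

```python
from collections import deque

def schedule_round(active, set_weight, residual, queues):
    """Serve queues from the first list; move exhausted ones to a second list.

    active:     deque of queue ids (the first circular link list)
    set_weight: per-queue configured weight value
    residual:   per-queue weight middle value, mutated in place
    queues:     per-queue lists of head-unit sizes
    """
    exhausted = deque()                    # the second circular link list
    served = []
    while active:
        qid = active.popleft()
        head = queues[qid][0] if queues[qid] else None
        if head is not None and residual[qid] >= head:
            residual[qid] -= head          # residual weight middle value
            served.append((qid, queues[qid].pop(0)))
            active.append(qid)             # stays in the first list
        else:
            residual[qid] += set_weight[qid]   # replenish with set weight
            exhausted.append(qid)          # linked into the second list
    return served, exhausted

queues = {0: [2, 2, 2], 1: [3]}
served, second_list = schedule_round(
    deque([0, 1]), {0: 4, 1: 2}, {0: 4, 1: 2}, queues)
print(served, list(second_list))
```

Because queues live in link lists rather than fixed arrays, the same logic supports any number of queues, which matches the scalability claim of the abstract.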
20120163396 | QUEUE SPEED-UP BY USING MULTIPLE LINKED LISTS - One embodiment of the present invention provides a switch that includes a transmission mechanism configured to transmit frames stored in a queue, and a queue management mechanism configured to store frames associated with the queue in a number of sub-queues which allow frames in different sub-queues to be retrieved independently, thereby facilitating parallel processing of the frames stored in the sub-queues. | 06-28-2012 |
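The sub-queue idea in the abstract above can be illustrated simply: frames are enqueued round-robin across K sub-queues and dequeued in the same round-robin order, so overall FIFO order is preserved while the K heads can be fetched independently (e.g., from different memory banks). This is a sketch under that assumption, not the patented design.

```python
from collections import deque

class MultiListQueue:
    """One logical FIFO backed by K independently accessible sub-queues."""

    def __init__(self, k):
        self.subs = [deque() for _ in range(k)]
        self.enq_cursor = 0     # next sub-queue to enqueue into
        self.deq_cursor = 0     # next sub-queue to dequeue from

    def enqueue(self, frame):
        self.subs[self.enq_cursor].append(frame)
        self.enq_cursor = (self.enq_cursor + 1) % len(self.subs)

    def dequeue(self):
        # Dequeuing in the same round-robin order preserves FIFO semantics.
        frame = self.subs[self.deq_cursor].popleft()
        self.deq_cursor = (self.deq_cursor + 1) % len(self.subs)
        return frame

q = MultiListQueue(3)
for f in range(6):
    q.enqueue(f)
out = [q.dequeue() for _ in range(6)]
print(out)
```

The speed-up comes from the hardware being able to walk the K linked lists in parallel; the Python model only demonstrates that ordering is unaffected.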
20120170590 | Bandwidth Arrangement Method and Transmitter Thereof - A bandwidth arranging method includes the following steps: registering isochronous packets of N isochronous streams, where N is a natural number greater than 1; segmenting an isochronous transmission period into M sub-periods, where M is a natural number greater than 1; arranging the transmission of each of the N isochronous streams in one of the M sub-periods and allocating corresponding bandwidth according to bandwidth-requirement information for each of the N isochronous streams; arranging the isochronous packets into M output queues corresponding to the respective M sub-periods; and outputting the isochronous packets stored in the M output queues in the respective M sub-periods. | 07-05-2012 |
20120207175 | Dynamic load balancing for port groups - In one embodiment, a method includes receiving a packet at an input port of a network device, the input port having a plurality of queues with at least one queue for each output port at the network device, identifying a port group for transmitting the packet from the network device, the port group having a plurality of members each associated with one of the output ports, and selecting one of the queues based on utilization of the members. An apparatus for load balancing is also disclosed. | 08-16-2012 |
20120219010 | Port Packet Queuing - A port queue includes a first memory portion having a first memory access time and a second memory portion having a second memory access time. The first memory portion includes a cache row. The cache row includes a plurality of queue entries. A packet pointer is enqueued in the port queue by writing the packet pointer in a queue entry in the cache row in the first memory. The cache row is transferred to a packet vector in the second memory. A packet pointer is dequeued from the port queue by reading a queue entry from the packet vector stored in the second memory. | 08-30-2012 |
20120230345 | Systems and Methods of QoS for Single Stream ICA - The present solution provides quality of service (QoS) for a stream of protocol data units via a single transport layer connection. A device receives via a single transport layer connection a plurality of packets carrying a plurality of protocol data units. Each protocol data unit identifies a priority. The device may include a filter for determining an average priority for a predetermined window of protocol data units and an engine for assigning the average priority as a connection priority of the single transport layer connection. The device transmits via the single transport layer connection the packets carrying those protocol data units within the predetermined window of protocol data units while the connection priority of the single transport layer connection is assigned the average priority for the predetermined window of protocol data units. | 09-13-2012 |
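The filter in the abstract above assigns the connection a priority equal to the average priority over a window of protocol data units. A minimal sketch, with the rounding rule as an assumption:

```python
def connection_priority(pdu_priorities):
    """Average the PDU priorities in the current window and round to a
    discrete priority level for the single transport connection."""
    return round(sum(pdu_priorities) / len(pdu_priorities))

# A window of four PDU priorities (0 = highest in many schemes; the
# numbering here is purely illustrative).
priority = connection_priority([0, 1, 1, 2])
print(priority)
```

The connection keeps this priority while the packets of that window are transmitted, then the filter recomputes it for the next window.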
20120230346 | Ethernet Switching - A scheduler in an Ethernet switch and method for scheduling and queuing received unicast packets. The scheduler determines a destination address and a traffic priority of a received packet, and searches for a stored association between the destination address and an interfacing port of the Ethernet switch. When a stored association is found, the received packet is scheduled and queued in one of the priority buffers of the output buffer in an associated interfacing port according to the received packet's traffic priority. When no association is found, the scheduler floods the received unicast packet in a flooding buffer in every interfacing outgoing port of the Ethernet switch. The flooded packet may be scheduled as low priority traffic, or may be prioritized in relation to other flooded unicast packets based on each flooded unicast packet's traffic priority. | 09-13-2012 |
20120243550 | TECHNIQUES TO UTILIZE QUEUES FOR NETWORK INTERFACE DEVICES - Techniques to allocate packets for processing among multiple processor(s). Other embodiments are also disclosed and/or claimed. | 09-27-2012 |
20120243551 | Efficient Processing of Compressed Communication Traffic - A method for processing communication traffic includes receiving an incoming stream of compressed data conveyed by a sequence of data packets, each containing a respective portion of the compressed data. The respective portion of the compressed data contained in the first packet is stored in a buffer, having a predefined buffer size. Upon receiving a subsequent packet, at least a part of the compressed data stored in the buffer and the respective portion of the compressed data contained in the subsequent packet are decompressed, thereby providing decompressed data. A most recent part of the decompressed data that is within the buffer size is recompressed and stored in the buffer. | 09-27-2012 |
20120275464 | SYSTEM AND METHOD FOR DYNAMICALLY ALLOCATING BUFFERS BASED ON PRIORITY LEVELS - Methods and systems consistent with the present invention provide dynamic buffer allocation to a plurality of queues of differing priority levels. Each queue is allocated a fixed minimum number of buffers that will not be de-allocated during buffer reassignment. The rest of the buffers are intelligently and dynamically assigned to each queue depending on its current need. The system then monitors and learns the incoming traffic pattern and the resulting drops in each queue due to traffic bursts. Based on this information, the system readjusts the allocation of buffers to each traffic class. If a higher-priority queue does not need the buffers, it gradually relinquishes them. These buffers are then assigned to other queues based on the input traffic pattern and resultant drops. The buffers are aggressively reclaimed and reassigned to higher-priority queues when needed. | 11-01-2012 |
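The reallocation step in the abstract above can be sketched as: every queue keeps its fixed minimum, and the spare buffers are split in proportion to recently observed drops. The proportional rule and leftover handling here are assumptions for illustration, not the patented policy.

```python
def reallocate(total_buffers, minimums, drops):
    """Split buffers above the fixed minimums in proportion to drop counts."""
    spare = total_buffers - sum(minimums)
    total_drops = sum(drops)
    if total_drops == 0:
        # No queue is under pressure: park the spare on queue 0 (assumed
        # highest priority) until the traffic pattern says otherwise.
        extra = [spare] + [0] * (len(minimums) - 1)
    else:
        extra = [spare * d // total_drops for d in drops]
        # Leftover from integer division goes to the queue dropping most.
        worst = max(range(len(drops)), key=lambda i: drops[i])
        extra[worst] += spare - sum(extra)
    return [m + e for m, e in zip(minimums, extra)]

# 100 buffers, three queues each guaranteed 10, with observed drops 7/2/1.
alloc = reallocate(100, [10, 10, 10], [7, 2, 1])
print(alloc)
```

Because the minimums are never touched, a burst on one queue can never starve another queue below its guaranteed allocation.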
20120287940 | REDUCING DATA TRANSFER FOR MATCHING PATTERNS - A device may receive a packet, obtain data from the packet, store the data in a memory, and send a request to match a portion of the data to a set of patterns, the request identifying the portion in the memory. In addition, the device may access the portion in the memory based on the request, compare the accessed portion to the set of patterns, generate a result by comparing the accessed portion to the set of patterns, and output the result. | 11-15-2012 |
20120294314 | DUAL-ROLE MODULAR SCALED-OUT FABRIC COUPLER CHASSIS - A scaled-out fabric coupler (SFC) chassis includes a plurality of root fabric cards installed on one side of the SFC chassis. Each root fabric card has a plurality of electrical connectors. A plurality of line cards is installed on the opposite side of the SFC chassis. Each line card is one of two types of line cards. One of the two types of line cards is a switch-based network line card having network ports for connecting to servers and switches. The other of the two types of line cards is a leaf fabric card having fabric ports for connecting to a fabric port of a network element. Each of the two types of line cards has electrical connectors that mate with one electrical connector of each root fabric card installed in the chassis. | 11-22-2012 |
20120294315 | PACKET BUFFER COMPRISING A DATA SECTION AND A DATA DESCRIPTION SECTION - The present invention relates to a data buffer memory comprising a data section and a data description section. | 11-22-2012 |
20120300787 | APPARATUS AND A METHOD OF RECEIVING AND STORING DATA PACKETS CONTROLLED BY A CENTRAL CONTROLLER - An assembly and a method where a number of receiving units receive and store data in a number of queues de-queued by a plurality of processors/processes. If a selected queue for one processor has a fill level exceeding a limit, the packet is forwarded to a queue of another processor which is instructed to not de-queue that queue until the queue with the exceeded fill level has been emptied. Thus, load balancing between processes/processors may be obtained while maintaining an ordering between packets. | 11-29-2012 |
20120307838 | METHOD AND SYSTEM FOR TEMPORARY DATA UNIT STORAGE ON INFINIBAND HOST CHANNEL ADAPTOR - A method for temporary storage of data units including receiving a first data unit to store in a hardware linked list queue on a communications adapter, reading a first index value from the first data unit, determining that the first index value does match an existing index value of a first linked list, and storing the first data unit in the hardware linked list queue as a member of the first linked list. The method further includes receiving a second data unit, reading a second index value from the second data unit, determining that the second index value does not match any existing index value, allocating space in the hardware linked list queue for a second linked list, and storing the second data unit in the second linked list. | 12-06-2012 |
20120327948 | ADJUSTMENT OF NEGATIVE WEIGHTS IN WEIGHTED ROUND ROBIN SCHEDULING - In one embodiment, a network processor services a plurality of queues having data using weighted round robin scheduling. Each queue is assigned an initial weight based on the queue's priority. During each cycle, an updated weight is generated for each queue by adding the corresponding initial weight to a corresponding previously generated decremented weight. Further, each queue outputs as many packets as it can without exceeding its updated weight. As each packet gets transmitted, the updated weight is decremented based on the number of blocks in that packet. If, after those packets are transmitted, the decremented weight is still positive and the queue still has data, then one more packet is transmitted, no matter how many blocks are in the packet. When a decremented weight becomes negative, the weights of the remaining queues are increased to restore the priorities of the queues as set by the initial weights. | 12-27-2012 |
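The per-cycle behavior in the abstract above can be sketched as a deficit-style loop: each queue's weight is replenished by its initial weight, packets are sent while the running weight stays positive, and one final packet may drive it negative. This simplified sketch does not model the cross-queue weight restoration the abstract also describes; names and units ("blocks") are assumptions.

```python
def wrr_cycle(queues, initial_weights, carry):
    """One weighted-round-robin cycle.

    queues:          per-queue lists of packet sizes in blocks
    initial_weights: per-queue weights set from priorities
    carry:           leftover (possibly negative) weight per queue,
                     mutated in place for the next cycle
    """
    sent = []
    for i, q in enumerate(queues):
        w = initial_weights[i] + carry[i]   # updated weight for this cycle
        # While the weight is still positive and data remains, send one more
        # packet regardless of its size; w may therefore end up negative.
        while q and w > 0:
            w -= q.pop(0)
            sent.append(i)
        carry[i] = w
    return sent

queues = [[3, 3, 3], [2, 2]]
carry = [0, 0]
sent = wrr_cycle(queues, [4, 2], carry)
print(sent, carry)
```

A queue that overshoots (here queue 0, ending at −2 blocks) pays the debt back in later cycles, which is what keeps the long-run bandwidth shares proportional to the initial weights.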
20120327949 | DISTRIBUTED PROCESSING OF DATA FRAMES BY MULTIPLE ADAPTERS USING TIME STAMPING AND A CENTRAL CONTROLLER - An apparatus and a method where a plurality of physically separate data receiving/analyzing elements receive data packets and time stamp these. A controlling unit determines a storing address for each data packet based on at least the time stamp, where the controlling unit does not perform the determination of the address until a predetermined time delay has elapsed after the time of receipt. | 12-27-2012 |
20120327950 | Method for Transmitting Data Packets - Method for transmitting data packets in an Ethernet automation network, wherein the method comprises receiving a first data packet having a first priority by a transmitter, starting a transmit operation to send the first data packet from the transmitter to a receiver, receiving a second data packet having a second priority at an instant in time by the transmitter, where the second priority is higher than the first priority, and where the second data packet is to be transmitted to the receiver. The method further comprises aborting the transmit operation of the first data packet within one of the data frames of the first data packet which is located in the transmit operation at the time of the reception of the second data packet, and thereupon transmitting the second data packet from the transmitter to the receiver. | 12-27-2012 |
20120327951 | Channel Service Manager with Priority Queuing - A system and method are provided for prioritizing network processor information flow in a channel service manager (CSM). The method receives a plurality of information streams on a plurality of input channels, and selectively links input channels to CSM channels. The information streams are stored, and the stored information streams are mapped to a processor queue in a group of processor queues. Information streams are supplied from the group of processor queues to a network processor in an order responsive to a ranking of the processor queues inside the group. More explicitly, selectively linking input channels to CSM channels includes creating a fixed linkage between each input port and an arbiter in a group of arbiters, and scheduling information streams in response to the ranking of the arbiter inside the group. Finally, a CSM channel is selected for each information stream scheduled by an arbiter. | 12-27-2012 |
20130003751 | METHOD AND SYSTEM FOR EXPONENTIAL BACK-OFF ON RETRANSMISSION - A method for exponential back-off on retransmission includes queuing a packet of a message in a completion module with an initial transport timeout, transmitting the packet of the message to a responder node, and applying an exponential timeout formula to the initial transport timeout to obtain an exponentially increased transport timeout for a first retransmission. After determining the initial transport timeout has lapsed, the method further includes requeuing the packet with the exponentially increased transport timeout, and retransmitting the packet to the responder node. The method further includes, after determining the exponentially increased transport timeout has lapsed, retransmitting the packet to the responder node. | 01-03-2013 |
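The exponential timeout formula in the abstract above is the classic doubling back-off. A minimal sketch, with the doubling factor and the cap as assumptions (the abstract specifies neither):

```python
def backoff_timeouts(initial, factor=2.0, cap=60.0, retries=5):
    """Return the transport timeout used for each (re)transmission attempt.

    Each retransmission waits exponentially longer than the last, bounded
    by an assumed cap so a long outage cannot grow the timeout without limit.
    """
    timeouts = []
    t = initial
    for _ in range(retries):
        timeouts.append(t)
        t = min(t * factor, cap)    # exponential growth, capped
    return timeouts

schedule = backoff_timeouts(1.0, retries=5)
print(schedule)
```

In the described flow, the packet is requeued in the completion module with the next value from this schedule each time the previous timeout lapses without a response.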
20130003752 | Method, Network Device, Computer Program and Computer Program Product for Communication Queue State - Aspects of the disclosure provide a method for communicating queue information. The method includes determining a queue state for each one of a plurality of queues at least partially based on respective queue length, selecting a queue with a greatest difference between the queue state of the queue and a last reported queue state of the queue, and reporting the queue state of the selected queue to at least one node. | 01-03-2013 |
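The selection rule in the abstract above — report the queue whose state has drifted furthest from its last report — can be sketched as below. The state here is simply queue length; the abstract allows any state at least partially based on length.

```python
def select_queue_to_report(states, last_reported):
    """Pick the queue whose current state diverges most from what was last
    reported for it, so each report carries the most new information."""
    diffs = [abs(s, ) if False else abs(s - r)
             for s, r in zip(states, last_reported)]
    chosen = max(range(len(diffs)), key=lambda i: diffs[i])
    last_reported[chosen] = states[chosen]   # reporting refreshes the record
    return chosen

last_reported = [0, 0, 0]
chosen = select_queue_to_report([5, 12, 3], last_reported)
print(chosen, last_reported)
```

Repeating the selection naturally rotates reports across queues, because a queue that was just reported has zero difference until its state changes again.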
20130058356 | METHOD AND APPARATUS FOR USING A NETWORK INFORMATION BASE TO CONTROL A PLURALITY OF SHARED NETWORK INFRASTRUCTURE SWITCHING ELEMENTS - Some embodiments provide a program for managing several switching elements. The program receives, at a network information base (NIB) data structure that stores data for managing the several switching elements, a request to modify data stored in at least one particular switching element. The program modifies at least a first set of data tuples stored in the NIB for managing the particular switching element. The program sends a request to the particular switching element to modify at least a second set of data tuples for managing the particular switching element's operation. | 03-07-2013 |
20130058357 | DISTRIBUTED NETWORK VIRTUALIZATION APPARATUS AND METHOD - Some embodiments provide a distributed control system for controlling managed switching elements of a network. The distributed control system comprises a first network virtualizer for converting a first set of input logical forwarding plane data to a first set of output physical control plane data. It also includes a second network virtualizer for converting a second set of input logical forwarding plane data to a second set of output physical control plane data. In some embodiments, the physical control plane data is translated into physical forwarding behaviors that direct the forwarding of data by the managed switching elements. | 03-07-2013 |
20130058358 | NETWORK CONTROL APPARATUS AND METHOD WITH QUALITY OF SERVICE CONTROLS - A control application of some embodiments allows a user to enable a logical switching element for Quality of Service (QoS). QoS in some embodiments is a technique to apply to a particular logical port of a logical switching element such that the switching element can guarantee a certain level of performance to network data that a machine sends through the particular logical port. The control application of some embodiments receives user inputs that specify a particular logical switch to enable for QoS. The control application may additionally receive performance constraints data. The control application in some embodiments formats the user inputs into logical control plane data. The control application in some embodiments then converts the logical control plane data into logical forwarding data that specify QoS functions. | 03-07-2013 |
20130070777 | Reordering Network Traffic - Impairment units and methods for impairing network traffic. An impairment unit may receive packets from a network and determine an impairment class of each packet from a plurality of impairment classes. Input logic may determine whether or not each received packet will be reordered. A received packet not to be reordered may be stored in a normal traffic FIFO queue uniquely associated with the impairment class of the received packet. A received packet to be reordered may be stored in a reorder traffic FIFO queue uniquely associated with the impairment class of the received packet. Output logic may select a sequence of packets from head ends of the plurality of normal traffic FIFO queues and the plurality of reorder traffic FIFO queues to provide outgoing traffic. A transmitter may transmit the outgoing traffic to the network. | 03-21-2013 |
20130070778 | WEIGHTED DIFFERENTIAL SCHEDULER - A method for managing packets, including: identifying a first plurality of packets from a first packet source having a first weight; identifying a second plurality of packets from a second packet source having a second weight; obtaining a first weight ratio based on the first weight and the second weight; obtaining an error threshold and a first error value corresponding to the second packet source, where the error threshold exceeds the first error value; forwarding a first packet from the first packet source in response to the error threshold exceeding the first error value; incrementing the first error value by the first weight ratio; forwarding a first packet from the second packet source, after incrementing the first error value and in response to the first error value exceeding the error threshold; and decrementing the first error value. | 03-21-2013 |
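The error-value mechanism in the abstract above resembles Bresenham-style error diffusion: one source is forwarded while the threshold exceeds the error, the error is incremented by the weight ratio, and forwarding the other source decrements it. This sketch is an interpretation under that reading; parameter names are assumptions.

```python
def weighted_interleave(weight1, weight2, n):
    """Interleave two packet sources in proportion to their weights."""
    ratio = weight2 / weight1       # credit source 2 accrues per source-1 send
    error, threshold = 0.0, 1.0
    order = []
    for _ in range(n):
        if threshold > error:
            order.append(1)         # forward a packet from source 1
            error += ratio          # increment error by the weight ratio
        else:
            order.append(2)         # forward a packet from source 2
            error -= threshold      # decrement the error value
    return order

order = weighted_interleave(2, 1, 6)
print(order)
```

Over the six slots source 1 gets four sends and source 2 gets two, matching the 2:1 weights while keeping the two sources finely interleaved rather than sent in long runs.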
20130070779 | Interleaving Data Packets In A Packet-Based Communication System - In one embodiment, the present invention includes a method for receiving a first portion of a first packet at a first agent and determining whether the first portion is an interleaved portion based on a value of an interleave indicator. The interleave indicator may be sent as part of the first portion. In such manner, interleaved packets may be sent within transmission of another packet, such as a lengthy data packet, providing improved processing capabilities. Other embodiments are described and claimed. | 03-21-2013 |
20130077636 | Time-Preserved Transmissions In Asynchronous Virtual Machine Replication - The method includes determining a timestamp corresponding to a received data packet associated with the virtual machine and releasing the data packet from a buffer based on the timestamp and a time another data packet is released from the buffer. | 03-28-2013 |
20130089106 | SYSTEM AND METHOD FOR DYNAMIC SWITCHING OF A TRANSMIT QUEUE ASSOCIATED WITH A VIRTUAL MACHINE - Methods and systems for managing multiple transmit queues of a networking device of a host machine in a virtual machine system. The networking device includes multiple transmit queues that are used by multiple guests of the virtual machine system for the transmission of packets in a data communication. A hypervisor of the virtual machine system manages the switching from one or more transmit queues (i.e., old transmit queues) to one or more other queues (i.e., new transmit queues) by managing a flow of packets in the virtual machine system to maintain a proper sequence of packets and avoid a need to re-order the transmitted packets at a destination. | 04-11-2013 |
20130089107 | Method and Apparatus for Multimedia Queue Management - Methods and systems for a multimedia queue management solution that maintains graceful Quality of Experience (QoE) degradation are provided. The method selects a frame from all weighted queues based on a gradient function indicating a network performance rate change and a distortion rate caused by the frame and its related frames in the queue, drops the selected frame and all its related frames, and continues to drop similarly chosen frames until the network performance rate change caused by the dropped frames and their related frames meets a predetermined performance metric. A frame gradient is a distortion rate divided by the network performance rate change caused by the frame and its related frames, and the distortion rate is based on a sum of each individual frame's distortion rate when the frame and its related frames are replaced by other frames derived from the remaining frames based on a replacement method. | 04-11-2013 |
20130100960 | SYSTEM AND METHOD FOR DYNAMIC SWITCHING OF A RECEIVE QUEUE ASSOCIATED WITH A VIRTUAL MACHINE - Methods and systems for managing multiple receive queues of a networking device of a host machine in a virtual machine system. The networking device includes multiple receive queues that are used to receive packets intended for a guest of the virtual machine system and pass the packets to the intended virtual machine. A hypervisor of the virtual machine system manages the switching from one or more receive queues (i.e., old receive queues) to one or more other receive queues (i.e., new receive queues) by managing the provisioning of packets from the receive queues to one or more virtual machines in the virtual machine system. | 04-25-2013 |
20130107890 | BUFFER MANAGEMENT OF RELAY DEVICE | 05-02-2013 |
20130114621 | System and Method for Computer Originated Audio File Transmission - A system and method for computer originated audio file transmission includes a server having a communications module operable to communicate with a terminal unit. The server may also include a storage module operable to store at least one file. A processor may be provided to separate the file into a plurality of packets. In accordance with one embodiment of the present invention, the communications module is operable to send an initial burst of packets to the terminal unit, wherein the initial burst of packets includes at least two of the plurality of packets. In accordance with another embodiment of the present invention, the communications module is further operable to send additional packets of the plurality of packets at a predetermined rate, until each of the plurality of packets has been sent to the terminal unit. | 05-09-2013 |
20130128895 | FRAME TRANSMISSION AND COMMUNICATION NETWORK - Exemplary embodiments are directed to a communication network interconnecting a plurality of synchronized nodes, where regular frames including time-critical data are transmitted periodically or cyclically, and sporadic frames are transmitted non-periodically or occasionally. For example, each node can transmit a regular frame at the beginning of a transmission period common to, and synchronized among, all nodes. Another node then receives regular frames from its first neighboring node, and forwards the frames within the same transmission period and with the shortest delay, to a second neighboring node. Furthermore, each node actively delays transmission of any sporadic frame, whether originating from an application hosted by the node itself or whether received from a neighboring node, until forwarding of all received regular frames is completed. | 05-23-2013 |
20130128896 | NETWORK SWITCH WITH EXTERNAL BUFFERING VIA LOOPAROUND PATH - Described embodiments process data packets received by a network switch coupled to an external buffering device. The network switch determines a queue of an internal buffer of the network switch associated with a flow of the received packet and determines whether the received packet should be forwarded to the external buffering device. If the received packet should be forwarded to the external buffering device, the network switch sets an external buffering active indicator indicating that the network switch is in an external buffering mode for the flow, tags the received packet with metadata, and forwards the packet to the external buffering device. The external buffering device stores the forwarded packet in a queue of a memory of the external buffering device corresponding to the tagged metadata of the forwarded packet. The network switch processes packets stored in the internal buffer of the network switch. | 05-23-2013 |
20130136141 | WRR SCHEDULER CONFIGURATION FOR OPTIMIZED LATENCY, BUFFER UTILIZATION - A method includes receiving network information for calculating weighted round-robin (WRR) weights, calculating WRR weights associated with queues based on the network information, and determining whether a highest common factor (HCF) exists in relation to the calculated WRR weights. The method further includes reducing the calculated WRR weights in accordance with the HCF, when it is determined that the HCF exists, and performing a WRR scheduling of packets, stored in the queues, based on the reduced WRR weights. | 05-30-2013 |
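The HCF reduction step in 20130136141 above is, in effect, division of all weights by their greatest common divisor; the sketch below (all names hypothetical) shows the reduction plus a single naive WRR round:

```python
from math import gcd
from functools import reduce

def reduce_wrr_weights(weights):
    """Reduce WRR weights by their highest common factor (HCF/GCD).

    Dividing every weight by the HCF preserves the bandwidth ratios while
    shortening the scheduling round, which lowers latency and buffer use.
    """
    hcf = reduce(gcd, weights)
    return [w // hcf for w in weights] if hcf > 1 else list(weights)

def wrr_schedule(queues, weights):
    """One WRR round: serve queue i up to weights[i] packets (hypothetical helper)."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(min(w, len(q))):
            served.append(q.pop(0))
    return served
```

Reducing [40, 20, 20] to [2, 1, 1] keeps the 2:1:1 bandwidth split while shrinking each round from 80 servings to 4, which is the latency and buffer-utilization benefit the abstract refers to.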
20130142204 | COMMUNICATION METHOD AND APPARATUS FOR THE EFFICIENT AND RELIABLE TRANSMISSION OF TT ETHERNET MESSAGES - The goal of the present invention is to improve the useful-data efficiency and reliability of commercially available ETHERNET controllers in a distributed real-time computer system in which a number of node computers communicate via one or more communication channels by means of TT ETHERNET messages. To achieve this goal, a distinction is made between the node computer send time (KNSZPKT) and the network send time (NWSZPKT) of a message. The KNSZPKT must precede the NWSZPKT so that, under all circumstances, the start of the message has arrived at the TT star coupler by the NWSZPKT, as interpreted by the clock in the TT star coupler. The TT star coupler is modified so that a message arriving from a node computer is delayed in an intelligent port of the TT star coupler until it can be sent precisely at the NWSZPKT into the TT network. | 06-06-2013 |
20130201995 | SYSTEM AND METHOD FOR PERFORMING PACKET QUEUING ON A CLIENT DEVICE USING PACKET SERVICE CLASSIFICATIONS - A client device having a networking layer and a network driver layer for transmitting network packets comprising: a plurality of transmit queues configured at the network layer, each of the transmit queues having different packet service classifications associated therewith, packets being queued in one of the transmit queues according to traffic service classifications assigned to the packets; a classifier module for classifying packets according to the different packet service classifications, wherein a packet to be transmitted is stored in one of the transmit queues based on the packet service classifications; and a network layer packet scheduler for scheduling packets for transmission from each of the transmit queues at the networking layer, the network layer packet scheduler scheduling packets for transmission according to the packet service classifications. | 08-08-2013 |
20130201996 | SCHEDULING PACKET TRANSMISSION ON A CLIENT DEVICE USING PACKET CLASSIFICATIONS INCLUDING HIGH PRIORITY NETWORK CONTROL PACKETS - A method comprising: configuring a plurality of transmit queues, each of the transmit queues having different packet service classifications associated therewith, the packet service classifications specifying a relative priority for packets stored within each respective queue, at least one of the transmit queues having a packet service classification assigned to network control packets being assigned a highest priority relative to the other transmit queues; classifying packets according to the different packet service classifications, wherein a packet to be transmitted is stored in one of the transmit queues based on the packet service classifications, and wherein network control packets are stored in the queue associated with network control packets; and scheduling packets for transmission from each of the transmit queues, wherein packets are scheduled for transmission according to the packet service classifications and wherein network control packets are prioritized for transmission above all other packet service classifications. | 08-08-2013 |
20130208731 | POSTED AND UNENCUMBERED QUEUES - In one aspect, techniques are provided for adding a packet to a queue. A packet may be received. A determination may be made as to whether the packet is encumbered or unencumbered. The packet may be added to a posted queue, to an encumbered queue, or to an unencumbered queue based on the determination. In another aspect, techniques are provided for de-queuing a packet in a posted queue. A posted packet may be de-queued, and encumbered queues associated with the packet may be added to unencumbered queues. | 08-15-2013 |
20130215904 | VIRTUAL MEMORY PROTOCOL SEGMENTATION OFFLOADING - Methods and systems for a more efficient transmission of network traffic are provided. According to one embodiment, a user process of a host processor requests a network driver to store payload data within a system memory. The network driver stores (i) payload buffers each containing therein at least a subset of the payload data and (ii) buffer descriptors each containing therein information indicative of a starting address of a corresponding payload buffer within a user memory space. A network processor transmits onto a network the payload data within multiple transport layer protocol packets by (i) causing a network interface to retrieve the payload data from the payload buffers by performing direct virtual memory addressing of the user memory space using the buffer descriptors and information contained within a translation data structure stored within the system memory; and (ii) segmenting the payload data across the transport layer protocol packets. | 08-22-2013 |
20130235878 | DATA BLOCK OUTPUT APPARATUS, COMMUNICATION SYSTEM, DATA BLOCK OUTPUT METHOD, AND COMMUNICATION METHOD - A data block output apparatus includes a first queue that stores data blocks of first traffic; a second queue that stores data blocks of second traffic and is read preferentially over the first queue; a monitoring unit that monitors for occurrence of data blocks read out of the second queue after reading of a data block from the first queue is completed; and a control unit that controls a data block interval between completion of reading of one data block in the first traffic and a start of reading of a next data block in the first traffic when occurrence frequency of the data blocks read out of the second queue after the reading of one data block from the first queue is completed is equal to or higher than a predetermined value. | 09-12-2013 |
20130235879 | Method And Device For Managing Priority During The Transmission Of A Message - Method of managing priority during the transmission of a message, in an interconnections network comprising at least one transmission agent which comprises at least one input and at least one output, each input comprising a means of storage organized as a queue of messages. A message priority is assigned during the creation of the message, and a queue priority equal to the maximum of the priorities of the messages of the queue is assigned to at least one queue of messages of an input. A link priority is assigned to a link linking an output of a first transmission agent to an input of a second transmission agent, equal to the maximum of the priorities of the queues of messages of the inputs of said first agent comprising a first message destined for that output of said first agent which is coupled to said link, and the priority of the link is transmitted to that input of said second agent which is coupled to the link. | 09-12-2013 |
20130235880 | APPLYING BACKPRESSURE TO A SUBSET OF NODES IN A DEFICIT WEIGHTED ROUND ROBIN SCHEDULER - A scheduler in a network element may include a dequeuer to dequeue packets from a set of scheduling nodes using a deficit weighted round robin process, where the dequeuer is to determine whether a subset of the set of scheduling nodes is being backpressured. The dequeuer may set a root rich most negative credits (MNC) value, associated with a root node, to a root poor MNC value, associated with the root node, and set the root poor MNC value to zero, when the subset is not being backpressured, and may set the rich MNC value to a maximum of the root poor MNC value and a root backpressured rich MNC value, associated with the subset, and set the root poor MNC value to a root backpressured poor MNC value, associated with the subset, when the subset is being backpressured. | 09-12-2013 |
20130243007 | LOW-POWER POLICY FOR PORT - Various example embodiments are disclosed. According to an example embodiment, a method may include determining, by a port processor, a buffer length based on an amount of data stored in a port controlled by the port processor, comparing the buffer length to a low-power buffer threshold, determining a link utilization based on a number of packets transmitted by the port, comparing the link utilization to a link utilization threshold, and placing the port into a low-power state based on the comparison of the buffer length to the low-power buffer threshold and the comparison of the link utilization to the link utilization threshold. | 09-19-2013 |
20130294457 | METHOD OF AUTOMATED DIGITAL MULTI-PROGRAM MULTI-SIGNAL COMMUTATION - A method of automated digital multi-program multi-signal commutation of analogue or digital incoming signals with a packet, i.e. periodically discrete, structure is considered. Signals of this type are used in communications, in television and radio, in surveillance systems and in computer networks. The method provides synchronized group switching of analogue or digital signals from a considerable number of sources (fifty, one hundred, one thousand, etc.). It addresses the situation in which there is no opportunity for preliminary synchronization of the signal sources, a situation caused by the need for simultaneous work by several users and commutation of an increased number of incoming signals, and therefore by the need to use signal sources from different producers, including sources considerably remote from the commutation points. Multi-user operation is provided by controlled multiplication of the incoming signals, which in turn gives several users the ability to work simultaneously in real time. In the case of video signals this allows, in particular, an effective solution to the technical problem of aggregating a considerable number of incoming video signals into a resultant composite panorama. An effective solution is also offered for the classical problem of unifying the many attributes of a broadcast: television channel signals, foreign segments of television programs, start times of remote starts of each of the segments, broadcasting areas, customers of commutation insertions, owners of television channel rights, etc. If, during exploitation, the user detects a new set of attributes not considered beforehand, the method allows natural integration of these new sets. | 11-07-2013 |
20130294458 | ROUTER, METHOD FOR CONTROLLING THE ROUTER, AND COMPUTER PROGRAM - An exemplary router is provided for an integrated circuit that has distributed buses and is arranged on a transmission route that leads from a transmission node to a reception node on the distributed buses to relay data. The distributed buses include first and second routes, each leading from the router to the reception node. The router includes a notifying section which sends a data transfer permission request to a second router on the first route and a third router on the second route and which determines whether or not the request is approved before a predetermined standby period passes to see if there is any abnormality in the first and second routes. | 11-07-2013 |
20130315259 | Method and Apparatus for Internal/External Memory Packet and Byte Counting - Systems and methods are provided for counting a number of received packets and a number of bytes contained in the received packets. A system includes a first memory disposed in an integrated circuit, the first memory being configured as a first combination counter having a first set of bits for storing a subtotal of received packets, and a second set of bits for storing a subtotal of bytes contained in the received packets. A second memory is external to the integrated circuit. The second memory is configured to store a total number of received packets and a total number of bytes contained in the received packets. Update circuitry is configured to update the total number of packets stored in the second memory whenever either the first set of bits or the second set of bits overflows in the first memory. | 11-28-2013 |
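The split-counter arrangement in 20130315259 above can be modeled in software; the field widths, names, and flush-before-overflow policy below are illustrative assumptions, not details from the patent:

```python
PKT_BITS, BYTE_BITS = 8, 16          # assumed on-chip field widths, for illustration

class CombinationCounter:
    """On-chip subtotal counter that flushes to wide external totals on overflow.

    Small on-chip fields count packets and bytes; when either field would
    overflow, both subtotals are added to the large off-chip totals and the
    on-chip fields are cleared.
    """
    def __init__(self):
        self.pkt_sub = 0             # first set of bits: packet subtotal
        self.byte_sub = 0            # second set of bits: byte subtotal
        self.total_pkts = 0          # stands in for the external memory
        self.total_bytes = 0

    def _flush(self):
        self.total_pkts += self.pkt_sub
        self.total_bytes += self.byte_sub
        self.pkt_sub = self.byte_sub = 0

    def count(self, pkt_len):
        # flush first if either field would overflow its bit width
        if self.pkt_sub + 1 >= (1 << PKT_BITS) or \
           self.byte_sub + pkt_len >= (1 << BYTE_BITS):
            self._flush()
        self.pkt_sub += 1
        self.byte_sub += pkt_len
```

The design point is that wide counters live off-chip, while the hot per-packet increment touches only a few on-chip bits.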
20130322459 | ROUTER AND MANY-CORE SYSTEM - According to one embodiment, a router includes a plurality of input ports and a plurality of output ports. The input ports receive a packet including control information indicating a type of access. Each of the input ports includes a first buffer and a second buffer which store the packet. The output ports output the packet. Each of the input ports selects at least one of the first buffer and the second buffer as a buffer in which the packet is stored on the basis of the control information and a state of the output port serving as a destination port of the packet. | 12-05-2013 |
20130329747 | Dynamical Bandwidth Adjustment Of A Link In A Data Network - An apparatus includes a first node configured to receive the data packets from a plurality of source nodes of the data network and to selectively route some of the received data packets to a link via a port of the first node. The apparatus also includes a link-input buffer that is located in the first node and is configured to store the some of the received data packets for transmission to the link via the port. The first node is configured to power off hardware for transmitting received data packets to the link in response to a fill level of the link-input buffer being below a threshold. | 12-12-2013 |
20130336332 | SCALING OUTPUT-BUFFERED SWITCHES - The systems and methods described herein allow for the scaling of output-buffered switches by decoupling the data path from the control path. Some embodiment of the invention include a switch with a memory management unit (MMU), in which the MMU enqueues data packets to an egress queue at a rate that is less than the maximum ingress rate of the switch. Other embodiments include switches that employ pre-enqueue work queues, with an arbiter that selects a data packet for forwarding from one of the pre-enqueue work queues to an egress queue. | 12-19-2013 |
20130336333 | EXTERNAL JITTER BUFFER IN A PACKET VOICE SYSTEM - A packet voice communication system having a jitter buffer external to a voice processor. The jitter buffer stores voice packets received from a packet network. The voice processor processes the voice packets from the jitter buffer. A jitter buffer processor may place an indicator in each voice packet it holds. The indicator can indicate a length of time the voice packet was held. The rate at which packets come from the jitter buffer may be based upon the indicator, a higher rate if holding times are high and a slower rate if low. The voice processor can store the voice packets in a packet queue prior to processing the voice packets. The rate voice packets come to the voice processor may be based upon how full the packet queue is, a higher rate if the packet queue is relatively empty and a slower rate if relatively full. | 12-19-2013 |
20130343398 | PACKET-BASED COMMUNICATION SYSTEM WITH TRAFFIC PRIORITIZATION - A method is provided for handling packets at a queuing point in a packet-based communication system that handles the packets, when each of the packets is assigned one of a plurality of service priorities. At least one discard threshold is assigned to each of the service priorities, and when one of the packets is delivered to the queuing point, a count of the total number of packets or bytes stored in a queue at the queuing point is maintained. That count is compared with a selected discard threshold associated with the service priority assigned to the packet delivered to the queuing point, and that packet is selectively discarded if the count reaches the selected discard threshold. Packets having different service priorities may be stored in the queue. | 12-26-2013 |
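The per-priority discard check in 20130343398 above amounts to a threshold compare on enqueue; the threshold values and names below are hypothetical:

```python
# Hypothetical thresholds: a higher service priority tolerates a fuller queue.
DISCARD_THRESHOLDS = {0: 4, 1: 8, 2: 12}   # priority -> max queued packets

def enqueue(queue, packet, priority):
    """Tail-drop with per-priority discard thresholds.

    Compare the shared queue's occupancy against the threshold of the
    arriving packet's service priority and discard if it has been reached.
    Returns True if the packet was queued, False if discarded.
    """
    if len(queue) >= DISCARD_THRESHOLDS[priority]:
        return False
    queue.append((priority, packet))
    return True
```

Because all priorities share one queue but compare against different thresholds, low-priority arrivals are shed first as the queue fills, while high-priority packets are still admitted.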
20130343399 | OFFLOADING VIRTUAL MACHINE FLOWS TO PHYSICAL QUEUES - The present invention extends to methods, systems, and computer program products for offloading virtual machine flows to physical queues. A computer system executes one or more virtual machines, and programs a physical network device with one or more rules that manage network traffic for the virtual machines. The computer system also programs the network device to manage network traffic using the rules. In particular, the network device is programmed to determine availability of one or more physical queues at the network device that are usable for processing network flows for the virtual machines. The network device is also programmed to identify network flows for the virtual machines, including identifying characteristics of each network flow. The network device is also programmed to, based on the characteristics of the network flows and based on the rules, assign one or more of the network flows to at least one of the physical queues. | 12-26-2013 |
20140003445 | NETWORK APPLICATION VIRTUALIZATION METHOD AND SYSTEM | 01-02-2014 |
20140023085 | PACKET ROUTER HAVING A HIERARCHICAL BUFFER STRUCTURE - A packet-router architecture in which buffer modules are interconnected by one or more interconnect fabrics and arranged to form a plurality of hierarchical buffer levels, with each higher buffer level having more buffer modules than a corresponding lower buffer level. An interconnect fabric is configured to connect three or more respective buffer modules, with one of these buffer modules belonging to one buffer level and the other two or more buffer modules belonging to a next higher buffer level. A buffer module is configured to implement a packet queue that (i) enqueues received packets at the end of the queue in the order of their arrival to the buffer module, (ii) dequeues packets from the head of the queue, and (iii) advances packets toward the head of the queue when the buffer module transmits one or more packets to the higher buffer level or to a respective set of output ports connected to the buffer module. | 01-23-2014 |
20140023086 | Backplane Interface Adapter with Error Control and Redundant Fabric - A backplane interface adapter with error control and redundant fabric for a high-performance network switch. The error control may be provided by an administrative module that includes a level monitor, a stripe synchronization error detector, a flow controller, and a control character presence tracker. The redundant fabric transceiver of the backplane interface adapter improves the adapter's ability to properly and consistently receive narrow input cells carrying packets of data and output wide striped cells to a switching fabric. | 01-23-2014 |
20140036928 | Short Packet Transmission - Disclosed are various embodiments that provide short packet transmission by a network interface controller (NIC). The NIC may receive a signal indicating that a set of buffer descriptors is available for fetching from a host device. The NIC is configured to fetch the set of buffer descriptors from the host device, the set of buffer descriptors comprising a control flag, the control flag indicating whether the set of buffer descriptors comprises immediate packet data; and the NIC may transmit the immediate packet data as a transmit packet if the control flag indicates that the set of buffer descriptors comprises immediate packet data. | 02-06-2014 |
20140036929 | Phase-Based Packet Prioritization - A network node comprises a receiver configured to receive a first packet, a processor coupled to the receiver and configured to process the first packet, and prioritize the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase, and a transmitter coupled to the processor and configured to transmit the first packet. An apparatus comprises a processor coupled to the memory and configured to generate instructions for a packet prioritization scheme, wherein the scheme assigns priority to packet transactions based on closeness to completion, and a memory coupled to the processor and configured to store the instructions. A method comprises receiving a first packet, processing the first packet, prioritizing the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase, and transmitting the first packet. | 02-06-2014 |
20140056311 | RELAY WITH EFFICIENT SERVICE CHANGE HANDLING - A relay device with efficient service change handling, and a method therefor, are provided. The relay comprises: a processor; a memory; a communication interface; and a plurality of connection objects, each of the plurality of connection objects comprising a respective queue of messages, each of the messages for relay in association with respective devices via the communication interface, the processor enabled to maintain, in the memory, a cache of associations between respective identifiers of the connection objects and identifiers associated with respective messages respectively queued therein; receive an indication of a service change to a given device; determine, from the cache, a subset of the plurality of connection objects comprising given messages associated with the given device; and communicate only with the subset to apply an action associated with the service change to the given messages, while ignoring the remaining connection objects. | 02-27-2014 |
20140064291 | Single Producer, Single Consumer Lockless FIFO/LIFO Queue - A queue inserter receives data elements having individual priority types for placement in a queue, and utilizes the priority types of the received data elements to determine placement in the queue relative to an initial location established when a first data element is placed in an empty queue, in order to manage the queue with a combination of first-in first-out and last-in first-out queue functionality. | 03-06-2014 |
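The combined FIFO/LIFO placement in 20140064291 above can be illustrated with a double-ended queue; this sketch shows only the placement policy and omits the lockless single-producer/single-consumer machinery the patent is actually about (the names and priority labels are invented):

```python
from collections import deque

class FifoLifoQueue:
    """Single queue with combined FIFO/LIFO behavior (illustrative sketch).

    Elements tagged 'normal' are appended behind the initial location and
    consumed first-in first-out; elements tagged 'urgent' are pushed in
    front of it and consumed last-in first-out, mimicking priority-type
    placement relative to the first element inserted.
    """
    def __init__(self):
        self._q = deque()

    def insert(self, item, priority):
        if priority == "urgent":
            self._q.appendleft(item)   # LIFO side: newest urgent item pops first
        else:
            self._q.append(item)       # FIFO side: normal items keep arrival order

    def pop(self):
        return self._q.popleft()
```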
20140064292 | Switching to a Protection Path Without Causing Packet Reordering - In one embodiment, a working path through a packet switched network is protected by a protection path. In response to a switchover condition, a packet switching device ceases to enqueue packets for sending over the current working path. Packets are instead enqueued for sending over the protection path, with a delay of a predetermined duration before the device begins to dequeue and send packets over the protection path. A sending packet switching device, by delaying an appropriate predetermined duration, can guarantee that the protection switching operation will induce neither packet reordering nor packet loss. This predetermined delay is calculated, possibly based on measurements, from different component delays of sending packets over the working and protection paths. For example, these component delays typically include latency within the sending device, latency of communications between the sending device and the destination, and latency within the destination. | 03-06-2014 |
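The predetermined delay in 20140064292 above is essentially the latency difference between the two paths: if the protection path is faster, the sender must wait out the difference so the last working-path packet arrives first. A minimal sketch, assuming end-to-end latencies are known or measured (the function name and the optional guard term are assumptions):

```python
def protection_switch_delay(work_path_latency, prot_path_latency, guard=0.0):
    """Delay before dequeuing on the protection path (illustrative sketch).

    If the working path is slower, wait out the latency difference so that
    in-flight working-path packets land before the first protection-path
    packet; otherwise no delay is needed. `guard` is an optional safety
    margin on top of the computed difference.
    """
    return max(0.0, work_path_latency - prot_path_latency) + guard
```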
20140064293 | FAST DATA PACKET TRANSFER OPERATIONS - A fast send method may be selectively implemented for certain data packets received from an application for transmission through a network interface. When the fast send method is triggered for a data packet, the application requesting transmission of the data packet may be provided a completion notice nearly immediately after the data packet is received. The fast send method may be used for data packets similar to previously-transmitted data packets for which the information in the data packet is already vetted. For example, a data packet with a similar source address, destination address, source port, destination port, application identifier, and/or activity identifier may have already been vetted. | 03-06-2014 |
20140064294 | THROTTLING FOR FAST DATA PACKET TRANSFER OPERATIONS - A fast send method may be selectively implemented for certain data packets received from an application for transmission through a network interface. When the fast send method is triggered for a data packet, the application requesting transmission of the data packet may be provided a completion notice nearly immediately after the data packet is received. The fast send method may be used for data packets similar to previously-transmitted data packets for which the information in the data packet is already vetted. For example, a data packet with a similar source address, destination address, source port, destination port, application identifier, and/or activity identifier may have already been vetted. Data packets sent through the fast send method may be throttled to prevent one communication stream from blocking out other communication streams. For example, every nth data packet queued for the fast send method may be transmitted by a slow send method. | 03-06-2014 |
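The every-nth-packet throttle in 20140064294 above reduces to a modular counter per stream; a minimal sketch with hypothetical names:

```python
class FastSendThrottler:
    """Route every nth packet of a stream through the slow path (sketch).

    Fast-send packets skip full vetting, so to keep one stream from
    monopolizing the interface, each stream keeps a counter and every nth
    packet is diverted to the fully-vetted slow send path.
    """
    def __init__(self, n):
        self.n = n
        self.sent = 0

    def choose_path(self):
        self.sent += 1
        return "slow" if self.sent % self.n == 0 else "fast"
```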
20140064295 | SOCKET TABLES FOR FAST DATA PACKET TRANSFER OPERATIONS - A fast send method may be selectively implemented for certain data packets received from an application for transmission through a network interface. When the fast send method is triggered for a data packet, the application requesting transmission of the data packet may be provided a completion notice nearly immediately after the data packet is received. The fast send method may be used for data packets similar to previously-transmitted data packets for which the information in the data packet is already vetted. For example, a data packet with a similar source address, destination address, source port, destination port, application identifier, and/or activity identifier may have already been vetted. A socket table may be maintained listing previously-transmitted data packets and an instruction for handling additional data packets similar to the data packet entered in the socket table. | 03-06-2014 |
20140064296 | Method And Apparatus For Performing Finite Memory Network Coding In An Arbitrary Network - Techniques for performing finite memory network coding in an arbitrary network limit an amount of memory that is provided within a node of the network for the performance of network coding operations during data relay operations. When a new data packet is received by a node, the data stored within the limited amount of memory may be updated by linearly combining the new packet with the stored data. In some implementations, different storage buffers may be provided within a node for the performance of network coding operations and decoding operations. | 03-06-2014 |
20140064297 | COMMUNICATION DEVICE, COMMUNICATION METHOD, AND COMMUNICATION SYSTEM - A communication device includes: first and second memories configured to store first and second packets in first and second queues, respectively; a processor configured to: select a packet to be transmitted by selecting the first packet in priority to the second packet, read the selected packet from the first or second queue, and detect the first packet stored in the first queue during reading of the second packet from the second queue; and a third memory configured to hold copied data relating to the second packet, wherein when detecting the first packet, the processor is configured to cause an internal or external part of the communication device to discard the currently read second packet, read the first packet stored in the first queue, and read the copied data from the third memory after completion of the reading of the first packet. | 03-06-2014 |
20140064298 | DATA TRANSMISSION DEVICE AND DATA TRANSMISSION METHOD - A data transmission device includes a packet storing unit that temporarily retains therein multiple data packets. The data transmission device includes a top location instructing unit that indicates a location in the packet storing unit to retain a newly created data packet. The data transmission device includes a location information storing unit that has a plurality of entries storing therein a top location of the data packets stored in the packet storing unit. | 03-06-2014 |
20140064299 | REFRESHING BLOCKED MEDIA PACKETS FOR A STREAMING MEDIA SESSION OVER A WIRELESS NETWORK IN A STALL CONDITION - A method for refreshing blocked media packets for a streaming media session over a wireless network in a stall condition is disclosed. The method can include a wireless communication device maintaining a buffer at an application layer. The buffer can contain at least a portion of media packets provided to a baseband layer by the application layer for transmission. Media packets provided to the baseband layer can be queued in a baseband queue prior to transmission. The method can further include the wireless communication device generating at least one new media packet for the streaming media session during the stall condition; flushing at least a portion of the media packets queued in the baseband queue; and replenishing the baseband queue by providing the baseband layer with at least a portion of the media packets contained in the buffer and at least one new media packet. | 03-06-2014 |
20140071993 | TRANSFER DEVICE AND TRANSFER METHOD - A transfer device increments a value of a phase ID at predetermined time intervals, and registers a packet ID of a transmitted data packet and a phase ID on a determination table in an associated manner. When having received a response packet from a receiving-side transfer device, the transfer device determines an unarrived packet on the basis of received packet IDs contained in the received response packet and packet IDs of transmitted data packets. Then, the transfer device determines whether a data packet corresponding to the unarrived packet is lost or on-the-fly from a relationship between a phase ID of the unarrived packet and the maximum phase contained in the received response packet, and retransmits the corresponding data packet only if it is lost. | 03-13-2014 |
20140086258 | Buffer Statistics Tracking - The systems and methods disclosed herein allow for a switch (in a packet-switching network) to track buffer statistics, and trigger an event, such as a hardware interrupt or a system snapshot, in response to the buffer statistics reaching a threshold that may indicate an impending problem. Since the switch itself triggers the event to alert the network administrator, the network administrator no longer needs to sift through mountains of data to identify potential problems. Also, since the switch triggers the event prior to a problem arising, the network administrator can provide remedial action prior to a problem occurring. This type of event-triggering mechanism makes the administration of packet-switching networks more manageable. | 03-27-2014 |
20140086259 | METHOD AND SYSTEM FOR WEIGHTED FAIR QUEUING - A system for scheduling data for transmission in a communication network includes a credit distributor and a transmit selector. The communication network includes a plurality of children. The transmit selector is communicatively coupled to the credit distributor. The credit distributor operates to grant credits to at least one of eligible children and children having a negative credit count. Each credit is redeemable for data transmission. The credit distributor further operates to affect fairness between children with ratios of granted credits, maintain a credit balance representing a total amount of undistributed credits available, and deduct the granted credits from the credit balance. The transmit selector operates to select at least one eligible and enabled child for dequeuing, bias selection of the eligible and enabled child to an eligible and enabled child with positive credits, and add credits to the credit balance corresponding to an amount of data selected for dequeuing. | 03-27-2014 |
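The credit mechanics described above — granting credits in weight ratios, biasing selection toward children with positive credits, and deducting credits on dequeue — can be sketched roughly as below. The class and method names are assumptions for illustration, not the patent's design.

```python
class CreditScheduler:
    """Minimal sketch: children earn credits in proportion to their
    weights; selection is biased toward positive-credit children;
    dequeuing spends credits (possibly going negative)."""

    def __init__(self, weights):
        self.weights = weights                     # child -> weight
        self.credits = {c: 0 for c in weights}

    def distribute(self, total):
        # grant credits in the ratio of the children's weights
        wsum = sum(self.weights.values())
        for c, w in self.weights.items():
            self.credits[c] += total * w // wsum

    def select(self, eligible):
        # prefer eligible children that still hold positive credits
        positive = [c for c in eligible if self.credits[c] > 0]
        pool = positive or list(eligible)
        return max(pool, key=lambda c: self.credits[c])

    def dequeue(self, child, nbytes):
        # deduct the amount of data dequeued from the child's credits
        self.credits[child] -= nbytes
```

A child that overdraws its credits (negative count) loses selection preference until fresh credits are distributed.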
20140092914 | METHOD AND SYSTEM FOR INTELLIGENT DEEP PACKET BUFFERING - Disclosed is a method and system for deep packet buffering on a switch core comprising an ingress and egress deep packet buffer and an external deep packet buffer. | 04-03-2014 |
20140092915 | Channel Service Manager - A system and method are provided for prioritizing network processor information flow in a channel service manager (CSM). The method receives a plurality of information streams on a plurality of input channels, and selectively links input channels to CSM channels. The information streams are stored, and the stored information streams are mapped to a processor queue in a group of processor queues. Information streams are supplied from the group of processor queues to a network processor in an order responsive to a ranking of the processor queues inside the group. More explicitly, selectively linking input channels to CSM channels includes creating a fixed linkage between each input port and an arbiter in a group of arbiters, and scheduling information streams in response to the ranking of the arbiter inside the group. Finally, a CSM channel is selected for each information stream scheduled by an arbiter. | 04-03-2014 |
20140098822 | Port Mirroring at a Network Interface Device - A notification from a source host is received at a network interface device that indicates that a data packet is ready for transmission to a destination host. The data packet may be transmitted to the destination host via the network interface device, and a first completion queue event is generated. The first completion queue event may be used as a trigger to re-transmit the data packet to a port mirroring destination via the network interface device. In another example, a network interface device receives a data packet transmitted from a source host to a destination host. A first completion queue event is generated based on the receipt of the packet, and is used as a trigger to re-transmit the data packet to a port mirroring destination via the network interface device. | 04-10-2014 |
20140098823 | Ensuring Any-To-Any Reachability with Opportunistic Layer 3 Forwarding in Massive Scale Data Center Environments - Techniques are provided for updating routing tables of switch devices. At a first switch device of a first rack unit in a network, information is received about addresses of host devices in the network. The addresses are stored in a software cache. A packet is received from a first host device assigned to a first subnet and housed in the first rack unit. The packet is destined for a second host device assigned to a second subnet and housed in a second rack unit in the network. The packet is initially forwarded using the subnet entry, and forwarding may remain sub-optimal during the period before an entry can be installed from the software cache. The software cache is evaluated to determine the address of the second host device, and the packet is then forwarded optimally. This ensures any-to-any communication in the network, initially sub-optimally and subsequently optimally. | 04-10-2014 |
20140098824 | INTEGRATED CIRCUIT DEVICE AND METHOD OF PERFORMING CUT-THROUGH FORWARDING OF PACKET DATA - An integrated circuit device comprises a cut-through forwarding module. The cut-through forwarding module comprises at least one receiver component arranged to receive data to be forwarded, and at least one transmitter component arranged to transmit data stored within at least one transmitter buffer thereof. The cut-through forwarding module further comprises at least one delimiter component arranged to trigger a transmission of frame data within the at least one transmitter buffer, upon receipt of a first number of data elements of a respective data frame by the at least one receiver component, the first number of data elements comprising a first predefined integer value. | 04-10-2014 |
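The cut-through trigger described above — transmission of frame data begins once a first predefined number of data elements has been received, rather than after the whole frame — can be illustrated with a small timing sketch. The function below is a hypothetical model (ticks are arrival indices), not the patented circuit.

```python
def transmit_ticks(frame_len, threshold):
    """Tick at which each element of a frame can go on the wire:
    elements arriving before the trigger wait until `threshold`
    elements have been received; later elements go out on arrival."""
    return [max(i, threshold - 1) for i in range(frame_len)]
```

For a 5-element frame with a threshold of 3, the first three elements all leave at tick 2 (when the trigger fires), and the remaining elements are forwarded as they arrive.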
20140105218 | QUEUE MONITORING TO FILTER THE TREND FOR ENHANCED BUFFER MANAGEMENT AND DYNAMIC QUEUE THRESHOLD IN 4G IP NETWORK/EQUIPMENT FOR BETTER TRAFFIC PERFORMANCE - A method for dynamic queue management using a low latency feedback control loop created based on the dynamics of a network during a very short time scale is implemented in a network element. The network element includes a plurality of queues for buffering data traffic to be processed by the network element. The method includes receiving a data packet, a classification of the data packet, and identification of a destination for the data packet. The data packet is assigned to a queue according to the classification and the destination. A queue bandwidth utilization, a total buffer usage level, and a buffer usage of the assigned queue are determined as a set of parameters. A look-up of a dynamic queue threshold using at least two parameters from the set of parameters is performed, and the dynamic queue threshold is applied for admission control to the assigned queue in the shared buffer. | 04-17-2014 |
20140105219 | Pre-fill Retransmission Queue - A method of discontinuous transmission data communication in a digital subscriber line (DSL) transceiver unit, the method comprising determining that a number of a plurality of bits available to transmit is enough to fill a data transfer unit (DTU), forming a DTU, by a DTU framer, comprising the plurality of bits, transferring the DTU to a retransmission queue, and determining the DTUs from the retransmission queue to be transmitted over a next time period used for transmitting over the DSL subscriber line by the DSL transceiver unit. | 04-17-2014 |
20140140351 | APPARATUS, SYSTEM AND METHOD FOR THE TRANSMISSION OF DATA WITH DIFFERENT QoS ATTRIBUTES - An apparatus, system and method are provided for transmitting data from logical channel queues over a telecommunications link, each of the logical channel queues capable of being associated with quality of service attributes, the method including determining available resources for transmission over the telecommunications link in a frame; selecting one of the logical channel queues based on a first one of the quality of service attributes; packaging data from the selected one of the logical channel queues until one of: a second one of the quality of service attributes for the selected one of the logical channel queues is satisfied, the available resources are used, or the selected one of the logical channel queues is empty; and repeating the selecting step and the packaging step for remaining ones of the logical channel queues. | 05-22-2014 |
20140146829 | FORWARDING CELLS OF PARTITIONED DATA THROUGH A THREE-STAGE CLOS-NETWORK PACKET SWITCH WITH MEMORY AT EACH STAGE - Examples are disclosed for forwarding cells of partitioned data through a three-stage memory-memory-memory (MMM) input-queued Clos-network (IQC) packet switch. In some examples, each module of the three-stage MMM IQC packet switch includes a virtual queue and a manager that are configured in cooperation with one another to forward a cell from among cells of partitioned data through at least a portion of the switch. The cells of partitioned data may have been partitioned and stored at an input port for the switch and have a destination of an output port for the switch. | 05-29-2014 |
20140146830 | TRANSMISSION CIRCUIT, RECEPTION CIRCUIT, TRANSCEIVER SYSTEM, AND METHOD FOR CONTROLLING THE TRANSCEIVER SYSTEM - A transmission circuit includes a buffer that stores a packet transmitted to a reception circuit, a processing unit that reads a retransmission target packet from the buffer, when a retransmission request including processing pattern information that specifies processing to a packet is received from the reception circuit, and that performs the processing specified in the processing pattern information on the retransmission target packet, and an output unit that outputs a processed retransmission target packet to the reception circuit. | 05-29-2014 |
20140146831 | Queue Scheduling Method and Apparatus - Embodiments of the present invention disclose a queue scheduling method and apparatus, which can not only implement scheduling of a large number of queues, but also ensure that the queues uniformly send service data. The method includes: determining whether service data exists in each to-be-scheduled data queue and determining whether the to-be-scheduled data queues are allowed to send data; if it is determined that the service data exists in the to-be-scheduled data queues and the to-be-scheduled data queues are allowed to send data, placing queue marks of the to-be-scheduled data queues into a mark queue; scheduling queue marks of the to-be-scheduled data queues from the mark queue in sequence, scheduling the to-be-scheduled data queues corresponding to the queue marks, and enabling the to-be-scheduled data queues corresponding to the queue marks to send service data not exceeding predetermined data amounts. | 05-29-2014 |
20140153581 | PRIORITY-BASED BUFFER MANAGEMENT - Media units are stored in a buffer, wherein an importance rating is assigned to each of the media units. At least some of the media units are selectively flushed from the buffer based on the importance rating. | 06-05-2014 |
20140161135 | Output Queue Latency Behavior For Input Queue Based Device - In one implementation, an input queue switch provides latency fairness across multiple input ports and multiple output ports. In one embodiment, each input port maintains a virtual output queue for each associate output port. The virtual output queues across multiple inputs are aggregated for each specific output port. The sum of the lengths of the virtual output queues is compared to a threshold, and based on the comparison, feedback may be generated to control the operation of the input port for subsequent packets. The feedback may instruct the input port to stop buffering or drop packets destined for the output port with the sum of the lengths of the virtual output queues associated to the specific output port that exceeds the threshold. In another embodiment, each packet has an arrival timestamp, and a virtual output queue having the oldest timestamp is selected first to dequeue. | 06-12-2014 |
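The aggregation step described above — summing the virtual output queue (VOQ) lengths across all input ports for one output port and comparing against a threshold — can be sketched as below. The function name and the "stop"/"ok" feedback values are assumptions for illustration.

```python
def voq_feedback(per_input_voq_lengths, output_port, threshold):
    """Sum the VOQ lengths for `output_port` across every input port;
    signal the inputs to stop buffering (or drop) when the aggregate
    exceeds the threshold."""
    total = sum(lengths[output_port] for lengths in per_input_voq_lengths)
    return "stop" if total > threshold else "ok"
```

Each element of `per_input_voq_lengths` maps output ports to the current VOQ depth at one input port.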
20140169383 | SEAMLESS SWITCHING FOR MULTIHOP HYBRID NETWORKS - Seamless path switching is made possible in a multi-hop network based upon stream marker packets and additional path distinguishing operations. A device receiving out-of-order packets on the same ingress interface is capable of determining a proper order for the incoming packets having different upstream paths. Packets may be reordered at a relay device or a destination device based upon where a path update is initiated. Reordering packets from the various upstream paths may be dependent upon a type of service associated with the packet. | 06-19-2014 |
20140169384 | HIERARCHICAL PROFILED SCHEDULING AND SHAPING - Various exemplary embodiments relate to a method and related network node including one or more of the following: determining, by the network node, that a port of the network node is ready to receive a packet; identifying a packet having a highest packet priority among a plurality of packets received via a plurality of interfaces, wherein the step of identifying includes, for each of a plurality of components at a first hierarchy level: identifying a first level highest priority packet among a plurality of packets available to the component, based on a packet priority associated with each of the plurality of packets available to the component, sharing the packet priority of the first level highest priority packet with at least one component at a second hierarchy level; and transmitting the packet having the highest priority to the port. | 06-19-2014 |
20140177643 | PARALLEL PROCESSING USING MULTI-CORE PROCESSOR - Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses. | 06-26-2014 |
20140185628 | DEADLINE AWARE QUEUE MANAGEMENT - A method for managing data traffic operating on a deadline is provided. The method includes receiving, on an intermediate node, a packet having one or more traffic characteristics. The method also includes evaluating, on the intermediate node, the one or more traffic characteristics to determine a priority of the packet. The method also includes selecting one of multiple queues on the intermediate node based on the determined priority. The method also includes processing, on the intermediate node, the packet based on the determined priority. The method also includes enqueuing the processed packet into the selected queue. The method further includes outputting the queued packet from the selected queue. | 07-03-2014 |
20140185629 | QUEUE PROCESSING METHOD - A method of processing data packets, each data packet being associated with one of a plurality of entities. The method comprises storing a data packet associated with a respective one of said plurality of entities in a buffer, storing state parameter data associated with said stored data packet, the state parameter data being based upon a value of a state parameter associated with said respective one of said plurality of entities, and processing a data packet in said buffer based upon said associated state parameter data. | 07-03-2014 |
20140192819 | PACKET EXCHANGING DEVICE, TRANSMISSION APPARATUS, AND PACKET SCHEDULING METHOD - A packet exchanging device includes queues each configured to accumulate one or more packets, a scheduler unit configured to give a certain permissible reading amount indicating amounts of data of readable packets to each of the queues, and a reading processing unit configured to read the one or more packets from the queues by the permissible reading amount in an order in which a reading condition regarding the permissible reading amount for each queue and an amount of data in the one or more packets accumulated in each queue is satisfied. | 07-10-2014 |
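The "permissible reading amount" scheme described above resembles deficit round robin scheduling; the sketch below is written under that assumption (queue entries are packet lengths in bytes) and is not the patent's exact method.

```python
from collections import deque

def deficit_round_robin(queues, quantum, rounds):
    """Each round, every non-empty queue earns `quantum` bytes of
    permissible reading amount and sends head packets while they fit;
    unused allowance carries over to the next round."""
    qs = [deque(q) for q in queues]
    deficits = [0] * len(qs)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(qs):
            if not q:
                deficits[i] = 0          # empty queues accrue nothing
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent
```

A 500-byte packet that does not fit in one round's allowance waits until the carried-over deficit covers it.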
20140219287 | VIRTUAL SWITCHING BASED FLOW CONTROL - Flow control of data packets in a network may be enabled to at least one side of a virtual switching interface to provide a lossless environment. In some embodiments, wherever two buffer queues are in communication with at least one buffer queue being connected to a virtual switching interface, flow control may be used to determine if a threshold has been exceeded in one of the buffer queues. When exceeded, the transmission of data packets may cease to one of the buffer queues to prevent packet dropping and loss of data. | 08-07-2014 |
20140219288 | Packet Switch and Switching Method for Switching Variable Length Packets - A packet switch for switching variable length packets, wherein each of output port interfaces includes a buffer memory for storing transmission packets, a transmission priority controller for classifying, based on a predetermined algorithm, transmission packets passed from a packet switching unit into a plurality of queue groups to which individual bandwidths are assigned respectively, and queuing said transmission packets in said buffer memory so as to form a plurality of queues according to transmission priority in each of said queue groups, and a packet read-out controller for reading out said transmission packets from each of said queue groups in the buffer memory according to the order of transmission priority of the packets while guaranteeing the bandwidth assigned to the queue group. | 08-07-2014 |
20140233583 | PACKET PROCESSING WITH REDUCED LATENCY - Generally, this disclosure provides devices, methods and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation. | 08-21-2014 |
20140247833 | PRIORITIZATION OF DATA PACKETS - A method of operating a telecommunications node. | 09-04-2014 |
20140269748 | METHOD AND APPARATUS HAVING IMPROVED LINE RATE IP PACKET COMMUNICATION - An apparatus having improved line rate communication. A Media Access Controller (MAC) accesses each reference pointer stored in transmission slots of a first sub-queue of a transmission queue. Notably, each reference pointer is indexed to a shared memory frame. The MAC transmits data from the shared memory frame in response to accessing the reference pointer, and triggers at least one interrupt when each reference pointer of the first sub-queue is accessed at least once. A processor and/or the MAC can mark, in response to the at least one interrupt, each transmission slot of the first sub-queue as ready for transmission. | 09-18-2014 |
20140269749 | Timestamp Estimation and Jitter Correction Using Downstream FIFO Occupancy - First, a packet may be received and a timestamp value may be placed on the packet. The timestamp value may comprise a place time value comprising a time when the timestamp was placed on the packet plus a delay time value comprising an estimated time delay between when the timestamp was placed on the packet and when the packet leaves a port exit. Next, the packet may be sent to a first in first out (FIFO) memory. The packet may then be sent from the FIFO memory out the port exit. | 09-18-2014 |
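The timestamp estimate described above — place time plus an estimated residence delay derived from downstream FIFO occupancy — can be sketched as a simple occupancy-over-drain-rate calculation. The function and its parameters are illustrative assumptions, not the patent's formula.

```python
def estimated_egress_timestamp(place_time_s, fifo_occupancy_bytes, drain_rate_bps):
    """Stamp = time the timestamp was placed plus the estimated delay
    until the packet leaves the port exit, approximated as the bytes
    already queued in the downstream FIFO divided by its drain rate."""
    delay_s = fifo_occupancy_bytes * 8 / drain_rate_bps
    return place_time_s + delay_s
```

With 1250 bytes queued ahead of the packet and a 10 kbit/s drain rate, the estimated delay is exactly one second.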
20140286349 | COMMUNICATION DEVICE AND PACKET SCHEDULING METHOD - A communication device, includes: a plurality of queues each configured to accumulate a packet; a scheduler configured to provide a permissible readout amount to each of the plurality of queues in accordance with an order that is based on a priority of each queue; a read processor configured to read out the packet from the plurality of queues, the permissible readout amount being consumed according to amount of the packets read out; and an accumulation amount counter configured to count an accumulation amount of the packets accumulated in each of the plurality of queues, wherein the accumulation amount counter notifies the scheduler of a change in the accumulation amount, and wherein the scheduler adjusts the priority of, among the plurality of queues, the queue of which the accumulation amount has changed, in response to the notification from the accumulation amount counter. | 09-25-2014 |
20140286350 | Switching Method - A method for providing identifiers for virtual devices in a network. The method comprises receiving a discovery data packet directed to a physical network node associated with a physical endpoint device. A response to the discovery data packet directed to a physical network node is provided, the response comprising an identifier of a virtual device. At least one further discovery data packet directed at least to said virtual device is received. A response to a first one of the further discovery data packets is provided, the response comprising an identifier of a virtual endpoint device. At least some functionality of the virtual endpoint device is provided by the physical endpoint device. | 09-25-2014 |
20140294013 | SYSTEM AND METHOD FOR NETWORK PROVISIONING - Implementations described and claimed herein provide a system for provisioning network resources. The system includes a network provisioning abstraction layer having an application interface for receiving network provisioning requests from applications and determining provisioning instructions for fulfilling the requests. Each of the received provisioning instructions is queued in a priority queuing system according to its request priority. The provisioning instructions for the highest-priority requests are removed from the front of the queue and sent to a resource interface that relays the requests to the appropriate network resources. | 10-02-2014 |
20140294014 | QUEUE SPEED-UP BY USING MULTIPLE LINKED LISTS - One embodiment of the present invention provides a switch that includes a transmission mechanism configured to transmit frames stored in a queue, and a queue management mechanism configured to store frames associated with the queue in a number of sub-queues which allow frames in different sub-queues to be retrieved independently, thereby facilitating parallel processing of the frames stored in the sub-queues. | 10-02-2014 |
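The sub-queue idea described above — one logical queue backed by several independently retrievable sub-queues — can be sketched with plain lists standing in for the linked lists. Class and method names are hypothetical.

```python
class SubQueues:
    """One logical queue backed by k sub-queues; enqueue round-robins
    frames across the sub-queues so dequeuers can drain different
    sub-queues independently (enabling parallel processing)."""

    def __init__(self, k):
        self.subs = [[] for _ in range(k)]
        self.next_sub = 0

    def enqueue(self, frame):
        self.subs[self.next_sub].append(frame)
        self.next_sub = (self.next_sub + 1) % len(self.subs)

    def dequeue_from(self, sub):
        # each sub-queue can be retrieved from without touching the others
        return self.subs[sub].pop(0) if self.subs[sub] else None
```

With k=2, frames 1..4 land alternately in the two sub-queues, and each sub-queue can be consumed by a separate worker.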
20140341228 | NETWORK NODE AND PACKET CONTROL METHOD - To control the timing of transmitting real-time packets, a network node for transferring a packet is provided, comprising: ports for inputting and outputting a packet to be transferred; a buffer memory for temporarily storing the input packet; a search engine for determining a port from which the input packet is output; and a timing adjusting unit for adjusting a difference in the period of time needed from reception of a specific type of packet to transmission thereof. | 11-20-2014 |
20140348177 | MANAGING FLOW CONTROL BUFFER - A count of data segments is maintained. The count includes data segments in a queue and data segments in transit between a data source and the queue. A flow of data segments from the data source is controlled, based on a value of the count. | 11-27-2014 |
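The counting scheme described above — a single count covering both segments sitting in the queue and segments still in transit from the source — can be sketched as a small window controller. The class name and window semantics are assumptions for illustration.

```python
class FlowController:
    """Maintain count = queued + in-flight segments; the source may
    send only while the count is below the window."""

    def __init__(self, window):
        self.window = window
        self.count = 0

    def on_send(self):
        # source emitted a segment: it is now in transit toward the queue
        self.count += 1

    def on_dequeue(self):
        # consumer drained a segment from the queue
        self.count -= 1

    def may_send(self):
        return self.count < self.window
```

Counting in-flight segments as well as queued ones prevents the source from overrunning the queue with data that has been sent but not yet arrived.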
20140362867 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND METHOD FOR CONTROLLING INFORMATION PROCESSING SYSTEM - An information processing device configured to process packets received from a plurality of sources includes a buffer configured to store the packets received from the plurality of sources, a first processing unit configured to transmit, to a source of a first packet, a request to stop transmission of the first packet and configured to discard the first packet if the buffer does not have an available region for storing the first packet received, and a second processing unit configured to transmit, to the source of the first packet, a request to retransmit the first packet if the buffer has the available region. | 12-11-2014 |
20140362868 | PACKET RELAY DEVICE AND PACKET TRANSMISSION DEVICE - A packet relay device includes: a first buffer configured to store a packet; and a processor coupled to the first buffer and configured to: calculate a delay time for reading from the first buffer based on a packet length and a packet interval of the packet which is inputted to the first buffer, and delay the packet according to the calculated delay time, the packet being read from the first buffer. | 12-11-2014 |
20140369360 | SYSTEMS AND METHODS FOR MANAGING TRAFFIC IN A NETWORK USING DYNAMIC SCHEDULING PRIORITIES - A system for managing traffic in a communication network. The system includes a plurality of queues each configured to store data packets and a plurality of scheduling nodes each configured to process data packets from one or more of the plurality of queues. A scheduler is configured to schedule, using the plurality of scheduling nodes, respective transfers of the data packets from the plurality of queues. Each of the plurality of scheduling nodes is assigned to one or more of the plurality of queues. Each of the plurality of scheduling nodes and each of the plurality of queues is assigned a respective scheduling priority. The respective scheduling priorities are selectively changeable between a predetermined scheduling priority and a dynamic scheduling priority, wherein the dynamic scheduling priority corresponds to a priority propagated from the one or more of the plurality of queues. | 12-18-2014 |
20140376563 | SELECTIVELY TRANSFERRING HIGH-PRIORITY NON-AUDIO DATA OVER A QUALITY OF SERVICE CHANNEL - In an embodiment, a transmitting UE is engaged with a target UE in a communication session supported at least in part via a QoS channel on which audio traffic is primarily carried and a non-QoS channel on which non-audio traffic is carried. The transmitting UE obtains audio data and non-audio data for transmission to the target UE during the communication session, and identifies a subset of higher-priority non-audio data within the obtained non-audio data. The transmitting UE transmits a stream of packets including both the audio data and the subset of higher-priority audio data over the QoS channel instead of the non-QoS channel based on the identification. The target UE receives the stream of packets on the QoS channel, and the target UE identifies and extracts the audio data and the higher-priority non-audio data. After extraction, the target UE plays the audio data and processes the higher-priority non-audio data. | 12-25-2014 |
20140376564 | METHOD AND APPARATUS FOR IMPLEMENTING ROUND ROBIN SCHEDULING - A method and an apparatus for implementing round robin scheduling are provided. The method includes: acquiring, from a queue, original location information of elements in the queue; performing location mapping processing on the original location information of the elements in the queue based on a set algorithm to obtain mapped location information of the elements in the queue, where the set algorithm or parameters used by the set algorithm change according to a set rule during each time of round robin scheduling; and starting from an element corresponding to a set initial location, performing round robin scheduling according to mapped queue sequences corresponding to the mapped location information of the elements. The method and the apparatus for implementing round robin scheduling can reduce the cost of storage devices and can ensure a balance in scheduling of elements in a service queue. | 12-25-2014 |
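The location-mapping idea described above — remapping element positions by a rule that changes each round so no fixed position is always served first — can be illustrated with the simplest such rule, a per-round rotation. This is one assumed instance of the "set algorithm", not the patent's method.

```python
def mapped_round_robin(elements, rounds):
    """Per round, rotate the start offset so the service order begins
    at a different element each time, balancing who is served first."""
    order = []
    n = len(elements)
    for r in range(rounds):
        offset = r % n          # assumed rule: rotate by one per round
        order.append([elements[(offset + j) % n] for j in range(n)])
    return order
```

Over successive rounds each element takes a turn at the head of the service order.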
20140376565 | SUB FLOW BASED QUEUEING MANAGEMENT - Nodes and related methods are directed to grouping PDUs into sub flows based on information elements of the PDUs and selecting PDUs from sub flows according to a priority indicator calculated for each sub flow. The selected PDUs are sent in an order that provides a fairer resource sharing for real time like service sessions at the expense of bandwidth greedy applications. Reduced delay variability and/or reduced PDU loss probability may be obtained for real time like services when network transport resources are shared with bandwidth greedy services, such as file transfers over TCP. | 12-25-2014 |
20150016466 | NETWORK DEVICE AND METHOD FOR OUTPUTTING DATA TO BUS WITH DATA BUS WIDTH AT EACH CYCLE BY GENERATING END OF PACKET AND START OF PACKET AT DIFFERENT CYCLES - A method used in a network device for outputting data to a bus with a data bus width at each cycle includes: using a packet generator for generating idle data after an end of packet for a packet at a cycle and generating a start of packet for a next packet at a different cycle; and using an inter-packet gap (IPG) generator for receiving data transmitted from the packet generator, dynamically writing the received data into a buffer, and inserting a gap of idle data between the end of packet and the start of packet according to the end of packet and the idle data generated by the packet generator. | 01-15-2015 |
20150023366 | ADAPTIVE MARKING FOR WRED WITH INTRA-FLOW PACKET PRIORITIES IN NETWORK QUEUES - In one embodiment, a router receives a packet, and determines an intra-flow packet priority level of the packet. The router may then map the intra-flow packet priority level to a weighted random early detection (WRED) marking based on running statistics of intra-flow packet priority levels across received flows, and marks the packet with the mapped WRED marking. By placing the marked packet into an outgoing network queue for transmission, the router may then forward or drop the marked packet based on the network queue. | 01-22-2015 |
20150036692 | INCREASED EFFICIENCY OF DATA PAYLOADS TO DATA ARRAYS ACCESSED THROUGH REGISTERS IN A DISTRIBUTED VIRTUAL BRIDGE - A system and method for efficient transfer of data from a controlling bridge to a register of a distributed bridge element. A Load Store over Ethernet (LSoE) frame processing engine (FPE) is equipped with a repeat and a repeat with strobe function that, when coupled with an auto-increment function of an indirect register facility, allows a distributed virtual bridge to move data payload more efficiently, which decreases the data loading on the computer's data paths used for other data transfers. | 02-05-2015 |
20150049769 | SOCKET MANAGEMENT WITH REDUCED LATENCY PACKET PROCESSING - Generally, this disclosure provides systems, methods and computer readable media for management of sockets and device queues for reduced latency packet processing. The method may include maintaining a unique-list comprising entries identifying device queues and an associated unique socket for each of the device queues, the unique socket selected from a plurality of sockets configured to receive packets; busy-polling the device queues on the unique-list; receiving a packet from one of the plurality of sockets; and updating the unique-list in response to detecting that the received packet was provided by an interrupt processing module. The updating may include identifying a device queue associated with the received packet; identifying a socket associated with the received packet; and if the identified device queue is not on one of the entries on the unique-list, creating a new entry on the unique-list, the new entry comprising the identified device queue and the identified socket. | 02-19-2015 |
20150049770 | APPARATUS AND METHOD - An apparatus includes a memory; and a processor coupled to the memory and configured to: provide an acceptable read amount with a plurality of queues; discern one or more queues from among the plurality of queues as one or more prioritized queues, the one or more prioritized queues receiving packets at a receiving rate that is equal to or less than a reading rate depending on the acceptable read amount; and read one or more packets from the plurality of queues, prioritizing the one or more prioritized queues and consuming the acceptable read amount, in an order satisfying a read condition associated with the acceptable read amount and an amount of stored packets. | 02-19-2015 |
20150055659 | Method for Prioritizing Network Packets at High Bandwidth Speeds - The embodiments are directed to methods and appliances for scheduling a packet transmission. The methods and appliances can assign received data packets or a representation of data packets to one or more connection nodes of a classification tree having a link node and first and second intermediary nodes associated with the link node via one or more semi-sorted queues, wherein the one or more connection nodes correspond with the first intermediary node. The methods and appliances can process the one or more connection nodes using a credit-based round robin queue. The methods and appliances can authorize the sending of the received data packets based on the processing. | 02-26-2015 |
20150055660 | APPARATUS AND METHOD FOR CONTROLLING AN OCCUPANCY RATIO OF EACH REGION IN A BUFFER - A buffer control device included in a base station apparatus acquires predetermined information regarding data communication between the base station apparatus and external apparatuses other than the base station apparatus. The buffer control device predicts an amount of traffic in the data communication based on the acquired predetermined information, and controls an occupancy ratio of each of the regions that are arranged in a buffer in association with a plurality of priorities, based on the predicted amount of traffic in the data communication, where the buffer stores pieces of data each of which is assigned one of the plurality of priorities and transmitted within the base station apparatus from a first processing unit to a second processing unit via a high speed serial interface. | 02-26-2015 |
20150063367 | PROVIDING OVERSUBSCRIPTION OF PIPELINE BANDWIDTH - A system for providing oversubscription of pipeline bandwidth comprises a steer module, an absorption buffer, an ingress packet processor (IPP), a memory management unit (MMU), and a main packet buffer. The steer module receives packets that include start of packet (SOP), middle of packet (MOP), and end of packet (EOP) cells, attaches a packet identifier to the cells, passes the MOP and EOP cells to the MMU, and stores the SOP cells and EOP metadata in the absorption buffer. The IPP processes the SOP cells and EOP metadata and passes the same to the MMU. The MMU stores the MOP, EOP, and processed SOP cells in the main packet buffer, combines, upon receiving the processed EOP metadata of each packet, the processed SOP cell, the MOP cells and the EOP cell of each packet to reconstruct each packet, and queues each reconstructed packet in an egress port queue. | 03-05-2015 |
20150063368 | REAL-TIME DATA COMMUNICATION OVER INTERNET OF THINGS NETWORK - System(s) and method(s) for real-time data communication over an Internet of Things (IoT) network are described. According to the present subject matter, the system(s) implement the described method(s) for real-time data communication over the IoT network. The method includes encoding, at a source communication device, data to be exchanged between peer sub-layers of IoT entities based on a Forward Error Correction (FEC) context to generate encoded data packets, the IoT entities comprising the source communication device and a destination communication device. The method further includes identifying time delay to be maintained for transmission of the encoded data packets from the source communication device to the destination communication device to have minimal data packet drop due to queue overflow at the source communication device. The method further includes transmitting the encoded data packets over the IoT network. | 03-05-2015 |
20150071299 | METHODOLOGY TO INCREASE BUFFER CAPACITY OF AN ETHERNET SWITCH - A methodology to increase buffer capacity of an Ethernet switch uses an intelligent packet buffer at external ports of the Ethernet switch. Each intelligent packet buffer may include buffer logic and a buffered Ethernet port coupled to an internal Ethernet port of a switching element. The intelligent packet buffer may use a memory controller to access a random access memory using page mode access, and may write portions of a packet stream to a logical buffer in the random access memory that is dedicated to the internal Ethernet port. The intelligent packet buffer may forward the packet stream from the logical buffer to the internal Ethernet port. The logical buffer may represent a virtual output queue of the Ethernet switch associated with the internal Ethernet port. The intelligent packet buffer may be dimensioned with corresponding buffer logic and random access memory capacity to buffer one or more external ports. | 03-12-2015 |
20150071300 | SYSTEM AND METHOD FOR EFFICIENT UPSTREAM TRANSMISSION USING SUPPRESSION - A system and method for improved overall data transmission, having a hardware-based transceiver configured for transmitting upstream data with suppressed data packets. In TCP sessions between devices, a server seeks an “acknowledgement” that the downstream data transmission has been received by a client. Some data packets sent upstream may contain only TCP acknowledgement data and therefore may be combined with other purely TCP acknowledgement data packets in order to reduce the impact of the TCP acknowledgement packets on the overall upstream data throughput. In addition, this results in increased TCP performance in the downstream transmission direction as well, because the algorithm enables replacing earlier arriving ACK packets with later arriving ACK packets, which allows the device to send all TCP ACK information known to the suppressor at the earliest possible time. | 03-12-2015 |
20150078394 | HASH PERTURBATION WITH QUEUE MANAGEMENT IN DATA COMMUNICATION - A system and computer program product for hash perturbation with queue management in data communication are provided. Using a first set of old queues corresponding to a first hash function, a set of data packets corresponding to a set of sessions is queued. At a first time, the first hash function is changed to a second hash function. A second set of new queues is created corresponding to the second hash function. A data packet is dequeued from a first old queue in the first set of old queues. A second data packet is selected from a second queue in the first set of old queues. A new hash value is computed for the second data packet using the second hash function. The second data packet is queued in a first new queue such that the second packet is in position to be delivered first from the first new queue. | 03-19-2015 |
20150078395 | TRAFFIC CONTROL APPARATUS, BUFFERING CONTROL METHOD AND PACKET RELAY APPARATUS - A traffic control apparatus at which packets of a plurality of packet flows arrive includes a plurality of buffers corresponding to a plurality of times, a selector configured to read a packet accumulated in one of the plurality of buffers corresponding to a current time, and a scheduler configured to decide one of the plurality of buffers to accumulate a packet of each of the plurality of packet flows. The scheduler attempts, for each of the plurality of packet flows, accumulation of packets which arrive during a predetermined period under a condition that, as the quantity of packets accumulated in the plurality of buffers becomes larger, the number of buffers into which packets can be accumulated becomes smaller after the predetermined period. | 03-19-2015 |
20150078396 | SELF-HEALING DATA TRANSMISSION SYSTEM AND METHOD TO ACHIEVE DETERMINISTIC AND LOWER LATENCY - The invention provides a method of simulcasting data fragments sent over a first packet-switched computer network to a trunk network and redistributed over a second packet-switched computer network. The method comprises providing a trunk network including multiple transmission links, including an RF link and a fiber link, all of which transmit data packets to a receiver that redistributes the earliest data packets with a matching frame check sequence (FCS) over the second packet-switched computer network. A sender adds an incremental sender sequence number and a sender FCS to each data packet, creating a sender data packet, and transmits the sender data packet simulcast over the transmission links. The receiver receives sender data packets via the fastest links, which may drop or change some bits unintentionally, and via slower links, which are unlikely to drop or change bits unintentionally. The receiver checks a receiver-calculated FCS of each received sender data packet against the sender FCS that was added by the sender, verifies the sender data packets, and transmits over the second packet-switched computer network the first sender data packet with the next sequence number increment after the incremental sender sequence number and a verified sender FCS. The receiver identifies a gap between sender data packets, queues up verified sender data packets while the gap exists, and sends all verified sender data packets in sequence order once the gap is filled by any transmission link. | 03-19-2015 |
20150078397 | Data Matching Using Flow Based Packet Data Storage - A system for matching data using flow based packet data storage includes a communications interface and a processor. The communications interface receives a packet between a source and a destination. The processor identifies a flow between the source and the destination based on the packet. The processor determines whether some of the packet data of the packet indicates a potential match to data in storage using hashes. The processor then stores the data from the most likely data match and second most likely data match without a packet header in a block of memory in the storage based on the flow. | 03-19-2015 |
20150092787 | FLUCTUATION ABSORBING DEVICE, COMMUNICATION DEVICE, AND CONTROL PROGRAM - A fluctuation absorbing device includes a buffer | 04-02-2015 |
20150110123 | LIMITATION OF SERIAL LINK INTERFERENCE - A plurality of frames of data are transmitted over a serial interface in a manner that limits interference on the interface. This involves generating a pseudo-random number and asserting a read control signal at a moment in time, wherein a timing of the moment in time is influenced by the pseudo-random number. In response to the asserted read control signal, a frame of data is read from a data buffer. The read frame of data is then transmitted over the serial interface. A number of alternative embodiments are possible, such as embodiments in which buffer read operations are triggered based on the buffer fill level, and other embodiments in which buffer read operations are triggered by a timer. By using the pseudo-random number to influence the buffer read operations, timing coherency between the reading of frames is made low, thereby limiting interference. | 04-23-2015 |
20150110124 | QUALITY OF SERVICE IN MULTI-TENANT NETWORK - A data handling system network includes a data handling system that is communicatively coupled to a switch by a network. The data handling system includes one or more logical partitions. Each logical partition includes a plurality of virtual switches and a plurality of virtual network interface cards. Each virtual network interface card is associated with a particular virtual switch and includes a plurality of QoS queues. The switch includes one or more switch partitions. Each switch partition includes a plurality of QoS queues that are associated with the QoS queues of the virtual network interface card. A packet is received by the virtual switch, and the virtual switch sets and associates a QoS priority flag with the received packet. The virtual switch forwards the packet to a QoS queue comprised within the virtual network interface card based upon the QoS priority flag. | 04-23-2015 |
20150110125 | VIRTUAL MEMORY PROTOCOL SEGMENTATION OFFLOADING - Methods and systems for a more efficient transmission of network traffic are provided. According to one embodiment, payload data originated by a user process running on a host processor of the computer system is fetched by an interface of the computer system by performing direct virtual memory addressing of a user memory space of a system memory of the computer system on behalf of a network processor of the computer system. The direct virtual memory addressing maps a physical address of the payload data to a virtual address. The payload data is segmented by the network processor across one or more packets. | 04-23-2015 |
20150124832 | WORK CONSERVING SCHEDULAR BASED ON RANKING - A work conserving scheduler can be implemented based on a ranking system to provide the scalability of time stamps while avoiding the fast search associated with a traditional time stamp implementation. Each queue can be assigned a time stamp that is initially set to zero. The time stamp for a queue can be incremented each time a data packet from the queue is processed. To provide varying weights to the different queues, the time stamps for the queues can be incremented at varying rates. The data packets can be processed from the queues based on the tier rank order of the queues as determined from the time stamp associated with each queue. To increase the speed at which the ranking is determined, the ranking can be calculated from a subset of the bits defining the time stamp rather than the entire bit set. | 05-07-2015 |
20150124833 | BOOSTING LINKED LIST THROUGHPUT - Multiple listlets function as a single master linked list to manage data packets across one or more banks of memory in a first-in first-out (FIFO) order, while allowing multiple push and/or pop functions to be performed per cycle. Each listlet can be a linked list that tracks pointers and is stored in a different memory bank. The nodes can include a pointer to a data packet, a pointer to the next node in the listlet and a next listlet identifier that identifies the listlet that contains the next node in the master linked list. The head and tail of each listlet, as well as identifiers tracking the head and tail of the master linked list, can be maintained in cache. The individual listlets are updated accordingly to maintain the order of the master linked list as pointers are pushed and popped from the master linked list. | 05-07-2015 |
20150124834 | PACKET PROCESSING APPARATUS, PACKET PROCESSING METHOD, AND PACKET PROCESSING SYSTEM - A first cache stores preferential packets to be preferentially processed. A second cache stores packets other than the packets stored in the first cache. A processing circuit adjusts the number of preferential packets stored in the first cache in a manner such that the preferential packets are processed at the amount of processing that is equal to or less than a set value set as the amount of processing applicable to the preferential packets within a predetermined period, processes the packets stored in the first cache, and reads from the second cache as many packets as are processable at a surplus value, and processes the read packets, the surplus value being obtained by subtracting the amount of processing to be applied to the preferential packets stored in the first cache from the amount of processing that the processing circuit is capable of performing within the predetermined period. | 05-07-2015 |
20150124835 | METHOD AND APPARATUS FOR SCHEDULING A HETEROGENEOUS COMMUNICATION FLOW - A method and apparatus are provided for scheduling a heterogeneous communication flow. A heterogeneous flow is a flow comprising packets with varying classes or levels of service, which may correspond to different priorities, qualities of service or other service characteristics. When a packet is ready for scheduling, it is queued in order in a flow queue that corresponds to the communication flow. The flow queue then migrates among class queues that correspond to the class or level of service of the packet at the head of the flow queue. Thus, after the head packet is scheduled, the flow queue may be dequeued from its current class queue and requeued at the tail of another class queue. If the subsequent packet has the same classification, it may be requeued at the tail of the class queue or may remain in place for another servicing round. | 05-07-2015 |
20150295843 | TIME-TRIGGERED ETHERNET-BASED DATA TRANSMISSION METHOD AND NODE DEVICE - A time-triggered Ethernet (TTE)-based data transmission method and node device, solving the problem of wasting network bandwidth resources in the prior art during TTE-based data transmission; in the method, a main node determines a scheduling period table based on a time-triggered packet; when a node has a to-be-transmitted event-triggered packet, and the node determines, according to the information stored in the scheduling period table, that a physical link occupied by the event-triggered packet is not in conflict with a physical link corresponding to the current time slot, the node transmits the event-triggered packet in the current time slot. The main node does not need to separately allocate time for the event-triggered packet of each node. Therefore, when a node has a to-be-transmitted event-triggered packet, the node can transmit the event-triggered packet in the current time slot as long as the physical link occupied by the event-triggered packet is not in conflict with the physical link corresponding to the current time slot, thus effectively improving data transmission efficiency and network bandwidth utilization. | 10-15-2015 |
20150295858 | SIMULTANEOUS TRANSFERS FROM A SINGLE INPUT LINK TO MULTIPLE OUTPUT LINKS WITH A TIMESLICED CROSSBAR - A method for scheduling a crossbar using distributed request-grant-accept arbitration between input group arbiters and output group arbiters in a switch unit is provided. The switch unit may be a hierarchical high radix switch with a timesliced crossbar that is configured to transfer packets between a plurality of input ports and a plurality of output ports, organized into groups, using wide words. The timesliced crossbar transfers data for a given packet once per supercycle, in a designated timeslice of that supercycle. Multiple buffered packets from one input port to multiple output ports are transferred by utilizing different timeslices of the supercycle. | 10-15-2015 |
20150304245 | CROSSBAR SWITCH AND RECURSIVE SCHEDULING - A crossbar switch has N input ports, M output ports, and a switching matrix with N×M crosspoints. In an embodiment, each crosspoint contains an internal queue (XQ), which can store one or more packets to be routed. Traffic rates to be realized between all Input/Output (IO) pairs of the switch are specified in an N×M traffic rate matrix, where each element equals a number of requested cell transmission opportunities between each IO pair within a scheduling frame of F time-slots. An efficient algorithm for scheduling N traffic flows with traffic rates, based upon a recursive and fair decomposition of a traffic rate vector with N elements, is proposed. To reduce memory requirements a shared row queue (SRQ) may be embedded in each row of the switching matrix, allowing the size of all the XQs to be reduced. To further reduce memory requirements, a shared column queue may be used in place of the XQs. The proposed buffered crossbar switches with shared row and column queues, in conjunction with the row scheduling algorithm and the DCS column scheduling algorithm, can achieve high throughput with reduced buffer and VLSI area requirements, while providing probabilistic guarantees on rate, delay and jitter for scheduled traffic flows. | 10-22-2015 |
20150312159 | MECHANISM TO SAVE SYSTEM POWER USING PACKET FILTERING BY NETWORK INTERFACE - A network interface that connects a computing device to a network may be configured to process incoming packets and determine an action to take with respect to each packet, thus decreasing processing demands on a processor of the computing device. The action may be indicating the packet to an operating system of the computing device immediately, storing the packet in a queue of one or more queues or discarding the packet. When the processor is interrupted, multiple packets aggregated on the network interface may be indicated to the operating system all at once to increase the device's power efficiency. Hardware of the network interface may be programmed to process the packets using filter criteria specified by the operating system based on information gathered by the operating system, such as firewall rules. | 10-29-2015 |
20150312160 | SYSTEM FOR FLEXIBLE DYNAMIC REASSIGNMENT OF THROUGHPUT - A network switch including a set of communication ports is provided. The communication ports may have an allocated prebuffer to store data during packet switching operations. The network switch may further include a calendar associated with the set of communication ports that provides bandwidth configuration for the set of communication ports. The network switch may further include a secondary calendar that may be dynamically setup. The secondary calendar may provide an alternative bandwidth configuration strategy for the set of communication ports. The switch includes circuitry that may increase the prebuffer size and upon the successful increase of the prebuffer size reconfigure the set of communication ports from the original calendar to the secondary calendar, without a reboot. The circuitry may reset the prebuffer size after reconfiguration is complete and the switch may continue operation according to the reconfigured settings. | 10-29-2015 |
20150312161 | HANDLING LARGE FRAMES IN A VIRTUALIZED FIBRE CHANNEL OVER ETHERNET (FCOE) DATA FORWARDER - A switch unit has one frame buffer pool for storing received frames and another frame buffer pool for storing large frames. The frame size in the large frame buffer pool may be optimized to the largest amount of data that the switch unit on which FCoE switching is running can support (i.e., a limitation of zone entries). Should free space be unavailable in the large frame buffer pool, or if a sequence grows bigger than can be supported, the switch unit may still continue to send response frames back to the sender. While the switch unit may store header information of the frame, it does not store the data of subsequent frames any longer. Once the sequence has been received completely, a rejection message is sent back with an appropriate error or reason code. The rejection message enables the sender to attempt a retransmission or cancel the current request altogether. | 10-29-2015 |
20150312162 | HANDLING LARGE FRAMES IN A VIRTUALIZED FIBRE CHANNEL OVER ETHERNET (FCOE) DATA FORWARDER - A switch unit has one frame buffer pool for storing received frames and another frame buffer pool for storing large frames. The frame size in the large frame buffer pool may be optimized to the largest amount of data that the switch unit on which FCoE switching is running can support (i.e., a limitation of zone entries). Should free space be unavailable in the large frame buffer pool, or if a sequence grows bigger than can be supported, the switch unit may still continue to send response frames back to the sender. While the switch unit may store header information of the frame, it does not store the data of subsequent frames any longer. Once the sequence has been received completely, a rejection message is sent back with an appropriate error or reason code. The rejection message enables the sender to attempt a retransmission or cancel the current request altogether. | 10-29-2015 |
20150334038 | Method and Apparatus for Managing Media Access Control Addresses - A method and apparatus for managing a media access control address are provided. The method comprises assigning a priority to the MAC address. The method also comprises managing the MAC address in a forwarding database based on the priority. With the method and apparatus, a MAC flooding attack can be efficiently avoided and communication performance can be improved in a secure manner. | 11-19-2015 |
20150350082 | SYSTEMS AND METHODS FOR THROTTLING PACKET TRANSMISSION IN A SCALABLE MEMORY SYSTEM PROTOCOL - A method may include transmitting, via a processor, a plurality of packets to a receiving component, such that the plurality of packets corresponds to a plurality of data operations configured to access a memory component. The plurality of packets is stored in a buffer of the receiving component upon receipt. The method may also include determining, via the processor, whether an available capacity of the buffer is less than a threshold, and decreasing a transmission rate of the plurality of packets when the available capacity is less than the threshold. | 12-03-2015 |
20150350099 | Controlling A Jitter Buffer - Apparatus and methods for controlling a jitter buffer are described. In one embodiment, the apparatus for controlling a jitter buffer includes an inter-talkspurt delay jitter estimator for estimating an offset value of the delay of a first frame in the current talkspurt with respect to the delay of a latest anchor frame in a previous talkspurt, and a jitter buffer controller for adjusting a length of the jitter buffer based on a long term length of the jitter buffer for each frame and the offset value. | 12-03-2015 |
20150365333 | METHOD AND APPARATUS FOR REDUCING POOL STARVATION IN A SHARED MEMORY SWITCH - A switch includes a reserved pool of buffers in a shared memory. The reserved pool of buffers is reserved for exclusive use by an egress port. The switch includes pool select logic which selects a free buffer from the reserved pool for storing data received from an ingress port to be forwarded to the egress port. The shared memory also includes a shared pool of buffers. The shared pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the shared pool upon detecting no free buffer in the reserved pool. The shared memory may also include a multicast pool of buffers. The multicast pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the multicast pool upon detecting an IP Multicast data packet received from an ingress port. | 12-17-2015 |
20150365338 | SYSTEMS AND METHODS FOR BLOCKING TRANSMISSION OF A FRAME IN A NETWORK DEVICE - A network device including a queue, a timing module, an adjustment module, a register module and a blocking shaper. The queue is configured to store a frame. The timing module is configured to generate a local clock signal. The adjustment module is configured to determine (i) based on a first edge of a global clock signal, an expected time of a second edge of the global clock signal, and (ii) a window centered on the expected time of the second edge of the global clock signal. The register module is configured to capture a time of a first edge of the local clock signal. The adjustment module is configured to, based on the captured time of the first edge of the local clock signal and the time of the first edge of the global clock signal, generate an adjustment signal to center a second edge of the local clock signal in the window. The blocking shaper is configured to, subsequent to adjusting the second edge of the local clock signal, block transmission of the frame from the network device based on timing of the local clock signal. | 12-17-2015 |
20150365339 | COUNTER WITH OVERFLOW FIFO AND A METHOD THEREOF - Embodiments of the present invention relate to an architecture that extends counter life by provisioning each counter for the average case and handling overflow via an overflow FIFO and an interrupt to a process monitoring the counters. The architecture addresses a general optimization problem that can be stated as follows: given N counters and a CPU read interval T, minimize the number of storage bits needed to store and operate the N counters. Equivalently, given N counters and a fixed budget of storage bits, maximize the CPU read interval T. The architecture extends the counter CPU read interval linearly with the depth of the overflow FIFO. | 12-17-2015 |
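A minimal sketch of the overflow-FIFO idea above, under assumed semantics: narrow on-chip counters wrap on overflow and push their id into a FIFO, and a CPU-side monitor drains the FIFO to credit full wrap values. Names and widths are illustrative; a real design would also raise an interrupt on FIFO fill, which is omitted here:

```python
from collections import deque

# Sketch: N narrow counters sized for the average case; on overflow a
# counter wraps to zero and its id enters the overflow FIFO, which a
# monitoring process drains into full-width CPU-side mirrors.

class OverflowCounters:
    def __init__(self, n, width_bits, fifo_depth):
        self.limit = 1 << width_bits          # wrap value of each narrow counter
        self.counters = [0] * n               # on-chip narrow counters
        self.fifo = deque(maxlen=fifo_depth)  # overflow FIFO read by the CPU
        self.full = [0] * n                   # CPU-side full-width mirror

    def increment(self, i):
        self.counters[i] += 1
        if self.counters[i] == self.limit:    # overflow: wrap and record the id
            self.counters[i] = 0
            self.fifo.append(i)               # (a full FIFO would interrupt the CPU)

    def cpu_drain(self):
        """Monitor process: credit one wrap value per FIFO entry."""
        while self.fifo:
            self.full[self.fifo.popleft()] += self.limit

    def read(self, i):
        self.cpu_drain()
        return self.full[i] + self.counters[i]
```

Because each FIFO entry absorbs one full wrap, the interval between required CPU reads grows linearly with the FIFO depth, matching the abstract's claim.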
20150365355 | HIERARCHICAL STATISTICALLY MULTIPLEXED COUNTERS AND A METHOD THEREOF - Embodiments of the present invention relate to an architecture that uses hierarchical statistically multiplexed counters to extend counter life by orders of magnitude. Each level includes statistically multiplexed counters. The statistically multiplexed counters include P base counters and S subcounters, wherein the S subcounters are dynamically concatenated with the P base counters. When a row overflow occurs in a level, counters in the next level above are used to extend counter life. The hierarchical statistically multiplexed counters can be used with an overflow FIFO to further extend counter life. | 12-17-2015 |
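One level of the statistical multiplexing described above might look like the following loose sketch. The semantics are assumed from the abstract: P narrow base counters share S spare subcounters, an overflowing base counter borrows a subcounter as high-order extension bits, and when no subcounter is free the event is a "row overflow" to be handled by the level above (here it is merely counted):

```python
# Loose sketch (assumed semantics) of one level of statistically
# multiplexed counters: P base counters dynamically concatenated with a
# shared pool of S subcounters.

class StatMuxRow:
    def __init__(self, p, s, base_bits):
        self.base_limit = 1 << base_bits
        self.base = [0] * p
        self.ext = {}            # base index -> borrowed subcounter value
        self.free_subs = s
        self.row_overflows = 0   # events that would escalate to the next level

    def increment(self, i):
        self.base[i] += 1
        if self.base[i] < self.base_limit:
            return
        self.base[i] = 0                      # base counter wraps
        if i not in self.ext:
            if self.free_subs == 0:
                self.row_overflows += 1       # row overflow: next level's job
                return
            self.free_subs -= 1               # borrow a subcounter for index i
            self.ext[i] = 0
        self.ext[i] += 1                      # subcounter holds high-order bits

    def read(self, i):
        return self.ext.get(i, 0) * self.base_limit + self.base[i]
```

Only the few counters that actually run hot consume subcounters, which is what lets narrow base counters cover the average case.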
20150372896 | PACKET SEGMENTATION WITH DIFFERENT SEGMENT SIZES FOR A SWITCH FABRIC - An epoch-based network processor internally segments packets for processing and aggregation in epoch payloads. FIFO buffers interact with a memory management unit to efficiently manage the segmentation and aggregation process. | 12-24-2015 |
20160006665 | HIGH-SPEED DEQUEUING OF BUFFER IDS IN FRAME STORING SYSTEM - Incoming frame data is stored in a plurality of dual linked lists of buffers in a pipelined memory. The dual linked lists of buffers are maintained by a link manager. The link manager maintains, for each dual linked list of buffers, a first head pointer, a second head pointer, a first tail pointer, a second tail pointer, a head pointer active bit, and a tail pointer active bit. The first head and tail pointers are used to maintain the first linked list of the dual linked list. The second head and tail pointers are used to maintain the second linked list of the dual linked list. Due to the pipelined nature of the memory, the dual linked list system can be popped to supply dequeued values at a sustained rate of more than one value per the read access latency time of the pipelined memory. | 01-07-2016 |
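The dual-linked-list discipline above can be modeled in miniature: two independent lists per queue, with pushes alternating via a tail-active bit and pops alternating via a head-active bit, so consecutive pops hit different lists and can overlap in a pipelined memory. This is an assumed simplification (Python deques stand in for the head/tail pointer pairs and linked buffers):

```python
from collections import deque

# Sketch of a dual linked list of buffers: two lists, a tail-active bit
# selecting which list the next push extends, and a head-active bit
# selecting which list the next pop reads. Alternating preserves overall
# FIFO order while letting pops to the two lists be pipelined.

class DualLinkedList:
    def __init__(self):
        self.lists = (deque(), deque())  # stands in for two head/tail pointer pairs
        self.tail_active = 0             # which list the next push extends
        self.head_active = 0             # which list the next pop reads

    def push(self, buffer_id):
        self.lists[self.tail_active].append(buffer_id)
        self.tail_active ^= 1            # alternate to the other linked list

    def pop(self):
        value = self.lists[self.head_active].popleft()
        self.head_active ^= 1            # next pop targets the other list
        return value
```

Since pushes and pops alternate in the same pattern, dequeue order equals enqueue order, while back-to-back pops touch different lists and need not wait out the memory's read latency serially.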
20160006676 | Method And Apparatus For Performing Finite Memory Network Coding In An Arbitrary Network - Techniques for performing finite memory network coding in an arbitrary network limit an amount of memory that is provided within a node of the network for the performance of network coding operations during data relay operations. When a new data packet is received by a node, the data stored within the limited amount of memory may be updated by linearly combining the new packet with the stored data. In some implementations, different storage buffers may be provided within a node for the performance of network coding operations and decoding operations. | 01-07-2016 |
20160006677 | INVERSE PCP FLOW REMAPPING FOR PFC PAUSE FRAME GENERATION - An overflow threshold value is stored for each of a plurality of virtual channels. A link manager maintains, for each virtual channel, a buffer count. If the buffer count for a virtual channel is detected to exceed the overflow threshold value for a virtual channel whose originating PCP flows were merged, then a PFC (Priority Flow Control) pause frame is generated where multiple ones of the priority class enable bits are set to indicate that multiple PCP flows should be paused. For the particular virtual channel that is overloaded, an Inverse PCP Remap LUT (IPRLUT) circuit performs inverse PCP mapping, including merging and/or reordering mapping, and outputs an indication of each of those PCP flows that is associated with the overloaded virtual channel. Associated physical MAC port circuitry uses this information to generate the PFC pause frame so that the appropriate multiple enable bits are set in the pause frame. | 01-07-2016 |
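The inverse-mapping step above can be sketched as follows. The forward PCP-to-virtual-channel map and the LUT construction are hypothetical example inputs; the point is that when merged PCP flows share an overloaded channel, the PFC pause frame must set every corresponding priority-enable bit:

```python
# Hypothetical sketch of inverse PCP remapping for PFC pause frame
# generation: invert the (possibly merging) PCP -> virtual channel map,
# then OR together the enable bits of every PCP feeding the overloaded
# virtual channel.

def build_inverse_lut(pcp_to_vc):
    """pcp_to_vc: list of 8 entries mapping PCP value -> virtual channel."""
    lut = {}
    for pcp, vc in enumerate(pcp_to_vc):
        lut.setdefault(vc, []).append(pcp)
    return lut

def pfc_enable_vector(lut, overloaded_vc):
    """Return the 8-bit priority-enable vector for a PFC pause frame."""
    bits = 0
    for pcp in lut.get(overloaded_vc, []):
        bits |= 1 << pcp
    return bits
```

With a merging map such as `[0, 0, 1, 1, 2, 2, 3, 3]`, an overload on channel 1 pauses both PCP 2 and PCP 3, since either flow could refill the channel.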
20160014051 | Data Matching Using Flow Based Packet Data Storage | 01-14-2016 |
20160021018 | ORDER-SENSITIVE COMMUNICATIONS IN PACKET REORDERING NETWORKS - In one embodiment, a device in a network determines that a particular packet flow in the network is sensitive to packet reordering. The device determines whether a particular packet of the packet flow is to be routed differently than an immediately prior packet in the packet flow, in response to determining that the particular packet flow is sensitive to reordering. The device marks the particular packet as taking a different route than the immediately prior packet in the packet flow, prior to forwarding the marked packet and in response to determining that the particular packet is to be routed differently than the immediately prior packet in the packet flow. | 01-21-2016 |
20160028616 | DYNAMIC PATH SWITCHOVER DECISION OVERRIDE BASED ON FLOW CHARACTERISTICS - In one embodiment, a device in a network receives a switchover policy for a particular type of traffic in the network. The device determines a predicted effect of directing a traffic flow of the particular type of traffic from a first path in the network to a second path in the network. The device determines whether the predicted effect of directing the traffic flow to the second path would violate the switchover policy. The device causes the traffic flow to be routed via the second path in the network, based on a determination that the predicted effect of directing the traffic flow to the second path would not violate the switchover policy for the particular type of traffic. | 01-28-2016 |
20160057068 | SYSTEM AND METHOD FOR TRANSMITTING DATA EMBEDDED INTO CONTROL INFORMATION - An apparatus executes a transmission-side process on target data to be transmitted to another apparatus through a communication path. The transmission-side process generates transmission data including payload information and control information, where the control information includes the target data and address information indicating a destination address of the target data. The another apparatus includes a queue area configured to store pieces of information as queuing data so as to prevent a piece of information from being overwritten by another piece of information. The apparatus controls transmission of the transmission data to the another apparatus by embedding the target data into the control information included in the transmission data. The another apparatus stores the control information included in the received transmission data into the queue area as queuing data, and extracts the embedded target data from the control information stored in the queue area. | 02-25-2016 |
20160057069 | PACKET ENGINE THAT USES PPI ADDRESSING - Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. The packet engine uses linear memory addressing to write the packet portions into the memory, and to read the packet portions from the memory. | 02-25-2016 |
20160065422 | BANDWIDTH ON DEMAND IN SDN NETWORKS - Bandwidth-on-Demand (BoD) as a network service (BoD-as-a-Service) is integrated into applications so that end-users can flexibly purchase it when, and for however long, they need it. A centralized Software Defined Networking (SDN) controller and distributed SDN controller agents, as may be seen in a Service Provider, Enterprise or distributed computing environment with remote and mobile end-users, are provided. The end-user initiates the BoD request using an application via desktop, cloud, smartphone or tablet. The BoD provider (Service Provider or Enterprise) has a controller-based centralized view of the complete SDN service topology. On receiving the request, the BoD provider dynamically computes the optimal end-to-end path through the SDN topology that best suits the end-user's requested traffic types and service level requirements. It then translates that optimal path into flow computations that are dynamically pushed down to the controller agents to provision the BoD network path in real time. An on-demand, real-time bandwidth service is thereby provided to consumers where it was previously too costly or too time-consuming to set up. | 03-03-2016 |
20160065504 | COMMUNICATION SYSTEM AND ELECTRONIC COMPONENT MOUNTING DEVICE - A communication system in which a transmission line performs data transmission using multiplexing. Data extraction sections of an optical wireless device extract data output from multiple electric devices based on a start bit of the respective data, and output the data to multiple first buffers disposed corresponding to the electric devices. A control section selects any one of the first buffers and outputs the data from the selected first buffer to a second buffer. A control section adds identification information (ID) to the data, indicating from which electric device the data were obtained, and stores the data in the second buffer. The data and the identification information ID of the second buffer are input to a multiplexing device from an input port. The multiplexing device multiplexes the data together with other data as a frame. | 03-03-2016 |
20160072717 | REDUCING PACKET REORDERING IN FLOW-BASED NETWORKS - The present disclosure provides for methods, network devices, and computer readable storage media for packet reordering. In one embodiment, a method includes receiving a first packet of a first flow at a network device and determining whether flow-identifying information extracted from the first packet matches an existing flow entry. The method also includes, in response to a determination that the flow-identifying information does not match any existing flow entries, generating a new transient flow entry that includes the flow-identifying information and packet-in state. The method also includes forwarding the first packet to a controller via a packet-in stream. | 03-10-2016 |
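The transient-flow-entry step in the abstract above can be sketched roughly as follows. Field and parameter names are assumptions; the essence is that the first packet of an unknown flow installs a transient entry carrying packet-in state before the packet goes to the controller, so subsequent packets of the same flow are recognized while the controller responds:

```python
# Rough sketch (assumed names) of transient flow entries with packet-in
# state: an unmatched first packet creates the entry and is forwarded to
# the controller via the packet-in stream; later packets match the
# transient entry instead of triggering duplicate packet-ins.

def on_packet(flow_key, flow_table, packet_in_stream):
    entry = flow_table.get(flow_key)
    if entry is None:
        # No matching entry: install a transient entry with packet-in state.
        flow_table[flow_key] = {"state": "packet-in-pending"}
        packet_in_stream.append(flow_key)   # forward first packet to controller
        return "sent-to-controller"
    return "matched-" + entry["state"]
```

Tracking packet-in state per flow is what lets the switch avoid interleaving controller-bounced packets with directly forwarded ones, which is the reordering the title refers to.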
20160105524 | ONLINE PROGRESSIVE CONTENT PLACEMENT IN A CONTENT CENTRIC NETWORK - A method for content placement along the delivery path of a network of nodes in a content centric network includes receiving first and second interest packets from a downstream node, checking a content cache of a first node for the first and second data packets and, in response to the first and second data packets being absent from the content cache, checking a pending interest table of the first node for the first and second interest packets. The method also includes, in response to the first and second interest packets being absent from the pending interest table of the first node and a cache flag of the first interest packet received from the downstream node being off, performing various operations to cause the first data packet to be cached in the content cache of the first node. | 04-14-2016 |
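The lookup order in the abstract above (content cache, then pending interest table, then forward) can be sketched as below. Function and state names are assumptions, and the cache-flag handling is reduced to its essence: a first interest whose downstream cache flag is off marks this node as the place to cache the returning data:

```python
# Simplified sketch (assumed names) of interest handling at one CCN node:
# content cache first, then the pending interest table (PIT); only an
# interest that misses both is forwarded upstream, and the downstream
# cache flag decides whether the returning data is cached here.

def handle_interest(name, cache_flag_off, content_cache, pit):
    if name in content_cache:
        return "data-from-cache"          # satisfied locally
    if name in pit:
        pit[name] += 1                    # aggregate duplicate interests
        return "aggregated"
    pit[name] = 1                         # first pending interest for this name
    # Per the abstract, a first interest arriving with the cache flag off
    # triggers caching of the returning data packet at this node.
    return "forward-and-cache" if cache_flag_off else "forward"
```

Aggregating duplicate interests in the PIT is what keeps a single copy of the data flowing back per name, regardless of how many downstream consumers asked.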
20160127275 | Command Injection to Hardware Pipeline for Atomic Configuration - A command processing system facilitates pipeline configuration. Each stage of a packet processing pipeline may access certain memory locations for processing of a data packet as it passes through each stage. The command processing system facilitates changing the memory locations in an atomic manner. | 05-05-2016 |
20160134724 | VIRTUAL MEMORY PROTOCOL SEGMENTATION OFFLOADING - Methods and systems for a more efficient transmission of network traffic are provided. According to one embodiment, payload data originated by a user process running on a host processor of a network device is fetched by an interface of the network device by performing direct virtual memory addressing of a user memory space of a system memory of the network device on behalf of a network interface unit of the network device. The direct virtual memory addressing maps physical addresses of various portions of the payload data to corresponding virtual addresses. The payload data is segmented by the network interface unit across one or more packets. | 05-12-2016 |
20160142332 | METHOD AND APPARATUS FOR BLOCKING TRANSMISSION OF FRAMES FROM A NETWORK DEVICE - A network device including first and second queues, a timing module and a shaper. The first queue receives first frames. The second queue receives second frames. A priority level of the second frames is lower than a priority level of the first frames. The timing module determines a start time of a burst period of the first frames. The first frames are transmitted from the network device during the burst period. The shaper determines: a size of a head-of-line frame of the second frames; a predetermined maximum size of one of the second frames; or a predetermined minimum size of one of the second frames. The shaper determines whether to block transmission of the head-of-line frame from the network device based on (i) the start time, (ii) the size of the head-of-line frame, (iii) the predetermined maximum size, or (iv) the predetermined minimum size. | 05-19-2016 |
20160142333 | METHOD AND APPARATUS FOR PERFORMING A WEIGHTED QUEUE SCHEDULING USING A SET OF FAIRNESS FACTORS - A computer-implemented method using a scheduler for processing requests by receiving packet data from multiple source ports and classifying the received packet data based upon the source port on which it was received and the destination port to which it is being sent. The classified packet data is sorted into multiple queues in a buffer, and a static component of a queue is updated when that queue receives a sorted, classified data packet. The scheduler, based upon destination port availability and a set of fairness factors including priority weights and positions, selects data packets for dequeuing from the corresponding queues, and the static component of a dequeued queue is updated when a data packet is output from it. | 05-19-2016 |
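A minimal sketch of weight-proportional dequeuing in the spirit of the abstract above. The round structure, queue names, and weights are assumptions; each queue may send up to its priority weight per round toward an available destination port, approximating fairness proportional to the configured weights:

```python
from collections import deque

# Illustrative weighted dequeue sketch: per scheduling round, each queue
# (visited in a fixed position order) dequeues up to `weight` packets,
# but only while the destination port is available.

def schedule_round(queues, weights, port_available):
    """queues: dict name -> deque; weights: dict name -> packets per round."""
    sent = []
    if not port_available:
        return sent                       # nothing dequeues toward a busy port
    for name in sorted(queues):           # fixed position order among queues
        for _ in range(weights[name]):
            if queues[name]:
                sent.append(queues[name].popleft())
    return sent
```

Over many rounds, a queue with weight 2 drains twice as fast as a weight-1 queue, which is the proportional-fairness behavior the weights encode.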
20160150301 | PACKET TRANSFER SYSTEM, CONTROL DEVICE, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - A packet transfer system includes a packet transfer unit. | 05-26-2016 |
20160173365 | FRAMEWORK FOR SCHEDULING PACKETS WITH MULTIPLE DESTINATIONS IN A VIRTUAL OUTPUT QUEUE NETWORK SWITCH | 06-16-2016 |
20160173418 | METHOD AND APPARATUS FOR TRANSMITTING CAN FRAME | 06-16-2016 |
20160182360 | Cloud Architecture with State-Saving Middlebox Scaling | 06-23-2016 |
20160182392 | MAINTAINING PACKET ORDER IN A MULTI PROCESSOR NETWORK DEVICE | 06-23-2016 |
20160182393 | COMBINED GUARANTEED THROUGHPUT AND BEST EFFORT NETWORK-ON-CHIP | 06-23-2016 |
20160182943 | METHOD AND APPARATUS FOR MINIMIZING TIMING ARTIFACTS IN REMULTIPLEXING | 06-23-2016 |
20170237678 | WORK CONSERVING SCHEDULER BASED ON RANKING | 08-17-2017 |
20180026916 | MULTI-PROCESSOR COMPUTING SYSTEMS | 01-25-2018 |
20190149477 | SYSTEMS AND METHODS FOR PROVIDING LOCKLESS BIMODAL QUEUES FOR SELECTIVE PACKET CAPTURE | 05-16-2019 |
20190149485 | DATA TRANSFER CIRCUIT, DATA TRANSFER SYSTEM, AND METHOD FOR CONTROLLING DATA TRANSFER CIRCUIT | 05-16-2019 |