Patent application number | Description | Published |
20080205403 | Network packet processing using multi-stage classification - Methods and systems for processing packets in a data network using multi-stage classification are disclosed. An example method for processing packets includes receiving a data packet at a first processing stage and examining the packet at the first processing stage to determine a first attribute of the packet. Based on the first attribute, a first classification is assigned to the packet. In the example method, the packet and the first classification are communicated from the first processing stage to a second processing stage, and the packet is examined at the second processing stage to determine a second attribute of the packet. Based on the second attribute, a second classification is assigned to the packet. The example method further includes processing the packet based on the first classification and the second classification. | 08-28-2008
20090003209 | FLOW REGULATION SWITCH - A network switch includes a plurality of egress ports configured to send packets of data traffic to at least one receiving network device and a plurality of ingress ports configured to receive the packets of data traffic from at least one sending network device. The switch further includes a switch logic engine configured to define multiple flows of data through the switch from a sending network device to a receiving network device and to route the flows from the ingress port to the egress port, a flow monitor configured to measure at least one flow attribute of the flows, and a flow regulation engine configured to regulate a flow rate of flows sent by a sending network device based at least in part on a measurement by the flow monitor of the at least one flow attribute of the packets. | 01-01-2009 |
20090122698 | VIRTUAL QUEUE - An apparatus comprising a virtual queue configured to virtually receive virtual data units as the data units are actually received by a real queue. In various embodiments, the virtual queue may include a committed token counter (CTC) configured to represent a number of bytes available to be allocated to a committed burst having a maximum size. In such an embodiment, the virtual queue may include an excess token counter (ETC) configured to represent a number of bytes available to be allocated to an excess burst having a maximum size. In one embodiment, the virtual queue may include a token counter incrementer configured to, as an exiting data unit virtually exits the virtual queue, increment either the committed token counter or the excess token counter. In various embodiments, the virtual queue may include a token counter decrementer configured to, as an entering data unit virtually enters the virtual queue, attempt to allocate the entering data unit to either the committed burst or the excess burst and decrement either the committed token counter or the excess token counter, respectively. In some embodiments, the virtual queue may include a congestion indicator configured to categorize the entering data unit. In various embodiments, the virtual queue may be configured to provide congestion feedback information based, at least in part, upon the state of the CTC and ETC. | 05-14-2009
20090154354 | PROXY REACTION ENGINE IN A CONGESTION MANAGEMENT SYSTEM - An apparatus comprising a managed network interface configured to receive data from, and transmit data to, a managed network, wherein the managed network comprises a plurality of managed devices configured to queue and transmit data; an unmanaged network interface configured to receive data from, and transmit data to, an unmanaged network, wherein the unmanaged network is configured to request the amelioration of network congestion experienced by the unmanaged network; and a congestion manager configured to receive a network congestion amelioration request from the unmanaged network, ameliorate network congestion by controlling the rate of information forwarded from the managed network to an unmanaged network, and dynamically alter the rate of information forwarded from the managed network to the unmanaged network. | 06-18-2009 |
20100097934 | NETWORK SWITCH FABRIC DISPERSION - Methods and apparatus for communicating data traffic using switch fabric dispersion are disclosed. An example apparatus includes a first tier of switch elements; and a second tier of switch elements operationally coupled with the first tier of switch elements. In the example apparatus, the first tier of switch elements is configured to receive a data packet from a source. The first tier of switch elements is also configured to route the data packet to the second tier of switch elements in accordance with a dispersion function, where the dispersion function is based on a dispersion tag associated with the data packet. The first tier of switch elements is still further configured to transmit the data packet to a destination for the data packet after receiving it from the second tier of switch elements. In the example apparatus the second tier of switch elements is configured to receive the data packet from the first tier of switch elements and route the data packet, based on a destination address of the data packet, back to the first tier of switch elements for transmission to the destination. | 04-22-2010 |
20100265952 | REMAPPING MODULE IDENTIFIER FIELDS AND PORT IDENTIFIER FIELDS - A method of adjusting fields of a datagram in the handling of the datagram in a network device may comprise receiving a datagram, with the datagram having at least module identifier fields and port identifier fields, at a port of a network device; adding or subtracting an offset value to at least one of the module identifier fields and at least one of the port identifier fields of the datagram based on data registers in the network device; and forwarding the datagram to a legacy device based on the module and port identifier fields of the datagram. A size of each of the module identifier fields and the port identifier fields handled by the legacy device may be smaller than a size of the module identifier fields and port identifier fields handled by the network device. | 10-21-2010
20110063979 | NETWORK TRAFFIC MANAGEMENT - Various example embodiments are disclosed. According to an example embodiment, an apparatus may include a switch fabric. The switch fabric may be configured to assign packets to either a first flow set or a second flow set based on fields included in the packets. The switch fabric may also be configured to send a first packet from the first flow set to a first flow set destination via a first path. The switch fabric may also be configured to determine, based at least in part on delays of the first path and a second path, whether sending a second packet from the first flow set to the first flow set destination via a second path will result in the second packet reaching the first flow set destination after the first packet reaches the first flow set destination, the second packet having been received by the apparatus after the first packet. The switch fabric may also be configured to send the second packet to the first flow set destination via the second path based at least in part on the determining that sending the second packet from the first flow set to the first flow set destination via a second path will result in the second packet reaching the first flow set destination after the first packet reaches the first flow set destination. | 03-17-2011
20140086258 | Buffer Statistics Tracking - The systems and methods disclosed herein allow for a switch (in a packet-switching network) to track buffer statistics, and trigger an event, such as a hardware interrupt or a system snapshot, in response to the buffer statistics reaching a threshold that may indicate an impending problem. Since the switch itself triggers the event to alert the network administrator, the network administrator no longer needs to sift through mountains of data to identify potential problems. Also, since the switch triggers the event prior to a problem arising, the network administrator can provide remedial action prior to a problem occurring. This type of event-triggering mechanism makes the administration of packet-switching networks more manageable. | 03-27-2014 |
20140156906 | Virtual Trunking Over Physical Links - A technique in which at least one controlling bridge controls data traffic among devices located lower in hierarchy below the controlling bridge. Those devices include a plurality of porting devices, such as line modules and port extenders, which ultimately communicate with an end point device, referred to as a station. At least two physical pathways from a controlling bridge to a station are grouped together into a virtual trunk to provide multiple physical pathways for packet transfer when operating in a dual-homed mode. | 06-05-2014 |
20140169189 | Network Status Mapping - Embodiments of the present disclosure provide systems and methods for network status mapping. Such a system and method involve inserting a network map tag in a flow set of packets in a computer network and receiving a response to the network map tag from a network element that includes populated fields of the network map tag, comprising: a field to identify a network element; a field to identify the outgoing port of the network element; a field to identify the queue of the outgoing port; and a status field for the queue of the outgoing port. | 06-19-2014
20140177641 | Satellite Controlling Bridge Architecture - A system and a method include a port extender communicatively linked to a controlling bridge. Network data is received from a local network peer downstream to the port extender. Whether a destination of the network data is a recognized downstream network peer of the port extender is determined. The network data is selectively routed according to whether the destination of the network data is a recognized downstream network peer of the port extender. | 06-26-2014 |
20140241160 | Scalable, Low Latency, Deep Buffered Switch Architecture - A switch architecture includes an ingress module, ingress fabric interface module, and a switch fabric. The switch fabric communicates with egress fabric interface modules and egress modules. The architecture implements multiple layers of congestion management. The congestion management may include fast acting link level flow control and more slowly acting end-to-end flow control. The switch architecture simultaneously provides high scalability, with low latency and low frame loss. | 08-28-2014 |
20140293825 | TIMESTAMPING DATA PACKETS - Disclosed are various embodiments for providing a data packet with timestamp information. A data packet is generated such that it comprises a payload and a header. The payload comprises a first timestamp field that comprises data indicating when a network device processed the data packet. The payload also comprises a body data field and a body data protocol field. The body data protocol field comprises data identifying a protocol used by body data in the body data field. The header comprises a payload protocol field that comprises data identifying that the payload comprises timestamp data. | 10-02-2014 |
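Several of the queue-management entries above rest on a token-bucket discipline; the dual committed/excess counters described in 20090122698 can be sketched as follows. This is a minimal illustration under assumptions of our own (byte-granular counters, the class and method names, and the three category labels are all illustrative, not taken from the application):

```python
class VirtualQueue:
    """Hypothetical sketch of the dual-token-counter scheme in 20090122698.

    ctc (committed token counter) tracks bytes still available to the
    committed burst; etc (excess token counter) does the same for the
    excess burst. Both counters and their cap sizes are illustrative.
    """

    def __init__(self, committed_burst_size, excess_burst_size):
        self.cbs = committed_burst_size   # maximum committed burst (bytes)
        self.ebs = excess_burst_size      # maximum excess burst (bytes)
        self.ctc = committed_burst_size   # committed tokens remaining
        self.etc = excess_burst_size      # excess tokens remaining

    def enqueue(self, size):
        """Virtually admit a data unit and categorize it (congestion indicator)."""
        if self.ctc >= size:
            self.ctc -= size
            return "committed"            # fits in the committed burst
        if self.etc >= size:
            self.etc -= size
            return "excess"               # spills into the excess burst
        return "drop"                     # both bursts exhausted: congested

    def dequeue(self, size, committed):
        """Virtually drain a data unit, returning tokens to the matching counter."""
        if committed:
            self.ctc = min(self.cbs, self.ctc + size)
        else:
            self.etc = min(self.ebs, self.etc + size)
```

The category returned by `enqueue` is one plausible basis for the congestion feedback the abstract mentions: "committed" traffic is unmarked, "excess" traffic can be marked, and "drop" signals exhaustion of both bursts.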
20080247394 | Cluster switching architecture - A network switch including at least one data port interface supporting a plurality of data ports, at least one stack link interface configured to transmit data between the network switch and other network switches, and a CPU interface configured to communicate with a CPU. A memory management unit in communication with the at least one data port interface and the at least one stack link interface is provided along with a memory interface in communication with the at least one data port interface and the at least one stack link interface, wherein the memory interface is configured to communicate with a memory. A communication channel is provided for communicating data and messaging information between the at least one data port interface, the at least one stack link interface, the memory interface, and the memory management unit, wherein the memory management unit is configured to route data received from each of the at least one data port interface and the at least one stack link interface to the memory interface. | 10-09-2008 |
20090074001 | Switch assembly having multiple blades in a chassis - A switch assembly having multiple blades in a chassis and a method of using that assembly to switch data is disclosed. A network switch assembly for network communications includes at least one fabric blade and a plurality of port blades. The at least one fabric blade has at least one switch having a plurality of data port interfaces, supporting a plurality of fabric data ports transmitting and receiving data, and a CPU interface, where the CPU interface is configured to communicate with a CPU. The at least one fabric blade also has a CPU subsystem communicating with the CPU interface. Each of said plurality of port blades has at least one switch having a plurality of data port interfaces, supporting a plurality of port data ports transmitting and receiving data. The plurality of port data ports communicate with the plurality of fabric data ports along multiple paths such that data received by the port data ports is switched to a destination port of the network switch assembly along a specified path of the multiple paths based on a portion of the received data. In particular, the invention relates to configurations having five and nine blades to provide the requisite switching capacity. | 03-19-2009
20090323535 | DISTRIBUTING INFORMATION ACROSS EQUAL-COST PATHS IN A NETWORK - A method of distributing data across a network having a plurality of equal-cost paths. Also, a device for distributing data over a network according to the method. The data, which is typically contained in data packets, may be distributed based on at least one attribute of each of the packets. The data may also be distributed according to a weighted distribution function that allows for unequal amounts of traffic to be distributed to each of the equal-cost paths. | 12-31-2009 |
20100142536 | UNICAST TRUNKING IN A NETWORK DEVICE - A network device for selecting a port from a trunk group to transmit a unicast packet on the selected port. The network device includes at least one trunk group including a plurality of physical ports. The network device also includes a table with a plurality of entries. Each entry is associated with one trunk group and includes a plurality of fields that are associated with ports in the trunk group. Each entry also includes a hash field that is used to select bits from predefined fields of an incoming unicast packet to obtain an index bit for accessing one of the plurality of fields. The network device further includes transmitting means for transmitting the unicast packet to a port associated with an accessed one of the plurality of fields. | 06-10-2010 |
20100177637 | FLOW BASED CONGESTION CONTROL - A method for selectively controlling the flow of data through a network device is discussed. The network device has a plurality of ports, with each port of the plurality of ports having a plurality of priority queues. Congestion at one priority queue of the plurality of priority queues is detected and a virtual channel message is sent to other network devices connected to the network device causing data destined for the one priority queue to be halted. After the congestion at the one priority queue has abated, a virtual channel resume message is sent to the other network devices. | 07-15-2010 |
20110110236 | Multiple Logical Channels for Use in Network Devices - A method for establishing a virtual channel between network devices is disclosed. In the case of a local network device establishing a virtual channel with a remote network device, a virtual channel request message is sent from the local network device to the remote network device. A virtual channel acknowledgement message and a remote capability list are received and a virtual channel resume message and a local capability list are sent. The virtual channel is then enabled. In the case of a remote network device establishing a virtual channel with a local network device, a virtual channel request message is received from a local network device by a remote network device. A virtual channel acknowledgement message and a remote capability list are sent and a virtual channel resume message and a local capability list are received. The virtual channel is then enabled. | 05-12-2011 |
20120008502 | FLOW BASED CONGESTION CONTROL - A method for selectively controlling the flow of data through a network device is discussed. The network device has a plurality of ports, with each port of the plurality of ports having a plurality of priority queues. Congestion at one priority queue of the plurality of priority queues is detected and a virtual channel message is sent to other network devices connected to the network device causing data destined for the one priority queue to be halted. After the congestion at the one priority queue has abated, a virtual channel resume message is sent to the other network devices. | 01-12-2012 |
20130301410 | Multiple Logical Channels for Use in Network Devices - A method for establishing a virtual channel between network devices is disclosed. In the case of a local network device establishing a virtual channel with a remote network device, a virtual channel request message is sent from the local network device to the remote network device. A virtual channel acknowledgement message and a remote capability list are received and a virtual channel resume message and a local capability list are sent. The virtual channel is then enabled. In the case of a remote network device establishing a virtual channel with a local network device, a virtual channel request message is received from a local network device by a remote network device. A virtual channel acknowledgement message and a remote capability list are sent and a virtual channel resume message and a local capability list are received. The virtual channel is then enabled. | 11-14-2013 |
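The load-distribution entries in this table share a common pattern: hash selected packet fields, then use the result to index a table of members, as in the weighted equal-cost distribution of 20090323535 and the hash-selected trunk port of 20100142536. A minimal sketch of that pattern, assuming CRC32 as a stand-in for the hardware hash and simple integer weights (the function name, field choice, and weighting scheme are illustrative assumptions, not details from either application):

```python
import zlib


def pick_member(members, weights, flow_key):
    """Hypothetical weighted hash-based selection over trunk ports or paths.

    Each member appears in the candidate table in proportion to its
    integer weight, so unequal traffic shares can be steered onto
    equal-cost members, while every packet carrying the same flow key
    hashes to the same member.
    """
    # Expand members proportionally to their weights into a flat table.
    table = [m for m, w in zip(members, weights) for _ in range(w)]
    # Hash the flow identifier and index the table.
    return table[zlib.crc32(flow_key.encode()) % len(table)]
```

Because the index depends only on the flow key, all packets of one flow traverse one member, preserving intra-flow ordering, which is the property the trunking and fabric-dispersion entries above take care to maintain.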