Patent application number | Description | Published |
20090092043 | Providing an abstraction layer in a cluster switch that includes plural switches - In a communications network, a cluster switch is provided, where the cluster switch has plural individual switches. An abstraction layer is provided in the cluster switch, such that an interface having a set of ports is provided to upper layer logic in the cluster switch. The set of ports includes a collection of ports of the individual switches. Control traffic and data traffic are communicated over virtual tunnels between individual switches of the cluster switch, where each virtual tunnel has an active channel and at least one standby channel. | 04-09-2009 |
20100169718 | Metro Ethernet Connectivity Fault Management Acceleration - A network element disposed at an edge of a connectivity fault management (CFM) domain includes a switch fabric, a central processor (CP) card, and a line card in communication with the CP card through the switch fabric. The line card includes an Ethernet interface for transmitting and receiving Ethernet CFM frames over a network and circuitry configured to generate new continuity check messages (CCMs) periodically, to process CCMs received on each connection supported by the line card, and to detect a loss of continuity for any of the connections supported by the line card. The line card maintains a list of supported connections. A generate timer and an age counter are associated with each connection in the list. The line card generates a CCM for a given connection when the generate timer expires and detects a loss of continuity for the given connection when its age counter exceeds a threshold. | 07-01-2010 |
20100290335 | METHOD AND APPARATUS FOR LOCALLY IMPLEMENTING PORT SELECTION VIA SYNCHRONIZED PORT STATE DATABASES MAINTAINED BY THE FORWARDING PLANE OF A NETWORK ELEMENT - A method, apparatus and computer program product for implementing port selection via synchronized port state databases maintained by the forwarding plane of a network element is presented. Each Forwarding Data Unit (FDU) within the forwarding plane of the network element maintains a respective port state database, each port state database containing a synchronized view of the port state for all ports within the network element. A port selection process is performed by each port state database upon request of its associated FDU, to identify an available port in an UP state associated with a Multi-Link Trunk (MLT) to enable fast reroute between ports associated with the MLT in the event of port failure. The process returns an identified port to the FDU for use by the FDU to forward the packet. | 11-18-2010 |
20100290458 | METHOD AND APPARATUS FOR PROVIDING FAST REROUTE OF A PACKET THAT MAY BE FORWARDED ON ONE OF A PLURALITY OF EQUAL COST MULTIPATH ROUTES THROUGH A NETWORK - A method, apparatus and computer program product for providing fast reroute of a packet that may be forwarded on one of a plurality of Equal Cost Multi Path (ECMP) routes through a network is presented. A packet is received by a Forwarding Data Unit (FDU) in a data plane of a network element. The unicast packet is routed at L3, and ECMP is enabled for a next hop for the unicast packet. An ECMP route is selected for forwarding the packet to a destination port. A lookup is performed in a port state table maintained by the FDU to determine an available local port for said ECMP route that is in an UP state for the destination, and if no local port is UP, then a lookup is performed in the port state table to determine an available remote port that is in an UP state for the selected ECMP route. | 11-18-2010 |
20100290464 | METHOD AND APPARATUS FOR PROVIDING FAST REROUTE OF A MULTICAST PACKET WITHIN A NETWORK ELEMENT TO AN AVAILABLE PORT ASSOCIATED WITH A MULTI-LINK TRUNK - A method, apparatus and computer program product for providing a fast re-route of a multicast packet within a network element to an available port associated with a multi-link trunk is presented. A packet is received by a Forwarding Data Unit (FDU) in a data plane of a network element and a determination made that the packet is a multicast packet. The packet is forwarded to all egress FDUs having at least one port associated with at least one receiver of the multicast packet. A lookup is performed by each egress FDU in a synchronized local port state database to find a port for each receiver that is in an UP state. The packet is forwarded out the port to a receiver when the port is in the UP state and dropped when the port is in the DOWN state. | 11-18-2010 |
20100290469 | METHOD AND APPARATUS FOR PROVIDING FAST REROUTE OF A UNICAST PACKET WITHIN A NETWORK ELEMENT TO AN AVAILABLE PORT ASSOCIATED WITH A MULTI-LINK TRUNK - A method, apparatus and computer program product for providing fast reroute of a packet is presented. A unicast packet is received by an FDU in a data plane of a network element and a destination is determined for the packet. A lookup is performed in a port state table maintained by the FDU to determine an available local port that is in an UP state for the destination, and if no local port is UP, then a lookup is performed in the port state table to determine an available remote port that is in an UP state for the destination. If a port in the UP state cannot be determined for the unicast packet, then the packet is dropped. | 11-18-2010 |
20100293200 | Method and apparatus for maintaining port state tables in a forwarding plane of a network element - A method, apparatus and computer program product for maintaining port state tables in a forwarding plane of a network element are presented. The state of a first set of ports associated with a first Forwarding Data Unit (FDU) is periodically determined, the first FDU being one of a plurality of FDUs. The determined state is used to update a first port state table of the port state database associated with the first FDU. The determined state is transmitted to each of the other FDUs on the network element to enable each of the other FDUs to store the state of the first set of ports in a port state database local to each of the other FDUs. The port state database is used by the forwarding plane to perform fast reroute of packets. | 11-18-2010 |
20110317699 | METHOD FOR MEDIA ACCESS CONTROL ADDRESS LEARNING AND LEARNING RATE SUPPRESSION - A method, apparatus and computer program product for Media Access Control (MAC) address learning and learning rate suppression are presented. A Forwarding Data Unit (FDU) maintains two cache tables, each of the cache tables used for harvesting MAC addresses. The FDU uses the cache tables in an alternating manner, wherein when one of the cache tables is used for harvesting MAC addresses the other one of the cache tables has its contents bundled into a packet for forwarding to a control plane of the FDU. | 12-29-2011 |
20110317700 | METHOD FOR REAL-TIME SYNCHRONIZATION OF ARP RECORD IN RSMLT CLUSTER - Embodiments herein include systems and methods for providing a mechanism for efficient data synchronization of ARP records between two peer nodes of an SMLT system. Such techniques include modifying control information of ARP packets transmitted between peer nodes of the SMLT system to indicate originating SMLT ports. Techniques also include disabling MAC synchronization control messaging across the IST link. These techniques enable real-time synchronization of ARP records for MAC learning without needing dedicated control messaging over the IST, thereby providing nodal and SMLT port failover and recovery. | 12-29-2011 |
20110317713 | Control Plane Packet Processing and Latency Control - A switch resource receives control plane packets and data packets. The control plane packets indicate how to configure the network in which the switch resource resides. The switch resource includes a classifier. The classifier classifies the control plane packets based on priority and stores the control plane packets into different packet priority queues. The switch resource also includes a forwarding manager. The forwarding manager selectively forwards the control plane packets stored in the control plane packet priority queues to a control plane packet processing environment depending on a completion status of processing previously forwarded control plane packets by a packet processing thread. The control plane packet processing environment includes a monitor resource that generates one or more interrupts to an operating system to ensure further forwarding of the packets downstream to the packet processing thread for timely processing. | 12-29-2011 |
20110320680 | Method and Apparatus for Efficient Memory Bank Utilization in Multi-Threaded Packet Processors - A method and apparatus for efficient memory bank utilization in multi-threaded packet processors is presented. A plurality of memory access requests are received and buffered by a plurality of memory First In First Out (FIFO) buffers, each of the memory FIFO buffers in communication with a memory controller. The memory access requests are distributed evenly across the memory banks by way of the memory controller. This reduces and/or eliminates the memory latency which can occur when sequential memory operations are performed on the same memory bank. | 12-29-2011 |
20110320693 | Method For Paramaterized Application Specific Integrated Circuit (ASIC)/Field Programmable Gate Array (FPGA) Memory-Based Ternary Content Addressable Memory (TCAM) - A method and apparatus for providing TCAM functionality in a custom integrated circuit (IC) is presented. An incoming key is broken into a predefined number of sub-keys. Each sub-key is used to address a Random Access Memory (RAM), one RAM for each sub-key. An output of the RAM is collected for each sub-key, each output comprising a Partial Match Vector (PMV). The PMVs are bitwise ANDed to obtain a value which is provided to a priority encoder to obtain an index. The index is used to access a result RAM to return a result value for the key. | 12-29-2011 |
20110320705 | METHOD FOR TCAM LOOKUP IN MULTI-THREADED PACKET PROCESSORS - A method, apparatus and computer program product for performing TCAM lookups in multi-threaded packet processors is presented. A Ternary Content Addressable Memory (TCAM) key is constructed for a packet and a Packet Reference Number (PRN) is generated. The TCAM key and the packet are tagged with the PRN. The TCAM key and the PRN are sent to a TCAM and in parallel the packet and the PRN are sent to a packet processing thread. The PRN is used to read the TCAM result when it is ready. | 12-29-2011 |
20110320788 | METHOD AND APPARATUS FOR BRANCH REDUCTION IN A MULTITHREADED PACKET PROCESSOR - A method and apparatus for branch reduction in a multithreaded packet processor is presented. An instruction is executed which includes testing of a branch flag. The branch flag references a configuration bit vector wherein each bit in the configuration bit vector corresponds to a respective feature. When the branch flag returns a first result, processing continues at an instruction located at a first location relative to a Program Counter (PC), and when the branch flag returns a second result, processing continues at a second location relative to the PC. | 12-29-2011 |
20120127864 | PERFORMING POLICING OPERATIONS IN PACKET TIME - Methods and apparatus provide for a Packet Policer. The Packet Policer determines a first amount of tokens based on an interval occurring between receipt of a first packet and receipt of a second packet, where the first packet was received before the second packet. The Packet Policer determines a second amount of tokens based on a size of the second packet. The Packet Policer then updates a token bucket with the first amount of tokens as the second amount of tokens is removed from the token bucket. | 05-24-2012 |
20120127998 | NETWORK SWITCH PORT AGGREGATION - A network switch configures a static forwarding to a packet processor by suppressing packet switching and forwards all traffic received on a group of ports to a trunk port for aggregation. A trunk header is overloaded with message classification information for use at the downstream packet processor. Routing logic retrieves the packet classification information and stores the information in control fields that are ignored due to the static forwarding and local switching disablement. The static forwarding forwards the packet, with the appended classification information, to a packet processor via the aggregation port. Packet classification information is indicative of the type of the message traffic and is performed upon packet arrival at the switching device. The packet processor reads the classification information from the overloaded control fields, rather than expending processing resources to determine the classification, and sends the message packet on an ingress port to a switching fabric for further transport. | 05-24-2012 |
20120185618 | METHOD FOR PROVIDING SCALABLE STORAGE VIRTUALIZATION - A method, apparatus and computer program product for providing scalable storage virtualization is presented. Storage virtualization management functions are provided in a first device, and storage virtualization Input/Output (I/O) functions are provided in a second device. An interface is provided between the first device and the second device, wherein the first device manages and updates I/O functions of the second device. I/O operations are performed between the second device and at least one storage device. | 07-19-2012 |
20120218894 | IMPLEMENTATION OF A QoS PROCESSING FILTER TO MANAGE UPSTREAM OVER-SUBSCRIPTION - A switch device can be configured to operate in a manner that was not originally intended. For example, a switch device can be a Broadcom XGS type of device that is configured with a packet-processing unit to perform line speed lookups in accordance with a default configuration. The default configuration can include classifying and forwarding received packets to an upstream interface based on VLAN information. The default configuration can be overwritten such that the switch device operates in a different manner than originally intended. For example, the switch device can be reconfigured to include mapping rules that specify different QoS data to be assigned to different type of received packets. Subsequent to utilizing the maps to identify QoS information for received packets, the reconfigured switch device uses the QoS information to forward the packets to queues in an upstream interface. | 08-30-2012 |
20120246449 | METHOD AND APPARATUS FOR EFFICIENT LOOP INSTRUCTION EXECUTION USING BIT VECTOR SCANNING - A method, apparatus and computer program product for performing efficient loop instruction execution using bit vector scanning is presented. A bit vector is scanned, each bit in the bit vector representing at least one of a feature and a conditional status. The presence of a bit of said bit vector set to a first state is detected. The bit is set to a second state. An instruction address for a routine corresponding to said bit set to a first state is looked up using a bit position of said bit that was set to a first state. The routine is executed. The scanning, said detecting, said setting and said using are repeated until there are no remaining bits of said bit vector set to said first state. | 09-27-2012 |
20120250692 | METHOD AND APPARATUS FOR TEMPORAL-BASED FLOW DISTRIBUTION ACROSS MULTIPLE PACKET PROCESSORS - A method, apparatus and computer program product for temporal-based flow distribution across multiple packet processors is presented. A packet is received and a hash identifier (ID) is computed for the packet. The hash ID is used to index into a State Table and to retrieve a corresponding record. When a time credit field of the record is zero then the time credit field is set to a new value; a Packet Processing Engine (PE) whose First-In-First-Out buffer (FIFO) has the lowest fill level is selected; and a PE number field in the state table record is updated with the selected PE number. When the time credit field of the record is non-zero then the packet is sent to a PE based on the value stored in the record; and the time credit field in the record is decremented if the time credit field is greater than zero. | 10-04-2012 |
20120275293 | METRO ETHERNET CONNECTIVITY FAULT MANAGEMENT ACCELERATION - In an Ethernet network element comprising at least one line interface element and a central processing unit (CPU) to control forwarding of data packets at the network element, a method comprising receiving continuity check messages (CCMs) at the at least one line interface element, and processing the CCMs in the at least one line interface element to provide continuity checks for connections to the network element without requiring processing of CCMs by the CPU. | 11-01-2012 |
20120307623 | Method and Apparatus Providing Selective Flow Redistribution Across Multi Link Trunk/Link Aggregation Group (MLT/LAG) After Port Member Failure and Recovery - A method, apparatus and computer program product are presented. In a system having at least one Multi Link Trunk/Link Aggregation Group (MLT/LAG), a table is provided for each MLT/LAG, each table having at least one entry, each entry including at least two fields, a first field comprising a port member identification (ID) field and a second field comprising a port member status field. A port member status is checked for a port when a packet flow hashes into the table, and the status for the port member is determined. When the port member status is in a first state, then the associated port member ID is used as a destination port to transmit to. When the port member state is in a second state, then a next entry in the port table is accessed to find a next available entry having a port member status that is in the first state and the corresponding port member ID of the port member state that is in the first state is used as a destination port to transmit to. The first state is UP and the second state is DOWN. | 12-06-2012 |
20120320737 | METHOD AND APPARATUS FOR LOSSLESS LINK RECOVERY BETWEEN TWO DEVICES INTERCONNECTED VIA MULTI LINK TRUNK/LINK AGGREGATION GROUP (MLT/LAG) - A method, apparatus and computer readable medium for maintaining two variables per port member of a network device which is part of a Split Multi Link Trunk/Link Aggregation Group (SMLT/LAG) is presented. A first variable comprising a link status variable reflecting a link status, and a second variable comprising a forwarding status variable reflecting a forwarding status of a forwarding plane with respect to the port are provided, the link status variable and the forwarding status variable in a first state when the port is operating properly. A failure related to the port is detected. The link status variable is set to a second state, and the forwarding status variable is set to a second state. | 12-20-2012 |
20130077471 | METHOD AND APPARATUS PROVIDING SPLIT MULTI LINK TRUNK (SMLT) FOR ADVANCED TECHNOLOGY ATTACHMENT (ATA) OVER ETHERNET - A method, apparatus and computer program product for providing Split Multi Link Trunk (SMLT) for Advanced Technology Attachment (ATA) Over Ethernet is presented. All ports on an ATA server are assigned a same Media Access Control (MAC) address. When the first switch receives a packet destined to the second switch the first switch performs a route lookup on a destination address of the packet and forwards the packet to the target over one of the second plurality of links, and when the second switch receives a packet destined to the first switch the second switch performs a route lookup on a destination address of the packet and forwards the packet to the target over one of the second plurality of links. | 03-28-2013 |
20130250762 | Method and apparatus for Lossless Behavior For Multiple Ports Sharing a Buffer Pool - Packets are colored and stored in a shared packet buffer without assigning fixed page allocations per port. The packet buffer is divided into three areas: an unrestricted area, an enforced area, and a headroom area. Regardless of the fullness level, when a packet is received it will be stored in the packet buffer. If the fullness level is in the unrestricted area, no flow control messages are generated. If the fullness level is in the enforced area, a probabilistic flow control generation process is used to determine if a flow control message will be generated. If the fullness level is in the headroom area, flow control is automatically generated. Quanta timers are used to control regeneration of flow control messages. | 09-26-2013 |
20130250763 | Method and Apparatus for Control Plane CPU Overload Protection - Control packets received at a network element are pre-classified to enable out of profile traffic to be traced to an offending port. Pre-classified control packets are metered at a desired granularity using dynamically configured meters which adjust as ports are put into service or removed from service, and as services are applied to ports. CPU metering is implemented on a per-CPU core basis, but the per-CPU meters are used to perform flow control rather than as thresholds for ejecting errant control traffic. The combination of these three aspects provides robust CPU overload protection while allowing appropriate levels of control traffic to be provided to the control plane for processing, even in the event of a control traffic burst on one or more ports of the network element. | 09-26-2013 |
20140003423 | Method for Layer 2 Forwarding in a Multi-node Switch Cluster | 01-02-2014 |
20140003434 | Method for Mapping Packets to Network Virtualization Instances | 01-02-2014 |
20140003439 | Method for Reducing Processing Latency in a Multi-Thread Packet Processor with at Least One Re-Order Queue | 01-02-2014 |
20140006757 | Method for Thread Reduction in a Multi-Thread Packet Processor | 01-02-2014 |
20140086237 | Method for Virtual Multicast Group IDs - Application MGIDs defining virtual groups of output destinations are assigned by applications and appended to packets to specify on a per-application basis how packets associated with the application should be handled by a network element. The application MGIDs are mapped to a single system MGID number space prior to being passed to the network element switch fabric. When a packet is passed to the switch fabric, the application MGID header is passed along with the system MGID header, so that the packet that is passed to the switch fabric has both the system MGID as well as the application MGID. The switch fabric only looks at the system MGID when forwarding the packet, however. Each egress node maintains a set of tables, one table for each application, in which the node maintains a list of ports per application MGID. The egress node uses the application MGID to key into the application table to determine a list of ports, at that egress node, to receive the packet. | 03-27-2014 |
20140086240 | Method for Abstracting Datapath Hardware Elements - A table based abstraction layer is interposed between applications and the packet forwarding hardware driver layer. All behavior and configuration of packet forwarding to be implemented in the hardware layer is articulated as fields in tables of the table based abstraction layer, and the higher level application software interacts with the hardware through the creation of and insertion and deletion of elements in these tables. The structure of the tables in the abstraction layer has no direct functional meaning to the hardware, but rather the tables of the table based abstraction layer simply exist to receive data to be inserted by the applications into the forwarding hardware. Information from the tables is extracted by the packet forwarding hardware driver layer and used to populate physical offset tables that may then be installed into the registers and physical tables utilized by the hardware to perform packet forwarding operations. | 03-27-2014 |
20140086241 | Self Adapting Driver for Controlling Datapath Hardware Elements - A self adapting driver for controlling datapath hardware elements uses a generic driver and a configuration library to create a set of data structures and methods to map information provided by applications to physical tables. A set of virtual tables is implemented as an interface between the applications and the generic driver. The generic driver uses the configuration library to determine a mapping from the virtual tables to the physical tables. A virtual table schema definition is parsed to create the configuration library, such that changes to the physical infrastructure may be implemented as changes to the virtual table schema definition without adjusting the driver code. Automatic generation of generic packet forwarding drivers is thus enabled through the use of a configuration language that defines the meaning of the information stored in the virtual tables. | 03-27-2014 |
20140086248 | Method for IP Longest Prefix Match Using Prefix Length Sorting - Prefix length memory tables are used to enable fast IPv4 LPM lookups using a single memory access for a first range of IP prefixes, and using two memory accesses for larger IP prefixes. Each of the prefix length memory tables is used to hold a set of forwarding rules associated with a different prefix length range. IP LPM operations are then performed in parallel in each of the prefix length memory tables of the set, and the forwarding rule matching the longest prefix is returned from each of the memory tables. A priority encoder is used to select between positive results from the multiple prefix length memory tables to enable the forwarding rule with the largest matching prefix to be used to key into the next hop forwarding table. The method utilizes low cost DDR SDRAM rather than TCAM, and also exhibits low overhead. | 03-27-2014 |
20140086249 | Method for IPv6 Longest Prefix Match - IPv6 longest prefix match lookups are implemented by splitting disjoint forwarding rules from non-disjoint forwarding rules and storing these forwarding rules in separate TCAMs. When an IPv6 address is received, the full IP address is passed to the TCAM containing the disjoint forwarding rules and the first n bits of the IP address are passed to the TCAM containing the non-disjoint forwarding rules. If a hit is received in the TCAM containing the disjoint forwarding rules, a result of the hit is used to implement a forwarding decision and the search in the TCAM containing the non-disjoint forwarding rules is terminated. If no hit is obtained from the disjoint TCAM, the search result of the non-disjoint TCAM is used. If a continue flag is set in the result received from the disjoint TCAM, a sub-trie based lookup is implemented based on the remaining m bits of the IPv6 address. | 03-27-2014 |
20140089625 | Method for Heap Management - A bitmask array is implemented as a two dimensional bit array where each bit represents an allocated/free cell of the heap. Groups of bits of the bitmask array are assigned to implement commonly sized memory cell allocation requests. The heap manager keeps track of allocations by keeping separate lists of which groups are being used to implement commonly sized memory cell allocations requests by maintaining linked lists according to the number of cells allocated per request. Each list contains a list of the bit groups that have been used to provide allocations for particularly sized requests. By maintaining lists based on allocation size, the heap manager is able to cause new allocation requests to be matched up with previously retired allocations of the same size. Memory may be dynamically allocated between lists of differently sized memory requests. | 03-27-2014 |
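The generate-timer/age-counter scheme of 20100169718 (and its continuation 20120275293) can be sketched in Python. The tick granularity, the `PERIOD` reload value, and the `receive_ccm` helper are illustrative assumptions, not details from the abstracts:

```python
PERIOD = 3   # generate-timer reload value in ticks (illustrative)

def receive_ccm(connections, cid):
    # a CCM arriving on a connection resets that connection's age counter
    connections[cid]['age'] = 0

def tick(connections, threshold):
    """One timer tick over the line card's list of supported connections.
    connections: dict id -> {'gen': countdown, 'age': counter}."""
    ccms, lost = [], []
    for cid, conn in connections.items():
        conn['gen'] -= 1
        if conn['gen'] == 0:
            ccms.append(cid)          # generate timer expired: emit a CCM
            conn['gen'] = PERIOD
        conn['age'] += 1
        if conn['age'] > threshold:
            lost.append(cid)          # age exceeds threshold: loss of continuity
    return ccms, lost
```

The key property of the patent is that this loop runs entirely on the line card, so the CPU never touches per-connection CCM processing.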
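The per-FDU synchronization in 20100293200 (relied on by 20100290335, 20100290458, 20100290464, and 20100290469) amounts to broadcasting each FDU's locally determined port states into every FDU's port state database. A minimal sketch, with dicts standing in for hardware tables:

```python
def sync_port_state(fdus, src):
    """Push FDU `src`'s locally determined port states into the port state
    database of every FDU, keeping all views synchronized.
    fdus: dict fdu_id -> {'local': {port: state}, 'db': {(fdu, port): state}}."""
    update = dict(fdus[src]['local'])        # snapshot of local port states
    for fdu in fdus.values():
        for port, state in update.items():
            fdu['db'][(src, port)] = state   # identical view on every FDU
```

Because every FDU holds the same database, any FDU can locally pick an alternate UP port for fast reroute without consulting the control plane.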
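The alternating cache tables of 20110317699 are a double-buffering pattern: one table harvests MAC addresses while the other's contents are bundled off to the control plane. A sketch under that reading (the class and method names are mine):

```python
class MacHarvester:
    def __init__(self):
        self.tables = [{}, {}]   # two cache tables, used alternately
        self.active = 0          # index of the table currently harvesting

    def learn(self, mac, port):
        self.tables[self.active][mac] = port

    def swap_and_bundle(self):
        """Switch harvesting to the other table and return the retired
        table's contents, bundled for forwarding to the control plane."""
        self.active ^= 1
        retired = self.tables[self.active ^ 1]
        bundle = dict(retired)
        retired.clear()
        return bundle
```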
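The RAM-based TCAM of 20110320693 can be modelled directly: each sub-key addresses a RAM whose output is a Partial Match Vector, the PMVs are bitwise ANDed, and a priority encoder (lowest rule index wins here, an assumption) yields the index into the result RAM. The key width and sub-key count below are toy values:

```python
KEY_BITS, SUBKEYS = 8, 2                 # toy sizes (illustrative)
SUB_BITS = KEY_BITS // SUBKEYS
SUB_MASK = (1 << SUB_BITS) - 1

def build_rams(rules):
    """rules: list of ternary (value, mask) entries; list index = priority.
    Returns one RAM per sub-key mapping address -> Partial Match Vector."""
    rams = [[0] * (1 << SUB_BITS) for _ in range(SUBKEYS)]
    for idx, (value, mask) in enumerate(rules):
        for s in range(SUBKEYS):
            shift = (SUBKEYS - 1 - s) * SUB_BITS
            v, m = (value >> shift) & SUB_MASK, (mask >> shift) & SUB_MASK
            for addr in range(1 << SUB_BITS):
                if (addr & m) == (v & m):        # ternary match on this slice
                    rams[s][addr] |= 1 << idx    # rule idx matches this address
    return rams

def tcam_lookup(rams, result_ram, key):
    pmv = None
    for s in range(SUBKEYS):                     # one RAM read per sub-key
        shift = (SUBKEYS - 1 - s) * SUB_BITS
        out = rams[s][(key >> shift) & SUB_MASK]
        pmv = out if pmv is None else pmv & out  # bitwise AND of the PMVs
    if not pmv:
        return None
    index = (pmv & -pmv).bit_length() - 1        # priority encoder
    return result_ram[index]                     # result RAM access
```

The memory cost is one PMV-wide word per sub-key address, which is the usual trade for replacing a physical TCAM with plain RAMs.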
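The packet-time policing of 20120127864 is a token bucket whose refill is computed from the inter-packet interval at the moment the second packet arrives, rather than by a background timer. A sketch; the rate units, burst cap, and conform/exceed actions are assumptions:

```python
def police(bucket, last_arrival, now, pkt_bytes, rate, burst):
    """Return (new_bucket, conforms). Tokens for the interval since the
    previous packet are added as tokens for the packet size are removed.
    rate is in tokens (bytes) per second; burst caps the bucket depth."""
    bucket = min(burst, bucket + rate * (now - last_arrival))
    if bucket >= pkt_bytes:
        return bucket - pkt_bytes, True      # conform: forward the packet
    return bucket, False                     # exceed: drop or mark
```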
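The loop of 20120246449 replaces per-feature conditional branches with a scan of a bit vector: find a set bit, set it back to zero, look up the routine address from the bit position, execute, and repeat until no bits remain. In Python (scanning from the lowest set bit is an assumption):

```python
def run_features(bitvec, routines):
    """routines[i] is the handler whose 'instruction address' is looked up
    from bit position i. Executes handlers until no bits remain set."""
    executed = []
    while bitvec:
        pos = (bitvec & -bitvec).bit_length() - 1   # lowest set bit position
        bitvec &= bitvec - 1                        # set that bit to 0
        executed.append(routines[pos]())            # branch to the routine
    return executed
```

The point is that the only branch in the hot loop is "any bits left?", which a packet processor predicts far better than a chain of per-feature tests.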
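The state-table logic of 20120250692 keeps a flow pinned to one Packet Processing Engine for a credit's worth of packets, then rebinds it to the least-filled PE. A sketch; the credit window size and the FIFO fill-level model are assumptions:

```python
CREDIT = 4                      # packets per rebinding window (illustrative)

def distribute(state, fifo_fill, hash_id):
    """state: dict hash_id -> [time_credit, pe]; fifo_fill: per-PE levels.
    Returns the Packet Processing Engine chosen for this packet."""
    rec = state.setdefault(hash_id, [0, 0])
    if rec[0] == 0:
        # credit exhausted: rebind the flow to the least-filled PE's FIFO
        rec[1] = min(range(len(fifo_fill)), key=fifo_fill.__getitem__)
        rec[0] = CREDIT
    else:
        rec[0] -= 1             # credit remains: keep the flow on its PE
    fifo_fill[rec[1]] += 1      # packet enqueued to the chosen PE
    return rec[1]
```

The temporal credit preserves packet order within a flow over short windows while still letting long-lived flows migrate away from congested PEs.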
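The per-MLT/LAG table walk of 20120307623: a flow hashes into an entry, and if that entry's port member status is DOWN, the next entries are examined until an UP member is found. A sketch (the wraparound behavior is an assumption):

```python
UP, DOWN = 1, 0

def select_member(table, flow_hash):
    """table: list of (port_id, status) entries for one MLT/LAG."""
    n = len(table)
    for step in range(n):                    # walk at most once around
        port_id, status = table[(flow_hash + step) % n]
        if status == UP:
            return port_id                   # destination port to transmit to
    return None                              # every member is DOWN
```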
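The three-area shared buffer of 20130250762 can be sketched as a per-packet decision. The abstract only says the enforced-area process is probabilistic; the linear ramp below is an assumption:

```python
import random

def should_pause(fill, unrestricted_max, headroom_min, rng=random.random):
    """Decide whether to generate a flow-control message for a stored packet,
    based on which area of the shared buffer the fill level falls in."""
    if fill < unrestricted_max:
        return False                         # unrestricted area: never
    if fill >= headroom_min:
        return True                          # headroom area: always
    # enforced area: probability grows with fill (assumed linear ramp)
    p = (fill - unrestricted_max) / (headroom_min - unrestricted_max)
    return rng() < p
```

Injecting `rng` makes the probabilistic middle band testable; hardware would draw from an LFSR or similar.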
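The prefix-length-sorted lookup of 20140086248 probes several per-range tables at once and priority-selects the longest matching prefix. A sketch in which dicts stand in for the per-prefix-length SDRAM tables and a max over hits stands in for the priority encoder:

```python
def build_tables(rules):
    """rules: list of (prefix_as_int, prefix_len, next_hop) for IPv4."""
    tables = {}
    for prefix, plen, nh in rules:
        tables.setdefault(plen, {})[prefix >> (32 - plen)] = nh
    return tables

def lpm(tables, addr):
    best = None
    for plen, tbl in tables.items():         # the hardware probes in parallel
        hit = tbl.get(addr >> (32 - plen))
        if hit is not None and (best is None or plen > best[0]):
            best = (plen, hit)               # priority: longest prefix wins
    return best[1] if best else None
```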
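The heap manager of 20140089625 keeps a bitmask of allocated/free cells plus per-size lists of retired groups, so a new request can reuse a previously retired allocation of the same size. A one-dimensional bitmask sketch (the patent describes a two-dimensional bit array):

```python
class BitmaskHeap:
    def __init__(self, cells):
        self.free = [True] * cells   # one bit per heap cell: True = free
        self.retired = {}            # size -> start cells of retired groups

    def alloc(self, n):
        # prefer a retired group that served the same request size before
        if self.retired.get(n):
            start = self.retired[n].pop()
        else:
            start = self._find_run(n)
            if start is None:
                return None          # no run of n free cells: out of memory
        for i in range(start, start + n):
            self.free[i] = False
        return start

    def release(self, start, n):
        for i in range(start, start + n):
            self.free[i] = True
        self.retired.setdefault(n, []).append(start)

    def _find_run(self, n):
        run = 0
        for i, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == n:
                return i - n + 1
        return None
```

The per-size lists turn the common case into an O(1) pop instead of a bitmask scan, which matches the abstract's emphasis on matching new requests to retired allocations of the same size.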