Patent application number | Description | Published |
20100318749 | SCALABLE MULTI-BANK MEMORY ARCHITECTURE - According to one general aspect, a method may include, in one embodiment, grouping a plurality of at least single-ported memory banks together to substantially act as a single at least dual-ported aggregated memory element. In various embodiments, the method may also include controlling read access to the memory banks such that a read operation may occur from any memory bank in which data is stored. In some embodiments, the method may include controlling write access to the memory banks such that a write operation may occur to any memory bank which is not being accessed by a read operation. | 12-16-2010 |
20100318821 | SCALABLE, DYNAMIC POWER MANAGEMENT SCHEME FOR SWITCHING ARCHITECTURES UTILIZING MULTIPLE BANKS - According to one general aspect, a method may include receiving data from a network device. In some embodiments, the method may include writing the data to a memory bank that is part of a plurality of at least single-ported memory banks that have been grouped to act as a single at least dual-ported aggregated memory element. In various embodiments, the method may include monitoring the usage of the plurality of memory banks. In one embodiment, the method may include, based upon a predefined set of criteria, placing a memory bank that meets the predefined criteria in a low-power mode. | 12-16-2010 |
20110013627 | FLOW BASED PATH SELECTION RANDOMIZATION - Methods and apparatus for randomizing selection of a next-hop path/link in a network are disclosed. An example method includes receiving, at the network device, a data packet. The example method further includes generating a first hash key based on the data packet and generating a first hash value from the first hash key using a first hash function. The example method also includes generating a second hash key based on the data packet and the first hash value and generating a second hash value from the second hash key using a second hash function. The example method still further includes selecting a next-hop path based on the second hash value. | 01-20-2011 |
20110013638 | NODE BASED PATH SELECTION RANDOMIZATION - Methods and apparatus for randomizing selection of a next-hop path/link in a network are disclosed. An example method includes randomly selecting one or more path-selection randomization operations to be applied to data packets processed in the network device. The example method further includes receiving a data packet and applying, by the network device, the one or more path-selection randomization operations to the data packet. The example method also includes determining a next-hop path for the data packet based on the one or more path-selection randomization operations and transmitting the data packet to a next-hop network device using the determined next-hop path. | 01-20-2011 |
20110013639 | FLOW BASED PATH SELECTION RANDOMIZATION USING PARALLEL HASH FUNCTIONS - Methods and apparatus for randomizing selection of a next-hop path/link in a network are disclosed. An example method includes receiving, at the network device, a data packet. The example method further includes generating a first hash key based on the data packet and generating a first hash value from the first hash key using a first hash function. The example method also includes generating a second hash key based on the data packet and generating a second hash value from the second hash key using a second hash function. The method still further includes combining the first hash value and the second hash value to produce a combined hash value and selecting a next-hop path based on the combined hash value. | 01-20-2011 |
20110029796 | System and Method for Adjusting an Energy Efficient Ethernet Control Policy Using Measured Power Savings - A system and method for adjusting an energy efficient Ethernet (EEE) control policy using measured power savings. An EEE-enabled device can be designed to report EEE event data. This reported EEE event data can be used to quantify the actual EEE benefits of the EEE-enabled device, debug the EEE-enabled device, and adjust the EEE control policy. | 02-03-2011 |
20110051602 | DYNAMIC LOAD BALANCING - Methods and apparatus for dynamic load balancing are disclosed. An example method includes receiving, at a network device, a data packet to be sent via an aggregation group, where the aggregation group comprises a plurality of aggregate members. The example method further includes determining, based on the data packet, a flow identifier of a flow to which the data packet belongs and determining a state of the flow. The example method also includes determining, based on the flow identifier and the state of the flow, an assigned member of the plurality of aggregate members for the flow and communicating the packet via the assigned member. | 03-03-2011 |
20110051603 | DYNAMIC LOAD BALANCING USING QUALITY/LOADING BANDS - Methods and apparatus for dynamic load balancing using quality/loading bands are disclosed. An example method includes determining, by a network device, respective quality metrics for each of a plurality of members of an aggregation group of the network device, the respective quality metrics representing respective data traffic loading for each member of the aggregation group. The example method further includes grouping the plurality of aggregation group members into a plurality of loading/quality bands based on their respective quality metrics. The example method also includes selecting members of the aggregation group for transmitting packets from a loading/quality band corresponding with members of the aggregation group having lower data traffic loading relative to the other members of the aggregation group. | 03-03-2011 |
20110051735 | DYNAMIC LOAD BALANCING USING VIRTUAL LINK CREDIT ACCOUNTING - Methods and apparatus for dynamic load balancing using virtual link credit accounting are disclosed. An example method includes receiving, at a network device, a data packet to be communicated using an aggregation group, the aggregation group including a plurality of virtual links having a common destination. The example method further includes determining a hash value based on the packet and determining an assigned virtual link of the plurality of virtual links based on the hash value. The example method also includes reducing a number of available transmission credits for the aggregation group and reducing a number of available transmission credits for the assigned virtual link. The example method still further includes communicating the packet to another network device using the assigned virtual link. | 03-03-2011 |
20110063979 | NETWORK TRAFFIC MANAGEMENT - Various example embodiments are disclosed. According to an example embodiment, an apparatus may include a switch fabric. The switch fabric may be configured to assign packets to either a first flow set or a second flow set based on fields included in the packets. The switch fabric may also be configured to send a first packet from the first flow set to a first flow set destination via a first path. The switch fabric may also be configured to determine, based at least in part on delays of the first path and a second path, whether sending a second packet from the first flow set to the first flow set destination via a second path will result in the second packet reaching the first flow set destination after the first packet reaches the first flow set destination, the second packet having been received by the switch fabric after the first packet. The switch fabric may also be configured to send the second packet to the first flow set destination via the second path based at least in part on the determining that sending the second packet from the first flow set to the first flow set destination via a second path will result in the second packet reaching the first flow set destination after the first packet reaches the first flow set destination. | 03-17-2011 |
20110261814 | PACKET PREEMPTION FOR LOW LATENCY - While transmitting a first Ethernet frame from the first buffer onto an Ethernet link, a first Ethernet device may stop transmitting the first frame prior to completing transmission of the frame. The first Ethernet device may then transmit a second frame from a second buffer onto the Ethernet link. The first Ethernet device may resume transmission of the first frame from the first buffer onto the Ethernet link. A second Ethernet device may receive, via the Ethernet link, a first portion of a first Ethernet frame and store the first portion of the first Ethernet frame in a first buffer. The second Ethernet device may then receive, via the Ethernet link, a second Ethernet frame and store the second Ethernet frame in a second buffer. The second Ethernet device may then receive, via the Ethernet link, a second portion of the first Ethernet frame and append it to the contents of the first buffer. | 10-27-2011 |
20120195192 | DYNAMIC MEMORY BANDWIDTH ALLOCATION - Methods and apparatus for dynamic bandwidth allocation are disclosed. An example method includes determining, by a network device, at least one of a congestion state of a packet memory buffer of the network device and a congestion state of an external packet memory that is operationally coupled with the network device. The example method further includes dynamically adjusting, by the network device, respective bandwidth allocations for read and write operations between the network device and the external packet memory, the dynamic adjusting being based on the determined congestion state of the packet memory buffer and/or the determined congestion state of the external packet memory. | 08-02-2012 |
20120230194 | Hash-Based Load Balancing in Large Multi-Hop Networks with Randomized Seed Selection - Methods and apparatus for improving hash-based load balancing with randomized seed selection are disclosed. The methods and apparatus described herein increase the number of unique fields in a hash key before the hash key is presented to a hash function. The methods include selecting one or more seed values based on the output of a first arbitrary function having a first set of packet fields as input. The one or more seed values are combined with a second set of packet fields. A second arbitrary function generates a hash value based on the one or more seed values and the second set of packet fields. The hash value is applied as input to a hash function in a member selection module. The method enables per flow randomization attributes based on per packet attributes to perform aggregate member selection while remaining deterministic from a root-node or network perspective. | 09-13-2012 |
20120230225 | Hash-Based Load Balancing with Per-Hop Seeding - Methods and apparatus for improving hash-based load balancing with per-hop seeding are disclosed. The methods and apparatus described herein provide a set of techniques that enable nodes to perform differing mathematical transformations when selecting a destination link. The techniques include manipulation of seeds, hash configuration mode randomization at a per node basis, per node/microflow basis or per microflow basis, seed index generation, and member selection. A node can utilize any, or all, of the techniques presented in this disclosure simultaneously to improve traffic distribution and avoid path starvation with a degree of determinism. | 09-13-2012 |
20120287946 | Hash-Based Load Balancing with Flow Identifier Remapping - Methods and apparatus for improving hash-based load balancing using flow identifier remapping are disclosed. The node-based remapping of flow identifiers introduces additional information into the hash function by injecting new values into the hash key on a per node basis. The methods and apparatus described herein perform a remapping operation on a fixed per-flow attribute such as one or more packet fields. Upon receipt of a packet, a set of the packet fields is selected as a hash key. From these selected packet fields, one or more fields are selected and remapped using a remapping operation. A transformed hash key is formed using the one or more remapped values along with other packet fields. The transformed hash key is then presented as an input to an arbitrary hash function. The hash function generates a hash value that is then used for path selection. | 11-15-2012 |
20130003546 | System and Method for Achieving Lossless Packet Delivery in Packet Rate Oversubscribed Systems - A system and method for achieving lossless packet delivery in packet rate oversubscribed systems. Link-level packet rate control can be effected through the transmission of packet rate control messages to link partners of an oversubscribed system. The transmission of packet rate control messages can be triggered upon a determination that a packet arrival rate over a set of ingress ports exceeds a packet processing rate of a packet processing unit bound to the set of ingress ports. In one embodiment, the packet processing rate is artificially reduced due to a reduction in power consumption in the oversubscribed system. | 01-03-2013 |
20130003549 | Resilient Hashing for Load Balancing of Traffic Flows - Methods, systems, and computer program product embodiments for managing traffic flows among a plurality of available member resources in a communications device are disclosed. Embodiments include configuring a flow table containing a plurality of mappings, where each of the mappings specifies a relationship between one of a range of index values and at least one of the plurality of available member resources of an aggregated resource, assigning, using the flow table, respective traffic flows to at least one of the plurality of available member resources, and responsive to a change in the plurality of available member resources, changing the plurality of mappings. | 01-03-2013 |
20130003559 | Adaptive Power Savings for Aggregated Resources - Embodiments of the present invention are directed to adaptive power savings in aggregated resources in communications devices. According to an embodiment, managing an aggregated resource in a communications device includes monitoring at least one current operational condition of the communications device, identifying, based upon a policy configuration and the monitored at least one current operational condition, one of the member resources of the aggregated resource as an eligible member resource to configure to a power-saving state, and reassigning traffic flows from the eligible member resource to at least one other member resource of the plurality of member resources. | 01-03-2013 |
20130083796 | System and Method for Improving Multicast Performance in Banked Shared Memory Architectures - A system and method for improving multicast performance in banked shared memory architectures. Temporal localities created by multicast packets in a shared memory bank are addressed through caching. In one embodiment, multicast packets are stored in a cache memory that is associated with a bank of shared memory. In another embodiment, read requests for multicast packets are stored in a read request cache, wherein additional read requests are accumulated prior to an actual read event. | 04-04-2013 |
20130121153 | DYNAMIC LOAD BALANCING USING QUALITY/LOADING BANDS - Methods and apparatus for load balancing data traffic are disclosed. An example method includes determining a respective quality metric for each of a plurality of members of an aggregation group of the network device, each respective quality metric representing respective data traffic loading for each member of the plurality of aggregation group members. The example method also includes grouping the plurality of aggregation group members into a plurality of loading/quality bands based on their respective quality metrics. The example method further includes selecting members of the aggregation group for transmitting packets from a loading/quality band corresponding with members of the aggregation group having lower data traffic loading relative to other members of the aggregation group. | 05-16-2013 |
20130173908 | Hash Table Organization - Disclosed are various embodiments for improving hash table utilization. A key corresponding to a data item to be inserted into a hash table can be transformed to improve the entropy of the key space and the resultant hash codes that can be generated. Transformation data can be inserted into the key in various ways, which can result in a greater degree of variance in the resultant hash code calculated based upon the transformed key. | 07-04-2013 |
20130322243 | SYSTEM FOR PERFORMING DISTRIBUTED DATA CUT-THROUGH - A data segment of a data packet destined for an egress port of an egress node may be received at a first ingress node. An egress statement vector and an ingress statement vector may be identified at the first ingress node. A determination may be made, based on the egress statement vector and ingress statement vector, whether the first ingress node is authorized to transfer the data segment to the egress port before the other data segments of the data packet are received at the first ingress node. The data segment may be transferred to the egress port before the other data segments of the data packet are received at the first ingress node when the determination indicates the first ingress node is authorized. The data segment may be stored in a buffer of the first ingress node when the determination indicates the first ingress node is not authorized. | 12-05-2013 |
20130322244 | SYSTEM FOR PERFORMING DISTRIBUTED DATA CUT-THROUGH - A system for transferring data includes an egress node including an egress port, and an ingress node configured to receive a data segment of a data packet destined for the egress port. The data packet is associated with a packet priority level. The ingress node is configured to receive an egress statement vector from the egress node indicating whether the egress port is or is not flow controlled for data associated with the packet priority level. The ingress node is configured to determine whether the egress port is available to receive the data segment from the ingress node before other data segments of the data packet are received at the ingress node based on the egress statement vector. | 12-05-2013 |
20130322271 | SYSTEM FOR PERFORMING DATA CUT-THROUGH - A system transfers data. The system includes an ingress node transferring data at a determined bandwidth. The ingress node includes a buffer and operates based on a monitored node parameter. The system includes a controller in communication with the ingress node. The controller is configured to allocate, based on the monitored node parameter, an amount of the determined bandwidth for directly transferring data to bypass the buffer of the ingress node. | 12-05-2013 |
20130336332 | SCALING OUTPUT-BUFFERED SWITCHES - The systems and methods described herein allow for the scaling of output-buffered switches by decoupling the data path from the control path. Some embodiments of the invention include a switch with a memory management unit (MMU), in which the MMU enqueues data packets to an egress queue at a rate that is less than the maximum ingress rate of the switch. Other embodiments include switches that employ pre-enqueue work queues, with an arbiter that selects a data packet for forwarding from one of the pre-enqueue work queues to an egress queue. | 12-19-2013 |
20140022895 | Reducing Store And Forward Delay In Distributed Systems - Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include speculative flow status messaging, for example. The speculative flow status messaging may alert an egress tile or output port of an incoming packet before the incoming packet is fully received. The processing techniques may also include implementing a separate accelerated credit pool which provides controlled push capability for the ingress tile or input port to send packets to the egress tile or output port without waiting for a bandwidth credit from the egress tile or output port. | 01-23-2014 |
20140086258 | Buffer Statistics Tracking - The systems and methods disclosed herein allow for a switch (in a packet-switching network) to track buffer statistics, and trigger an event, such as a hardware interrupt or a system snapshot, in response to the buffer statistics reaching a threshold that may indicate an impending problem. Since the switch itself triggers the event to alert the network administrator, the network administrator no longer needs to sift through mountains of data to identify potential problems. Also, since the switch triggers the event prior to a problem arising, the network administrator can provide remedial action prior to a problem occurring. This type of event-triggering mechanism makes the administration of packet-switching networks more manageable. | 03-27-2014 |
20140086262 | SCALABLE EGRESS PARTITIONED SHARED MEMORY ARCHITECTURE - Disclosed are various embodiments that provide an architecture of memory buffers for a network component configured to process packets. A network component may receive a packet, the packet being associated with a control structure and packet data, as well as an input port set and an output port set. The network component determines one of a plurality of control structure memory partitions for writing the control structure, the one of the plurality of control structure memory partitions being determined based at least upon the input port set and the output port set; and determines one of a plurality of packet data memory partitions for writing the packet data, the one of the plurality of packet data memory partitions being determined independently of the input port set. | 03-27-2014 |
20140098816 | MULTICAST SWITCHING FOR DISTRIBUTED DEVICES - A system for multicast switching for distributed devices may include an ingress node including an ingress memory and an egress node including an egress memory, where the ingress node is communicatively coupled to the egress node. The ingress node may be operable to receive a portion of a multicast frame over an ingress port, bypass the ingress memory and provide the portion to the egress node when the portion satisfies an ingress criteria, otherwise receive and store the entire frame in the ingress memory before providing the frame to the egress node. The egress node may be operable to receive the portion from the ingress node, bypass the egress memory for the portion and provide the portion to the first egress port when an egress criteria is satisfied, otherwise receive and store the entire multicast frame in the egress memory before providing the multicast frame to an egress port. | 04-10-2014 |
20140112128 | OVERSUBSCRIPTION BUFFER MANAGEMENT - Various methods and systems are provided for oversubscription buffer management. In one embodiment, among others, a method for oversubscription control determines a utilization level of an oversubscription buffer that is common to a plurality of ingress ports and initiates adjustment of an ingress packet rate of the oversubscription buffer in response to the utilization level. In another embodiment, a method determines an occupancy level of a virtual oversubscription buffer associated with an oversubscription buffer and initiates adjustment of an ingress packet rate in response to the occupancy level. In another embodiment, a rack switch includes an oversubscription buffer configured to receive packets from a plurality of ingress ports and provide the received packets for processing by the rack switch and a packet flow control configured to monitor an occupancy level of the oversubscription buffer and to initiate adjustment of an ingress packet rate in response to the occupancy level. | 04-24-2014 |
20140112348 | TRAFFIC FLOW MANAGEMENT WITHIN A DISTRIBUTED SYSTEM - Various methods and systems are provided for traffic flow management within distributed traffic. In one example, among others, a distributed system includes egress ports supported by nodes of the distributed system, cut-through tokens (c-tokens) including an indication of eligibility of the corresponding egress port to handle cut-through traffic, and a cut-through control ring to pass the c-tokens between the nodes. In another example, a method includes determining whether an egress port is available to handle cut-through traffic based upon a corresponding c-token, claiming the egress port for transmission of at least a portion of a packet, and routing it to the claimed egress port for transmission. In another example, a distributed system includes a first node configured to modify an eligibility indication of a c-token before transmission to a second node configured to route at least a portion of a packet based at least in part upon the eligibility indication. | 04-24-2014 |
20140133483 | Distributed Switch Architecture Using Permutation Switching - A distributed switch architecture using permutation switching. In one embodiment, the distributed switch architecture facilitates connections between a plurality of ingress nodes and a plurality of egress nodes, wherein each of the plurality of ingress nodes and plurality of egress nodes are coupled to a plurality of ports (e.g., 40 gigabit Ethernet (GbE), 100 GbE, etc.). A plurality of crossbar switch modules are provided that are configured for coupling to a single output from each of the plurality of ingress nodes, and for coupling to a single input from each of the plurality of egress nodes. Permutations of connections for a crossbar switch module are defined by a permutation connection set that is stored in a permutation engine. Each permutation connection in the permutation connection set can be designed to couple one of the outputs from the plurality of ingress nodes to one of the inputs from the plurality of egress nodes, wherein the permutation connection set can ensure that each of the plurality of ingress nodes has an opportunity to connect with each of the plurality of egress nodes. | 05-15-2014 |
20140185628 | DEADLINE AWARE QUEUE MANAGEMENT - A method for managing data traffic operating on a deadline is provided. The method includes receiving, on an intermediate node, a packet having one or more traffic characteristics. The method also includes evaluating, on the intermediate node, the one or more traffic characteristics to determine a priority of the packet. The method also includes selecting one of multiple queues on the intermediate node based on the determined priority. The method also includes processing, on the intermediate node, the packet based on the determined priority. The method also includes enqueuing the processed packet into the selected queue. The method further includes outputting the queued packet from the selected queue. | 07-03-2014 |
20140201354 | NETWORK TRAFFIC DEBUGGER - Disclosed are various embodiments that relate to a network switch. The switch determines whether a network packet is associated with a packet processing context, the packet processing context specifying a condition of handling network packets processed in the switch. The switch determines debug metadata for the network packet in response to the network packet being associated with the packet processing context; and the debug metadata is stored in a capture buffer. | 07-17-2014 |
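The path-selection applications above (20110013627 and 20110013639) describe generating two hash values from packet fields and combining them to pick a next-hop. The abstracts do not name the hash functions or the combine operation, so the sketch below is a hypothetical illustration using CRC-32 and Adler-32 with an XOR combine:

```python
import zlib

def select_next_hop(packet_fields: bytes, num_paths: int) -> int:
    """Pick a next-hop path index from packet fields via two parallel hashes.

    Hash choices (CRC-32, Adler-32) and the XOR combine are assumptions for
    illustration; the patent abstracts leave them unspecified.
    """
    # First hash value computed over the packet-derived hash key.
    h1 = zlib.crc32(packet_fields)
    # Second hash value from a second, independent hash function.
    h2 = zlib.adler32(packet_fields)
    # Combine the two hash values into one, then map onto the path set.
    combined = h1 ^ h2
    return combined % num_paths
```

Because the same packet fields always yield the same combined value, all packets of a flow stay on one path, while the two-function combination spreads distinct flows more evenly than a single hash.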
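Application 20130003549 describes a flow table mapping a range of index values to member resources, updated when membership changes. One property such tables typically aim for is that removing a member remaps only the entries that pointed at it, leaving other flows on their existing members. A hypothetical sketch of that idea (the table size, initial round-robin fill, and remap rule are assumptions, not the patented design):

```python
class ResilientHashTable:
    """Flow table mapping hash index values to member resources.

    Removing a member rewrites only the table entries that pointed at it,
    so flows hashed to surviving members keep their assignments.
    """

    def __init__(self, members, table_size=64):
        self.table_size = table_size
        self.members = list(members)
        # Initial mapping: spread members round-robin across the index range.
        self.table = [self.members[i % len(self.members)]
                      for i in range(table_size)]

    def lookup(self, flow_hash):
        # A flow's hash indexes into the table to find its assigned member.
        return self.table[flow_hash % self.table_size]

    def remove_member(self, member):
        # Remap only the entries that pointed at the removed member.
        self.members.remove(member)
        for i, m in enumerate(self.table):
            if m == member:
                self.table[i] = self.members[i % len(self.members)]
```

Contrast with plain `hash % len(members)`: there, removing one member reshuffles nearly every flow; here, only flows on the failed member move.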
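Applications 20110051603 and 20130121153 group aggregation-group members into loading/quality bands and select transmit members from the band with the lowest loading. A minimal sketch, assuming utilization percentages as the quality metric and fixed-width bands (both assumptions; the abstracts do not define the metric or band boundaries):

```python
def group_into_bands(load_by_member, band_width=25):
    """Group members into loading bands keyed by utilization range.

    band_width=25 puts 0-24% load in band 0, 25-49% in band 1, and so on
    (an illustrative choice, not taken from the abstracts).
    """
    bands = {}
    for member, load in load_by_member.items():
        bands.setdefault(load // band_width, []).append(member)
    return bands

def select_member(load_by_member):
    """Pick a member from the band with the lowest data traffic loading."""
    bands = group_into_bands(load_by_member)
    lightest_band = bands[min(bands)]
    return lightest_band[0]
```

Selecting from a band rather than always from the single least-loaded member avoids thrashing: members with similar loading are treated as equivalent until their band assignment actually changes.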