Patent application number | Description | Published |
--- | --- | --- |
20140052935 | SCALABLE MULTI-BANK MEMORY ARCHITECTURE - According to one general aspect, a method may include, in one embodiment, grouping a plurality of at least single-ported memory banks together to substantially act as a single at least dual-ported aggregated memory element. In various embodiments, the method may also include controlling read access to the memory banks such that a read operation may occur from any memory bank in which data is stored. In some embodiments, the method may include controlling write access to the memory banks such that a write operation may occur to any memory bank which is not being accessed by a read operation. | 02-20-2014 |
20140098818 | Internal Cut-Through For Distributed Switches - Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include internal cut-through. The internal cut-through may bypass input port buffers by directly forwarding packet data that has been received to an output port. At the output port, the packet data is buffered for processing and communication out of the switch. | 04-10-2014 |
20140126395 | SWITCH STATE REPORTING - Disclosed are various embodiments that relate to a network switch. The network switch obtains a network state metric, the network state metric quantifying network traffic congestion associated with the switch. The network switch identifies a synchronous time stamp associated with the network state metric and generates a network state reporting message, the network state reporting message comprising the network state metric and the synchronous time stamp. The network state reporting message may be transmitted to a monitoring system. | 05-08-2014 |
20140126396 | Annotated Tracing Driven Network Adaptation - Network devices add annotation information to network packets as they travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The annotation information may be information specific to the network devices, as opposed to simply the kinds of information available at application servers that receive the network packets. As just a few examples, the annotation information may include switch buffer levels, routing delay, routing parameters affecting the packet, switch identifiers, power consumption, and heat, moisture, or other environmental data. | 05-08-2014 |
20140126573 | Annotated Tracing for Data Networks - Network devices add annotation information to network packets as they travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The annotation information may be information specific to the network devices, as opposed to simply the kinds of information available at application servers that receive the network packets. As just a few examples, the annotation information may include switch buffer levels, routing delay, routing parameters affecting the packet, switch identifiers, power consumption, and heat, moisture, or other environmental data. | 05-08-2014 |
20140133314 | FORENSICS FOR NETWORK SWITCHING DIAGNOSIS - A method for diagnosing performance of a network switch device includes a processor monitoring data generated by a sensor associated with a network switch device, the data related to states or attributes of the network switch device. The processor detects a determined condition in the operation of the network switch device related to the state or attribute. The processor generates an event trigger in response to detecting the determined condition and executes a forensic command in response to the event trigger. Executing the command includes sending information relevant to the determined condition for aggregation in computer storage and for analysis. | 05-15-2014 |
20140211634 | ADAPTIVE BUFFER ALLOCATION MANAGEMENT - Aspects of adaptive buffer allocation management are described. In one embodiment of adaptive buffer allocation management, data is received by a network component for communication to a network address. While awaiting transfer to the network address, the data is stored or distributed to a buffer. In one embodiment, the data is distributed evenly about banks of the buffer when an amount of utilization of the buffer is low. In another embodiment, the data is distributed to certain banks of the buffer when the amount of utilization of the buffer is high. In other aspects, the amount of utilization of the buffer is monitored while data is distributed to the banks of the buffer, and the manner of data distribution among the banks is adapted based on the utilization. According to aspects of the adaptive data distribution, a buffer of reduced size may be used. | 07-31-2014 |
20140211639 | Network Tracing for Data Centers - Network devices facilitate network tracing using tracing packets that travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The network tracing may include sending tracing packets down each of multiple routed paths between a source and a destination, at each hop through the network, or through a selected subset of the paths between a source and a destination. The network devices may add tracing information to the tracing packets, which an analysis system may review to determine characteristics of the network and the characteristics of the potentially many paths between a source and a destination. | 07-31-2014 |
20140219087 | Packet Marking For Flow Management, Including Deadline Aware Flow Management - Network devices facilitate flow management through packet marking. The network devices may be switches, routers, bridges, hubs, or any other network device. The packet marking may include analyzing received packets to determine when the received packets meet a marking criterion, and then applying a configurable marking function to mark the packets in a particular way. The marking capability may facilitate deadline aware end-to-end flow management, as one specific example. More generally, the marking capability may facilitate traffic management actions such as visibility actions and flow management actions. | 08-07-2014 |
20140233382 | Oversubscription Monitor - Aspects of oversubscription monitoring are described. In one embodiment, oversubscription monitoring includes accumulating an amount of data that arrives at a network component over at least one epoch of time. Further, a core processing rate at which data can be processed by the network component is calculated. Based on the amount of data and the core processing rate, it is determined whether the network component is operating in an oversubscribed region of operation. In one embodiment, when the network component is operating in the oversubscribed region of operation, certain quality of service metrics are monitored. Using the monitored metrics, a network operation display object may be generated for identifying or troubleshooting network errors during an oversubscribed region of operation of the network component. | 08-21-2014 |
20140233421 | Application Aware Elephant Flow Identification - A network device identifies elephant flows. The network device filters received network data according to an application-specific criterion and identifies the elephant flow from the filtered network data. To do so, the network device can employ a multi-stage filtering process to identify an elephant flow in the received network data. The network device separates the filtered network data into multiple macroflows using a first hash function, and identifies the macroflow with the highest rate. Then, the network device disaggregates the high-rate macroflow into multiple microflows using a second hash function and identifies the highest-rate microflow as the elephant flow. The network device maintains an elephant flow cache with entries for currently identified elephant flows. The network device may also take management actions on the elephant flows, and the management actions may be application specific. | 08-21-2014 |
20140237118 | Application Aware Elephant Flow Management - A network device manages elephant flows. The network device filters received network data according to an application-specific criterion and identifies the elephant flow from the filtered network data. To do so, the network device can employ a multi-stage filtering process to identify an elephant flow in the received network data. The network device separates the filtered network data into multiple macroflows using a first hash function, and identifies the macroflow with the highest rate. Then, the network device disaggregates the high-rate macroflow into multiple microflows using a second hash function and identifies the highest-rate microflow as the elephant flow. The network device maintains an elephant flow cache with entries for currently identified elephant flows. The network device may also take management actions on the elephant flows, and the management actions may be application specific. | 08-21-2014 |
20140254357 | FACILITATING NETWORK FLOWS - Disclosed are various embodiments for facilitating network flows in a networked environment. In various embodiments, a switch transmits data using an egress port that comprises an egress queue. The switch sets a congestion notification threshold for the egress queue. The switch generates a drain rate metric based at least in part on a drain rate for the egress queue, and the congestion notification threshold is adjusted based at least in part on the drain rate metric. | 09-11-2014 |
20140359231 | System and Method for Efficient Buffer Management for Banked Shared Memory Designs - A system and method for efficient buffer management for banked shared memory designs. In one embodiment, a controller within the switch is configured to manage the buffering of the shared memory banks by allocating full address sets to write sources. Each full address set that is allocated to a write source includes a number of memory addresses, wherein each memory address is associated with a different shared memory bank. A size of the full address set can be based on a determined number of buffer access contenders. | 12-04-2014 |
20150026361 | Ingress Based Headroom Buffering For Switch Architectures - A network device performs ingress based headroom buffering. The network device may be configured as an output queue switch and include a main packet buffer that stores packet data according to a destination egress port. The network device may implement one or more ingress buffers associated with ingress data ports in the network device. The ingress buffers may be separate from the main packet buffer. The network device may identify a flow control condition triggered by an ingress data port, such as when an amount of data stored in the main packet buffer received through the ingress data port exceeds a fill threshold. In response, the network device may send a flow control message to a link partner to cease sending network traffic through the ingress data port. The network device may store in-flight data from the link partner in an ingress buffer instead of the main packet buffer. | 01-22-2015 |
20150089047 | CUT-THROUGH PACKET MANAGEMENT - Disclosed are various embodiments that relate to identifying a source of corruption in a network made up of multiple network nodes. A network node is configured to provide corruption source identification while handling packets according to a cut-through scheme. According to some embodiments, a network node may perform a running error detection operation on a cut-through packet and then insert a debug indicator into the cut-through packet. In other embodiments, the network node may process some packets according to a cut-through scheme while processing other packets according to a store-and-forward scheme to detect packet corruption in a network. | 03-26-2015 |
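The two-stage hash-based elephant-flow identification described in entries 20140233421 and 20140237118 can be sketched in software. This is an illustrative model only, not the claimed implementation: the function names, the salted-hash bucketing, and the per-sample `(flow_key, byte_count)` format are assumptions, and a real switch would keep only per-bucket hardware counters rather than tracking flow keys per bucket.

```python
import hashlib
from collections import defaultdict

def _bucket(key: str, salt: str, buckets: int) -> int:
    """Hash a flow key into one of `buckets` groups (stand-in for a hardware hash)."""
    digest = hashlib.sha256((salt + key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % buckets

def identify_elephant(samples, buckets=8):
    """samples: iterable of (flow_key, byte_count) observations.

    Stage 1: aggregate flows into macroflows with a first hash function and
    find the highest-rate macroflow.  Stage 2: disaggregate only that
    macroflow into microflows with a second hash function and return the
    flows and byte volume of the highest-rate microflow.
    """
    # Stage 1: macroflow counters under hash "h1".
    macro = defaultdict(int)
    for key, nbytes in samples:
        macro[_bucket(key, "h1", buckets)] += nbytes
    top_macro = max(macro, key=macro.get)

    # Stage 2: split the top macroflow into microflows under hash "h2".
    micro = defaultdict(int)
    owners = defaultdict(set)
    for key, nbytes in samples:
        if _bucket(key, "h1", buckets) == top_macro:
            b = _bucket(key, "h2", buckets)
            micro[b] += nbytes
            owners[b].add(key)
    top_micro = max(micro, key=micro.get)
    return owners[top_micro], micro[top_micro]
```

A dominant flow in the samples lands in the top macroflow and then the top microflow, so it survives both filtering stages; in the applications above the result would populate the elephant flow cache and drive application-specific management actions.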
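The adaptive bank-selection policy of entry 20140211634 might be approximated as follows. This is a sketch under stated assumptions: `choose_bank`, the `high_watermark` parameter, and the reading of "distributed to certain banks" at high utilization as concentrating writes into already-busy banks are all hypothetical, not taken from the application, which would select banks with dedicated hardware rather than software comparisons.

```python
def choose_bank(banks_fill, capacity, high_watermark=0.75):
    """Pick a buffer bank index for an incoming unit of data.

    banks_fill: current fill level of each bank.
    capacity:   per-bank capacity, in the same units.

    Low utilization: spread data evenly by writing to the least-filled
    bank.  High utilization: concentrate writes into the most-filled bank
    that still has room, leaving lightly used banks available.
    """
    utilization = sum(banks_fill) / (capacity * len(banks_fill))
    candidates = [i for i, fill in enumerate(banks_fill) if fill < capacity]
    if not candidates:
        raise RuntimeError("buffer full")
    if utilization < high_watermark:
        # Even distribution across banks.
        return min(candidates, key=lambda i: banks_fill[i])
    # Concentrate into certain (already busy) banks.
    return max(candidates, key=lambda i: banks_fill[i])
```

Monitoring `utilization` on every write and switching policy at the watermark is what lets the distribution adapt, which is the property the application credits for allowing a buffer of reduced size.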