Class / Patent application number | Description | Number of patent applications / Date published |
370417000 | Having output queuing only | 38 |
20080317058 | Accessory queue management system and method for interacting with a queuing system - A virtual queuing system and method provides for dynamic control of queue data in accordance with queue control instructions provided by a separate queue control source. The virtual queuing system comprises an interface to a queuing system and an interface to the separate queue control source. The interface to the queuing system provides for obtaining queue data and controlling the queue data in accordance with the queue control instructions provided by the separate queue control source. The interface to the separate queue control source provides for: i) providing the queue data to the separate queue control source; and ii) obtaining the queue control instructions therefrom. | 12-25-2008 |
20080317059 | APPARATUS AND METHOD FOR PRIORITY QUEUING WITH SEGMENTED BUFFERS - Apparatus and methods for efficient queuing and dequeuing using segmented output buffers comprising sub-buffers and priority queues. Output buffers are monitored for empty sub-buffers. When a newly empty sub-buffer is discovered, a refill request is enqueued in a ranked priority queue wherein the rank of the destination priority queue is based on the number of empty-sub-buffers in the requesting output buffer. All high priority refill requests are dequeued before lower priority refill requests, thereby reducing the possibility of starvation. Optionally, by using simple dequeuing criteria, such as a FIFO discipline, instead of complex algorithms designed to improve fairness, system resources may be conserved thereby improving system throughput. | 12-25-2008 |
20090122805 | Instrumenting packet flows - Disclosed are, inter alia, methods, apparatus, computer-readable media, mechanisms, and means for instrumenting real-time customer packet traffic. These measured delays can be used to determine whether or not the performance of a packet switching device and/or network meets desired levels, especially for complying with a Service Level Agreement. | 05-14-2009 |
20090135844 | TRANSMIT-SIDE SCALER AND METHOD FOR PROCESSING OUTGOING INFORMATION PACKETS USING THREAD-BASED QUEUES - Embodiments of a transmit-side scaler and method for processing outgoing information packets using thread-based queues are generally described herein. Other embodiments may be described and claimed. In some embodiments, a process ID stored in a token area may be compared with a process ID of an application that generated an outgoing information packet to obtain a transmit queue. The token area may be updated with a process ID stored in an active threads table when the process ID stored in the token area does not match the process ID of the application. | 05-28-2009 |
20090141733 | ALGORITHM AND SYSTEM FOR SELECTING ACKNOWLEDGMENTS FROM AN ARRAY OF COLLAPSED VOQ'S - A method for selecting packets to be switched in a collapsed virtual output queuing array (cVOQ) switch core, using a request/acknowledge mechanism. According to the method, an efficient set of virtual output queues (at most one virtual output queue per ingress adapter) is selected, while keeping the algorithm simple enough to allow its implementation in fast state machines. For determining a set of virtual output queues that are each authorized to send a packet, the algorithm is based upon degrees of freedom characterizing states of ingress and egress adapters. For example, the degree of freedom, derived from the collapsed virtual output queuing array, could represent the number of egress ports to which an ingress port may send packets, or the number of ingress ports from which an egress port may receive packets, at a given time. Analyzing all the ingress ports holding at least one data packet, from the least degree of freedom to the greatest, the algorithm determines as many virtual output queues as possible, up to the limit of the number of ingress ports (an ingress port may send only one packet per packet-cycle). | 06-04-2009 |
20090290593 | METHOD AND APPARATUS FOR IMPLEMENTING OUTPUT QUEUE-BASED FLOW CONTROL - A method and apparatus for implementing output queue-based flow control is provided. The method includes: implementing queue scheduling and flow control by using an output port-based cell queue and by counting the number of cells from different perspectives. In this system, flow control and queue management are performed separately. The queue management is directly applied to the cell scheduling. The flow control does not directly depend on the cell statistical results in the queue management. Instead, it is implemented on the basis of cell statistics obtained according to the cell priority, output port and source chip number of the cells. Therefore, the provided method and apparatus may reduce the number of queues to be scheduled and implement fine, flexible back-pressure control. | 11-26-2009 |
20100238948 | PACKET SWITCHING SYSTEM AND METHOD - A packet switching system capable of ensuring the sequence and continuity of packets and further compensating for delays in transmission is disclosed. Each of two redundant switch sections has a high-priority queue and a low-priority queue for each of output ports. A high-priority output selector selects one of two high-priority queues corresponding to respective ones of the two switch sections to store an output of the selected one into a high-priority output queue. A low-priority output selector selects one of two low-priority queues corresponding to respective ones of the two switch sections to store an output of the selected one into a low-priority output queue. The high-priority and low-priority output selectors are controlled depending on a system switching signal and a packet storing status of each of the high-priority and low-priority queues. | 09-23-2010 |
20100260198 | Space-Space-Memory (SSM) Clos-Network Packet Switch - A Clos-network packet switching system may include input modules coupled to a virtual output queue, central modules coupled to the input modules, and output modules coupled to the central modules, each output module having a plurality of cross-point buffers for storing a packet and one or more output ports for outputting the packet. | 10-14-2010 |
20100316061 | Configuring a Three-Stage Clos-Network Packet Switch - Examples are disclosed for configuring one or more routes through a three-stage Clos-network packet switch. | 12-16-2010 |
20100322265 | SYSTEMS AND METHODS FOR RECEIVE AND TRANSMISSION QUEUE PROCESSING IN A MULTI-CORE ARCHITECTURE - Described herein is a method and system for directing outgoing data packets from packet engines to a transmit queue of a NIC in a multi-core system, and a method and system for directing incoming data packets from a receive queue of the NIC to the packet engines. Packet engines store outgoing traffic in logical transmit queues in the packet engines. An interface module obtains the outgoing traffic and stores it in a transmit queue of the NIC, after which the NIC transmits the traffic from the multi-core system over a network. The NIC receives incoming traffic and stores it in a NIC receive queue. The interface module obtains the incoming traffic and applies a hash to a tuple of each obtained data packet. The interface module then stores each data packet in the logical receive queue of a packet engine on the core identified by the result of the hash. | 12-23-2010 |
20110158252 | OUTGOING COMMUNICATIONS INVENTORY - Systems and methods for generating and accessing a communications inventory are provided. To generate the inventory in one embodiment, a plurality of outgoing communications is received. The outgoing communications may have been auto-generated or generated as part of a batch process. Next, a determination is made that a first outgoing communication of the plurality of outgoing communications is unique relative to other outgoing communications to avoid storing duplicate messages. Lastly, a user may access a display of the first outgoing communication. | 06-30-2011 |
20110170558 | SCHEDULING, INCLUDING DISTRIBUTED SCHEDULING, FOR A BUFFERED CROSSBAR SWITCH - Scheduling methods and apparatus are provided for buffered crossbar switches with a crosspoint buffer size as small as one and no speedup. An exemplary distributed scheduling process achieves 100% throughput for any admissible Bernoulli arrival traffic. Simulation results also showed that this distributed scheduling process can provide very good delay performance for different traffic patterns, and that packet delay is very weakly dependent on the switch size, which implies that the exemplary distributed scheduling process can scale with the number of switch ports. | 07-14-2011 |
20110228795 | MULTI-BANK QUEUING ARCHITECTURE FOR HIGHER BANDWIDTH ON-CHIP MEMORY BUFFER - A network device includes a main storage memory and a queue handling component. The main storage memory includes multiple memory banks which store a plurality of packets for multiple output queues. The queue handling component controls write operations to the multiple memory banks and controls read operations from the multiple memory banks, where the read operations for at least one of the multiple output queues alternate sequentially between each of the multiple memory banks, and where the read operations and the write operations occur during a same clock period on different ones of the multiple memory banks. | 09-22-2011 |
20120170591 | Advanced and Dynamic Physical Layer Device Capabilities Utilizing a Link Interruption Signal - Advanced and dynamic physical layer device capabilities utilizing a link interruption signal. The physical layer device can use a link interruption signal to signal to a media access controller device that the link has temporarily been interrupted. This link interruption signal can be generated in response to one or more programmable modes of the physical layer device that are used to support the advanced and dynamic physical layer device capabilities. | 07-05-2012 |
20120207176 | TRANSMIT-SIDE SCALER AND METHOD FOR PROCESSING OUTGOING INFORMATION PACKETS USING THREAD-BASED QUEUES - Embodiments of a transmit-side scaler and method for processing outgoing information packets using thread-based queues are generally described herein. Other embodiments may be described and claimed. In some embodiments, a process ID stored in a token area may be compared with a process ID of an application that generated an outgoing information packet to obtain a transmit queue. The token area may be updated with a process ID stored in an active threads table when the process ID stored in the token area does not match the process ID of the application. | 08-16-2012 |
20120287941 | Queuing Architectures for Orthogonal Requirements in Quality of Service (QoS) - A node in a mobile ad-hoc network or other network classifies packets (a) in accordance with a first set of priority levels based on urgency and (b) within each priority level of the first set, in accordance with a second set of priority levels based on importance. The node: (a) queues packets classified at highest priority levels of the first and/or second sets in high-priority output queues; (b) queues packets classified at medium priority levels of the first set in medium-priority output queue(s); and (c) queues packets classified at low priority levels of the first and/or second set in low-priority output queue(s). Using an output priority scheduler, the node serves the packets in order of the priorities of the output queues. In such manner, orthogonal aspects of DiffServ and MLPP can be resolved in a MANET or other network. | 11-15-2012 |
20130028266 | RESPONSE MESSAGES BASED ON PENDING REQUESTS - Techniques are provided for sending response messages based on pending requests. A request message identifying a data packet may be received. A pending request structure may be used to determine output queues that are in need of the data packet identified in the request message. A response message may be sent indicating if the request message is being refused based on the output queues. | 01-31-2013 |
20130121341 | MULTI-BANK QUEUING ARCHITECTURE FOR HIGHER BANDWIDTH ON-CHIP MEMORY BUFFER - A network device includes a main storage memory and a queue handling component. The main storage memory includes multiple memory banks which store a plurality of packets for multiple output queues. The queue handling component controls write operations to the multiple memory banks and controls read operations from the multiple memory banks, where the read operations for at least one of the multiple output queues alternate sequentially between each of the multiple memory banks, and where the read operations and the write operations occur during a same clock period on different ones of the multiple memory banks. | 05-16-2013 |
20130201997 | SYSTEM AND METHOD FOR LOCAL FLOW CONTROL AND ADVISORY USING A FAIRNESS-BASED QUEUE MANAGEMENT ALGORITHM - A data processing device for transmitting network packets comprising: packet classification logic for classifying packets according to different packet service classifications, wherein a packet to be transmitted is stored in one or more transmit queues based on the packet service classifications and wherein each packet is associated with a particular flow; and queue management logic for queuing packets in the one or more transmit queues utilizing a flow control policy implemented on a per-flow basis, wherein a number of queued packets for each flow is monitored and when the number of queued packets for a particular flow reaches a specified threshold, then flow control for that particular flow is turned on, and wherein the queue management logic implements a stochastic fair blue (SFB) algorithm to track the number of packets within each transmit queue. | 08-08-2013 |
20130315261 | Service Interface for QoS-Driven HPNA Networks - An in-band signaling model media control (MC) terminal for an HPNA network includes a frame classification entity (FCE) and a frame scheduling entity (FSE) and provides end-to-end Quality of Service (QoS) by passing the QoS requirements from higher layers to the lower layers of the HPNA network. The FCE is located at an LLC sublayer of the MC terminal, and receives a data frame from a higher layer of the MC terminal that is part of a QoS stream. The FCE classifies the received data frame for a MAC sublayer of the MC terminal based on QoS information contained in the received data frame, and associates the classified data frame with a QoS stream queue corresponding to a classification of the data frame. The FSE is located at the MAC sublayer of the MC terminal, and schedules transmission of the data frame to a destination for the data frame based on a QoS requirement associated with the QoS stream. | 11-28-2013 |
20130329748 | CROSSBAR SWITCH AND RECURSIVE SCHEDULING - A crossbar switch has N input ports, M output ports, and a switching matrix with N×M crosspoints. In an embodiment, each crosspoint contains an internal queue (XQ), which can store one or more packets to be routed. Traffic rates to be realized between all Input/Output (IO) pairs of the switch are specified in an N×M traffic rate matrix, where each element equals a number of requested cell transmission opportunities between each IO pair within a scheduling frame of F time-slots. An efficient algorithm is proposed for scheduling N traffic flows, based upon a recursive and fair decomposition of a traffic rate vector with N elements. To reduce memory requirements, a shared row queue (SRQ) may be embedded in each row of the switching matrix, allowing the size of all the XQs to be reduced. To further reduce memory requirements, a shared column queue may be used in place of the XQs. The proposed buffered crossbar switches with shared row and column queues, in conjunction with the row scheduling algorithm and the DCS column scheduling algorithm, can achieve high throughput with reduced buffer and VLSI area requirements, while providing probabilistic guarantees on rate, delay and jitter for scheduled traffic flows. | 12-12-2013 |
20140153582 | METHOD AND APPARATUS FOR PROVIDING A PACKET BUFFER RANDOM ACCESS MEMORY - The present invention generally provides a packet buffer random access memory (PBRAM) device including a memory array, a plurality of input ports, and a plurality of serial registers associated with the input ports. The plurality of input ports permit multiple devices to concurrently access the memory in a non-blocking manner. The serial registers enable receiving data from the input ports while concurrently writing packet data to the memory array. The memory performs all management of network data queues so that all port requests can be satisfied within the real-time constraints of network packet switching. | 06-05-2014 |
20140177644 | PARALLEL PROCESSING USING MULTI-CORE PROCESSOR - Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses. | 06-26-2014 |
20140314098 | METHOD AND APPARATUS FOR MANAGING DYNAMIC QUEUE IN BROADCASTING SYSTEM - A method and an apparatus for adaptively coping with a network environment are provided. The method and apparatus include a packet descriptor for forwarding of Media Transport (MMT) packets in a network process of a switch or a router that forwards MMT packets carrying content expressed in a structure of an MMT standard. The method of managing a queue in a broadcasting system includes receiving a Moving Picture Experts Group (MPEG) MMT packet, obtaining a header of the MMT packet, and queuing the MMT packet according to a bitrate type value included in the header of the MMT packet. | 10-23-2014 |
20140321473 | ACTIVE OUTPUT BUFFER CONTROLLER FOR CONTROLLING PACKET DATA OUTPUT OF MAIN BUFFER IN NETWORK DEVICE AND RELATED METHOD - An active output buffer controller is used for controlling a packet data output of a main buffer in a network device. The active output buffer controller has a credit evaluation circuit and a control logic. The credit evaluation circuit estimates a credit value based on at least one of an ingress data reception status of the network device and an egress data transmission status of the network device. The control logic compares the credit value with a first predetermined threshold value to generate a comparison result, and controls the packet data output of the main buffer according to at least the comparison result. | 10-30-2014 |
20140321474 | OUTPUT QUEUE OF MULTI-PLANE NETWORK DEVICE AND RELATED METHOD OF MANAGING OUTPUT QUEUE HAVING MULTIPLE PACKET LINKED LISTS - An output queue of a multi-plane network device includes a first processing circuit, a plurality of storage devices and a second processing circuit. The first processing circuit generates packet selection information based on an arrival sequence of a plurality of packets. The storage devices store a plurality of packet linked lists for the output queue. The second processing circuit dequeues a packet from the output queue by selecting a linked list entry from the packet linked lists according to the packet selection information. | 10-30-2014 |
20150016467 | PORT PACKET QUEUING - A port queue includes a first memory portion having a first memory access time and a second memory portion having a second memory access time. The first memory portion includes a cache row. The cache row includes a plurality of queue entries. A packet pointer is enqueued in the port queue by writing the packet pointer in a queue entry in the cache row in the first memory. The cache row is transferred to a packet vector in the second memory. A packet pointer is dequeued from the port queue by reading a queue entry from the packet vector stored in the second memory. | 01-15-2015 |
20150049771 | PACKET PROCESSING METHOD, AND PACKET TRANSMISSION APPARATUS - The packet processing method includes receiving a first packet, selecting a first storage area from a plurality of storage areas included in a buffer as a packet storage area in accordance with first time at which the first packet is received, and storing the first packet into the selected first storage area. The first storage area is selected as the packet storage area for the other packets received when a predetermined time period has passed from the first time. | 02-19-2015 |
20150078398 | HASH PERTURBATION WITH QUEUE MANAGEMENT IN DATA COMMUNICATION - A method for hash perturbation with queue management in data communication is provided. Using a first set of old queues corresponding to a first hash function, a set of data packets corresponding to a set of sessions is queued. At a first time, the first hash function is changed to a second hash function. A second set of new queues is created corresponding to the second hash function. A data packet is dequeued from a first old queue in the set of old queues. A second data packet is selected from a second queue in the set of old queues. A new hash value is computed for the second data packet using the second hash function. The second data packet is queued in a first new queue such that the second packet is in position to be delivered first from the first new queue. | 03-19-2015 |
20150124836 | COMMUNICATION DEVICE AND COMMUNICATION METHOD - A communication device includes: a plurality of output ports; a plurality of queues in which packets are stored so as to be sorted into groups of packets that are output from an identical output port in an identical time period, from among the plurality of output ports; a plurality of first selectors that respectively corresponds to the plurality of output ports, and each of which switches a queue from which packets that are output from the output port are read, between the plurality of queues each time the time period elapses; and a second selector that switches a first selector from which packets are output, between the plurality of first selectors, at time intervals in accordance with output rates of packets of the plurality of output ports. | 05-07-2015 |
20150326488 | VEHICLE NETWORK NODE MODULE - A vehicle network node module includes device buffers, a network buffer, a switch circuit, and a processing module. The device buffers temporarily store outgoing device packets from, and temporarily store incoming device packets for, vehicle devices in accordance with a locally managed prioritization scheme. The network buffer receives incoming network packets from, and outputs the outgoing network packets to, a vehicle network fabric in accordance with a global vehicle network protocol. The network buffer also temporarily stores the incoming network packets and the outgoing network packets in accordance with the locally managed prioritization scheme. The switch circuit selectively couples the network buffer to individual ones of the device buffers in accordance with the locally managed prioritization scheme. The processing module interprets the outgoing device packets and the incoming network packets to determine types of packets and determines the locally managed prioritization scheme based on the types of packets. | 11-12-2015 |
20150334036 | Hierarchical Quality of Service Scheduling Method and Device - Provided are an HQoS scheduling method and device. A received uplink data packet is encapsulated and stored in a queue in the uplink direction, and an uplink queue scheduling component is requested to perform scheduling. In this manner, HQoS scheduling in the uplink direction is implemented, and personalized user demands can be met by scheduling uplink data, allowing more flexible function customization. After the HQoS scheduling in the uplink direction is completed, the data packet may be further sent to the downlink direction, where HQoS scheduling can also be performed, so that HQoS scheduling is applied to data in both the uplink and downlink directions. In this manner, true bidirectional HQoS scheduling control is implemented, and QoS of the user service can be guaranteed in both directions. | 11-19-2015 |
20160021031 | GLOBAL SHARED MEMORY SWITCH - Embodiments of the present invention provide functionality, within a storage-shelf-router integrated circuit, an I/O-controller integrated circuit, or other integrated-circuit implementations of complex electronic devices, for interconnecting all possible pairs of communications ports, a first member of each pair selected from a first set of communications ports and a second member of each pair selected from a second set of communications ports. Embodiments of the present invention employ a time-division-multiplexed global shared memory in order to provide full cross-communications between two or more sets of serial-communications ports, using modest controlling clock rates and wide data-transfer channels. | 01-21-2016 |
20160036731 | VIRTUAL OUTPUT QUEUE LINKED LIST MANAGEMENT SCHEME FOR SWITCH FABRIC - Implementations of the present disclosure involve an apparatus, device, component, and/or method for a virtual output queue linked list management scheme for a high-performance network switch. In general, the linked list management scheme utilizes one or more look-ahead links associated with one or more descriptors in the linked list of descriptors that describe the storage of the incoming data packets to the switch. The look-ahead links allow the switch to schedule reads of memory locations included in the descriptors at the same speed at which the data packets are stored in memory. | 02-04-2016 |
20160087908 | SERVICE INTERFACE FOR QOS-DRIVEN HPNA NETWORKS - An in-band signaling model media control (MC) terminal for an HPNA network includes a frame classification entity (FCE) and a frame scheduling entity (FSE) and provides end-to-end Quality of Service (QoS) by passing the QoS requirements from higher layers to the lower layers of the HPNA network. The FCE is located at an LLC sublayer of the MC terminal, and receives a data frame from a higher layer of the MC terminal that is part of a QoS stream. The FCE classifies the received data frame for a MAC sublayer of the MC terminal based on QoS information contained in the received data frame, and associates the classified data frame with a QoS stream queue corresponding to a classification of the data frame. The FSE is located at the MAC sublayer of the MC terminal, and schedules transmission of the data frame to a destination for the data frame based on a QoS requirement associated with the QoS stream. | 03-24-2016 |
20160182409 | RECIPIENT-DRIVEN TRAFFIC OPTIMIZING OVERLAY SYSTEMS AND METHODS | 06-23-2016 |
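As an illustration of the ranked-refill idea described in application 20080317059 above (refill requests enqueued in priority queues ranked by the number of empty sub-buffers, FIFO within each rank), here is a minimal Python sketch. All names (`SegmentedOutputBuffer`, `RefillScheduler`, and so on) are illustrative assumptions for this sketch, not terms taken from the patent.

```python
from collections import deque


class SegmentedOutputBuffer:
    """An output buffer split into fixed sub-buffers (assumed structure)."""

    def __init__(self, name, num_sub_buffers):
        self.name = name
        self.sub_buffers = [deque() for _ in range(num_sub_buffers)]

    def empty_count(self):
        # Number of empty sub-buffers; used as the refill-request rank.
        return sum(1 for sb in self.sub_buffers if not sb)


class RefillScheduler:
    """Ranked FIFO queues of refill requests: rank = empty sub-buffer count."""

    def __init__(self, max_rank):
        self.queues = [deque() for _ in range(max_rank + 1)]

    def enqueue_request(self, buf):
        # Rank the request by how starved the requesting buffer is.
        self.queues[buf.empty_count()].append(buf.name)

    def dequeue_request(self):
        # Serve the highest rank first; plain FIFO within each rank.
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None
```

With this shape, a buffer with more empty sub-buffers is refilled first even if its request arrived later, while the simple per-rank FIFO discipline keeps the dequeue decision cheap, which is the starvation-reduction and throughput argument the abstract makes.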
370418000 | Contention resolution for output | 2 |
20140321475 | SCHEDULER FOR DECIDING FINAL OUTPUT QUEUE BY SELECTING ONE OF MULTIPLE CANDIDATE OUTPUT QUEUES AND RELATED METHOD - A scheduler performs a plurality of scheduler operations each scheduling an output queue selected from a plurality of output queues associated with an egress port. The scheduler includes a candidate decision logic and a final decision logic. The candidate decision logic is arranged to decide a plurality of candidate output queues for a current scheduler operation, regardless of a resultant status of packet transmission of at least one scheduled output queue decided by at least one previous scheduler operation. The final decision logic is arranged to select one of the candidate output queues as a scheduled output queue decided by the current scheduler operation after obtaining the resultant status of packet transmission of the at least one scheduled output queue decided by the at least one previous scheduler operation. | 10-30-2014 |
20140321476 | PACKET OUTPUT CONTROLLER AND METHOD FOR DEQUEUING MULTIPLE PACKETS FROM ONE SCHEDULED OUTPUT QUEUE AND/OR USING OVER-SCHEDULING TO SCHEDULE OUTPUT QUEUES - One packet output controller includes a scheduler and a dequeue device. The scheduler performs a single scheduler operation to schedule an output queue selected from a plurality of output queues associated with an egress port. The dequeue device dequeues multiple packets from the scheduled output queue decided by the single scheduler operation. Another packet output controller includes a scheduler and a dequeue device. The scheduler performs a plurality of scheduler operations each scheduling an output queue selected from a plurality of output queues associated with an egress port. The scheduler performs a current scheduler operation, regardless of a status of a packet transmission of a scheduled output queue decided by a previous scheduler operation. The dequeue device dequeues at least one packet from the scheduled output queue decided by the current scheduler operation after the packet transmission of the scheduled output queue decided by the previous scheduler operation is complete. | 10-30-2014 |
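The two abstracts in this class split scheduling into a speculative candidate stage (chosen without waiting for the previous transmission's status) and a final stage gated on that status. A hedged Python sketch of that two-stage shape follows; the names (`TwoStageScheduler`) and the plain round-robin candidate policy are assumptions for illustration, not details specified in the patents.

```python
import itertools


class TwoStageScheduler:
    """Candidate queues are chosen speculatively (round-robin here); the
    final pick then skips any queue whose previous transmission is still
    in flight, mirroring the candidate/final decision logic described."""

    def __init__(self, queue_ids):
        self.rr = itertools.cycle(queue_ids)

    def candidates(self, k=2):
        # Stage 1: pick k candidates without waiting for transmit status.
        return [next(self.rr) for _ in range(k)]

    def final_pick(self, candidates, busy):
        # Stage 2: once the resultant status is known, take the first
        # candidate not blocked by an unfinished transmission.
        for q in candidates:
            if q not in busy:
                return q
        return None
```

The point of the split is latency hiding: stage 1 can run every cycle regardless of feedback, and stage 2 only needs the (late-arriving) transmission status to discard blocked candidates rather than to start scheduling from scratch.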