Patent application number | Description | Published |
--- | --- | --- |
20080209186 | Method for Reducing Buffer Capacity in a Pipeline Processor - The invention presents a method for a processor … | 08-28-2008 |
20110038261 | TRAFFIC MANAGER AND A METHOD FOR A TRAFFIC MANAGER - The present invention relates to a traffic manager … | 02-17-2011 |
20110085464 | NETWORK PROCESSOR UNIT AND A METHOD FOR A NETWORK PROCESSOR UNIT - A method of and a network processor unit … | 04-14-2011 |
20120317398 | METHOD FOR REDUCING BUFFER CAPACITY IN A PIPELINE PROCESSOR - A method to reduce buffer capacity in a processor includes giving the data packets admittance to the processor through at least one interface, storing the data packets in at least one input buffer, and using a packet rate shaper outside of a processing pipeline to control flow of the data packets to the pipeline before the data packets enter the pipeline. First and second data packets are given admittance to the pipeline in dependence on cost information per packet that is dependent upon an expected time period of residence of the first data packet in the pipeline. Cost information dependent upon an expected time period of residence of the second data packet in the pipeline differs from said cost information dependent upon the expected time period of residence of the first data packet in the pipeline. | 12-13-2012 |
20130003556 | SCHEDULING PACKETS IN A PACKET-PROCESSING PIPELINE - The disclosed embodiments relate to a packet-processing system. This system includes an input which is configured to receive packets, wherein the packets include control-message (CM) packets and traffic packets. It also includes a pipeline to process the packets, wherein the pipeline includes access points for accessing an engine which services requests for packets, wherein CM packets and traffic packets access the engine through different access points. The system additionally includes an arbiter to schedule packets entering the pipeline. While scheduling the packets, the arbiter is configured to account for empty slots in the pipeline to ensure that when CM packets and traffic packets initiate accesses to the engine through different access points, the accesses do not cause an overflow at an input queue for the engine. | 01-03-2013 |
20130003752 | Method, Network Device, Computer Program and Computer Program Product for Communication Queue State - Aspects of the disclosure provide a method for communicating queue information. The method includes determining a queue state for each one of a plurality of queues at least partially based on respective queue length, selecting a queue with a greatest difference between the queue state of the queue and a last reported queue state of the queue, and reporting the queue state of the selected queue to at least one node. | 01-03-2013 |
20130258845 | METHOD AND APPARATUS FOR SCHEDULING PACKETS FOR TRANSMISSION IN A NETWORK PROCESSOR HAVING A PROGRAMMABLE PIPELINE - A network processor includes an arbitration device, a processing device, and a pipeline. The arbitration device receives a first packet and a second packet. The second packet includes a first control message. The pipeline includes access devices, where the access devices include first and second access devices. The pipeline, based on a clock signal, forwards the first and second packets between successive ones of the access devices. The arbitration device: sets a timer based on at least one of (i) an amount of time for data to travel between the first and second access devices, or (ii) a number of pipeline stages between the first and second access devices; adjusts a variable based on (i) the clock signal, and (ii) transmission of the first packet from the arbitration device to the pipeline; and based on the timer and the variable, schedules transmission of the second packet through the pipeline. | 10-03-2013 |
20130326530 | METHOD FOR PACKET FLOW CONTROL USING CREDIT PARAMETERS WITH A PLURALITY OF LIMITS - The present invention relates to a processor and a method for processing a data packet, the method including steps of decreasing a value of a first credit parameter when the data packet is admitted to a processor at least partly based on the value of the first credit parameter and a first limit of the first credit parameter, and increasing the value of the first credit parameter, in dependence on a data storage level in a buffer in which the data packet is stored before being admitted to the processor, the value of the first credit parameter not being increased, so as to become larger than a second limit of the first credit parameter, when the buffer is empty. | 12-05-2013 |
20140146827 | NETWORK PROCESSOR UNIT AND A METHOD FOR A NETWORK PROCESSOR UNIT - A method of and a network processor unit for processing of packets in a network, the network processor comprising: communication interface configured to receive and transmit packets; at least one processing means for processing packets or parts thereof; an embedded switch configured to switch packets between the communication interface and the processing means; and wherein the embedded switch is configured to analyze a received packet and to determine whether the packet should be dropped or not; if the packet should not be dropped, the switch is configured to store the received packet, to send a first part of the packet to the processing means for processing thereof, to receive the processed first part of the packet from the processing means, and to transmit the processed first part of the packet. | 05-29-2014 |
20140328196 | TIME EFFICIENT COUNTERS AND METERS ARCHITECTURE - A network device includes a plurality of interfaces configured to receive, from a network, packets to be processed by the network device. A load determination circuit of the network device is configured to determine whether a packet traffic load of the network device is above a traffic load threshold, and a dual-mode counter module is configured to (i) determine a count of quanta associated with the received packets using a first counting mode in response to the load determination unit determining that the packet traffic load is above the traffic load threshold, and (ii) determine a count of quanta associated with the received packets using a second counting mode, different than the first counting mode, in response to the load determination unit determining that the packet traffic load is not above the traffic load threshold. | 11-06-2014 |
20140369360 | SYSTEMS AND METHODS FOR MANAGING TRAFFIC IN A NETWORK USING DYNAMIC SCHEDULING PRIORITIES - A system for managing traffic in a communication network. The system includes a plurality of queues each configured to store data packets and a plurality of scheduling nodes each configured to process data packets from one or more of the plurality of queues. A scheduler is configured to schedule, using the plurality of scheduling nodes, respective transfers of the data packets from the plurality of queues. Each of the plurality of scheduling nodes is assigned to one or more of the plurality of queues. Each of the plurality of scheduling nodes and each of the plurality of queues is assigned a respective scheduling priority. The respective scheduling priorities are selectively changeable between a predetermined scheduling priority and a dynamic scheduling priority, wherein the dynamic scheduling priority corresponds to a priority propagated from the one or more of the plurality of queues. | 12-18-2014 |
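Application 20120317398 above describes admitting packets to the processing pipeline against per-packet cost information tied to each packet's expected residence time in the pipeline. A minimal sketch of that idea, assuming a simple additive cost budget; the field name `expected_residence_cycles` and the capacity value are illustrative, not taken from the filing:

```python
def admit_packets(packets, pipeline_capacity):
    """Sketch of cost-based pipeline admission (cf. 20120317398).

    Each packet carries a cost reflecting its expected residence time
    in the pipeline; a packet is admitted only while the total
    outstanding cost stays within the pipeline's capacity. The cost
    model and capacity here are illustrative assumptions.
    """
    outstanding = 0
    admitted = []
    for pkt in packets:
        cost = pkt["expected_residence_cycles"]  # per-packet cost (assumed field)
        if outstanding + cost <= pipeline_capacity:
            outstanding += cost
            admitted.append(pkt["id"])
    return admitted
```

The key point from the abstract is that the cost differs per packet, so two packets of the same length can consume different shares of the budget if their expected processing times differ.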
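Application 20130003752 reports, at each opportunity, the queue whose current state differs most from its last reported state, with the state derived at least partly from queue length. A rough sketch of that selection rule; the quantization thresholds are illustrative assumptions, not from the filing:

```python
def queue_state(length, thresholds=(0, 16, 64, 256)):
    """Map a queue length to a discrete state level.

    The thresholds are illustrative; the filing only says the state is
    determined at least partly based on queue length.
    """
    state = 0
    for i, t in enumerate(thresholds):
        if length >= t:
            state = i
    return state


def select_queue_to_report(queue_states, last_reported):
    """Pick the queue with the greatest difference between its current
    state and its last reported state (cf. 20130003752).

    queue_states: dict queue_id -> current state
    last_reported: dict queue_id -> most recently reported state (0 if never)
    """
    return max(
        queue_states,
        key=lambda q: abs(queue_states[q] - last_reported.get(q, 0)),
    )
```

Reporting only the largest delta keeps control traffic bounded while prioritizing the queues whose remote view is most stale.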
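Application 20130326530 is the most algorithmically explicit of the list: a credit parameter is decreased when a packet is admitted (subject to a first limit), replenished in dependence on buffer occupancy, and never increased beyond a second limit while the buffer is empty. A minimal sketch, assuming concrete limit values and a non-negative admission rule; all constants and names are illustrative:

```python
class CreditShaper:
    """Sketch of credit-based packet admission with two limits
    (cf. 20130326530). All limit values are illustrative assumptions.
    """

    def __init__(self, first_limit=-1000, second_limit=200, full_limit=1000):
        self.credits = 0
        self.first_limit = first_limit    # lowest value credits may reach on admission
        self.second_limit = second_limit  # replenishment cap while the buffer is empty
        self.full_limit = full_limit      # replenishment cap while the buffer holds data

    def try_admit(self, packet_cost):
        # Admit only if spending the cost keeps credits at or above the first limit.
        if self.credits - packet_cost >= self.first_limit:
            self.credits -= packet_cost
            return True
        return False

    def replenish(self, amount, buffer_empty):
        # An empty buffer means no backlog, so credits must not accumulate
        # beyond the (smaller) second limit; otherwise the full limit applies.
        cap = self.second_limit if buffer_empty else self.full_limit
        self.credits = min(self.credits + amount, cap)
```

The empty-buffer cap prevents a long idle period from banking enough credit to admit an unbounded burst the moment traffic resumes.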
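Application 20140328196 switches between two counting modes depending on whether the packet traffic load exceeds a threshold. The abstract does not say what the two modes are; one plausible reading, sketched below, uses batched updates as the high-load mode and exact per-packet counting as the low-load mode. Both modes and all constants here are assumptions:

```python
class DualModeCounter:
    """Sketch of a load-dependent dual-mode quanta counter
    (cf. 20140328196). The specific modes are illustrative assumptions.
    """

    def __init__(self, load_threshold, batch_size=8):
        self.load_threshold = load_threshold
        self.batch_size = batch_size
        self.count = 0
        self._pending = 0  # quanta batched but not yet folded into count

    def on_packet(self, quanta, current_load):
        if current_load > self.load_threshold:
            # High-load mode: batch updates to reduce per-packet work,
            # flushing only when the batch fills.
            self._pending += quanta
            if self._pending >= self.batch_size:
                self.count += self._pending
                self._pending = 0
        else:
            # Low-load mode: count each packet's quanta immediately,
            # folding in anything batched earlier.
            self.count += self._pending + quanta
            self._pending = 0
```

The trade-off this models is time efficiency under load versus immediate accuracy when the device has cycles to spare.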