Class / Patent application number | Description | Number of patent applications / Date published |
370415000 | Having input queuing only | 17 |
20080285581 | Method Apparatus and System for Accelerated Communication - A TCP acceleration apparatus includes input queues each having a service level and storing at least one session packet list having packets from the same TCP session. The apparatus also includes a distributor connected to the input queues and to the client and configured to retrieve a session packet from a session packet list at the top of an input queue for transmission to the client. The input queue at the service level selected by the distributor moves the session packet list at the top of the input queue to the bottom of the input queue after the session packet is retrieved by the distributor. Acceleration apparatuses including other features, as well as a method, computer program product and system for TCP acceleration, are also discussed. | 11-20-2008 |
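The queue rotation described in this abstract can be sketched in a few lines of Python; `InputQueue`, `add_session`, and `retrieve` are illustrative names, not the patented implementation:

```python
from collections import deque

class InputQueue:
    """One service-level input queue holding per-session packet lists.

    Retrieving a packet from the list at the top of the queue moves
    that list to the bottom, so sessions receive round-robin service.
    """
    def __init__(self):
        self.session_lists = deque()  # each entry: packets from one TCP session

    def add_session(self, packets):
        self.session_lists.append(deque(packets))

    def retrieve(self):
        """Pop one packet from the top session list, then re-queue the
        list at the bottom if it still holds packets."""
        if not self.session_lists:
            return None
        top = self.session_lists.popleft()
        packet = top.popleft()
        if top:
            self.session_lists.append(top)
        return packet
```

With two sessions queued, successive calls to `retrieve` alternate between them rather than draining one session first.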
20090086749 | Methods and apparatus for stimulating packet-based systems - In one embodiment, a packet-based system having a number of buses is stimulated using apparatus having 1) a hardware interface configured to provide data packets to the buses; 2) a plurality of hardware-based queue schedulers, each configured to schedule data packets received from a respective one of a plurality of data packet sources, in a respective one of a plurality of hardware-based queues; and 3) a hardware-based priority scheduler configured to cause each particular queue to transmit a next highest priority data packet over one of the buses, based on i) timing requirements of the next highest priority data packet in the particular queue, and ii) a determination that transmission of the next highest priority data packet in the particular queue will not delay a transmission of a higher priority data packet in another one of the hardware-based queues. | 04-02-2009 |
20100002715 | THERMALLY FLEXIBLE AND PERFORMANCE SCALABLE PACKET PROCESSING CIRCUIT CARD - Embodiments of the invention provide a packet processing circuit card with scalable performance at specified operational bandwidths over a given range of bandwidths. Advantageously, these embodiments enable a packet processing circuit card developed for a high bandwidth application to be used in a lower bandwidth application. This allows for cost-effective scaling of packet processing performance to the needs of the data communications system. | 01-07-2010 |
20100008378 | Ethernet Controller Implementing a Performance and Responsiveness Driven Interrupt Scheme - A method of generating frame receive interrupts in an Ethernet controller includes receiving incoming data frames and storing them in a receive queue, monitoring the number of received data frames, and, when the number of received data frames exceeds a first threshold, generating a frame receive interrupt. In another embodiment, the method further includes monitoring the amount of received data stored in the receive queue and generating a frame receive interrupt when the first threshold is exceeded and when the amount of received data stored in the receive queue exceeds a second threshold. In yet another embodiment, the method further includes monitoring the time duration of the data frames stored in the receive queue, and generating a frame receive interrupt when the number of received data frames exceeds the first threshold or when the time duration of the data frames stored in the receive queue exceeds a third threshold. | 01-14-2010 |
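The third embodiment (interrupt on frame count or on queue age) is classic interrupt coalescing and can be sketched as follows; the class and threshold names are hypothetical:

```python
class CoalescingQueue:
    """Sketch of the third embodiment: raise a frame-receive interrupt
    when the frame count exceeds a first threshold OR the oldest queued
    frame has waited longer than an age threshold."""
    def __init__(self, frame_threshold, age_threshold):
        self.frame_threshold = frame_threshold
        self.age_threshold = age_threshold
        self.frames = []  # arrival timestamps of queued frames

    def receive(self, now):
        """Store a frame; return True if an interrupt should fire."""
        self.frames.append(now)
        return self.check(now)

    def check(self, now):
        if len(self.frames) > self.frame_threshold:
            return True
        return bool(self.frames) and (now - self.frames[0]) > self.age_threshold

    def drain(self):
        """Called by the interrupt handler; empties the queue."""
        n = len(self.frames)
        self.frames.clear()
        return n
```

Batching interrupts this way trades a bounded latency (the age threshold) for far fewer interrupts under load.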
20100150165 | METHOD AND SYSTEM FOR HSDPA BIT LEVEL PROCESSOR ENGINE - A method for processing signals in a communication system is disclosed and may include pipelining processing of a received HSDPA bitstream within a single chip. The pipelining may include calculating a memory address for a current portion of a plurality of information bits in the received HSDPA bitstream, while storing on-chip, a portion of the plurality of information bits in the received HSDPA bitstream that is subsequent to the current portion. A portion of the plurality of information bits in the received HSDPA bitstream that is previous to the current portion may be decoded during the calculating and storing. The calculation of the memory address for the current portion of the plurality of information bits may be achieved without the use of a buffer. Processing of the plurality of information bits in the received HSDPA bitstream may be partitioned into a functional data processing path and functional address processing path. | 06-17-2010 |
20100254399 | METHOD OF UPLINK IP PACKET FILTERING CONTROL IN MOBILE TERMINAL - A method for controlling uplink IP packet filtering in a mobile terminal in a 3GPP Evolved Packet System (EPS) is provided, including an information receiving operation of receiving IP address information allocated to user equipment, and filtering information required for delivering an uplink IP packet received from the user equipment; and a filtering operation for determining which packet data network and a bearer the IP packet is delivered to, based on the IP address information and the filtering information. In a 3GPP evolved packet system supporting a default bearer function, a packet data network to which an uplink IP packet is delivered and a bearer identifier can be efficiently determined when the user equipment simultaneously accesses one or more packet data networks and is allocated several IP addresses, resulting in effective uplink packet filtering. | 10-07-2010 |
20110051742 | Flexible Bandwidth Allocation in High-Capacity Grooming Switches - Apparatus for flexible sharing of bandwidth in switches with input buffering by dividing time into a plurality of frames of time slots, wherein each frame has a specified integer value of time slots. The apparatus includes modules where inputs sequentially select available outputs to which the inputs send packets in specified future time slots. The selection of outputs by the inputs is done using a pipeline technique and a schedule is calculated within multiple time slots. | 03-03-2011 |
20110069717 | Data transfer device, information processing apparatus, and control method - A data transfer device includes a plurality of input queues, a plurality of arbitration control units provided for the respective input queues, and an input queue selecting unit that selects any one of the input queues based on a priority set for each input queue, and outputs data from the selected input queue. Each arbitration control unit includes a register that stores therein a predetermined upper limit, a counter that counts the amount of data output from a corresponding input queue, and a control circuit that, when a value of the counter becomes equal to or greater than the upper limit stored in the register, causes the input queue selecting unit to update the priority and resets the value of the counter. | 03-24-2011 |
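The counter-and-limit arbitration might look like the following sketch; demoting the queue to lowest priority is one plausible reading of "update the priority", and all names are illustrative:

```python
class ArbitratedQueues:
    """Per-queue counters track output bytes; when a queue's counter
    reaches its upper limit, the queue is demoted to lowest priority
    and its counter resets (a sketch of the abstract's arbitration)."""
    def __init__(self, limits):
        self.limits = limits
        self.counters = [0] * len(limits)
        self.priority = list(range(len(limits)))  # index 0 = highest priority
        self.queues = [[] for _ in limits]

    def push(self, q, data):
        self.queues[q].append(data)

    def pop(self):
        """Output one item from the highest-priority non-empty queue."""
        for q in self.priority:
            if self.queues[q]:
                data = self.queues[q].pop(0)
                self.counters[q] += len(data)
                if self.counters[q] >= self.limits[q]:
                    self.priority.remove(q)
                    self.priority.append(q)  # demote the served queue
                    self.counters[q] = 0
                return q, data
        return None
```

The effect is a weighted rotation: a queue keeps its turn until it has emitted its limit in bytes, then yields to the others.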
20120093170 | Direct Memory Access Memory Management - A method, computer program product, and apparatus for managing data packets are presented. A data packet in the data packets is stored in a first portion of a memory in response to receiving the data packet at a device. The first portion of the memory is allocated to the device. A determination is made whether a size of the data packet is less than a threshold size. The data packet is copied from the first portion of the memory allocated to the device to a second portion of the memory in response to a determination that the size of the data packet stored in the memory is less than the threshold size. | 04-19-2012 |
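A toy model of the threshold copy, with Python lists standing in for the two memory portions; the threshold value and names are assumptions for illustration:

```python
SMALL_PACKET_THRESHOLD = 256  # bytes; illustrative cutoff

device_region = []  # first portion of memory, allocated to the device
shared_region = []  # second portion; small packets are copied here

def receive_packet(packet):
    """Store a packet in the device region; if it is smaller than the
    threshold, copy it to the shared region and free the device buffer."""
    device_region.append(packet)
    if len(packet) < SMALL_PACKET_THRESHOLD:
        shared_region.append(bytes(packet))  # copy, not a reference
        device_region.remove(packet)         # device buffer freed for reuse
```

Copying only small packets keeps the device's dedicated buffers available without paying the copy cost for large transfers.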
20130315260 | Flow-Based TCP - A system and method for sharing a WAN TCP tunnel between multiple flows without head-of-line blocking is disclosed. When a complete but out-of-order PDU is stuck behind an incomplete PDU in a TCP tunnel, the complete but out-of-order PDU is removed from the tunnel. To do this, the boundaries of the PDUs of the different flows are preserved and the TCP receive window advertisement is increased. The receive window is opened when initially receiving out-of-order data. As out-of-order complete PDUs are pulled out of the receive queue, placeholders are left in the receive queue to indicate data that was in the queue, which avoids double counting. As out-of-order data PDUs are pulled out of the queue, the window advertisement is increased. This keeps the sending side from running out of TX window and stopping transmission of new data. | 11-28-2013 |
20140334499 | Method and Apparatus for Time Stretching to Hide Data Packet Pre-Buffering Delays - A special rendering mode for the first few seconds of play out of multimedia data minimizes the delay caused by pre-buffering of data packets in multimedia streaming applications. Instead of pre-buffering all incoming data packets until a certain threshold is reached, the streaming application starts playing out some of the data packets immediately after the arrival of the first data packet. Immediate play out of the first data packet, for example, results in minimum delay between channel selection and perception, thereby allowing a user to quickly scan through all available channels to quickly get a notion of the content. The immediate play out is done at a reduced speed. | 11-13-2014 |
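One way to realize the reduced-speed startup is to ramp the playback rate with buffer fill; the linear ramp and the `min_rate` floor are assumptions for illustration, not the patent's stated method:

```python
def playout_rate(buffered_ms, target_ms, min_rate=0.75):
    """Playback speed for startup time stretching: begin slow right
    after the first packet arrives, and ramp linearly to normal speed
    (1.0) as the buffer approaches its pre-buffering target."""
    fill = min(buffered_ms / target_ms, 1.0)
    return min_rate + (1.0 - min_rate) * fill
```

Playing the first packet immediately at 0.75x gives the user instant feedback, while the slowdown quietly builds the buffer that a conventional player would have pre-filled before starting.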
20160057081 | PPI DE-ALLOCATE CPP BUS COMMAND - Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI addressing mode in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine, and is allocated a PPI by the packet engine, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine. Once the packet portion has been processed, a PPI de-allocation command causes the packet engine to de-allocate the PPI so that the PPI is available for allocating in association with another packet portion. | 02-25-2016 |
20160065484 | FLEXIBLE RECIRCULATION BANDWIDTH MANAGEMENT - A method for managing recirculation path traffic in a network node comprises monitoring an input packet stream received at an input port of the network node and monitoring a recirculation packet stream at a recirculation path of the network node. A priority level associated with individual packets of the monitored input packet stream is detected and low priority packets are stored in a virtual queue. The method also includes determining an average packet length associated with packets of the monitored recirculation packet stream. The method further comprises queuing one or more of the low priority packets or the recirculation packets for transmission based on the average packet length and a weighted share schedule. | 03-03-2016 |
20160127251 | SYSTEMS AND METHODS OF QOS FOR SINGLE STREAM ICA - The present solution provides quality of service (QoS) for a stream of protocol data units via a single transport layer connection. A device receives via a single transport layer connection a plurality of packets carrying a plurality of protocol data units. Each protocol data unit identifies a priority. The device may include a filter for determining an average priority for a predetermined window of protocol data units and an engine for assigning the average priority as a connection priority of the single transport layer connection. The device transmits via the single transport layer connection the packets carrying those protocol data units within the predetermined window of protocol data units while the connection priority of the single transport layer connection is assigned the average priority for the predetermined window of protocol data units. | 05-05-2016 |
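The averaging filter can be sketched as a windowed mean over PDU priorities; the function name and the non-overlapping-window interpretation are illustrative:

```python
def window_priorities(pdu_priorities, window):
    """Yield the connection priority for each window of PDUs: the
    average of the priorities identified by the PDUs in that window."""
    for start in range(0, len(pdu_priorities), window):
        chunk = pdu_priorities[start:start + window]
        yield sum(chunk) / len(chunk)
```

Assigning the window average to the whole connection lets a single transport connection carry mixed-priority traffic without per-PDU reprioritization.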
370416000 | Contention resolution for output | 3 |
20090059942 | METHOD AND APPARATUS FOR SCHEDULING PACKETS AND/OR CELLS - A system and method of scheduling packets or cells for a switch device that includes a plurality of input ports each having at least one input queue, a plurality of switch units, and a plurality of output ports. Each input port having a packet or cell in its at least one queue generates a request to output the corresponding packet or cell to each of the output ports to which that packet or cell is to be sent, wherein the request includes a specific one of the plurality of switch units to be used in a transfer of the packet or cell from the corresponding input port to the corresponding output port. Access is granted, per output port per switch unit, to the request made, based on a first priority scheme. Grants are accepted per input port per switch unit, the accepting being based on a second priority scheme. Packets and/or cells are outputted from the respective input ports to the respective output ports, based on the accepted grants, utilizing the corresponding switch units identified in the accepted grants. | 03-05-2009 |
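The three-phase request/grant/accept exchange (similar in spirit to iSLIP-style matching) can be sketched as one scheduling round; "lowest-numbered first" stands in for the unspecified first and second priority schemes:

```python
def schedule(requests):
    """One request-grant-accept round.

    `requests` maps input port -> list of (output, switch_unit) requests.
    Each (output, switch_unit) grants one request to the lowest-numbered
    requesting input (stand-in first priority scheme); each input then
    accepts at most one grant per switch unit, again lowest-numbered
    first (stand-in second priority scheme).
    """
    # Phase 1: collect requests per (output, switch_unit) pair
    per_slot = {}
    for inp, reqs in requests.items():
        for out, unit in reqs:
            per_slot.setdefault((out, unit), []).append(inp)
    # Phase 2: each (output, switch_unit) issues a single grant
    grants = {slot: min(inps) for slot, inps in per_slot.items()}
    # Phase 3: each input accepts at most one grant per switch unit
    accepted = []
    taken = set()  # (input, switch_unit) pairs already committed
    for (out, unit), inp in sorted(grants.items()):
        if (inp, unit) not in taken:
            taken.add((inp, unit))
            accepted.append((inp, out, unit))
    return accepted
```

Because grants and accepts are both resolved per switch unit, two inputs can reach the same output in the same round as long as they name different switch units in their requests.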
20090086750 | Non-Random Access Rapid I/O Endpoint In A Multi-Processor System - A system and method for using a doorbell command to allow sRIO devices to operate as bus masters to retrieve data packets stored in a serial buffer, without requiring the sRIO devices to specify the sizes of the data packets. The serial buffer includes a plurality of queues that store data packets. A doorbell frame request packet identifies the queue to be accessed within the serial buffer, but does not specify the size of the data packet(s) to be retrieved. Upon detecting a doorbell frame request packet, the serial buffer operates as a bus master to transfer the requested data packets out of the selected queue. The selected queue can be configured to operate in a flush mode or a non-flush mode. The serial buffer may also indicate that a received doorbell frame request has attempted to access an empty queue. | 04-02-2009 |
20100232449 | Method and Apparatus For Scheduling Packets and/or Cells - A system and method of switching data using a switch device that includes a plurality of input ports, a plurality of switch units, and a plurality of output ports. Each input port storing data to be sent may generate a request to output data to each of the output ports to which stored data is to be sent, wherein each request identifies a specific one of the plurality of switch units to be used to transfer the data from the corresponding input port to the corresponding output port. Grants may be generated per output port per switch unit. Grants may be accepted per input port per switch unit. Data may be outputted from the respective input ports to the respective output ports, based on the accepted grants, utilizing the switch units identified in the requests corresponding to the accepted grants. | 09-16-2010 |