Patent application number | Description | Published |
20080215772 | SYSTEM METHOD STRUCTURE IN NETWORK PROCESSOR THAT INDICATES LAST DATA BUFFER OF FRAME PACKET BY LAST FLAG BIT THAT IS EITHER IN FIRST OR SECOND POSITION - A method and structure for determining when a frame of information comprised of one or more buffers of data being transmitted in a network processor has completed transmission is provided. The network processor includes several control blocks, one for each data buffer, each containing control information linking one buffer to another. Each control block has a last bit feature which is a single bit settable to “one” or “zero” and indicates when the data buffer having the last bit is transmitted. The last bit is in a first position when an additional data buffer is to be chained to a previous data buffer indicating an additional data buffer is to be transmitted and a second position when no additional data buffer is to be chained to a previous data buffer. The position of the last bit is communicated to the network processor indicating the ending of a particular frame. | 09-04-2008 |
20080222324 | SYSTEM METHOD STRUCTURE IN NETWORK PROCESSOR THAT INDICATES LAST DATA BUFFER OF FRAME PACKET BY LAST FLAG BIT THAT IS EITHER IN FIRST OR SECOND POSITION - A method and structure for determining when a frame of information comprised of one or more buffers of data being transmitted in a network processor has completed transmission is provided. The network processor includes several control blocks, one for each data buffer, each containing control information linking one buffer to another. Each control block has a last bit feature which is a single bit settable to “one” or “zero” and indicates when the data buffer having the last bit is transmitted. The last bit is in a first position when an additional data buffer is to be chained to a previous data buffer indicating an additional data buffer is to be transmitted and a second position when no additional data buffer is to be chained to a previous data buffer. The position of the last bit is communicated to the network processor indicating the ending of a particular frame. | 09-11-2008 |
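The chaining scheme in the two entries above reduces to a linked list of per-buffer control blocks with a one-bit end-of-frame flag. A minimal C sketch follows; the struct and function names are hypothetical, not taken from the patents.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical buffer control block: one per data buffer,
 * linked to the next block of the same frame. */
struct buf_control_block {
    struct buf_control_block *next; /* chain link, NULL if unchained */
    void *data;
    size_t len;
    unsigned last : 1; /* set to 1 on the block that ends the frame */
};

/* Walk a frame's chain and report whether transmission completed. */
static bool transmit_frame(struct buf_control_block *cb,
                           void (*tx)(const void *, size_t))
{
    for (; cb != NULL; cb = cb->next) {
        tx(cb->data, cb->len);
        if (cb->last)
            return true; /* last bit seen: frame fully transmitted */
    }
    return false; /* chain ended without a last bit: malformed frame */
}
```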
20080222394 | Systems and Methods for TDM Multithreading - Systems and methods for distributing thread instructions in the pipeline of a multi-threading digital processor are disclosed. More particularly, hardware and software are disclosed for successively selecting threads in an ordered sequence for execution in the processor pipeline. If a thread to be selected cannot execute, then a complementary thread is selected for execution. | 09-11-2008 |
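The selection rule in the entry above is small enough to sketch directly: threads are tried in a fixed time-division order, and a stalled thread yields its slot to a complementary thread. The pairing rule below (XOR with 1) is an illustrative assumption.

```c
#include <stdbool.h>

#define NTHREADS 4 /* hypothetical thread count */

/* Ordered TDM selection with a complementary fallback. Returns the
 * thread to issue this slot, or -1 for a pipeline bubble. */
static int select_thread(const bool can_execute[NTHREADS], unsigned slot)
{
    unsigned t = slot % NTHREADS;   /* ordered sequence position */
    if (can_execute[t])
        return (int)t;
    unsigned c = t ^ 1;             /* complementary (paired) thread */
    if (can_execute[c])
        return (int)c;
    return -1;
}
```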
20080229271 | DATA ALIGNER IN RECONFIGURABLE COMPUTING ENVIRONMENT - A data aligner in a reconfigurable computing environment is disclosed. Embodiments employ hardware macros in field programmable gate arrays (FPGAs) to minimize the number of configurable logic blocks (CLBs) needed to shift bytes of data. The alignment mechanism allows flexibility, scalability, configurability, and reduced costs as compared to application specific integrated circuits. | 09-18-2008 |
20080229272 | DATA ALIGNER IN RECONFIGURABLE COMPUTING ENVIRONMENT - A data aligner in a reconfigurable computing environment is disclosed. Embodiments employ hardware macros in field programmable gate arrays (FPGAs) to minimize the number of configurable logic blocks (CLBs) needed to shift bytes of data. The alignment mechanism allows flexibility, scalability, configurability, and reduced costs as compared to application specific integrated circuits. | 09-18-2008 |
20080273539 | SYSTEM FOR PERFORMING A PACKET HEADER LOOKUP - A system for performing a lookup for a packet in a computer network is disclosed. The packet includes a header. The system includes a parser, a lookup engine coupled with the parser, and a processor coupled with the lookup engine. The parser parses the packet for the header before the packet has been completely received. The lookup engine performs a lookup for the header and returns a resultant. In one aspect, the lookup includes performing a local lookup of a cache that includes resultants of previous lookups. The processor processes the resultant. | 11-06-2008 |
20080298372 | STRUCTURE AND METHOD FOR SCHEDULER PIPELINE DESIGN FOR HIERARCHICAL LINK SHARING - A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices. This results in some external memory devices being shared by different types of control blocks, such as flow queue control blocks, frame control blocks and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on the content of the control block (Read-Modify-Write or ‘read’ only) at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory while those accessed frequently are stored in SRAM. | 12-04-2008 |
20080307439 | REDUCING MEMORY ACCESSES IN PROCESSING TCP/IP PACKETS - A method, computer program product and system for processing TCP/IP packets. A TCP protocol stack may store a payload of a received TCP/IP packet in a data fragment list. The TCP protocol stack may further read the header of the received packet to extract a value used to index into a table storing a list of transport control blocks (TCBs). The TCP protocol stack may further perform a lock and a read operation on the TCB indexed in the table. The TCP protocol stack may further transmit the payload to the TCP application without requiring the application to perform a lock, read, write or unlock operation on the indexed TCB since the TCP protocol stack and the TCP application are operating on the same thread. By the TCP application foregoing the lock, read, write and unlock operations on the TCB, there is a reduction in the number of memory accesses. | 12-11-2008 |
20080317027 | SYSTEM FOR REDUCING LATENCY IN A HOST ETHERNET ADAPTER (HEA) - A system for reducing latency in a host Ethernet adapter (HEA) includes the following. First, the HEA receives a packet with an internet protocol (IP) header and data in the HEA. The HEA parses a connection identifier from the IP header and accesses a negative cache in the HEA to determine if the connection identifier is not in a memory external to the HEA. The HEA applies a default treatment to the packet if the connection identifier is not in the memory, thereby reducing latency by decreasing access to the memory. | 12-25-2008 |
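The negative cache described above can be sketched as a small direct-mapped table that remembers connection identifiers proven absent from external memory, so the next packet on that connection gets default treatment with no external access. The sizes and names below are illustrative assumptions, not details from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define NEG_SLOTS 1024 /* hypothetical cache size */

struct neg_entry {
    uint32_t conn_id;
    bool valid;
};

static struct neg_entry neg_cache[NEG_SLOTS];

/* Record a connection that an external-memory lookup proved absent. */
static void neg_cache_mark_absent(uint32_t conn_id)
{
    struct neg_entry *e = &neg_cache[conn_id % NEG_SLOTS];
    e->conn_id = conn_id;
    e->valid = true;
}

/* True if the connection is known not to be in external memory.
 * Entries must be invalidated when a connection is later installed
 * there, or this check would wrongly skip the lookup. */
static bool neg_cache_known_absent(uint32_t conn_id)
{
    const struct neg_entry *e = &neg_cache[conn_id % NEG_SLOTS];
    return e->valid && e->conn_id == conn_id;
}
```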
20090034530 | PACKET CLASSIFICATION USING MODIFIED RANGE LABELS - A method and system for encoding a set of range labels for each parameter field in a packet classification key in such a way as to require preferably only a single entry per rule in a final processing stage of a packet classifier. Multiple rules are sorted according to their respective significance. A range, based on a parameter in the packet header, is previously determined. Multiple rules are evaluated for overlap across the different ranges. Upon a determination that two or more rules overlap, each overlapping rule is expanded into multiple unique segments that identify unique range intersections. Each cluster of overlapping ranges is then offset so that at least one bit in a range for the rule remains unchanged. The range segments are then converted from binary to Gray code, which results in the ability to determine a CAM entry to use for each range. | 02-05-2009 |
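The final conversion step in the entry above relies on the standard binary-to-Gray mapping, in which adjacent values differ in exactly one bit; that property is what lets a range boundary map onto few ternary CAM entries. A short sketch with the usual identities:

```c
#include <stdint.h>

/* Convert a binary range-segment boundary to its Gray-code label. */
static uint32_t to_gray(uint32_t b)
{
    return b ^ (b >> 1);
}

/* Inverse mapping, for reading a label back out. */
static uint32_t from_gray(uint32_t g)
{
    for (uint32_t shift = 1; shift < 32; shift <<= 1)
        g ^= g >> shift;
    return g;
}
```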
20090070778 | INTER PROCESS COMMUNICATIONS IN A DISTRIBUTED CP AND NP ENVIRONMENT - A lightweight, low cost solution provides inter process communications (IPC) in a network processing environment. A method of inter process communication (IPC) between General Purpose Processors in a network processing environment uses software based functions (Application Program Interfaces (APIs)) that enable inter process communication between processors in a network processing environment. The software enabled functions open and close inter process communication paths for transmitting and receiving inter process communication frames and allow the inter process communication frames to be transmitted to one or several processors in said network processing environment. The software has the capability of selecting either a data or a control path in said network processing environment to transmit or receive said inter process communication frames. | 03-12-2009 |
20090080461 | FRAME ALTERATION LOGIC FOR NETWORK PROCESSORS - A packet switching node in a communication system includes apparatus for receiving incoming information packets or frames which contain header portions with formatting control blocks. Information in the frame's header contains frame alteration commands for modifying the information in the frame. The modifications include adding new information, deleting information, and overlaying information. Decoders and control devices in an alteration engine interpret the commands and apply the modifications to the frame data. Common and standard data patterns are stored for insertion or overlaying to conserve data packet space. | 03-26-2009 |
20090083611 | APPARATUS FOR BLIND CHECKSUM AND CORRECTION FOR NETWORK TRANSMISSIONS - Apparatus for providing a checksum in a network transmission. In one aspect of the invention, a checksum for a packet to be transmitted on a network is determined by retrieving packet information from a storage device, the packet information to be included in the packet to be transmitted. A blind checksum value is determined based on the retrieved packet information, and the blind checksum value is adjusted to a protocol checksum based on descriptor information describing the structure of the packet. The protocol checksum is inserted in the packet before the packet is transmitted. | 03-26-2009 |
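The blind-checksum idea above separates a protocol-agnostic summing pass from a later adjustment driven by descriptor information. Below is a hedged C sketch in RFC 1071 style; the split into these two helpers is an assumption about the design, not the patent's exact structure.

```c
#include <stddef.h>
#include <stdint.h>

/* "Blind" pass: one's-complement sum over the stored packet bytes,
 * computed without knowing which protocol fields they belong to. */
static uint16_t ones_sum(const uint8_t *p, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)p[0] << 8 | p[1];
        p += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)p[0] << 8; /* odd trailing byte */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16); /* fold carries */
    return (uint16_t)sum;
}

/* Adjustment pass: remove a 16-bit word that the protocol checksum
 * must not cover (the descriptor says which), by adding its
 * one's complement and refolding. */
static uint16_t ones_sub(uint16_t sum, uint16_t word)
{
    uint32_t s = (uint32_t)sum + (uint16_t)~word;
    while (s >> 16)
        s = (s & 0xFFFF) + (s >> 16);
    return (uint16_t)s;
}
```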
20090097404 | Oversubscribing Bandwidth In A Communications Network - A system and computer readable medium for oversubscribing bandwidth in a communication network is disclosed. The system and computer readable medium include policing a first data flow through a first meter and outputting a first output data flow from the first meter, in relation to a first Committed Information Rate (CIR) and a first Peak Information Rate (PIR); policing a second data flow through a second meter and outputting a second output data flow from the second meter in relation to a second CIR and a second PIR; and policing an aggregated output data flow of the first output data flow and the second output data flow through a third meter of the oversubscription module, where the aggregated output data flow is policed in relation to a third CIR and a third PIR. | 04-16-2009 |
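Each meter above polices against a CIR/PIR pair, which is the shape of a two-rate token-bucket marker. Below is a sketch in the spirit of RFC 2698's trTCM; the field names, units, and color-blind behavior are illustrative choices, not taken from the patent.

```c
#include <stdint.h>

enum color { GREEN, YELLOW, RED };

struct meter {
    uint64_t cir, pir; /* committed / peak rate, bytes per second */
    uint64_t cbs, pbs; /* bucket depths, bytes */
    uint64_t tc, tp;   /* current token counts */
    uint64_t last_us;  /* time of last refill, microseconds */
};

static enum color police(struct meter *m, uint64_t now_us, uint64_t pkt_len)
{
    uint64_t dt = now_us - m->last_us;
    m->last_us = now_us;
    /* Refill both buckets (integer math drops fractional tokens;
     * acceptable for a sketch). */
    m->tc += m->cir * dt / 1000000;
    if (m->tc > m->cbs) m->tc = m->cbs;
    m->tp += m->pir * dt / 1000000;
    if (m->tp > m->pbs) m->tp = m->pbs;

    if (pkt_len > m->tp)
        return RED;      /* exceeds peak rate */
    m->tp -= pkt_len;
    if (pkt_len > m->tc)
        return YELLOW;   /* within peak, above committed */
    m->tc -= pkt_len;
    return GREEN;
}
```

The oversubscription module described above would then run the non-red output of the first two meters through a third instance of the same function.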
20090103536 | Method and System for Reducing Look-Up Time in Packet Forwarding on Computer Networks - A method and system for reducing the lookup time in packet forwarding on computer networks. A first lookup is performed in a memory tree to find a first protocol forwarding entry in the memory tree. The forwarding entry includes first protocol (e.g., EGP) information and cached associated second protocol (e.g., IGP) information. Both EGP and IGP information are retrievable with the first lookup and used in the determination of an EGP route for the data packet. If the cached IGP information has been invalidated due to address updates, a second lookup can be performed to find an original IGP entry in the memory tree, the information from which can be cached in the EGP forwarding entry if a background maintenance task has finished designating all the EGP entries as having out-of-date caches. | 04-23-2009 |
20090125535 | STRUCTURE FOR DELETING LEAVES IN TREE TABLE STRUCTURES - Techniques and articles of manufacture are provided comprising computer readable programs that, when executed on the computer, cause the computer to delete a leaf from a Patricia tree having a direct table and a plurality of PSCBs which decode portions of the pattern of a leaf in the tree, without shutting down the functioning of the tree. A leaf having a pattern is identified as a leaf to be deleted. Using the pattern, the tree is walked to identify the location of the leaf to be deleted. The leaf to be deleted is identified and deleted, and any relevant PSCB is modified, if necessary. The technique is also applicable to deleting a prefix of a prefix. | 05-14-2009 |
20090198837 | System and Method for Providing Remotely Coupled I/O Adapters - A heterogeneous processing element model is provided where I/O devices look and act like processors. In order to be treated like a processor, an I/O processing element, or other special purpose processing element, must follow some rules and have some characteristics of a processor, such as address translation, security, interrupt handling, and exception processing, for example. The heterogeneous processing element model abstracts an I/O device such that communication intended for the I/O device may be packetized and sent over a network. Thus, a virtualization platform may packetize communication intended for a remotely located I/O device and transmit the packetized communication over a distance, rather than having to make a call to a library, call a device driver, pin memory, and so forth. | 08-06-2009 |
20090198951 | Full Virtualization of Resources Across an IP Interconnect - An addressing model is provided where all resources, including memory and devices, are addressed with internet protocol (IP) addresses. A task, such as an application, may be assigned a range of IP addresses rather than an effective address range. Thus, a processing element, such as an I/O adapter or even a printer, for example, may also be addressed using IP addresses without the need for library calls, device drivers, pinning memory, and so forth. This addressing model also provides full virtualization of resources across an IP interconnect, allowing a process to access an I/O device across a network. | 08-06-2009 |
20090198953 | Full Virtualization of Resources Across an IP Interconnect Using Page Frame Table - An addressing model is provided where devices, including I/O devices, are addressed with internet protocol (IP) addresses, which are considered part of the virtual address space. A task, such as an application, may be assigned an effective address range, which corresponds to addresses in the virtual address space. The virtual address space is expanded to include Internet protocol addresses. Thus, the page frame tables are also modified to include entries for IP addresses and additional properties for devices and I/O. Thus, a processing element, such as an I/O adapter or even a printer, for example, may also be addressed using IP addresses without the need for library calls, device drivers, pinning memory, and so forth. This addressing model also provides full virtualization of resources across an IP interconnect, allowing a process to access an I/O device across a network. | 08-06-2009 |
20100122011 | Method and Apparatus for Supporting Multiple High Bandwidth I/O Controllers on a Single Chip - An integrated processor design includes physical interface macros supporting heterogeneous electrical properties. The processor design comprises a plurality of processing cores and a plurality of physical interfaces to connect to a memory interface, a peripheral component interconnect express (PCI Express or PCIe) interface for input/output, an Ethernet interface for network communication, and/or a serial attached SCSI (SAS) interface for storage. Each physical interface may be programmatically connected to a selected interface controller, such as a memory controller, a PCI Express controller, or an Ethernet controller, for example. A plurality of such controllers may be connected to a switch within the processor design, with the switch also being connected to each physical interface macro. Thus, the physical interface macros may be programmatically connected to a subset of the plurality of controllers. | 05-13-2010 |
20100153541 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER BASED ON PROCESSOR WORKLOAD - A technique for operating a high performance computing (HPC) cluster includes monitoring workloads of multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more threads assigned to one or more of the multiple processors are moved to a different one of the multiple processors based on the workloads of the multiple processors. | 06-17-2010 |
20100153542 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER BASED ON BROADCAST INFORMATION - A technique for operating a high performance computing (HPC) cluster having multiple nodes (each of which includes multiple processors) includes periodically broadcasting information, related to processor utilization and network utilization at each of the multiple nodes, from each of the multiple nodes to remaining ones of the multiple nodes. Respective local job tables maintained in each of the multiple nodes are updated based on the broadcast information. One or more threads are then moved from one or more of the multiple processors to a different one of the multiple processors (based on the broadcast information in the respective local job tables). | 06-17-2010 |
20100153965 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER BASED ON INTER-THREAD COMMUNICATIONS - A technique for operating a high performance computing (HPC) cluster includes monitoring communication between threads assigned to multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more of the threads are moved to a different one of the multiple processors based on the communication between the threads. | 06-17-2010 |
20100153966 | TECHNIQUES FOR DYNAMICALLY ASSIGNING JOBS TO PROCESSORS IN A CLUSTER USING LOCAL JOB TABLES - A technique for operating a high performance computing cluster includes monitoring workloads of multiple processors. The high performance computing cluster includes multiple nodes that each include two or more of the multiple processors. Workload information for the multiple processors is periodically updated in respective local job tables maintained in each of the multiple nodes. Based on the workload information in the respective local job tables, one or more threads are periodically moved to a different one of the multiple processors. | 06-17-2010 |
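The four cluster-scheduling entries above share one core move: sample per-processor load, then migrate threads toward idle processors. A minimal C sketch of a single balancing pass follows; the load metric, threshold, and callback are hypothetical stand-ins for the richer inputs (broadcast information, local job tables, inter-thread traffic) the patents describe.

```c
#define LOAD_GAP 10 /* minimum utilization gap worth a migration */

/* One balancing pass: find the busiest and idlest processors and move
 * a thread between them if the gap justifies the migration cost.
 * loads[] holds one utilization sample per processor. */
static void rebalance(const unsigned *loads, unsigned nproc,
                      void (*move_thread)(unsigned from, unsigned to))
{
    unsigned busiest = 0, idlest = 0;
    for (unsigned i = 1; i < nproc; i++) {
        if (loads[i] > loads[busiest]) busiest = i;
        if (loads[i] < loads[idlest])  idlest = i;
    }
    if (busiest != idlest && loads[busiest] - loads[idlest] >= LOAD_GAP)
        move_thread(busiest, idlest);
}
```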
20100217905 | Synchronization Optimized Queuing System - A synchronization optimized queuing method and device minimize software/hardware interaction in network interface hardware during an end-of-initiative process, including network adapter queue implementations for network interface hardware for optimized communication in a computer system. With the present invention, an end-of-initiative procedure that ensures the network interface hardware has received an interrupt enable and then rechecks the interrupt queue is unnecessary. | 08-26-2010 |
20110158249 | Assignment Constraint Matrix for Assigning Work From Multiple Sources to Multiple Sinks - An assignment constraint matrix method and apparatus used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate simultaneously in parallel. Each of the plurality of qualifier matrixes is adapted to determine sources in a subset of supported sources that are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information that may be provided for a set of sinks on a single chip or distributed on multiple chips. | 06-30-2011 |
20110158250 | Assigning Work From Multiple Sources to Multiple Sinks Given Assignment Constraints - A method and apparatus for assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. In a given processing period, sinks that are available to receive work are identified and sources qualified to send work to the available sinks are determined taking into account any assignment constraints. A single source is selected from an overlap of the qualified sources and sources having work available. This selection may be made using a hierarchical source scheduler for processing subsets of supported sources simultaneously in parallel. A sink to which work from the selected source may be assigned is selected from available sinks qualified to receive work from the selected source. | 06-30-2011 |
20110158254 | DUAL SCHEDULING OF WORK FROM MULTIPLE SOURCES TO MULTIPLE SINKS USING SOURCE AND SINK ATTRIBUTES TO ACHIEVE FAIRNESS AND PROCESSING EFFICIENCY - A method and apparatus for assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. In a given processing period, a source is selected in a manner that maintains fairness in the selection process. A corresponding sink is selected for the selected source based on processing efficiency. If, due to assignment constraints, no sink is available for the selected source, the selected source is retained for selection in the next scheduling period, to maintain fairness. In this case, to optimize efficiency, a most efficient currently available sink is identified and a source for providing work to that sink is selected. | 06-30-2011 |
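A qualifier matrix like the one in the three scheduling entries above can be viewed as one bitmask row per source: ANDing a row with the set of currently available sinks yields the legal assignments, and all rows can be evaluated in parallel in hardware. A software sketch under that assumption (names are illustrative; `__builtin_ctzll` is the GCC/Clang count-trailing-zeros intrinsic):

```c
#include <stdint.h>

#define NSOURCES 64 /* sources and sinks as bit positions */

/* constraint[s] encodes which sinks source s may feed. Returns 1 and
 * fills src_out/sink_out if some source with work can be paired with
 * an available, qualified sink; returns 0 otherwise. */
static int pick_assignment(const uint64_t constraint[NSOURCES],
                           uint64_t sources_with_work,
                           uint64_t sinks_available,
                           int *src_out, int *sink_out)
{
    for (int s = 0; s < NSOURCES; s++) {
        if (!(sources_with_work & (1ULL << s)))
            continue;
        /* Sinks that are both qualified for s and currently free. */
        uint64_t ok = constraint[s] & sinks_available;
        if (ok) {
            *src_out = s;
            *sink_out = __builtin_ctzll(ok); /* lowest qualified sink */
            return 1;
        }
    }
    return 0; /* no source/sink pair satisfies the constraints */
}
```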
20110243134 | Data Frame Forwarding Using a Distributed Virtual Bridge - Systems and methods to forward data frames are provided. A particular method may include receiving a data frame at a distributed virtual bridge. The distributed virtual bridge includes a first bridge element coupled to a first server computer and a second bridge element coupled to the first bridge element and to a second server computer. The distributed virtual bridge further includes a controlling bridge coupled to the first bridge element and to the second bridge element. The controlling bridge includes a global forwarding table. The data frame is forwarded from the first bridge element to the second bridge element of the distributed virtual bridge using address data associated with the data frame. A logical network associated with the frame may additionally be used to forward the data frame. | 10-06-2011 |
20110243146 | Data Frame Forwarding Using a Multitiered Distributed Virtual Bridge Hierarchy - Systems and methods to forward data frames are provided. A particular method may include evaluating address data of a first data frame at a first virtual bridge coupled to a first virtual machine of a first server computer of a plurality of server computers. Based upon the evaluation at the first virtual bridge, the first data frame may be forwarded to a second virtual bridge associated with an adapter that is coupled to the first virtual machine. The address data of the first data frame may be evaluated at the second virtual bridge. Based upon the evaluation, the data frame may be forwarded to a third virtual bridge configured to forward the data frame based upon the address data to a second server computer of the plurality of server computers. | 10-06-2011 |
20110258340 | Distributed Virtual Bridge Management - Systems and methods to forward data frames are described. A particular method may include receiving a data frame at a switch of a plurality of networked switches coupled to a plurality of server computers. The data frame may be forwarded from a controlling bridge coupled to the plurality of networked switches. The data frame may be determined to include management data, and an operating parameter of the switch may be modified. | 10-20-2011 |
20110261687 | Priority Based Flow Control Within a Virtual Distributed Bridge Environment - Systems and methods to communicate data frames are provided. A particular apparatus may include a first adapter having a first queue configured to store a data frame associated with a first priority. The adapter is configured to generate a first priority pause frame. A distributed virtual bridge may be coupled to the first adapter. The distributed virtual bridge may include an integrated switch router and a first transport layer module configured to provide a frame-based interface to the integrated switch router. The transport layer module may include a first buffer associated with the first priority. A first bridge element of the distributed virtual bridge may be coupled to the first adapter queue and to the first transport layer module. The first bridge element is configured to receive the first priority pause frame from the adapter and to communicate an interrupt signal to the first transport layer module to interrupt delivery of the data frame to the first queue. | 10-27-2011 |
20110261815 | Multicasting Using a Multitiered Distributed Virtual Bridge Hierarchy - Systems and methods to multicast data frames are provided. A particular apparatus includes a plurality of computing nodes and a distributed virtual bridge. The distributed virtual bridge includes a plurality of bridge elements coupled to the plurality of computing nodes. The plurality of bridge elements are configured to forward a copy of a multicast data frame to the plurality of computing nodes using group member information associated with addresses of the plurality of computing nodes. A controlling bridge coupled to the plurality of bridge elements is configured to communicate the group member information to the plurality of bridge elements. | 10-27-2011 |
20110261827 | Distributed Link Aggregation - Systems and methods to forward data frames are described. A particular method may include generating a plurality of management frames at a controlling bridge. The management frames may include routing information. The plurality of management frames may be communicated to a plurality of bridge elements coupled to a plurality of server computers. The plurality of bridge elements are each configured to selectively forward a plurality of data frames according to the routing information. | 10-27-2011 |
20110262134 | Hardware Accelerated Data Frame Forwarding - Systems and methods to forward data frames are described. A particular method may include evaluating header data of a data frame at a bridge element, where the header data includes address data that corresponds to a Fibre Channel Forwarder in communication with the bridge element. Based upon the evaluation, the header data of the data frame may be modified at the bridge element in such a manner that the data frame is not routed through the Fibre Channel Forwarder. | 10-27-2011 |
20110264610 | Address Data Learning and Registration Within a Distributed Virtual Bridge - Systems and methods to forward data frames are provided. A particular apparatus may include a plurality of server computers and a distributed virtual bridge. The distributed virtual bridge may include a plurality of bridge elements coupled to the plurality of server computers and configured to forward a data frame between the plurality of server computers. The plurality of bridge elements may further be configured to automatically learn address data associated with the data frame. A controlling bridge may be coupled to the plurality of bridge elements. The controlling bridge may include a global forwarding table that is automatically updated to include the address data and is accessible to the plurality of bridge elements. | 10-27-2011 |
20110299394 | Translating Between An Ethernet Protocol And A Converged Enhanced Ethernet Protocol - Translating between an Ethernet protocol used by a first network component and a Converged Enhanced Ethernet (CEE) protocol used by a second network component, the first and second components coupled through a CEE Converter that translates by: for data flow from the first network component to the second network component: receiving, by the CEE converter, traffic flow definition parameters for a single CEE protocol data flow; calculating, by a credit manager, available buffer space in an outbound frame buffer of the CEE converter for the data flow; communicating, by the credit manager to a CEE credit driver of the first component, the calculated size of the buffer space together with a start sequence number and a flow identifier; and responding, by the CEE credit driver to the CEE converter, with Ethernet frames comprising a private header that includes the flow identifier and a sequence number. | 12-08-2011 |
20120192190 | Host Ethernet Adapter for Handling Both Endpoint and Network Node Communications - A host Ethernet adapter (HEA) and method of managing network communications is provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQE) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus, selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode, running a device driver on the multi-core processing system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system, selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads. | 07-26-2012 |
20120204002 | Providing to a Parser and Processors in a Network Processor Access to an External Coprocessor - A mechanism is provided for sharing the communication path used by a parser (parser path) in a network adapter of a network processor for sending requests for a process to be executed by an external coprocessor. The parser path is shared by processors of the network processor (software path) to send requests to the external coprocessor. The mechanism uses for the software path a request mailbox, comprising a control address and a data field accessed by MMIO, for sending two types of messages: one message type to read or write resources and one message type to trigger an external process in the coprocessor; and a response mailbox for receiving responses from the external coprocessor, comprising a data field and a flag field. The other processors of the network processor poll the flag until it is set and get the coprocessor result in the data field. | 08-09-2012 |
20120204190 | Merging Result from a Parser in a Network Processor with Result from an External Coprocessor - A mechanism is provided for merging in a network processor results from a parser and results from an external coprocessor providing processing support requested by said parser. The mechanism enqueues in a result queue both parser results needing to be merged with a coprocessor result and parser results which have no need to be merged with a coprocessor result. An additional queue is used to enqueue the addresses of the result queue where the parser results are stored. The result from the coprocessor is received in a simple response register. The coprocessor result is read by the result queue management logic from the response register and merged to the corresponding incomplete parser result read in the result queue at the address enqueued in the additional queue. | 08-09-2012 |
20120218885 | SELECTION OF RECEIVE-QUEUE BASED ON PACKET ATTRIBUTES - According to embodiments of the invention, there is provided a method for operating a network processor. The network processor receives a first data packet in a stream of data packets and includes a set of receive-queues adapted to store received data packets. The network processor processes the first data packet by reading a flow identification in the first data packet; determining a quality of service for the first data packet; mapping the flow identification and the quality of service into an index for selecting a first receive-queue for routing the first data packet; and utilizing the index to route the first data packet to the first receive-queue. | 08-30-2012 |
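The mapping step above folds a flow identification plus a QoS class into a queue index. A sketch, assuming an ordinary integer hash as the mixer; the constant and queue count are arbitrary illustration values.

```c
#include <stdint.h>

#define NQUEUES 32 /* hypothetical number of receive queues */

/* Mix flow_id and qos into a receive-queue index. */
static unsigned select_rx_queue(uint32_t flow_id, uint8_t qos)
{
    uint32_t x = flow_id ^ ((uint32_t)qos << 24);
    x ^= x >> 16;
    x *= 0x7FEB352Du; /* well-known 32-bit mixing constant */
    x ^= x >> 15;
    return x % NQUEUES;
}
```

Keeping the flow identification in the hash keeps every packet of one flow on one queue, preserving per-flow ordering while still spreading distinct flows across queues.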
20120221928 | CHECKSUM VERIFICATION ACCELERATOR - Disclosed is a method for validating a data packet by a network processor supporting a first network protocol and a second network protocol and utilizing shared hardware. The network processor receives a data packet; identifies a network packet protocol for the data packet; and processes the data packet according to the network packet protocol by: updating a first register with a first partial packet length specific to the first network protocol; updating a second register with a second partial packet length specific to the second network protocol; and updating a third register with a first checksum computed from fields independent of the network protocol. The method produces a second checksum utilizing a function that combines values from the first register, the second register, and the third register. The method validates the data packet by comparing the data packet checksum to the second checksum. | 08-30-2012 |
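One way to read the three-register scheme above: the protocol-specific partial lengths live in two registers, the protocol-independent checksum in a third, and a one's-complement addition merges whichever pair applies. The register roles in this sketch are an assumption about the design, not a statement of it.

```c
#include <stdbool.h>
#include <stdint.h>

/* Fold a 32-bit running sum into a 16-bit one's-complement value. */
static uint16_t fold16(uint32_t s)
{
    while (s >> 16)
        s = (s & 0xFFFF) + (s >> 16);
    return (uint16_t)s;
}

/* reg_a: partial length under the first protocol; reg_b: under the
 * second; reg_c: checksum of protocol-independent fields (assumed
 * roles). Combine and compare against the packet's own checksum. */
static bool validate(uint16_t reg_a, uint16_t reg_b, uint16_t reg_c,
                     bool first_protocol, uint16_t pkt_checksum)
{
    uint32_t sum = (uint32_t)reg_c + (first_protocol ? reg_a : reg_b);
    return fold16(sum) == pkt_checksum;
}
```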
20120230334 | MESSAGE FORWARDING TOWARD A SOURCE END NODE IN A CONVERGED NETWORK ENVIRONMENT - A network node that forwards traffic of a converged network received from a source end node receives a second message addressed to the network node, but intended for the source end node. The second message includes at least a portion of a first message originated by the source end node and previously forwarded by the network node. The network node extracts from the first message a source identifier of the source end node in a first communication protocol and determines by reference to a data structure a destination address of the second message in a second communication protocol. The network node modifies the second message to include the destination address and forwards the second message toward the source end node in accordance with the destination address. | 09-13-2012 |
20120230340 | MESSAGE FORWARDING TOWARD A SOURCE END NODE IN A CONVERGED NETWORK ENVIRONMENT - A network node that forwards traffic of a converged network received from a source end node receives a second message addressed to the network node, but intended for the source end node. The second message includes at least a portion of a first message originated by the source end node and previously forwarded by the network node. The network node extracts from the first message a source identifier of the source end node in a first communication protocol and determines by reference to a data structure a destination address of the second message in a second communication protocol. The network node modifies the second message to include the destination address and forwards the second message toward the source end node in accordance with the destination address. | 09-13-2012 |
20130266021 | BUFFER MANAGEMENT SCHEME FOR A NETWORK PROCESSOR - The invention provides a method for adding specific hardware on both receive and transmit sides that hides from the software most of the effort related to buffer and pointer management. At initialization, a set of pointers and buffers is provided by software, in a quantity large enough to support expected traffic. A Send Queue Replenisher (SQR) and a Receive Queue Replenisher (RQR) hide RQ and SQ management from software. The RQR and SQR fully monitor the pointer queues and recirculate pointers from the transmit side to the receive side. | 10-10-2013 |
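The SQR/RQR recirculation above amounts to a free-pointer ring with hardware on both ends: transmit completions produce buffer pointers, receive replenishment consumes them. A single-producer, single-consumer sketch; the structure and names are illustrative, not from the patent.

```c
#include <stdint.h>

#define RING_SLOTS 256 /* must be a power of two */

struct ptr_ring {
    uint64_t slot[RING_SLOTS];
    unsigned head; /* RQR consumes here */
    unsigned tail; /* SQR produces here */
};

/* SQR side: return a buffer pointer freed by a completed transmit. */
static int ring_put(struct ptr_ring *r, uint64_t buf_ptr)
{
    unsigned next = (r->tail + 1) & (RING_SLOTS - 1);
    if (next == r->head)
        return -1; /* ring full */
    r->slot[r->tail] = buf_ptr;
    r->tail = next;
    return 0;
}

/* RQR side: take a recirculated pointer to post a new receive buffer. */
static int ring_get(struct ptr_ring *r, uint64_t *buf_ptr)
{
    if (r->head == r->tail)
        return -1; /* empty: software must replenish the pool */
    *buf_ptr = r->slot[r->head];
    r->head = (r->head + 1) & (RING_SLOTS - 1);
    return 0;
}
```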
20130308653 | Techniques for Connecting an External Network Coprocessor to a Network Processor Packet Parser - A network processor includes first communication protocol ports that each support ‘M’ minimum size packet data path traffic on ‘N’ lanes at ‘S’ Gigabits per second (Gbps) and traffic with different communication protocol units on ‘n’ additional lanes at ‘s’ Gbps. The first communication protocol ports support access to an external coprocessor using parsing logic located in each of the first communication protocol ports. The parsing logic, during a parsing period, is configured to send a request to the external coprocessor at reception of an ‘M’ size packet and to receive a response from the external coprocessor. The parsing logic sends a request maximum ‘m’ size byte word to the external coprocessor on one of the additional lanes and receives a response maximum ‘m’ size byte word from the external coprocessor on the one of the additional lanes while complying with the equation N×S/M= | 11-21-2013 |
20140337677 | Merging Result from a Parser in a Network Processor with Result from an External Coprocessor - A mechanism is provided for merging in a network processor results from a parser and results from an external coprocessor providing processing support requested by said parser. The mechanism enqueues in a result queue both parser results needing to be merged with a coprocessor result and parser results which have no need to be merged with a coprocessor result. An additional queue is used to enqueue the addresses of the result queue where the parser results are stored. The result from the coprocessor is received in a simple response register. The coprocessor result is read by the result queue management logic from the response register and merged to the corresponding incomplete parser result read in the result queue at the address enqueued in the additional queue. | 11-13-2014 |