Marvell Israel (MISL) Ltd. Patent applications |
Patent application number | Title | Published |
20160134435 | SCALING ADDRESS RESOLUTION FOR MASSIVE DATA CENTERS - There is provided a network device disposed at an interface between an access segment and an interconnecting layer of a data center. The network device includes an address cache and an address resolution processor configured to receive an address request addressed to virtual machines in a transmission domain of the network device. The address request requests a layer 2 address of a target virtual machine in the data center and specifies a layer 3 address of the target virtual machine. In response to receiving a reply, the network device updates the address cache to include an entry specifying the layer 2 address of an edge device of the access segment that hosts the target virtual machine whose layer 3 address corresponds to the specified layer 3 address. | 05-12-2016 |
20150256466 | DISTRIBUTED COUNTERS AND METERS IN PACKET-SWITCHED SYSTEM - Aspects of the disclosure provide a method for counting packets and bytes in a distributed packet-switched system. The method includes receiving a packet stream having at least one packet flow at a device of a packet-switched system having a plurality of distributed devices, statistically determining whether to update a designated device based on receipt of a packet belonging to the packet flow, and transmitting packet counting information to the designated device based on the statistical determination, where the designated device counts packets of the packet flow based on the packet counting information. | 09-10-2015 |
20150180482 | APPARATUS AND METHOD FOR REACTING TO A CHANGE IN SUPPLY VOLTAGE - Aspects of the disclosure provide an integrated circuit (IC). The IC includes a clock generation and supply voltage monitoring circuit configured to monitor a supply voltage to the IC and selectively modify an operating frequency of the IC in response to a sensed change in the supply voltage. The IC further includes a frequency comparing and compensating circuit configured to output a control signal, based on the operating frequency, to a voltage supply to modify the supply voltage so as to compensate for changes in the operating frequency and return the operating frequency to a target operating frequency. | 06-25-2015 |
20150117471 | METHOD AND APPARATUS FOR SECURING CLOCK SYNCHRONIZATION IN A NETWORK - Aspects of the disclosure provide a method that includes receiving a first packet through a network at a first device. The first packet includes a first message generated according to a precision time protocol and a first encapsulation that encapsulates one or more fields of the first message. Further, the method includes security-verifying the first packet based on the first message and the first encapsulation, and processing the first message according to the precision time protocol after the first packet is security-verified. | 04-30-2015 |
20150109750 | HEAT DISSIPATING HIGH POWER SYSTEMS - An electronic system includes a printed circuit board (PCB), and a heat dissipating element. The PCB includes one or more first electronic components mounted on a first side of the PCB, and one or more second electronic components mounted on a second side of the PCB. The first electronic components have a power consumption that is greater than a threshold and have a height over the first side of the PCB that is higher than any other electronic components mounted on the first side of the PCB. At least one of the second electronic components has a height over the second side of the PCB that is higher than the height of the first electronic components. The heat dissipating element is adjacent to the first electronic components so as to provide a thermal coupling for dissipating heat generated by the first electronic components. | 04-23-2015 |
20140310307 | Exact Match Lookup with Variable Key Sizes - In a method for performing an exact match lookup in a network device, a network packet is received at the network device. A lookup key for the network packet is determined at least based on data included in a header of the received network packet. A hash function is selected, from among a plurality of possible hash functions, at least based on a size of the lookup key, and a hash operation is performed on the lookup key using the selected hash function to compute a hashed lookup key segment. A database is queried using the hashed lookup key segment to extract a value exactly corresponding to the lookup key. | 10-16-2014 |
20140307737 | Exact Match Lookup with Variable Key Sizes - In a method for populating an exact match lookup table in a network device, a lookup key to be stored in a database of the network device is determined. The database is distributed among two or more memory banks. At least based on a size of the lookup key, (i) a first memory bank from among the two or more memory banks, and (ii) a hash function from among a plurality of possible hash functions, are selected. A hash operation is performed on the lookup key using the selected hash function to compute a first hashed lookup key segment. The first hashed lookup key segment is stored in the selected first memory bank, and one or more hashed lookup key segments corresponding to the lookup key are stored in one or more subsequent memory banks of the two or more memory banks. | 10-16-2014 |
20140301394 | EXACT MATCH HASH LOOKUP DATABASES IN NETWORK SWITCH DEVICES - In a method for forwarding packets in a network device a plurality of hash values is generated based on a lookup key. The plurality of hash values includes at least a first hash value generated using a first hash function, a second hash value generated using a second hash function and a third hash value generated using a third hash function. The third hash function is different from the first hash function and the second hash function. A lookup table is searched using the first hash value and the second hash value to determine an offset for the lookup key. Then, a forwarding table is searched using the third hash value and the offset determined for the lookup key to select a forwarding entry corresponding to the lookup key. The packet is forwarded to one or more ports of the network device based on the selected forwarding entry. | 10-09-2014 |
20140215144 | ARCHITECTURE FOR TCAM SHARING - Aspects of the disclosure provide a packet processing system. The packet processing system includes a plurality of processing units, a ternary content addressable memory (TCAM) engine, and an interface. The plurality of processing units is configured to process packets received from a computer network, and to perform an action on a received packet. The action is determined responsively to a lookup in a table of rules to determine a rule to be applied to the received packet. The TCAM engine has a plurality of TCAM banks defining respective subsets of a TCAM memory space to store the rules. The interface is configured to selectably associate the TCAM banks with the processing units. The association is configurable to allocate the subsets of the TCAM memory space to groups of the processing units so that the TCAM memory space is shared by the processing units. | 07-31-2014 |
20140204695 | METHOD AND APPARATUS FOR INCREASING YIELD - Aspects of the disclosure provide an integrated circuit (IC) that is configured to have an increased yield. The IC includes a memory element configured to store a specific value determined based on a characteristic of the IC, and a controller configured to control an input regulator based on the specific value of the IC. The input regulator is operative to provide a regulated input to the IC during operation, such that the IC performance satisfies a performance requirement. | 07-24-2014 |
20140192815 | MAINTAINING PACKET ORDER IN A PARALLEL PROCESSING NETWORK DEVICE - A plurality of packets that belong to a data flow are received and are distributed to two or more packet processing elements, wherein a packet is sent to a first packet processing element. A first instance of the packet is queued at a first ordering unit according to an order of the packet within the data flow. The first instance of the packet is caused to be transmitted when processing of the first instance is completed and the first instance of the packet is at a head of a queue at the first ordering unit. A second instance of the packet is queued at a second ordering unit. The second instance of the packet is caused to be transmitted when processing of the second instance is completed and the second instance of the packet is at a head of a queue at the second ordering unit. | 07-10-2014 |
20140169382 | Packet Forwarding Apparatus and Method - A network device includes a plurality of physical ports configured to be coupled to one or more networks, and a processor device configured to process packets. The processor device includes a processor configured to implement a logical port assignment mechanism to assign source logical port information to a data packet received via a source physical port of the plurality of physical ports. The source logical port information is assigned based on one or more characteristics of the data packet, and the source logical port information corresponds to a logical entity that is different from any physical port. The processor device also includes a forwarding engine processor configured to determine one or more egress logical ports for forwarding the data packet, map the egress logical port(s) to respective egress physical port(s) of the plurality of physical ports, and forward the data packet to the egress physical port(s) based on the mapping. | 06-19-2014 |
20140169378 | MAINTAINING PACKET ORDER IN A PARALLEL PROCESSING NETWORK DEVICE - A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet. | 06-19-2014 |
20140161143 | Clock Synchronization Using Multiple Network Paths - A packet transmitted by a master clock via a network is received via a port of a network device. The packet includes a time stamp from the master clock. It is determined via which one of a plurality of communication paths in the network the packet was received. An application layer module of the network device uses (i) the time stamp in the packet and (ii) the determination of the communication path to determine time information. | 06-12-2014 |
20140160934 | LOAD BALANCING HASH COMPUTATION FOR NETWORK SWITCHES - Techniques to load balance traffic in a network device or switch include a network device or switch having a first interface to receive a data unit or packet, a second interface to transmit the packet, and a mapper to map between virtual ports and physical ports. The network device includes a hash value generator configured to generate a hash value based on information included in the packet and based on at least one virtual port. The hash value may be optionally modified to load balance egress traffic of the network device. The network device selects a particular virtual port for egress of the packet, such as by determining an index into an egress table based on the (modified) hash value. The packet is transmitted from the network device using a physical port mapped to the particular virtual port. | 06-12-2014 |
20140115167 | Load Balancing Hash Computation for Networks - A data unit is received at a network device associated with a link aggregate group. An initial key is determined based on information included in the data unit. Another key is generated based on a first field of the initial key and a second field of the initial key. A hash function is applied to the other key to generate a hash value. A communication link in the link aggregate group is determined based on the hash value, and the data unit is transmitted over the communication link. | 04-24-2014 |
20140105603 | Systems and Methods for Advanced Power Management for Optical Network Terminal Systems on Chip - Systems and methods are provided for customer premises equipment (CPE) on a passive optical network (PON). A system includes a packet processor having at least an active mode and a sleep mode, the packet processor configured to process streams of data packets received in a data plane from an optical line terminal (OLT) through the PON when in the active mode and to enter the sleep mode when not receiving data packets in the data plane. A system further includes a micro-controller, separate from the packet processor, configured to receive from an OLT operation and management (OAM) messages that are transmitted in a control plane, and to process the OAM messages by selectively transmitting an acknowledgement message to a central office without waking up the packet processor, or waking up the packet processor to receive data packets in the data plane. | 04-17-2014 |
20140077853 | RACE FREE SEMI-DYNAMIC D-TYPE FLIP FLOP - Some of the embodiments of the present disclosure provide a D-type flip-flop, comprising a first latch configured to generate a sample enable signal, based on logical states of an input signal, and generate a sampled signal, based on logical states of the input signal and the sample enable signal; and a second latch configured to generate an output signal responsively to the sampled signal. Other embodiments are also described and claimed. | 03-20-2014 |
20140029368 | METHOD AND APPARATUS FOR INCREASING YIELD - Aspects of the disclosure provide an integrated circuit (IC) that is configured to have an increased yield. The IC includes a memory element configured to store a specific value determined based on a characteristic of the IC, and a controller configured to control an input regulator based on the specific value of the IC. The input regulator is operative to provide a regulated input to the IC during operation, such that the IC performance satisfies a performance requirement. | 01-30-2014 |
20130259049 | CLOCK SYNCHRONIZATION USING MULTIPLE NETWORK PATHS - A network device includes one or more ports coupled to a network, and a time synchronization module. The time synchronization module processes (i) respective path information, and (ii) respective time synchronization information included in each of at least some of a plurality of time synchronization packets received from a master clock device over two or more different communication paths and via at least one of the one or more ports, wherein the respective path information indicates a respective communication path in the network via which the respective time synchronization packet was received. The time synchronization module determines a system time clock responsive to the processing of the path information and the time synchronization information included in the at least some of the plurality of time synchronization packets. | 10-03-2013 |
20130249656 | PACKAGE WITH PRINTED FILTERS - Aspects of the disclosure provide a circuit package. The circuit package includes a first signal terminal electrically coupled with a serializer/deserializer (SERDES), a second signal terminal electrically coupled with an external electronic component, and a trace disposed on an insulating layer. The trace is configured to transfer an electrical signal between the first signal terminal and the second signal terminal. The trace is patterned to provide a specific filtering characteristic to filter the electrical signal. | 09-26-2013 |
20130232384 | METHOD AND SYSTEM FOR ITERATIVELY TESTING AND REPAIRING AN ARRAY OF MEMORY CELLS - A memory system includes an array of memory cells and a repair module. Multiple memory cells in the array are redundant to other memory cells in the array. The repair module iteratively tests the array. During the iterative testing of the array, the repair module, during each test of the array, (i) identifies one or more defective memory cells in the array, if any, and (ii) in response to one or more defective memory cells being identified during the test, respectively replaces the one or more defective memory cells with one or more memory cells that are redundant to other memory cells in the array. The repair module performs the iterative testing of the array until (i) the repair module does not detect a defective memory cell or (ii) no memory cells of the memory cells that are redundant remain available for replacement of a defective memory cell. | 09-05-2013 |
20130208735 | CLOCK SYNCHRONIZATION USING MULTIPLE NETWORK PATHS - In a network device communicatively coupled to a master clock via a plurality of different communication paths, a clock synchronization module is configured to determine a plurality of path time data sets corresponding to the plurality of different communication paths based on signals received from the master clock via the plurality of different communication paths between the network device and the master clock. A clock module is configured to determine a time of day as a function of the plurality of path time data sets. | 08-15-2013 |
20130194922 | DYNAMIC RESHUFFLING OF TRAFFIC MANAGEMENT SCHEDULER HIERARCHY - There is provided a network device comprising a physical queue management processor configured to manage attributes of physical queues of data packets. The network device further comprises a scheduling processor which is configured to manage scheduling nodes that establish a scheduling hierarchy among the physical queues in a network, utilizing a bi-directional mapping of the physical queues to logical queues. The network device further comprises a traffic management processor which is configured to modify the bi-directional mapping mentioned above. | 08-01-2013 |
20130185343 | SPACE EFFICIENT COUNTERS IN NETWORK DEVICES - A network device includes a memory and a counter update logic module. The memory is configured to store a plurality of bits. The counter update logic module is configured to estimate a count of quanta within a plurality of data units in a data flow based on statistical sampling of the plurality of data units, and to store the estimated count of quanta in the memory as m mantissa bits and e exponent bits. The m mantissa bits represent a mantissa value M and the e exponent bits represent an exponent value E. | 07-18-2013 |
20130155906 | SCALING ADDRESS RESOLUTION FOR MASSIVE DATA CENTERS - There is provided a network device disposed at an interface between an access segment and an interconnecting layer of a data center. The network device includes an address resolution processor configured to receive an address request addressed to virtual machines in a transmission domain of the network device. The address request specifies a source layer 2 address, requests a layer 2 address of a target virtual machine in the data center, and specifies a layer 3 address of the target virtual machine. The network device is further configured to transmit a local message over the access segment requesting the respective layer 2 address of a virtual machine which has the specified layer 3 address. In response to receiving a reply, the network device transmits a message to the specified source layer 2 address to provide the layer 2 address of the network device and the specified layer 3 address. | 06-20-2013 |
20130114960 | Method and Apparatus for Transmitting Data on a Network - Systems and methods are provided for a network unit for transmitting packets on a network that includes a computer-readable medium encoded with an array data structure that is populated by a plurality of entries, each entry corresponding to a packet in a queue of packets to be transmitted, a particular entry including a value that is based on a sum of packet sizes stored in a neighboring entry and a packet size of a packet corresponding to the particular entry. A search engine is configured to receive a gate size and to search the array to identify a particular entry in the data structure that has a value nearest to but not greater than the gate size as a transmission entry. A transmission engine is configured to transmit packets from the beginning of the queue up to a particular packet associated with the transmission entry. | 05-09-2013 |
20110078547 | RATE MATCHING FOR A WIRELESS COMMUNICATIONS SYSTEM - Apparatuses and methods are provided for generating a plurality of redundancy versions using various rate matching algorithms. In some embodiments, a rate matcher is provided that allocates systematic and parity bits to the redundancy versions in a manner that allows all of these bits to be transmitted in at least one redundancy version. In some embodiments, the rate matcher uses a first puncturing algorithm to generate both a first redundancy version and a third redundancy version, but allocates a different proportion of the systematic bits to these redundancy versions. In these embodiments, the second redundancy version may include only bits that were not transmitted in the first redundancy version. | 03-31-2011 |
20110078546 | RATE MATCHING FOR A WIRELESS COMMUNICATIONS SYSTEM - Apparatuses and methods are provided for generating a plurality of redundancy versions using various rate matching algorithms. In some embodiments, a rate matcher is provided that allocates systematic and parity bits to the redundancy versions in a manner that allows all of these bits to be transmitted in at least one redundancy version. In some embodiments, the rate matcher uses a first puncturing algorithm to generate both a first redundancy version and a third redundancy version, but allocates a different proportion of the systematic bits to these redundancy versions. In these embodiments, the second redundancy version may include only bits that were not transmitted in the first redundancy version. | 03-31-2011 |
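The statistical update scheme of application 20150256466 (distributed counters and meters) can be sketched in a few lines. The abstract says only that a distributed device "statistically determines" whether to update the designated device; the sample rate, the reporting call, and all names below are assumptions made for illustration, not Marvell's implementation.

```python
import random

SAMPLE_RATE = 64  # assumed: report, on average, one packet in 64


class DesignatedDevice:
    """Central counter that scales each sampled report back up."""

    def __init__(self):
        self.packet_count = 0

    def report(self, sampled_packets):
        # Each report statistically represents SAMPLE_RATE packets.
        self.packet_count += sampled_packets * SAMPLE_RATE


def on_packet_received(designated, rng):
    """Per-packet hook at a distributed device: statistically decide
    whether this packet triggers an update to the designated device."""
    if rng.random() < 1.0 / SAMPLE_RATE:
        designated.report(1)


# Simulate 100,000 packets of one flow arriving at a distributed device.
rng = random.Random(0)  # seeded for reproducibility
designated = DesignatedDevice()
for _ in range(100_000):
    on_packet_received(designated, rng)
# designated.packet_count is now an unbiased estimate of 100,000
```

The estimate's variance shrinks as the flow grows, while update traffic to the designated device is cut by roughly the sample rate.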
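A toy version of the two-stage exact-match lookup in application 20140301394 might look like the following. The abstract specifies only that two hash values index a lookup table holding an offset, and that a third hash value plus that offset selects the forwarding entry; the concrete hash functions (salted SHA-256 here), the table sizes, and the way the offset is chosen at insertion time are all assumptions of this sketch.

```python
import hashlib

BUCKETS = 16  # assumed forwarding-table size


def hash_val(key: bytes, salt: bytes) -> int:
    """Independent hash functions, simulated here by salting
    SHA-256 (the patent does not name the actual functions)."""
    digest = hashlib.sha256(salt + key).digest()
    return int.from_bytes(digest[:8], "big") % BUCKETS


def insert(lookup_table, forwarding_table, key, port):
    h1, h2 = hash_val(key, b"h1"), hash_val(key, b"h2")
    h3 = hash_val(key, b"h3")
    # Probe forward from h3 until a free slot is found; the probe
    # distance becomes the per-key offset kept in the lookup table.
    offset = 0
    while (h3 + offset) % BUCKETS in forwarding_table:
        offset += 1
    lookup_table[(h1, h2)] = offset  # toy: ignores (h1, h2) collisions
    forwarding_table[(h3 + offset) % BUCKETS] = (key, port)


def search(lookup_table, forwarding_table, key):
    h1, h2 = hash_val(key, b"h1"), hash_val(key, b"h2")
    offset = lookup_table.get((h1, h2))
    if offset is None:
        return None  # no offset recorded for these hash values
    h3 = hash_val(key, b"h3")
    entry = forwarding_table.get((h3 + offset) % BUCKETS)
    if entry is None:
        return None
    stored_key, port = entry
    return port if stored_key == key else None  # final exact-match check
```

The final comparison against the stored key is what makes the lookup exact rather than merely probabilistic.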
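The link-selection flow of application 20140115167 reduces to: derive a key from two fields of the data unit, hash it, and map the hash onto a member link of the aggregate group. In the sketch below the field choice, the CRC-32 hash, and the modulo mapping are placeholders, since the abstract does not fix any of them.

```python
import zlib


def select_link(src_addr: int, dst_addr: int, num_links: int) -> int:
    """Pick one member link of a link aggregate group for a data unit."""
    # Combine a first field and a second field of the initial key.
    combined_key = (src_addr << 32) | dst_addr
    # Apply a hash function (CRC-32 here, purely for illustration).
    hash_value = zlib.crc32(combined_key.to_bytes(8, "big"))
    # Map the hash value onto one of the aggregate's links.
    return hash_value % num_links


# Hypothetical 32-bit source/destination addresses, 4-link aggregate.
link = select_link(0x0A000001, 0x0A000002, 4)
same = select_link(0x0A000001, 0x0A000002, 4)
# Units of the same flow always hash to the same link (flow affinity),
# while distinct flows spread across the aggregate's links.
```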
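The test-and-repair loop of application 20130232384 can be modeled compactly. The address-remapping representation and the `is_defective` probe below are assumptions of the sketch; only the loop's two stopping conditions come from the abstract.

```python
def iterative_repair(cells, spares, is_defective):
    """Iteratively test an array of memory cells, replacing each
    defective cell found in a pass with a redundant (spare) cell.

    Stops when a pass detects no defective cell, or when no spare
    remains for a needed replacement.
    """
    remap = {}  # logical cell -> spare currently standing in for it
    while True:
        # Test every cell through its current mapping; spares can be
        # defective too, so replacements get re-tested next pass.
        defective = [c for c in cells if is_defective(remap.get(c, c))]
        if not defective:
            return remap, True       # repaired: no defects detected
        for cell in defective:
            if not spares:
                return remap, False  # out of redundant cells
            remap[cell] = spares.pop()


# Hypothetical 8-cell array with two defective cells and three spares.
bad_cells = {2, 5}
remap, ok = iterative_repair(list(range(8)), [100, 101, 102],
                             lambda addr: addr in bad_cells)
```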
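The floating-point-style counter format of application 20130185343, where an estimated count is stored as m mantissa bits and e exponent bits representing roughly M * 2**E, can be illustrated as follows; the bit width and the truncating rounding rule are assumptions of this sketch.

```python
M_BITS = 4  # assumed mantissa width m


def encode(count: int) -> tuple[int, int]:
    """Compress a count into (M, E) with M * 2**E ~ count and M
    fitting in M_BITS bits, discarding low-order bits as needed."""
    exponent = 0
    while count >= (1 << M_BITS):
        count >>= 1
        exponent += 1
    return count, exponent


def decode(mantissa: int, exponent: int) -> int:
    """Recover the approximate count M * 2**E."""
    return mantissa << exponent


# Counts below 2**M_BITS are stored exactly; larger counts keep about
# M_BITS significant bits, bounding the relative error by 2**-M_BITS.
```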
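The array search in application 20130114960 amounts to searching running packet-size sums for the largest value not greater than the gate size. A minimal sketch using Python's `bisect` follows; the abstract does not mandate a particular search algorithm, and the packet sizes are hypothetical.

```python
import bisect
from itertools import accumulate


def build_array(packet_sizes):
    """Each entry is its packet's size plus the value of the
    neighboring (previous) entry, i.e. a running prefix sum."""
    return list(accumulate(packet_sizes))


def find_transmission_entry(array, gate_size):
    """Index of the entry nearest to, but not greater than,
    gate_size; -1 if even the first packet exceeds the gate."""
    return bisect.bisect_right(array, gate_size) - 1


sizes = [64, 128, 256, 512]   # hypothetical queue of packet sizes
array = build_array(sizes)    # [64, 192, 448, 960]
idx = find_transmission_entry(array, 500)
# Packets 0..idx fit within a 500-byte transmission gate.
```

Because the array is sorted by construction, the search runs in O(log n) per gate rather than walking the queue.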