Patent application number | Description | Published |
20090167355 | High performance pulsed buffer - An integrated circuit ( | 07-02-2009 |
20090167394 | INTEGRATED CIRCUITS HAVING DEVICES IN ADJACENT STANDARD CELLS COUPLED BY THE GATE ELECTRODE LAYER - An integrated circuit ( | 07-02-2009 |
20090167395 | HIGH PERFORMANCE LATCHES - An integrated circuit includes at least one latch circuit ( | 07-02-2009 |
20090167396 | HIGH PERFORMANCE CLOCKED LATCHES AND DEVICES THEREFROM - An integrated circuit ( | 07-02-2009 |
20090265406 | DECISION FEEDBACK EQUALIZER HAVING PARALLEL PROCESSING ARCHITECTURE - An integrated circuit includes a decision feedback equalizer (DFE) with first and second digital equalizer logic, each including circuitry to compensate first and second bits in a received stream and to provide first and second sign bits. The second equalizer logic can run concurrently with, and be connected in parallel to, the first equalizer logic. The second equalizer logic can include low and high sign-bit pipelines providing first and second conditional sign bits by assuming a low and a high sign bit, respectively, for the first bits being concurrently processed by the first equalizer logic, and a sign-bit selection element to select between the first and second conditional sign bits based on the sign-bit outcome of the first equalizer logic. The first and second pipelines compensate bits using compensation weights chosen from the most recent first and second conditional sign bits and the sign-bit outcome. | 10-22-2009 |
20120319741 | REDUCED CROSSTALK WIRING DELAY EFFECTS THROUGH THE USE OF A CHECKERBOARD PATTERN OF INVERTING AND NONINVERTING REPEATERS - A buffer arrangement in wire lines, in which at least one aggressor wire line is located adjacent and substantially parallel to a victim wire line, has a plurality of alternately arranged inverting and noninverting buffers. The buffers are arranged in a checkerboard pattern in which noninverting and inverting buffers are located in the victim wire line at locations corresponding to the locations of the inverting and noninverting buffers in the at least one aggressor wire line. | 12-20-2012 |
20140129568 | REDUCED COMPLEXITY HASHING - Hashing complexity is reduced by exploiting a hashing matrix structure that permits a corresponding hashing function to be implemented such that an output vector of bits is produced in response to an input vector of bits without combining every bit in the input vector with every bit in any row of the hashing matrix. | 05-08-2014 |
20160098198 | PROXY HASH TABLE - Some embodiments of the invention provide novel methods for storing data in a hash-addressed memory and retrieving stored data from the hash-addressed memory. In some embodiments, the method receives a search key and a data tuple. The method uses a first hash function to generate a first hash value from the search key, and uses this first hash value to identify an address in the hash-addressed memory. The method also uses a second hash function to generate a second hash value, and stores this second hash value along with the data tuple in the memory at the address specified by the first hash value. To retrieve data from the hash-addressed memory, the method of some embodiments receives a search key, uses the first hash function to generate a first hash value from the search key, and uses this first hash value to identify an address in the hash-addressed memory. The method retrieves the second hash value stored at the identified address and compares it with a third hash value that the method generates from the search key by using the second hash function. When the second and third hash values match, the method retrieves the data tuple that the memory stores at the identified address. | 04-07-2016 |
20160099872 | FAST ADJUSTING LOAD BALANCER - Some embodiments of the invention provide a load balancer for distributing packet flows that are addressed to a group of data compute nodes (DCNs) amongst the DCNs of the group. In some embodiments, the load balancer includes a connection data storage comprising several different destination network address translation (DNAT) tables. Each particular DNAT table is defined at a particular instance in time and stores the identities of a plurality of DCNs that are part of the group at that instance in time. Each time a DCN is added to the group, the load balancer of some embodiments creates a new DNAT table in the connection data storage for processing new packet flows, while using previously created DNAT tables to process packets that are part of previously processed packet flows. | 04-07-2016 |
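The parallel-pipeline DFE of application 20090265406 is essentially a speculative (loop-unrolled) equalizer: because each decision depends on the previous sign bit, the second pipeline precomputes both possible outcomes and a selector picks the right one once the first pipeline's decision is known. The following is a minimal one-tap sketch of that idea, not the patented circuit; the function name, the single compensation weight, and the -1/+1 sign convention are illustrative assumptions.

```python
def speculative_dfe(samples, weight):
    """One-tap decision feedback equalizer, loop-unrolled.

    For each sample, two conditional decisions are precomputed in
    parallel "pipelines": one assuming the previous sign bit was low
    (-1) and one assuming it was high (+1).  A sign-bit selection
    step then picks the correct branch once the previous decision
    is actually known, so both branches can be evaluated before the
    previous result arrives.
    """
    decisions = []
    prev = -1  # assumed initial state (illustrative)
    for s in samples:
        # Both branches are computed concurrently in hardware.
        low_branch = 1 if s - weight * (-1) > 0 else -1   # assume prev = -1
        high_branch = 1 if s - weight * (+1) > 0 else -1  # assume prev = +1
        # Sign-bit selection element: keep the branch whose assumption held.
        prev = low_branch if prev == -1 else high_branch
        decisions.append(prev)
    return decisions
```

Mathematically this produces the same decisions as the sequential DFE `sign(s - weight * prev)`; the point of the unrolling is that neither branch has to wait for `prev` before it starts computing.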
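The checkerboard pattern of application 20120319741 works because an inverting buffer flips a line's signal polarity for every segment downstream of it; staggering the inverting buffers between the victim and aggressor lines makes the relative polarity alternate segment by segment, so worst-case same-direction (or opposite-direction) coupling cannot persist along the whole wire. A small model of that polarity bookkeeping, with a hypothetical 'I'/'N' encoding for inverting and noninverting buffers:

```python
def segment_polarities(victim_buffers, aggressor_buffers):
    """Relative polarity (victim vs. aggressor) in each wire segment,
    given per-line buffer sequences of 'I' (inverting) and 'N'
    (noninverting).  +1 = signals have the same polarity in that
    segment, -1 = opposite polarity."""
    v_pol, a_pol = 1, 1
    relative = [v_pol * a_pol]            # segment before the first buffer
    for vb, ab in zip(victim_buffers, aggressor_buffers):
        v_pol *= -1 if vb == 'I' else 1   # inverter flips the line's polarity
        a_pol *= -1 if ab == 'I' else 1
        relative.append(v_pol * a_pol)
    return relative

# Checkerboard: inverting positions on the victim line face
# noninverting positions on the aggressor line, and vice versa.
victim    = ['I', 'N', 'I', 'N']
aggressor = ['N', 'I', 'N', 'I']
print(segment_polarities(victim, aggressor))  # [1, -1, 1, -1, 1]
```

With an all-noninverting arrangement the list would be all +1, leaving the full wire length exposed to one coupling polarity; the alternation is what averages the crosstalk-induced delay.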
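Application 20140129568 does not disclose the matrix structure in its abstract, but the stated property (each output bit is produced without combining every input bit with every bit of a matrix row) is exactly what a sparse GF(2) hashing matrix gives you: store each row as the list of column positions it selects and XOR only those input bits. A minimal sketch under that assumption, with a made-up 3x8 matrix:

```python
def sparse_gf2_hash(rows, x_bits):
    """GF(2) matrix-vector hash with a sparse row representation.

    Each row of the hashing matrix is stored as the list of
    input-bit positions it selects, so each output bit is the XOR
    of only those positions -- no row combines with every input bit.
    """
    return [sum(x_bits[i] for i in row) & 1 for row in rows]

# Hypothetical sparse 3x8 hashing matrix: each row lists its set columns.
rows = [[0, 3, 5], [1, 4, 6], [2, 5, 7]]
x = [1, 0, 1, 1, 0, 1, 0, 1]
print(sparse_gf2_hash(rows, x))  # [1, 0, 1]
```

A dense 3x8 matrix would cost 24 AND/XOR combinations per hash; here it is 9, and the saving grows with the matrix size as long as the rows stay sparse.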
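The proxy hash table of application 20160098198 trades the full key for a compact second hash: the first hash picks the slot, and the second ("proxy") hash stored in the slot stands in for the key on lookup. A minimal in-memory sketch; the use of `zlib.crc32` for both hash functions and the `b'proxy:'` domain-separation prefix are illustrative choices, not from the application.

```python
import zlib

def h1(key, size):
    """First hash: selects the slot address (crc32 as a stand-in)."""
    return zlib.crc32(key) % size

def h2(key):
    """Second hash: the proxy value stored alongside the data."""
    return zlib.crc32(b'proxy:' + key)

class ProxyHashTable:
    """Hash-addressed memory storing (proxy_hash, data_tuple) entries;
    the full search key itself is never stored."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size

    def insert(self, key, data):
        # Store h2(key) with the data at the address given by h1(key).
        self.slots[h1(key, self.size)] = (h2(key), data)

    def lookup(self, key):
        entry = self.slots[h1(key, self.size)]
        if entry is not None and entry[0] == h2(key):
            return entry[1]   # proxy hashes match: accept the entry
        return None           # empty slot, or a different key lives here
```

The proxy-hash comparison filters out first-hash collisions: a colliding key almost certainly produces a different second hash, so the lookup misses instead of returning another key's data.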
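The multi-table scheme of application 20160099872 can be sketched with one list per group version: new flows are dispatched against the newest DNAT table, while flows first seen under an older table keep using that table, so adding a DCN never remaps an existing connection. The class and method names below, and the sha256-based flow hashing, are assumptions for illustration.

```python
import hashlib

class FastAdjustingLB:
    """Load balancer keeping one DNAT table snapshot per group version.

    New flows use the newest table; each existing flow is pinned to
    the table version that first mapped it, so group changes do not
    disturb established connections.
    """
    def __init__(self, dcns):
        self.tables = [list(dcns)]   # version 0 of the DNAT table
        self.flow_version = {}       # flow id -> pinned table version

    def add_dcn(self, dcn):
        # Snapshot a new table; older tables keep serving old flows.
        self.tables.append(self.tables[-1] + [dcn])

    def dispatch(self, flow_id):
        # Pin unseen flows to the newest table version.
        version = self.flow_version.setdefault(flow_id, len(self.tables) - 1)
        table = self.tables[version]
        digest = hashlib.sha256(flow_id.encode()).digest()
        return table[int.from_bytes(digest[:4], 'big') % len(table)]
```

A production version would also need to expire `flow_version` entries (and then retire unreferenced tables) as connections close, which the abstract leaves to the connection data storage.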