Patent application number | Description | Published |
20100325371 | SYSTEMS AND METHODS FOR WEB LOGGING OF TRACE DATA IN A MULTI-CORE SYSTEM - A method and system for generating a web log that includes transaction entries from transaction queues of one or more cores of a multi-core system. A transaction queue is maintained for each core so that either a packet engine or web logging client executing on the core can write transaction entries to the transaction queue. In some embodiments, a timestamp value obtained from a synchronized timestamp variable can be assigned to the transaction entries. When a new transaction entry is added to the transaction queue, the earliest transaction entry is removed from the transaction queue and added to a heap. Periodically the earliest entry in the heap is removed from the heap and written to a web log. When an entry is removed from the heap, the earliest entry in a transaction queue corresponding to the removed entry is removed from the transaction queue and added to the heap. | 12-23-2010 |
20120226804 | SYSTEMS AND METHODS FOR SCALABLE N-CORE STATS AGGREGATION - The present invention is directed towards systems and methods for aggregating and providing statistics from cores of a multi-core system intermediary between one or more clients and servers. The system may maintain in shared memory a global device number for each core of the multi-core system. The system may provide a thread for each core of the multi-core system to gather data from the corresponding core. A first thread may generate aggregated statistics from a corresponding core by parsing the gathered data from the corresponding core. The first thread may transfer the generated statistics to a statistics log according to a schedule. The system may adaptively reschedule the transfer by monitoring the operation of each computing thread. Responsive to a request from a client, an agent of the client may obtain statistics from the statistics log. | 09-06-2012 |
20120250512 | Systems and Methods for Handling NIC Congestion via NIC Aware Application - The present solution is directed to a system for handling network interface card (NIC) congestion by a NIC-aware application. The system may include a device having a plurality of network interface cards (NICs), a transmission queue corresponding to a NIC of the plurality of NICs, and an overflow queue for storing packets for the NIC when congested. The system may also include an application executing on the device that outputs a plurality of packets to the transmission queue responsive to detecting that the NIC is identified as not congested. The device identifies the NIC as congested responsive to determining that the number of packets stored in the transmission queue has reached a predetermined threshold. Responsive to detecting identification of the NIC as congested, the application stores one or more packets to the overflow queue. The device transmits one or more of the plurality of packets stored in the transmission queue and transmits a predetermined number of packets from the overflow queue. | 10-04-2012 |
20120250530 | Systems and Methods for Learning MSS of Services - The virtual server (vServer) of an intermediary device deployed between a plurality of clients and services supports parameters for setting the maximum segment size (MSS) on a per-vServer/service basis and for automatically learning the MSS of the back-end services. In the case of a vServer/service setting, all vServers will use the MSS value set through the parameter as the MSS value sent in the TCP SYN+ACK to clients. In the case of learning mode, the back-end service MSS will be learnt through monitor probing. The vServer will monitor and learn the MSS that is most frequently used by the services. When learning is active, the intermediary device may keep statistics on the MSS of back-end services picked during load balancing decisions; once an interval timer expires, the MSS value may be picked by majority vote and set on the vServer. If there is no majority, the highest observed MSS is set on the vServer. | 10-04-2012 |
20130007239 | SYSTEMS AND METHODS FOR TRANSPARENT LAYER 2 REDIRECTION TO ANY SERVICE - The present solution is directed to providing, transparently and seamlessly to any client or server, layer 2 redirection of client requests to any services of a device deployed in parallel to an intermediary device. An intermediary device deployed between the client and the server may intercept a client request and check whether the request is to be processed by a service provided by one of the devices deployed in parallel with the intermediary device. The service may be any type and form of service or feature for processing, checking or modifying the request, including a firewall, a cache server, an encryption/decryption engine, a security device, an authentication device, an authorization device or any other type and form of service or device described herein. The intermediary device may select the machine to process the request and use layer 2 redirection to the machine. The intermediary device may change a Media Access Control (MAC) address of a destination of the request to a MAC address of the selected machine. Once the selected machine processes the request, the intermediary device may receive from this machine a response to processing the request. The intermediary device may then continue processing the request of the client responsive to the response from the machine or in response to identifying that the response to the request is from that particular selected machine. The forwarding to and processing by the parallel-deployed machine may be performed seamlessly and transparently to the server and/or client. | 01-03-2013 |
20130041934 | Systems and Methods for Tracking Application Layer Flow Via a Multi-Connection Intermediary Device - The present disclosure is directed towards tracking application layer flow via a multi-connection intermediary. Transaction level or application layer information may be tracked via the intermediary, including one or more of: (i) the request method; (ii) response codes; (iii) URLs; (iv) HTTP cookies; (v) RTT of both ends of the transaction in a quad flow arrangement; (vi) server time to provide the first byte of a communication; (vii) server time to provide the last byte of a communication; (viii) flow flags; or any other type and form of transaction level data, which may be captured, exported, and analyzed. The application layer flow or transaction level information may be provided in an IPFIX-compliant data record. This may be done to provide template-based data record definition, as well as providing data on an application or transaction level of granularity. | 02-14-2013 |
20130297814 | SYSTEMS AND METHODS FOR A SPDY TO HTTP GATEWAY - The present disclosure is directed towards a system and method for providing a SPDY to HTTP gateway via a device intermediary to a plurality of clients and a server. An NPN handshake by the intermediary device may establish SPDY support. The intermediary device may receive and process one or more control frames via a SPDY session with the client. The intermediary device may generate and transmit HTTP communication to the server corresponding to the SPDY control frames. The intermediary device may receive and process one or more HTTP responses from the server. The intermediary device may generate and transmit SPDY communication via the SPDY session to the client corresponding to the HTTP response. | 11-07-2013 |
20130339503 | SYSTEMS AND METHODS FOR SUPPORTING A SNMP REQUEST OVER A CLUSTER - The present disclosure is directed towards systems and methods for supporting Simple Network Management Protocol (SNMP) request operations over clustered networking devices. The system includes a cluster that includes a plurality of intermediary devices and an SNMP agent executing on a first intermediary device of the plurality of intermediary devices. The SNMP agent receives an SNMP GETNEXT request for an entity. Responsive to receipt of the SNMP GETNEXT request, the SNMP agent requests a next entity from each intermediary device of the plurality of intermediary devices of the cluster. To respond to the SNMP request, the SNMP agent selects a lexicographically minimum entity. The SNMP agent may select the lexicographically minimum entity from a plurality of next entities received via responses from each intermediary device of the plurality of intermediary devices. | 12-19-2013 |
20140149605 | SYSTEMS AND METHODS FOR DICTIONARY BASED COMPRESSION - This disclosure is directed to dictionary-based compression, which may be employed to achieve stateful header compression without maintaining a complete deflate state. The compressor may maintain a history of one or more data streams compressed by the compressor, compressed according to a first compression dictionary. Responsive to the compression of the one or more data streams, the compressor may delete the first compression dictionary from the memory. Subsequent to the deletion, the compressor may compress an additional data stream using the maintained history. The compressor may generate a second compression dictionary from at least one of: the maintained history and a portion of the additional data stream. The compressor may allocate memory for a compression state of the additional data stream and may load the maintained history into the compression state. | 05-29-2014 |
20140247737 | SYSTEMS AND METHODS FOR LEARNING MSS OF SERVICES - The virtual server (vServer) of an intermediary device deployed between a plurality of clients and services supports parameters for setting the maximum segment size (MSS) on a per-vServer/service basis and for automatically learning the MSS of the back-end services. In the case of a vServer/service setting, all vServers will use the MSS value set through the parameter as the MSS value sent in the TCP SYN+ACK to clients. In the case of learning mode, the back-end service MSS will be learnt through monitor probing. The vServer will monitor and learn the MSS that is most frequently used by the services. When learning is active, the intermediary device may keep statistics on the MSS of back-end services picked during load balancing decisions; once an interval timer expires, the MSS value may be picked by majority vote and set on the vServer. If there is no majority, the highest observed MSS is set on the vServer. | 09-04-2014 |
20140301213 | SYSTEMS AND METHODS FOR CAPTURING AND CONSOLIDATING PACKET TRACING IN A CLUSTER SYSTEM - The present solution relates to systems and methods for capturing and consolidating packet tracing in a cluster system. A multi-nodal cluster processing network traffic contains multiple nodes, each handling some of the processing. A node may initially receive a flow and transfer processing of the flow to another node. A flow may therefore pass from one node to another, or from two nodes to many nodes. In some instances, it is helpful to generate a trace of a flow. For example, in debugging a network communication flow, a trace of the flow through the cluster can be helpful. Each node has a packet engine ("PE") which processes data packets and can, when trace is enabled, generate a trace file for the packets processed at the respective node. A trace aggregator merges these distinct trace files into an aggregate trace for the cluster. | 10-09-2014 |
20140301388 | SYSTEMS AND METHODS TO CACHE PACKET STEERING DECISIONS FOR A CLUSTER OF LOAD BALANCERS - The present disclosure is directed towards methods and systems for caching packet steering sessions for steering data packets between intermediary devices of a cluster of intermediary devices intermediary to a client and a plurality of servers. A first intermediary device receives a first data packet and determines, from a hash of a tuple of the first packet, a second intermediary device to which to steer the first packet. The first device stores, to a session for storing packet steering information, the identity of the second device and the tuple. The first device receives a second packet having a corresponding tuple that matches the tuple of the first packet and determines, based on a lookup for the session using the tuple of the second packet, that the second device is the intermediary device to which to steer the second packet. The first device steers the second packet to the second device. | 10-09-2014 |
20140303934 | SYSTEMS AND METHODS FOR EXPORTING CLIENT AND SERVER TIMING INFORMATION FOR WEBPAGE AND EMBEDDED OBJECT ACCESS - The present disclosure is directed towards systems and methods for application performance measurement. A device may receive a first document for transmission to a client, comprising instructions for the client to transmit a request for an embedded object. A flow monitor executed by the device may generate a unique identification associated with the first document, the unique identification identifying a first access of the first document, and transmit the first document and unique identification to the client. The device may receive, from the client, a request for the embedded object comprising the unique identification, and transmit, to a server, the request for the embedded object at a transmit time. The device may receive, from the server, the embedded object at a receipt time, and may transmit a performance record comprising an identification of the object, the server, the transmit time, the receipt time, and the unique identification to a data collector. | 10-09-2014 |
20140304320 | SYSTEMS AND METHODS FOR DYNAMIC RECEIVE BUFFERING - The present disclosure relates to methods and systems for dynamically changing an advertised window for a transport layer connection. A device can receive data from a server destined for an application. The device identifies the size of the application buffer corresponding to the application and advertises the application buffer size as a window size to the server. The device stores the data in the device memory. The device then determines the memory usage by comparing the memory usage to one or more predetermined thresholds. If the device determines that the memory usage is below a first predetermined threshold, the device can implement an aggressive dynamic receive buffering policy in which the device increases the advertised window size by a first increment. If the device determines that the memory usage is above the first threshold and below a second threshold, the device executes a more conservative dynamic receive buffering policy. | 10-09-2014 |
20140304325 | SYSTEMS AND METHODS FOR ETAG PERSISTENCY - The systems and methods of the present solution are directed to providing Entity Tag persistency by a device intermediary to a client and a plurality of servers. An intermediary device between a client and one or more back-end servers can receive an entity requested by the client from an origin server that provides the requested content. The intermediary device can encode the back-end server information onto an ETag of the entity, cache the entity with the encoded ETag and serve the entity with the encoded ETag to the client. In this way, when the client attempts to validate the entity by sending a request including the encoded ETag to the intermediary device, the intermediary device decodes the encoded ETag to extract the identity of the backend server and sends the request to validate the entity to the identified server that originally sent the entity that included the requested content. | 10-09-2014 |
20140304393 | SYSTEMS AND METHODS FOR EXPORTING APPLICATION DETAILS USING APPFLOW - The present disclosure is directed towards systems and methods for lightweight identification of flow information by application. A flow monitor executed by a processor of a device may maintain a counter. The flow monitor may associate an application with the value of the counter and transmit, to a data collector executed by a second device, the counter value and a name of the application. The flow monitor may monitor a data flow associated with the application to generate a data record. The flow monitor may transmit the data record to the data collector, the data record including an identification of the application consisting of the counter value and not including the name of the application. The data collector may then re-associate the data record with the application name based on the previously received counter value. | 10-09-2014 |
20140304401 | SYSTEMS AND METHODS TO COLLECT LOGS FROM MULTIPLE NODES IN A CLUSTER OF LOAD BALANCERS - The systems and methods of the present solution are directed to collecting log information from multiple nodes in a multi-nodal cluster. Generally, a logging process runs to collect log information from multiple nodes in a multi-nodal cluster, e.g., a cluster of appliances. The logging process collects the log information and merges the collected log information to create a coherent unified log. The logging process may run on a node designated for the purpose. The designated node may be internal or external to the cluster. The logging process determines a topology for the cluster, establishes a communication channel with each active intermediary device identified in the topology, collects log entries from each active intermediary device, each log entry comprising information on network traffic traversing the respective intermediary device, and merges the collected log entries into a unified cluster log comprising information on network traffic traversing the cluster. | 10-09-2014 |
20140304425 | SYSTEMS AND METHODS FOR TCP WESTWOOD HYBRID APPROACH - Methods and systems for providing congestion control to a transport control protocol implementation are described. A device detects that there is a congestion event on a transport control protocol (TCP) connection of the device. The device determines that a bandwidth estimate is lower than half a current value of a slow start threshold for the TCP connection. In response to the determination, the device changes the slow start threshold to half of the current value of the slow start threshold for the TCP connection. The bandwidth estimate can be the product of the eligible rate estimate and the minimum round trip time. In some implementations, the transport control protocol implementation is a TCP Westwood implementation. | 10-09-2014 |
20140304798 | SYSTEMS AND METHODS FOR HTTP-BODY DOS ATTACK PREVENTION WITH ADAPTIVE TIMEOUT - The present disclosure is directed generally to systems and methods for changing an application layer transaction timeout to prevent Denial of Service attacks. A device intermediary to a client and a server may receive, via a transport layer connection between the device and the client, a packet of an application layer transaction. The device may increment an attack counter for the transport layer connection by a first predetermined amount responsive to a size of the packet being less than a predetermined fraction of a maximum segment size for the transport layer connection. The device may increment the attack counter by a second predetermined amount responsive to an inter-packet-delay between the packet and a previous packet being more than a predetermined multiplier of a round trip time. The device may change a timeout for the application layer transaction responsive to comparing the attack counter to a predetermined threshold. | 10-09-2014 |
20140304810 | SYSTEMS AND METHODS FOR PROTECTING CLUSTER SYSTEMS FROM TCP SYN ATTACK - The present solution is directed to systems and methods for synchronizing a random seed value among a plurality of multi-core nodes in a cluster of nodes for generating a cookie signature. The cookie signature may be used for protection from SYN flood attacks. A cluster of nodes comprises one master node and one or more other nodes. Each node comprises one master core and one or more other cores. A random number is generated at the master core of the master node. The random number is synchronized across every other core. The random number is used to generate a secret key value that is included in the encoded initial sequence number of a SYN-ACK packet. If the responding ACK packet does not contain the secret key value, then the ACK packet is dropped. | 10-09-2014 |
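The queue-to-heap merge described in application 20100325371 is, in essence, a k-way merge of per-core transaction queues ordered by synchronized timestamp. A minimal sketch (all names are illustrative, not from the patent):

```python
import heapq

def merge_core_logs(queues):
    """Merge per-core transaction queues into one time-ordered web log.

    Each queue holds (timestamp, entry) tuples already ordered by the
    synchronized timestamp. The heap holds at most one entry per queue;
    popping the heap minimum pulls the next entry from the same queue,
    mirroring the abstract's queue -> heap -> log flow.
    """
    heap = []
    # Seed the heap with the earliest entry from each non-empty queue.
    for qid, q in enumerate(queues):
        if q:
            ts, entry = q[0]
            heapq.heappush(heap, (ts, qid, 0, entry))
    log = []
    while heap:
        ts, qid, idx, entry = heapq.heappop(heap)
        log.append((ts, entry))
        # Refill from the queue whose entry was just removed.
        nxt = idx + 1
        if nxt < len(queues[qid]):
            nts, nentry = queues[qid][nxt]
            heapq.heappush(heap, (nts, qid, nxt, nentry))
    return log
```

Because each queue is individually ordered, the heap invariant guarantees the merged log is globally ordered by timestamp.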
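The interval-timer decision in applications 20120250530 and 20140247737 (pick the MSS used by a majority of back-end samples, else the highest) can be sketched as follows; the function name and majority definition are assumptions for illustration:

```python
from collections import Counter

def pick_mss(observed):
    """Choose the MSS to set on the vServer from observed back-end MSS
    samples: the value used by a strict majority of samples, or, when
    no value has a majority, the highest observed MSS (the fallback
    named in the abstract).
    """
    if not observed:
        return None
    value, freq = Counter(observed).most_common(1)[0]
    if freq > len(observed) / 2:
        return value
    return max(observed)
```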
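The GETNEXT selection in application 20130339503 reduces to picking the lexicographically minimum OID among the next entities returned by each cluster node. Note that SNMP OID order is component-wise numeric, not string order ("1.3.6.1.2" precedes "1.3.6.1.10"). A minimal sketch:

```python
def select_getnext_response(candidate_oids):
    """From the next-entity OIDs returned by each node of the cluster,
    pick the lexicographically minimum one, comparing OID components
    numerically as the SNMP agent must.
    """
    def as_tuple(oid):
        return tuple(int(part) for part in oid.split("."))
    return min(candidate_oids, key=as_tuple)
```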
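The cached steering in application 20140301388 hashes a connection tuple once, stores the decision in a session, and reuses it for later packets of the same flow. A sketch under stated assumptions (modulo over a SHA-256 digest stands in for the device's actual hash; a plain dict stands in for the session store):

```python
import hashlib

def steer(packet_tuple, nodes, session_cache):
    """Return the cluster node to which a packet should be steered.

    First packet of a flow: hash its tuple over the node list and cache
    the (tuple -> node) decision. Later packets with a matching tuple:
    skip the hash and reuse the cached node, as in the abstract.
    """
    if packet_tuple in session_cache:
        return session_cache[packet_tuple]
    digest = hashlib.sha256(repr(packet_tuple).encode()).digest()
    node = nodes[int.from_bytes(digest[:4], "big") % len(nodes)]
    session_cache[packet_tuple] = node
    return node
```

Caching matters here because it keeps every packet of a flow on one node even if the node list (and hence the hash mapping) later changes.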
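The congestion-event rule in application 20140304425 compares a Westwood-style bandwidth-delay product against half the current slow-start threshold. A minimal sketch; the non-fallback branch (using the estimate itself) is an assumption consistent with TCP Westwood, not stated in the abstract:

```python
def on_congestion_event(ssthresh, rate_estimate, min_rtt):
    """Return the new slow-start threshold after a congestion event.

    The bandwidth estimate is the eligible rate estimate times the
    minimum RTT. If it is lower than half the current ssthresh, fall
    back to classic halving; otherwise use the estimate (the assumed
    Westwood branch).
    """
    bdp = rate_estimate * min_rtt
    if bdp < ssthresh / 2:
        return ssthresh / 2
    return bdp
```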
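The two increment conditions in application 20140304798 (undersized packets and slow inter-packet arrival) can be sketched as a per-connection counter update; the fraction, multiplier, and increment values below are illustrative defaults, not the patent's values:

```python
def update_attack_counter(counter, packet_size, mss, inter_packet_delay, rtt,
                          small_frac=0.25, delay_mult=2.0,
                          small_inc=1, slow_inc=1):
    """Update the per-connection attack counter for a suspected
    slow-HTTP-body DoS: bump by one amount when the packet is smaller
    than a fraction of the MSS, and by another when the inter-packet
    delay exceeds a multiple of the round trip time.
    """
    if packet_size < small_frac * mss:
        counter += small_inc
    if inter_packet_delay > delay_mult * rtt:
        counter += slow_inc
    return counter
```

Once the counter crosses a threshold, the device would shorten the application layer transaction timeout, starving the slow sender.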