Entries |
Document | Title | Date |
20080209069 | Congestion control and avoidance method in a data processing system - A congestion control and avoidance method including a method check step of determining whether the requested content is cacheable or uncacheable on the basis of the request inputted from the client terminal, a first Uniform Resource Identifier (URI) check step of, when it is determined that the requested content is cacheable in the method check step, checking a URI included in the request from the client terminal to determine whether the requested content is cacheable or uncacheable, a first URI hash search step of, when it is determined that the requested content is cacheable based on the determination of the first URI check step, searching a URI hash to determine whether to execute a regular caching, priority caching or access limitation operation, and a step of executing the regular caching, priority caching or access limitation operation according to the determination in the first URI hash search step. | 08-28-2008 |
20080228939 | SYSTEMS AND METHODS FOR PROVIDING DYNAMIC AD HOC PROXY-CACHE HIERARCHIES - Systems and methods of storing previously transmitted data and using it to reduce bandwidth usage and accelerate future communications are described. By using algorithms to identify long compression history matches, a network device may improve compression efficiently and speed. A network device may also use application specific parsing to improve the length and number of compression history matches. Further, by sharing compression histories, compression history indexes and caches across multiple devices, devices can utilize data previously transmitted to other devices to compress network traffic. Any combination of the systems and methods may be used to efficiently find long matches to stored data, synchronize the storage of previously sent data, and share previously sent data among one or more other devices. | 09-18-2008 |
20080250155 | Server Side TFTP Flow Control - Methods and apparatuses for server side flow control. Receive a request from a first client device to multicast a file as a plurality of packets of data from a server device to multiple client devices; transmit the plurality of packets of data from a server to the multiple client devices using a multicast trivial file transfer protocol (TFTP); and apply, by the server, one or more flow control techniques not defined by the multicast TFTP. | 10-09-2008 |
20080250156 | Method and apparatus for overload control and audit in a resource control and management system - A system and method for resource control management implementing the Diameter protocol is disclosed. The system includes a border gateway which is configured to send an overload message and a controller, acting as a Diameter protocol server, which is configured to receive the overload message and block a number of calls to the border gateway. The controller is also configured to perform an audit function which includes clearing media resource ports. The overload message sent by the border gateway may also include a reduction percentage, which will dictate the percentage of calls that will be blocked. | 10-09-2008 |
20080294793 | REDUCING INFORMATION RECEPTION DELAYS - A technique for reducing information reception delays is provided. The technique reduces delays that may be caused by protocols that guarantee order and delivery, such as TCP/IP. The technique creates multiple connections between sender and recipient computing devices and sends messages from the sender to the recipient on the multiple connections redundantly. The recipient can then use the first arriving message and ignore the subsequently arriving redundant messages. The recipient can also wait for a period of time before determining which of the arrived messages to use. The technique may dynamically add connections if messages are not consistently received in a timely manner on multiple connections. Conversely, the technique may remove connections if messages are consistently received in a timely manner on multiple connections. The technique can accordingly be used with applications that are intolerant of data reception delays such as Voice over IP, real-time streaming audio, or real-time streaming video. | 11-27-2008 |
20080313345 | Adaptive polling - A distributed computing system manages execution of jobs and their associated tasks. A broker manages assignment of computing tasks from clients to available computing resources. Clients and available computing resources contact the broker by polling. To prevent “ringing,” the broker specifies wait times for the polling entities, and randomizes the wait times in a range around a desired target latency. That is, a pseudo-random number generator is used to select values within a range of the target value, to avoid the situation in which deterministic patterns in the polling and response times result in highly synchronized message traffic, which might otherwise overwhelm the broker and/or the communication network. | 12-18-2008 |
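The randomized-wait idea in the adaptive polling entry above can be sketched in a few lines of Python. The function name and the ±25% jitter range are illustrative assumptions; the abstract only specifies that wait times are randomized in a range around a target latency.

```python
import random

def next_poll_wait(target_latency_s, jitter_fraction=0.25, rng=random):
    """Pick a pseudo-random wait time in a range around the target latency.

    Randomizing the wait breaks up deterministic polling patterns so that
    many clients do not synchronize ("ring") and flood the broker with
    bursts of simultaneous requests.
    """
    low = target_latency_s * (1.0 - jitter_fraction)
    high = target_latency_s * (1.0 + jitter_fraction)
    return rng.uniform(low, high)
```

With a 2-second target and the assumed 25% jitter, successive polls land anywhere in the 1.5 s to 2.5 s window instead of on a fixed cadence.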
20090089449 | Multiplexing and congestion control - Methods, systems and devices for network congestion control exploit the inherent burstiness of network traffic, using a wave-based characterization of network traffic and corresponding multiplexing methods and approaches. | 04-02-2009 |
20090094378 | Software Deployment Using Client Location - A software distribution mechanism evaluates network addresses of requesting clients to determine a location for each client. The clients from a particular location are grouped together and a fraction of those clients in a particular group are recipients of a software distribution. The fraction is adjusted to enable more or fewer clients to download, thus effectively throttling the amount of bandwidth consumed by a mass distribution event. The fraction may be adjusted for particular geographical locations and the time of day to make more effective use of network bandwidth. | 04-09-2009 |
20090113069 | APPARATUS AND METHOD FOR PROVIDING A CONGESTION MEASUREMENT IN A NETWORK - Example embodiments of a system and method for providing a congestion measurement in a network are disclosed. In an example embodiment information is received at an information transfer rate, from a source network device. A sample of the information may be taken before the information is transmitted to a destination network device. In an example embodiment, a congestion measurement value is computed that corresponds to the sample and represented with at least two bits. A multi-bit indicator of the congestion measurement value is then transmitted to control the information transfer rate of information arriving in the future. | 04-30-2009 |
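The multi-bit congestion indicator described above can be illustrated with a simple quantizer. The abstract only says the measurement is represented with at least two bits; the linear mapping from a normalized congestion value to discrete levels below is an assumption for illustration.

```python
def congestion_indicator(congestion, bits=2):
    """Quantize a normalized congestion measurement in [0, 1] into a
    multi-bit value (assumed linear mapping; 2 bits gives levels 0-3).

    A multi-bit indicator lets the receiver signal *how* congested the
    path is, not just whether it is congested, so the sender can adjust
    its transfer rate proportionally.
    """
    levels = (1 << bits) - 1                       # highest representable level
    return min(int(congestion * (levels + 1)), levels)
```

A 2-bit indicator distinguishes four congestion levels, versus the single on/off bit of classic ECN-style marking.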
20090132722 | ADAPTIVE COMMUNICATION INTERFACE - Embodiments of the invention include a communication interface and protocol for allowing communication between devices, circuits, integrated circuits and similar electronic components having different communication capacities or clock domains. The interface supports communication between any components having any difference in capacity and over any distance. The interface utilizes request and acknowledge phases and signals and an initiator-target relationship between components that allow each side to throttle the communication rate to an accepted level for each component or achieve a desired bit error rate. | 05-21-2009 |
20090138615 | SYSTEM AND METHOD FOR AN IMPROVED HIGH AVAILABILITY COMPONENT IMPLEMENTATION - The invention relates to a computer system and method for high availability processing through a session on a transport connection, for use in a cluster with at least two nodes. The system comprises a protocol component; a cluster with at least two nodes, said cluster being arranged for running the protocol component; and a server arranged for maintaining a protocol session on a transport connection with a node of the cluster. The cluster is arranged for maintaining on each of said at least two nodes one instance of the protocol component, so that at least two instances are active; the server is arranged for simultaneously maintaining a protocol session with each instance. | 05-28-2009 |
20090138616 | UTILIZING INFORMED THROTTLING TO GUARANTEE QUALITY OF SERVICE TO I/O STREAMS - A system for utilizing informed throttling to guarantee quality of service to a plurality of clients includes a server core having a performance analyzer that compares a performance level received by a client to a corresponding contracted service level and determines if the client qualifies as a victim whose received performance level is less than the corresponding contracted service level. The performance analyzer is further configured to identify one or more candidates for throttling in response to an I/O stream receiving insufficient resources by determining if the client qualifies as a candidate whose received performance level is better than the corresponding contracted service level. The server core further includes a scheduler that selectively and dynamically issues a throttling command to the candidate client, and provides a quality of service enforcement point by concurrently monitoring a plurality of I/O streams to candidate clients and concurrently throttling commands to the candidate clients. | 05-28-2009 |
20090144441 | METHOD AND SYSTEM FOR PEER TO PEER WIDE AREA NETWORK COMMUNICATION - A method and system for peer to peer wide area network communication is provided. A peer in the network receives one or more media and one or more associated control signaling from any one of a plurality of Logical Media/Control Channels, wherein each Logical Media/Control Channel is associated with a Transport Resource on a base station; formats the media into a Formatted Media Packet; formats the control signaling into an Internet Peer to Peer Control Signaling; concatenates the Formatted Media Packet with the Internet Peer to Peer Control Signaling to form a Concatenated Packet comprising the Internet Peer to Peer Control Signaling and a Media Packet; duplicates the Concatenated Packet, thereby forming a duplicated Concatenated Packet comprising at least one of a unicast packet and a multicast packet; and transmits the duplicated Concatenated Packets via the wide area network. | 06-04-2009 |
20090157899 | CONTENT DELIVERY NETWORK - A content delivery system for providing content from a content delivery network to end users may include a plurality of delivery servers that host one or more content items and an inventory server having an inventory of content. The inventory of content can indicate which of the delivery servers host the content items. The inventory server may receive a request for a content item from an end user system and may access the inventory of content to determine one or more delivery servers that host the content item. In response to this determination, the inventory server may redirect the request for the content item to a selected one of the delivery servers. The selected delivery server can then serve the content item to the end user system. | 06-18-2009 |
20090164658 | METHOD AND APPARATUS FOR PROCESSING IN A CONNECTED STATE BY AN ACCESS TERMINAL AND ACCESS NETWORK IN WIRELESS COMMUNICATION SYSTEMS - A method and apparatus for processing on entering a Connected State by an access terminal and an access network is provided, comprising issuing a ConnectedState.Activate command and an ActiveSetManagement.Open command, determining whether the protocol receives an indication, and determining whether the protocol receives a Redirect message. | 06-25-2009 |
20090164659 | COMMUNICATION SYSTEM ALLOWING REDUCTION IN CONGESTION BY RESTRICTING COMMUNICATION - A signal processing server apparatus compares a predetermined threshold value with the usage rate of a processor which processes a signal in Internet Protocol communication, determines, based on the comparison result, whether or not congestion has occurred, and gives an instruction to restrict the communication during occurrence of congestion. An opposed station apparatus restricts Internet Protocol communication with the signal processing server apparatus according to the instruction from the signal processing server apparatus. | 06-25-2009 |
20090193140 | SYSTEM AND METHOD FOR THROTTLING HOST THROUGHPUT - A method for throttling host throughput in a computer storage subsystem is provided. The host throughput is compared to a throughput limit for a predetermined time period. If the host throughput exceeds the throughput limit during the predetermined time period, an input/output (I/O) delay is set equal to the remainder of the predetermined time period, and the delay is implemented for an associated storage device of the computer storage subsystem. | 07-30-2009 |
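The throttling rule in the entry above is concrete enough to sketch: if measured throughput exceeds the limit within the period, delay I/O for the remainder of that period. The function and parameter names below are illustrative, not from the patent.

```python
def compute_io_delay(bytes_transferred, throughput_limit, elapsed_s, period_s):
    """Return the I/O delay (seconds) to apply for the rest of the period.

    If the host's throughput over the elapsed part of the period already
    exceeds the limit, further I/O is delayed for the remainder of the
    predetermined period; otherwise no delay is needed.
    """
    if elapsed_s <= 0:
        return 0.0
    throughput = bytes_transferred / elapsed_s       # observed bytes/second
    if throughput > throughput_limit:
        return max(period_s - elapsed_s, 0.0)        # remainder of the period
    return 0.0
```

For example, 1000 bytes in the first 5 s of a 10 s period against a 100 B/s limit yields a 5 s delay, stalling the host until the period expires.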
20090193141 | ADAPTIVE FLOW CONTROL TECHNIQUES FOR QUEUING SYSTEMS WITH MULTIPLE PRODUCERS - Provided is a method, computer program and system for controlling the flow of service requests originated by a plurality of requesters. The method includes adding an additional control mechanism, which includes a serializer and a serializer queue, between the requesters and the service provider. The serializer inhibits the requesters when the serializer queue size reaches a threshold for a period proportional to the number of requesters already waiting, the queue length and the serializer service time. When the service provider queue is full or at a critical level, the serializer is inhibited for a period of time that is approximately the difference between the service times of the serializer and the service provider. In addition, when the service provider queue is full, the service provider service time is recalculated as a function of the serializer service time and of the time required to process requests by the service provider. | 07-30-2009 |
20090216900 | SCALABLE NETWORK APPARATUS FOR CONTENT BASED SWITCHING OR VALIDATION ACCELERATION - A network apparatus is provided that may include one or more security accelerators. The network apparatus also includes a plurality of network units cascaded together. According to one embodiment, the plurality of network units comprise a plurality of content based message directors, each to route or direct received messages to one of a plurality of application servers based upon the application data in the message. According to another embodiment, the plurality of network units comprise a plurality of validation accelerators, each validation accelerator to validate at least a portion of a message before outputting the message. | 08-27-2009 |
20090222573 | SYSTEM AND METHOD FOR APPLICATION LAYER RESOURCE TRAFFIC CONTROL - Methods and systems are presented for controlling application layer message traffic at a central web services resource in which a web services gateway associated with the central resource sends a backoff message to a gateway associated with a remote web service client, which in turn slows the application layer message traffic to the central resource. | 09-03-2009 |
20090300210 | METHODS AND SYSTEMS FOR LOAD BALANCING IN CLOUD-BASED NETWORKS - A cloud management system can be configured to monitor and allocate resources of a cloud computing environment. The cloud management system can be configured to receive a request to instantiate a virtual machine. In order to instantiate the virtual machine, the cloud management system can be configured to determine the current resource usage and available resources of the cloud in order to allocate resources to the requested virtual machine. The cloud management system can be configured to scale the resources of the cloud in the event that resources are not available for a requested virtual machine. | 12-03-2009 |
20090300211 | REDUCING IDLE TIME DUE TO ACKNOWLEDGEMENT PACKET DELAY - Mechanisms for reducing the idle time of a computing device due to delays in transmitting/receiving acknowledgement packets are provided. A first data amount corresponding to a window size for a communication connection is determined. A second data amount, in excess of the first data amount, which may be transmitted with the first data amount, is calculated. The first and second data amounts are then transmitted from the sender to the receiver. The first data amount is provided to the receiver in a receive buffer of the receiver. The second data amount is maintained in a switch port buffer of a switch port without being provided to the receive buffer. The second data amount is transmitted from the switch port buffer to the receive buffer in response to the switch port detecting an acknowledgement packet from the receiver. | 12-03-2009 |
20090307372 | CONGESTION MANAGEMENT AND LATENCY PREDICTION IN CSMA MEDIA - A facility for congestion management and latency prediction is described. In various embodiments, the facility sums a series of fractional transmission delays wherein each fractional transmission delay is measured as a probability of a failed transmission attempt multiplied by the cost of the failed transmission attempt, and provides the sum. | 12-10-2009 |
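The latency prediction in the entry above is an expected-value sum: each fractional transmission delay is a failure probability multiplied by the cost of that failed attempt. A minimal sketch (function and argument names are assumptions):

```python
def expected_contention_delay(attempts):
    """Sum a series of fractional transmission delays.

    Each element of `attempts` is a (failure_probability, failure_cost)
    pair; the fractional delay contributed by each attempt is its
    probability of failure multiplied by the cost of that failed attempt.
    """
    return sum(p * cost for p, cost in attempts)
```

So two attempts with a 50% chance of wasting 2 units and a 25% chance of wasting 4 units predict 2 units of expected contention delay.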
20090313383 | DATACASTING SYSTEM WITH AUTOMATIC DELIVERY OF SERVICE MANAGEMENT CAPABILITY - A datacast system, and associated apparatus and method for automatically managing a data object or objects within a hierarchical carousel structure by enabling, among other functions, the dynamic allocation of bandwidth to each carousel within the structure. The dynamic bandwidth allocation enables a server platform to redistribute the bandwidth allocated to a carousel or set of data objects to adjust to desired changes in object transmission policies or priorities of a datacast application. | 12-17-2009 |
20090319682 | METHOD AND DEVICE FOR TRANSMITTING DATA - A video consisting of data organized in the form of a plurality of images is transmitted in a communication network. The method comprises a step of coding images with motion compensation, which consists of compressing the images of the video and creating dependencies between compressed images, a step of scheduling the transmission of packets representing the compressed images, which consists of sending the compressed images over the network in a selected order, and a step of controlling the rate of the video. At least one of reconsidering the selected order of sending already compressed but not yet transmitted images and deleting at least one compressed image is performed at the time of coding a new image. Furthermore, the dependencies between the new image to be compressed and the compressed images are selected by taking into account the reconsidered sending order at the time of coding the new image. | 12-24-2009 |
20100005189 | Pacing Network Traffic Among A Plurality Of Compute Nodes Connected Using A Data Communications Network - Methods, apparatus, and products are disclosed for pacing network traffic among a plurality of compute nodes connected using a data communications network. The network has a plurality of network regions, and the plurality of compute nodes are distributed among these network regions. Pacing network traffic among a plurality of compute nodes connected using a data communications network includes: identifying, by a compute node for each region of the network, a roundtrip time delay for communicating with at least one of the compute nodes in that region; determining, by the compute node for each region, a pacing algorithm for that region in dependence upon the roundtrip time delay for that region; and transmitting, by the compute node, network packets to at least one of the compute nodes in at least one of the network regions in dependence upon the pacing algorithm for that region. | 01-07-2010 |
20100005190 | METHOD AND SYSTEM FOR A NETWORK CONTROLLER BASED PASS-THROUGH COMMUNICATION MECHANISM BETWEEN LOCAL HOST AND MANAGEMENT CONTROLLER - A network controller in a communication device may be operable to route local host-management traffic between a local host and a management controller within the communication device, wherein the local host may be operable to utilize its network processing resources and function during communication of the local host-management traffic. A dedicated management port may be configured in the network controller to enable receiving and/or transmitting local host-management traffic communicated from and/or to the local host separate from the local host's network traffic communicated via the network controller. The host-management traffic is communicated between the network controller and the management controller via NC-SI interface. The management controller may be assigned Internet protocol (IP) based addressing information for use during routing of local host-management traffic. The IP addressing information may be preset statically, assigned automatically from a list of available addresses, or configured dynamically via a DHCP server function. | 01-07-2010 |
20100011118 | Call admission control and preemption control over a secure tactical network - In a secure network where the network characteristics are not known, a call admission control algorithm and a preemption control algorithm based on a destination node informing the source node of the observed carried traffic are used to regulate the amount of traffic that needs to be preempted by the source. The amount of traffic that needs to be preempted is based on the carried traffic measured at the destination node. The traffic to be preempted is based on the priority of the traffic, where the lowest priority traffic is the first to be preempted until the amount of traffic preempted is sufficient to allow the remaining traffic to pass through the network without congestion. | 01-14-2010 |
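The preemption rule in the entry above, drop the lowest-priority traffic first until what remains fits the carried capacity observed at the destination, can be sketched as follows. The data layout and names are illustrative assumptions.

```python
def select_preempted(flows, carried_capacity):
    """Choose flows to preempt, lowest priority first.

    `flows` maps flow id -> (priority, rate), where a higher priority
    value means more important traffic.  Flows are preempted in ascending
    priority order until the remaining offered rate fits within the
    carried capacity measured at the destination node.
    """
    offered = sum(rate for _, rate in flows.values())
    preempted = []
    # lowest-priority flows are sacrificed first
    for fid, (priority, rate) in sorted(flows.items(), key=lambda kv: kv[1][0]):
        if offered <= carried_capacity:
            break
        preempted.append(fid)
        offered -= rate
    return preempted
```

With three 5-unit flows at priorities 1, 2, 3 and an observed capacity of 10, only the priority-1 flow is preempted; at capacity 4, all three must go.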
20100011119 | AUTOMATIC BIT RATE DETECTION AND THROTTLING - A computer system receives a request from a client computer system for data that is to be presented to a user, accesses a portion of the requested data and determines the encoded bit rate from the accessed portion of requested data. Based on the encoded bit rate for the requested data, the computer system determines an initial amount of data that is to be transferred to the client computer system to enable prompt access to that portion of data and determines the transfer rate for transferring the remaining data to the client computer system. The transfer rate for the remaining data is lower than the transfer rate for the initial amount. The computer system transfers the initial amount of data to the client computer system and transfers the remainder of the requested data to the client computer system at the determined lower transfer rate, subsequent to transferring the initial amount. | 01-14-2010 |
20100011120 | CANONICAL NAME (CNAME) HANDLING FOR GLOBAL SERVER LOAD BALANCING - Canonical name (CNAME) handling is performed in a system configured for global server load balancing (GSLB), which orders IP addresses into a list based on a set of performance metrics. When the GSLB switch receives a reply from an authoritative DNS server, the GSLB switch scans the reply for CNAME records. If a CNAME record is detected and it points to a host name configured for GSLB, then a GSLB algorithm is applied to the reply. This involves identifying the host name (pointed to by the CNAME record) in the reply and applying the metrics to the list of returned IP addresses corresponding to that host name, to reorder the list to place the “best” IP address at the top. If the CNAME record in the reply points to a host name that is not configured for GSLB, then the GSLB sends the reply unaltered to the inquiring client. | 01-14-2010 |
20100017535 | Method and System for Transparent TCP Offload (TTO) with a User Space Library - Certain aspects of a method and system for transparent TCP offload with a user space library are disclosed. Aspects of a method may include collecting TCP segments in a network interface card (NIC) without transferring state information to a host system. When an event occurs that terminates the collection of TCP segments, a single aggregated TCP segment based on the collected TCP segments may be generated. The aggregated TCP segment may be posted directly to a user space library, bypassing kernel processing of the aggregated TCP segment. | 01-21-2010 |
20100030914 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR THROTTLING TRAFFIC TO AN INTERNET PROTOCOL (IP) NETWORK SERVER USING ALIAS HOSTNAME IDENTIFIERS ASSIGNED TO THE IP NETWORK SERVER WITH A DOMAIN NAME SYSTEM (DNS) - Methods, systems, and computer readable media for throttling traffic to an IP network server using alias hostname identifiers assigned to the IP network server with a domain name system are disclosed. One method includes maintaining a plurality of weight values and corresponding alias hostname identifiers for the IP network server that are associated with the IP network server in a DNS system. The method further includes throttling network traffic sent to an IP network server by sending, from the IP network server, messages to nodes that send the traffic to the IP network server, where the messages selectively enable or disable traffic flow to the individual alias hostnames. | 02-04-2010 |
20100070648 | TRAFFIC GENERATOR AND METHOD FOR TESTING THE PERFORMANCE OF A GRAPHIC PROCESSING UNIT - The present invention relates to a traffic generator and a method for testing the performance of the memory system of a graphic processing unit. The traffic generator comprises: at least one simulated engine module, each for generating at least one read stream and/or at least one write stream; and an output arbiter for selecting a stream to be output from a group comprising the at least one read stream and/or the at least one write stream; wherein the selected stream is arranged to be output to the memory system of the graphic processing unit. | 03-18-2010 |
20100082839 | SMART LOAD BALANCING FOR CALL CENTER APPLICATIONS - Methods, devices, and systems for smart load balancing are provided. SIP Requests destined for a particular AOR are delivered to one of several registered contact addresses according to an associated availability score stored in the routing element's contact resolution table. The availability score is periodically updated by the contact entity itself using the SIP PUBLISH mechanism to push the score to the routing element. | 04-01-2010 |
20100095021 | SYSTEMS AND METHODS FOR ALLOCATING BANDWIDTH BY AN INTERMEDIARY FOR FLOW CONTROL - The present disclosure is directed towards systems and methods for allocating a bandwidth credit or an annuity of bandwidth credit to a sender by an intermediary deployed between the sender and a receiver. The sender may be allocated a bandwidth credit or an annuity of bandwidth credit which may identify an amount of data the sender may transmit over a predetermined time period to the receiver, via the intermediary. The intermediary may determine an allocation of a one-time bandwidth credit based on the determination that a difference between the rate of transmission of the sender and the bandwidth usage of the sender falls below a predetermined threshold of the bandwidth credit. The intermediary may determine an annuity of bandwidth credit based on a determination that a difference between the bandwidth usage of the sender over the annuity period and the annuity of bandwidth credit exceeds a predetermined threshold. | 04-15-2010 |
20100146143 | System and Method for Analyzing Data Traffic - A method of analyzing data traffic includes receiving a request at a data analysis system to store a string related to header information associated with a data packet. The method also includes applying a hash function to the string, thereby obtaining a 32-bit intermediate, and applying another hash function to the 32-bit intermediate, thereby obtaining a hash number. Further, the method includes storing the string in an array position corresponding to the hash number, when the array position is empty. | 06-10-2010 |
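The two-stage hashing in the entry above, a first hash to a 32-bit intermediate, a second hash to an array slot, storing only when the slot is empty, can be sketched as below. Python's built-in `hash` and a Knuth-style multiplicative second hash stand in for the patent's unspecified hash functions.

```python
def store_string(array, s):
    """Store header string `s` at the slot given by two chained hashes.

    A first hash reduces the string to a 32-bit intermediate; a second
    hash maps that intermediate to an array index.  The string is stored
    only if the array position is empty, as in the abstract.
    """
    intermediate = hash(s) & 0xFFFFFFFF                    # 32-bit intermediate
    index = (intermediate * 2654435761) % (2 ** 32) % len(array)  # second hash
    if array[index] is None:
        array[index] = s
        return index
    return None  # slot already occupied (collision or duplicate)
```

Note that `hash()` is randomized per Python process, so indices vary between runs; a real traffic analyzer would use a stable hash such as CRC32.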
20100146144 | Method and System for Determining Overall Content Values for Content Elements in a Web Network and for Optimizing Internet Traffic Flow Through the Web Network - Disclosed is a method for optimizing internet traffic flow through a web network including the steps of collecting content data corresponding to the content elements, determining a revenue value for each content element, calculating an overall content value for each content element based on the corresponding revenue value and revenue generated from subsequent traffic flow of a user during a visit to the web network, and modifying the web network based on the overall content value and the content data, so as to maximize the value of the web network. Also disclosed is a system for determining overall content values for a plurality of content elements including an analytic server for receiving content data corresponding to the content elements, and a processor for determining a revenue value for each content element, and to calculate an overall content value for each content element based on the corresponding revenue value and revenue generated from subsequent traffic flow of a user during a visit to the web network. | 06-10-2010 |
20100153579 | Flow Control of Events Based on Threshold, Grace Period, and Event Signature - A method for controlling sender events arriving at a recipient system is provided. An event transmitted from a sender is received at the recipient system, and an event signature is determined. An elapse time between the received event and a previous event is calculated. If the elapse time is less than or equal to a critical time, it is determined whether a counter is equal to or greater than a maximum value. If so, the event is rejected. If not, the counter is incremented and the event is processed. If the elapse time is greater than the critical time, it is determined whether the elapse time is less than or equal to a grace period and the counter is greater than zero. If so, the counter is decremented and the event is processed. If not, the counter is set to zero and the event is processed. The critical time, maximum value, and increment/decrement factor are set based on the event signature. | 06-17-2010 |
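The counter logic in the entry above follows the abstract step by step and can be sketched directly. The per-signature state is kept in a plain dict here; a real implementation would hold one such record per event signature.

```python
def handle_event(state, now, critical_time, grace_period, max_count):
    """Decide whether to process or reject an event arriving at `now`.

    `state` holds the previous event time and the running counter for one
    event signature.  Arrivals within the critical time increment the
    counter and are rejected once it reaches the maximum; arrivals within
    the grace period decrement it; slower arrivals reset it to zero.
    """
    elapsed = now - state["last"]
    state["last"] = now
    if elapsed <= critical_time:
        if state["count"] >= max_count:
            return "reject"                  # burst exceeded the threshold
        state["count"] += 1
        return "process"
    if elapsed <= grace_period and state["count"] > 0:
        state["count"] -= 1                  # sender is slowing down
        return "process"
    state["count"] = 0                       # well-behaved: reset
    return "process"
```

A burst of three events 0.1 s apart against a 1 s critical time and a maximum of 2 sees the third event rejected; a later event inside the 5 s grace period is processed and works the counter back down.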
20100161831 | OPTIMIZING CONTENT AND COMMUNICATION IN MULTIACCESS MOBILE DEVICE EXHIBITING COMMUNICATION FUNCTIONALITIES RESPONSIVE TO TEMPO SPATIAL PARAMETERS - A content and traffic managing system operatively associated with, and a computer implemented method of managing traffic of, a mobile device exhibiting communication functionality. The mobile device is connectable to users and to content providers via communication links. The system tracks various parameters over time, and schedules communication, both in relation to predefined or projected content, responsive to the following: users' content related behavior, users' communication behavior, users' external behavior, and parameters of communication links. The method comprises: (i) tracking users' content related behavior, communication behavior and users' external behavior over time; (ii) tracking parameters of communication links over time; (iii) scheduling and initiating communication related to predefined or projected content responsive to the above mentioned criteria at time slots selected such that the communication is performed in view of users' predefined or projected preferences in accordance with the parameters of communication links. | 06-24-2010 |
20100174826 | Information gathering system and method - A system and a method for gathering information in a secure and efficient manner is provided. A two-level security procedure ensures that communication occurs only between authorized parties. Communications between parties are according to the XML convention, which enables the parties to communicate or transfer information with each other even if they use incompatible communications systems. Communications may occur synchronously or asynchronously depending on predetermined parameters, such as the complexity of the communication and the amount of information being communicated or transferred. | 07-08-2010 |
20100211692 | GRACEFUL DEGRADATION FOR COMMUNICATION SERVICES OVER WIRED AND WIRELESS NETWORKS - A method for gracefully extending the range and/or capacity of voice communication systems is disclosed. The method involves the persistent storage of voice media on a communication device. When the usable bit rate on the network is poor and below that necessary for conducting a live conversation, voice media is transmitted and received by the communication device at the available usable bit rate on the network. Although latency may be introduced, the persistent storage of both transmitted and received media of a conversation provides the ability to extend the useful range of wireless networks beyond what is required for live conversations. In addition, capacity is increased and robustness against external interference is improved for both wired and wireless communications. | 08-19-2010 |
20100223397 | METHOD AND SYSTEM FOR VIRTUAL MACHINE NETWORKING - Aspects of a method and system for networking are provided. In this regard, one or more circuits and/or processors in a network adapter of a first network device may determine whether to communicate traffic between virtual machines running on the first network device via a path that resides solely within the first network device, or via a path that comprises a second network device that is external to the first network device. The determination may be based, at least in part, on characteristics of the traffic. The determination may be based, at least in part, on capabilities and/or available resources of the network adapter. The determination may be based, at least in part, on management information exchanged between the one or more circuits and/or processors and one or more of: software running on the first network device, the second network device, and a third network device. | 09-02-2010 |
20100235538 | CONTROL OF PREEMPTION-BASED BEAT-DOWN EFFECT - In one embodiment, a node determines an overload ratio for an output as a ratio of a total rate of received traffic at the output to a preemption threshold of the output. The node also determines a ratio of traffic that is to be marked at the output based on the overload ratio and a ratio of previously marked traffic destined for the output from each input to the total traffic from each input to the output, and whether, for a particular input, the ratio of previously marked traffic is less than the ratio of traffic that is to be marked at the output. If so, the node marks unmarked traffic of the particular input corresponding to a difference between the ratio of traffic that is to be marked at the output and the ratio of previously marked traffic destined for the output from the particular input. | 09-16-2010 |
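The top-up marking rule in the abstract above can be sketched as follows. The abstract does not give the exact function mapping the overload ratio to a target marking ratio, so the `1 - 1/overload` form used here is an assumption, as are all names; the key point it illustrates is that each input only marks the *difference* between the target ratio and what was already marked upstream.

```python
def additional_marking(total_rate, preempt_threshold, inputs):
    """Sketch of the anti-beat-down marking described above (names assumed).

    `inputs` maps input name -> (rate_to_output, already_marked_ratio).
    Returns the target marked-traffic ratio for the output and, per input,
    the absolute rate of previously unmarked traffic to mark now. Inputs
    already marked at or above the target mark nothing extra, which
    prevents repeated hops from "beating down" the same flows.
    """
    overload = total_rate / preempt_threshold
    # Assumed target function: mark just the excess over the threshold.
    target = max(0.0, 1.0 - 1.0 / overload) if overload > 1.0 else 0.0
    extra = {}
    for name, (rate, marked) in inputs.items():
        extra[name] = max(0.0, target - marked) * rate
    return target, extra
```

For example, at 150% of the preemption threshold the target ratio is 1/3; an input arriving half-marked contributes no new markings.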
20100241760 | Web Front-End Throttling - A server computer includes a performance monitor module and a throttling logic module. The performance monitor module includes performance monitors that monitor system parameters of the server computer. The throttling logic module determines whether a system parameter monitored by a performance monitor exceeds a predetermined threshold. When a system parameter exceeds a predetermined threshold, the throttling logic module sets a throttling flag. The throttling logic module activates throttling at the server computer when at least one throttling flag is set for each of a predetermined number of time snapshots. The activation of throttling limits the processing of request messages received by the server computer. | 09-23-2010 |
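The snapshot-based activation described above can be sketched as follows. Thresholds, window size, and all names are illustrative assumptions; the point is that throttling activates only when a flag is set in every one of a predetermined number of consecutive snapshots, filtering out momentary spikes.

```python
from collections import deque

class Throttler:
    """Sketch of the web front-end throttling logic described above."""

    def __init__(self, thresholds, window=3):
        self.thresholds = thresholds          # e.g. {"cpu": 0.9, "mem": 0.8}
        self.flags = deque(maxlen=window)     # one flag per time snapshot

    def snapshot(self, metrics):
        """Record one time snapshot of monitored metrics; return whether
        throttling is now active."""
        flagged = any(metrics.get(name, 0.0) > limit
                      for name, limit in self.thresholds.items())
        self.flags.append(flagged)
        return self.throttling_active()

    def throttling_active(self):
        # Active only when the window is full and every snapshot was flagged.
        return len(self.flags) == self.flags.maxlen and all(self.flags)
```

A single healthy snapshot inside the window immediately deactivates throttling again.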
20100287303 | NETWORK TRAFFIC RATE LIMITING SYSTEM AND METHOD - A system and method are provided for rate limiting the network traffic flow of an untrusted application. A master module in a server environment manages network traffic flow restrictions. A slave module executes client applications in the server environment. A services module in the server environment executes a trusted application to validate the client application to the master module. A traffic restriction module on the master module sets network traffic restrictions when validation has not been received for the client application on the slave module, and receives client application validations from the trusted application to unrestrict network traffic flow for the client application on the slave module. | 11-11-2010 |
20100318675 | METHOD OF SENDING DATA AND ASSOCIATED DEVICE - A method of sending a data stream of video images between a server and at least one client device in a communication network, employing a rate setting for the sending of data over the communication network, the method comprising the following steps: | 12-16-2010 |
20100332678 | SMART NAGLING IN A TCP CONNECTION - An approach is provided to improve network efficiency. A send segment size, such as a maximum segment size (MSS), is determined that corresponds to data segments being sent to a receiver over a computer network. A data block that includes more than one data segment is identified in a send buffer. Based on the determined send segment size, all but a remaining data segment of the data segments are sent to the receiver. The sent data segments are each the determined send segment size, and the remaining data segment is smaller than the send segment size. The remaining data segment is sent to the receiver in response to identifying that the remaining data segment is a portion of the data block. | 12-30-2010 |
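The segmentation step described above can be sketched as follows (function name and types are illustrative). Classic Nagle would delay the sub-MSS tail waiting for more data; here the sender knows the tail completes an identified block, so it can be flushed immediately.

```python
def split_block(block, mss):
    """Split a buffered data block into full MSS-sized segments plus a
    smaller remainder. Because the remainder is known to be the final
    portion of the block, it can be sent at once rather than held back
    by Nagle's algorithm."""
    segments = [block[i:i + mss] for i in range(0, len(block), mss)]
    full, remainder = segments, b""
    if segments and len(segments[-1]) < mss:
        full, remainder = segments[:-1], segments[-1]
    return full, remainder
```

For a 10-byte block and a 4-byte MSS this yields two full segments and a 2-byte remainder that is flushed without waiting.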
20110004698 | Defining Network Traffic Processing Flows Between Virtual Machines - Network devices include hosted virtual machines and virtual machine applications. Hosted virtual machines and their applications implement additional functions and services in network devices. Network devices include data taps for directing network traffic to hosted virtual machines and allowing hosted virtual machines to inject network traffic. Network devices include unidirectional data flow specifications, referred to as hyperswitches. Each hyperswitch is associated with a hosted virtual machine and receives network traffic received by the network device from a single direction. Each hyperswitch processes network traffic according to rules and rule criteria. A hosted virtual machine can be associated with multiple hyperswitches, thereby independently specifying the data flow of network traffic to and from the hosted virtual machine from multiple networks. The network device architecture also enables the communication of additional information between the network device and one or more virtual machine applications using an extended non-standard network protocol. | 01-06-2011 |
20110004699 | METHOD, SYSTEM AND SESSION CONTROL UNIT FOR RELIEVING NETWORK TRAFFIC - The present invention relates to the Internet field, and discloses a method, system and session control unit for relieving network traffic. The method includes: receiving a content request message; searching a network for data corresponding to content requested by the content request message, and selecting a content replication point according to a search result; and instructing the selected content replication point to replicate and forward the data to a requesting node. The session control unit includes a receiving unit, a searching unit, a selecting unit, and an instruction sending unit. The system includes a session control unit and a content replication point. In the technical solution under the present invention, a content replication point is selected intelligently by identifying the user request information, and the selected content replication point is responsible for replicating and forwarding the data, thus relieving the network traffic and load. | 01-06-2011 |
20110022722 | POLICY AND CHARGING CONTROL ARCHITECTURE - Apparatus for generating policy and charging rules to control IP flows across a packet switched network. The apparatus comprises a first interface for coupling to an application function and a second interface for coupling to a policy and charging enforcement function implemented at a node through which said IP flows pass. A processor or processors is/are configured to receive service information at said first interface, and, via said second interface to trigger the removal of one or more pre-existing policy and charging rules at the policy and charging enforcement function following a predefined delay. | 01-27-2011 |
20110029686 | Capacity Management - Capacity management is described. In an implementation, a method includes executing a module on a computing device to monitor use of a product during a measurement interval to determine a maximum capacity of the product used during the measurement interval and whether a capacity failure point is reached during the measurement interval. A learned capacity limit is set based on the monitoring for determining whether one or more of a plurality of clients, if any, are to receive a list which references at least the monitored product, wherein the learned capacity limit is set such that when the capacity failure point is not reached, the learned capacity limit is set according to the maximum capacity. | 02-03-2011 |
20110035508 | AUTOMATIC TRANSPORT DISCOVERY FOR MEDIA SUBMISSION - Systems and methods for transporting media content data over a network to a media submission system are disclosed. A client media submission program may be provided that supports media submission to the media submission system using a plurality of transport mechanisms. One of the transport mechanisms to be utilized for the media submission may be determined based at least in part on configuration criteria. The media content data may be submitted over the network to the media submission system using the determined one of the transport mechanisms. | 02-10-2011 |
20110040892 | LOAD BALANCING APPARATUS AND LOAD BALANCING METHOD - A load balancing apparatus stores a transfer rule in which a path control identifier for identifying a path for a message sent from a client device is associated with relay device information for specifying a relay device that creates the path control identifier. When receiving a message from the client device, the load balancing apparatus determines whether the message contains the path control identifier. If the load balancing apparatus determines that the path control identifier is contained, the load balancing apparatus specifies, from the transfer rule, relay device information with which the path control identifier is associated and then sends the message to a relay device that is specified by the specified relay device information. In contrast, if the load balancing apparatus determines that the path control identifier is not contained, the load balancing apparatus sends the message to the relay device specified in accordance with a predetermined condition. | 02-17-2011 |
20110047287 | SYSTEMS AND METHODS FOR OPTIMIZING MEDIA CONTENT DELIVERY BASED ON USER EQUIPMENT DETERMINED RESOURCE METRICS - A user equipment for optimizing a media content delivery based on a state of resident resources. The user equipment may include a memory component having a resource manager application stored therein, one or more processor components, a resident power source, and a transceiver. The resource manager is configured to determine one or more device resource metrics, compare the device resources metric(s) to one or more corresponding device resource thresholds(s), and then generate an instruction to throttle a media content delivery when it is determined that at least one resource metric has exceeded a resource threshold value or that a local policy metric has achieved a local policy threshold. | 02-24-2011 |
20110072152 | APPARATUS AND METHOD FOR RECEIVING DATA - An apparatus and method for receiving data over a network are provided. The data reception apparatus may include a receiver, a congestion decision unit and a suspension session selector. The receiver is configured to receive segments of data, using sessions corresponding to data transmission apparatuses. The congestion decision unit is configured to determine whether a network to be utilized by a corresponding segment is congested, based on a status of each of the sessions. The suspension session selector is configured to select a suspension session from the sessions where the network is determined to be congested. | 03-24-2011 |
20110087799 | Flyways in Data Centers - Described is a technology by which additional network communications capacity is provided to an oversubscribed base network where needed, through the use of dynamically provisioned communications links referred to as flyways. A controller detects a need for additional network communications capacity between two network machines, e.g., between two racks of servers with top-of-rack switches. The controller configures flyway mechanisms (e.g., one per rack) to carry at least some of the network traffic between the machines of the racks and thereby provide the additional network communications capacity. The flyway mechanisms may be based on any wireless or wired technologies, including 60 GHz technology, optical links, 802.11n or wired commodity switches. | 04-14-2011 |
20110099290 | METHOD FOR DETERMINING METRICS OF A CONTENT DELIVERY AND GLOBAL TRAFFIC MANAGEMENT NETWORK - A method for determining metrics of a content delivery and global traffic management network provides service metric probes that determine the service availability and metric measurements of types of services provided by a content delivery machine. Latency probes are also provided for determining the latency of various servers within a network. Service metric probes consult a configuration file containing each DNS name in its area and the set of services. Each server in the network has a metric test associated with each service supported by the server which the service metric probes periodically performs metric tests on and records the metric test results which are periodically sent to all of the DNS servers in the network. DNS servers use the test result updates to determine the best server to return for a given DNS name. The latency probe calculates the latency from its location to a client's location using the round trip time for sending a packet to the client to obtain the latency value for that client. The latency probe updates the DNS servers with the clients' latency data. The DNS server uses the latency test data updates to determine the closest server to a client. | 04-28-2011 |
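The DNS selection step described above, where a server uses latency-probe updates to pick the closest server for a client, can be sketched as follows. The data layout and function name are assumptions for illustration.

```python
def closest_server(latency_tables, client):
    """Sketch of latency-based server selection as described above.

    `latency_tables` maps server name -> {client: measured latency in ms},
    as reported by latency probes (half the probed round-trip time, say).
    Servers with no measurement for the client are treated as infinitely
    far away."""
    return min(latency_tables,
               key=lambda server: latency_tables[server].get(client, float("inf")))
```

The analogous service-metric updates would filter this candidate set down to servers whose metric tests currently pass before the latency comparison is made.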
20110119397 | Protection of network flows during congestion in a communications network - In one embodiment, an apparatus includes a processor for mapping packets associated with network flows to policy profiles independent of congestion level at the apparatus, and enforcing the policy profiles for the packets based on a congestion state. Packets associated with the same network flow are mapped to the same policy profile and at least some of the network flows are protected during network congestion. The apparatus further includes memory for storing the policy profiles. A method for protecting network flows during network congestion is also disclosed. | 05-19-2011 |
20110125920 | INTELLIGENT COMPUTER NETWORK ROUTING USING LOGICALLY CENTRALIZED, PHYSICALLY DISTRIBUTED SERVERS DISTINCT FROM NETWORK ROUTERS - A route control architecture allows a network operator to flexibly control routing between the traffic ingresses and egresses in a computer network, without modifying existing routers. An intelligent route service control point (IRSCP) replaces distributed BGP decision processes of conventional network routers with a route computation that is flexible and logically centralized but physically distributed. One embodiment supplements the traditional BGP decision process with a ranking decision process that allows route-control applications to explicitly rank traffic egresses on a per-destination, per-router basis. A straightforward set of correctness requirements prevents routing anomalies in implementations that are scalable and fault-tolerant. | 05-26-2011 |
20110138072 | Server-Side Rendering - A solution for server-side rendering includes, at a server configured to store video images representing states of users in a computer application, identifying future user actions based at least in part on a state of a user in the computer application. The solution also includes, responsive to the identifying, rendering video images for sending to a user device associated with the user. At the user device, a state of the user in a computer application is sent to the server. Responsive to the sending, video images are stored, each of the video images representing a future state of the user after the user performs a future action. Responsive to a user action, one of the video images is selected for display on a user display of the user device. According to one aspect, the future user actions identified by the server are limited to less than a possible number of user actions for users having the state. | 06-09-2011 |
20110153863 | DISTRIBUTING BANDWIDTH ACROSS COMMUNICATION MODALITIES - Embodiments are configured to provide communication environments to communicating participants using a number of modality control features, but are not so limited. In an embodiment, a system includes a communications manager to manage an amount of available communication bandwidth to a number of communication modalities that include an audio modality, a video modality, an application sharing modality, and/or a file transfer modality. In one embodiment, available bandwidth can be distributed by controlling an audio state, a video state, an application sharing state, and/or a file transfer state, including using first and second distribution ratios as part of allocating available bandwidth. | 06-23-2011 |
20110219141 | Modification of Small Computer System Interface Commands to Exchange Data with a Networked Storage Device Using AT Attachment Over Ethernet - A process executed by a computing device uses commands having a first format to exchange data through a network with a storage device configured to execute commands having a second format. A storage device controller identifies a command type associated with a command received from the process and identifies one or more physical memory addresses associated with the command. The storage device controller identifies a command having a second format associated with the received command and generates a network request including the command having the second format, the one or more physical memory addresses, a device identifier associated with the storage device and a tag. The network request is transmitted through a network to the storage device which executes the command having the second format. For example, an AoE request including an ATA command is generated from a received SCSI command. | 09-08-2011 |
20110219142 | Path Selection In Streaming Video Over Multi-Overlay Application Layer Multicast - A method and a tool based on achievable bandwidth as a metric are provided for selecting paths for overlay construction in an application layer multicast system. An in-band bandwidth probing tool according to the invention can estimate achievable bandwidth, i.e., the data throughput that can be realized between two peers over the transport protocol employed. The tool can determine the amount of extra bandwidth available in the target network path so that excess data traffic can be diverted from congested path without causing new congestion in the target path. | 09-08-2011 |
20110246666 | Method and System for Transparent TCP Offload (TTO) with a User Space Library - Certain aspects of a method and system for transparent TCP offload with a user space library are disclosed. Aspects of a method may include collecting TCP segments in a network interface card (NIC) without transferring state information to a host system. When an event occurs that terminates the collection of TCP segments, a single aggregated TCP segment based on the collected TCP segments may be generated. The aggregated TCP segment may be posted directly to a user space library, bypassing kernel processing of the aggregated TCP segment. | 10-06-2011 |
20110264821 | Method And Devices For Performing Traffic Control In Telecommunication Networks - A method is disclosed for performing traffic control in a network, the network comprising at least one link, the method comprising: —measuring the data traffic rate, the data traffic comprising at least one data flow, at at least one link which carries the data traffic; —defining a first and a second threshold value, the second threshold value being larger than the first threshold value; —determining whether the measured data rate is larger than the first threshold value; and if so, starting congestion signaling of a first type, —determining whether the data rate is larger than the second threshold value; and if so, starting congestion signaling of a second type, wherein at least one of the first and the second threshold values are modified over time, based on data traffic information. | 10-27-2011 |
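The two-threshold scheme described above can be sketched as follows. The abstract only says the thresholds are "modified over time, based on data traffic information", so the EWMA adaptation shown is one assumed realization; signal names and the smoothing factor are illustrative.

```python
def congestion_signals(rate, low, high):
    """Return which congestion-signalling types are active for a measured
    link data rate, per the two-threshold rule described above."""
    signals = []
    if rate > low:
        signals.append("type1")       # first, milder signalling type
    if rate > high:
        signals.append("type2")       # second, stronger signalling type
    return signals

def adapt_threshold(threshold, observed_rate, alpha=0.1):
    """Assumed threshold adaptation: drift a threshold toward recently
    observed traffic with an exponentially weighted moving average."""
    return (1 - alpha) * threshold + alpha * observed_rate
```

A rate between the two thresholds thus triggers only the first signalling type; only crossing the higher threshold escalates to both.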
20110264822 | FILTERING AND ROUTE LOOKUP IN A SWITCHING DEVICE - Methods and devices for processing packets are provided. The processing device may include an input interface for receiving data units containing header information of respective packets; a first module configurable to perform packet filtering based on the received data units; a second module configurable to perform traffic analysis based on the received data units; a third module configurable to perform load balancing based on the received data units; and a fourth module configurable to perform route lookups based on the received data units. | 10-27-2011 |
20110276715 | Communicating Data In Flows Between First And Second Computers Connected Over A Network - A network arrangement includes a first computer connected over a network to a second computer. Data over plural flows is communicated over the network between the first computer and second computer, where the second computer has a resource remotely accessible by the first computer over the data network, and where the second computer has a device driver to receive user input at a user input device attached to the first computer. According to different priorities assigned to the corresponding plural flows, at least a first one of the plural flows of data is caused to be throttled such that at least a second one of the plural flows is provided a greater portion of a bandwidth of the network, where the second flow is used for communication of data related to remote access of the resource of the second computer by the first computer. | 11-10-2011 |
20110283016 | LOAD DISTRIBUTION SYSTEM, LOAD DISTRIBUTION METHOD, APPARATUSES CONSTITUTING LOAD DISTRIBUTION SYSTEM, AND PROGRAM - A load distribution system that can further reduce load on a single apparatus and a network and that can distribute load for each process. The load distribution system of the present invention comprises: at least one packet forwarding apparatus comprising a packet forwarding unit that forwards a packet by using a forwarding rule sent from a flow control apparatus and a flow end check unit that detects a flow end; a load distribution apparatus that determines a load distribution destination from among a plurality of service provider servers by referring to a flow end notification sent from the flow control apparatus; and a flow control apparatus comprising a flow route setting unit that determines a forwarding route for a flow using a service provider server determined by the load distribution apparatus and notifies a packet forwarding apparatus on the forwarding route of a forwarding rule realizing the forwarding route and a flow end determination unit that notifies the load distribution apparatus of a flow end, based on flow end information detected by the packet forwarding apparatus. | 11-17-2011 |
20110302320 | SYSTEMS AND METHODS FOR NETWORK CONTENT DELIVERY - A content delivery system including a subscriber controller and cache, a source controller configured to transmit content to the subscriber controller and cache via a multicast transmission; and a network content delivery controller (NCDC) in communication with the subscriber controller and cache and source controller. A control plane is used to communicate the delivery of control information using Extensible Messaging and Presence Protocol (XMPP) between the subscriber controller and cache, source controller, and NCDC. | 12-08-2011 |
20120005368 | ADAPTIVE BIT RATE METHOD AND SYSTEM USING RETRANSMISSION AND REPLACEMENT - An adaptive method and system for dynamically facilitating access to higher quality content in the event transport of the higher quality content requires a greater allocation of network resources when compared to transport of the same content at a lower quality. | 01-05-2012 |
20120072612 | Method and an Arrangement of Identifying Traffic Flows in a Communication Network - A method of identifying traffic flows is provided in a traffic generating node, each traffic flow being associated with an application process running on the traffic generating node. The method performs a mapping operation such that an application process is linked to a signature that uniquely identifies a traffic flow and an associated socket, and such that the obtained linked information is maintained in a list. The mapping operation is executed in response to recognising a change to a socket associated with the application process at the traffic generating node. Based on accumulated mapping information, one or more processing elements located at the traffic generating node, or at another node, may classify and/or control traffic flows associated with any of the application processes of the traffic generating node. | 03-22-2012 |
20120102217 | Multi-Adapter Link Aggregation for Adapters with Hardware Based Virtual Bridges - Mechanisms for providing a network adapter and functionality for performing link aggregation within a network adapter are provided. With these mechanisms, a network adapter is provided that includes a plurality of physical network ports for coupling to one or more switches of a data network and a link aggregation module, within the network adapter, and coupled to the plurality of physical network ports. The link aggregation module comprises logic for aggregating links associated with the plurality of physical network ports into a single virtual link. The link aggregation module interfaces with a virtual Ethernet bridge (VEB) of the network adapter to send data to the VEB and receive data from the VEB. | 04-26-2012 |
20120131222 | ELEPHANT FLOW DETECTION IN A COMPUTING DEVICE - Example embodiments relate to elephant flow detection in a computing device. In example embodiments, a computing device may monitor a socket for a given flow. The computing device may then determine whether the flow is an elephant flow based on the monitoring of the socket. If so, the computing device may signal the network that transmits the flow that the flow is an elephant flow. | 05-24-2012 |
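The socket-side classification described above can be sketched as follows. The abstract does not define what makes a flow an elephant, so the volume and rate thresholds here are assumptions, as is the function name; the detection point (the sending host's socket, not a mid-network sampler) is the part taken from the abstract.

```python
def is_elephant(bytes_sent, duration_s,
                rate_threshold=1_000_000,      # assumed: 1 MB/s sustained
                min_bytes=10_000_000):         # assumed: 10 MB moved
    """Sketch of host-side elephant-flow detection described above.

    A flow qualifies once it has moved a large volume at a sustained
    rate; the host can then signal the network so the flow gets routed
    or scheduled differently from short "mice" flows."""
    if duration_s <= 0:
        return False
    return (bytes_sent >= min_bytes
            and bytes_sent / duration_s >= rate_threshold)
```

Monitoring at the socket sees per-flow byte counts directly, avoiding the sampling error of in-network detection.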
20120131223 | Object-Based Transport Protocol - Methods and apparatuses are provided that facilitate providing an object-based transport protocol that allows transmission of arbitrarily sized objects over a network protocol layer. The object-based transport protocol can also provide association of metadata with the objects to control communication thereof, and/or communication of response objects. Moreover, the object-based transport protocol can maintain sessions with remote network nodes that can include multiple channels, which can be updated over time to seamlessly provide mobility, increased data rates, and/or the like. In addition, properties can be modified remotely by network nodes receiving objects related to the properties. | 05-24-2012 |
20120151088 | RECEIVE WINDOW AUTO-TUNING - Methods of tuning a receive window. A receiving device and a sending device may be in communication over a network. The receiving device may advertise a receive window to the sending device. The size of the receive window may be adjusted over time based on one or more connection parameters, application parameters and/or operating system parameters. | 06-14-2012 |
20120158989 | SYSTEM AND METHOD FOR PROVIDING ARGUMENT MAPS BASED ON ACTIVITY IN A NETWORK ENVIRONMENT - A method is provided in one example and includes receiving network traffic associated with a first user and a second user; evaluating keywords in the network traffic in order to identify a topic of discussion involving the first and the second users; determining a first sentiment associated with a first data segment associated with the first user; determining a second sentiment associated with a second data segment associated with the second user; and generating an argument map based on the first data sentiment and the second data sentiment. | 06-21-2012 |
20120203925 | METHOD AND APPARATUS FOR PROVIDING MEDIA MIXING WITH REDUCED UPLOADING - A method for providing media mixing with reduced uploading may include receiving device situation description data and content analysis data from each of a plurality of devices. The device situation description data and content analysis data received from each of the plurality of devices may be descriptive of media data associated with a common event and recorded separately at respective ones of the plurality of devices. The method may further include determining media segments defining one or more portions of the media data to be requested from selected ones of the plurality of devices based on the device situation description data and content analysis data, causing communication of a request for corresponding ones of the media segments to respective devices among the selected ones of the plurality of devices, and causing generation of mixed content based on receipt of the media segments. A corresponding apparatus and user terminal-side method and apparatus are also provided. | 08-09-2012 |
20120215936 | Feedback-Based Internet Traffic Regulation for Multi-Service Gateways - A method for regulating network traffic may be provided. The method may comprise: measuring usage of a CPU; determining if the CPU usage is greater than an overload threshold value; halting the increase of a data traffic shaping rate associated with traffic regulated by the CPU if the CPU usage is greater than the overload threshold value; determining if the CPU usage is greater than an overflow threshold value; and decreasing the data traffic shaping rate associated with traffic regulated by the CPU if the CPU usage is greater than the overflow threshold value for improving session setup speed. | 08-23-2012 |
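The feedback rule described above can be sketched as follows. The overload and overflow thresholds, the ramp step, and the cut-back factor are illustrative assumptions; the three-band behavior (ramp up, halt, decrease) is what the abstract describes.

```python
def adjust_shaping_rate(rate, cpu_usage,
                        overload=0.8, overflow=0.95,   # assumed thresholds
                        step=0.05, decrease=0.9):       # assumed factors
    """Sketch of CPU-feedback traffic shaping described above.

    Below the overload threshold the shaping rate keeps ramping up;
    between overload and overflow the ramp is halted; above overflow
    the rate is cut back to relieve the CPU."""
    if cpu_usage > overflow:
        return rate * decrease        # overflow: actively decrease
    if cpu_usage > overload:
        return rate                   # overload: halt the increase
    return rate * (1 + step)          # normal: keep increasing
```

Halting before decreasing gives the gateway a stable band in which session setup can proceed without the rate oscillating.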
20120215937 | NETWORK BOTTLENECKS - A method comprises receiving a request for a network connection and determining if the requested network connection is available. Based on the network connection not being available, the method comprises incrementing a counter. Based on the counter exceeding a threshold value, the method comprises setting a status indicating a bottleneck condition and further responding to the status indicative of the bottleneck condition. | 08-23-2012 |
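The counter-and-threshold scheme described above can be sketched as follows; the class name, threshold value, and the choice to clear the counter on a successful connection are illustrative assumptions.

```python
class BottleneckDetector:
    """Sketch of the bottleneck detection described above.

    Each unavailable connection increments a counter; once the counter
    exceeds the threshold, a bottleneck status is set for callers to
    respond to (e.g. by shedding or redirecting load)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.bottleneck = False

    def request_connection(self, available):
        """Handle one connection request; `available` says whether the
        requested network connection could be granted."""
        if available:
            self.failures = 0          # assumed: a success clears the streak
            self.bottleneck = False
            return True
        self.failures += 1
        if self.failures > self.threshold:
            self.bottleneck = True
        return False
```

Requiring the counter to exceed the threshold, rather than flagging the first failure, keeps isolated misses from raising the bottleneck status.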
20120265898 | ADJUSTING THE QUALITY OF SERVICE BASED ON NETWORK ADDRESSES ASSOCIATED WITH A MOBILE DEVICE - Implementations and techniques for adjusting the quality of service on an application-by-application basis based at least in part on a plurality of network addresses associated with a given mobile device are generally discussed. | 10-18-2012 |
20120271964 | Load Balancing for Network Devices - In one embodiment, an electronic device receives a request; obtains a current state from each of a plurality of electronic devices; and selects one of the plurality of electronic devices to service the request based on the current state of each of the plurality of electronic devices. The current state of each of the plurality of electronic devices is one of a plurality of states in a state model. Each of the plurality of states in the state model indicates a discrete level of workload for the plurality of electronic devices. | 10-25-2012 |
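The selection step described above can be sketched as follows. The particular discrete levels in the state model are assumptions (the abstract only says each state indicates a discrete level of workload), as are the names.

```python
from enum import IntEnum

class Load(IntEnum):
    """Assumed discrete workload levels of the state model."""
    IDLE = 0
    BUSY = 1
    OVERLOADED = 2

def select_device(states):
    """Sketch of the load-balancing choice described above: obtain the
    current state of each candidate device and route the request to the
    one reporting the lowest workload level. `states` maps device
    name -> Load; ties go to the first device encountered."""
    return min(states, key=states.get)
```

Comparing coarse states rather than raw metrics keeps the per-request polling and comparison cheap.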
20120303835 | Implementing EPC in a Cloud Computer with Openflow Data Plane - A method for implementing a control plane of an evolved packet core (EPC) of a third generation partnership project (3GPP) long term evolution (LTE) network in a cloud computing system, including initializing the plurality of control plane modules of the EPC within the controller, each control plane module in the plurality of control plane modules initialized as a separate virtual machine by the cloud manager; monitoring resource utilization of each control plane module and the control plane traffic handled by each control plane module; detecting a threshold level of resource utilization or traffic load for one of the plurality of control plane modules of the EPC; and initializing a new control plane module as a separate virtual machine by the cloud manager in response to detecting the threshold level, the new control plane module to share the load of the one of the plurality of control plane modules. | 11-29-2012 |
20120317306 | Statistical Network Traffic Signature Analyzer - A network traffic analyzer may identify applications transmitting information across a network by analyzing various protocol attributes of the communication. A set of signatures may be created by training a machine learning system using network traffic with and without a specific application. The machine learning system may generate a signature for the specific application, and the signature may be analyzed using a monitoring system to identify the presence of the application's traffic on the network. In some embodiments, a decision tree may be used to detect the application within a statistical confidence. The monitoring system may be used for malware detection as well as other applications. | 12-13-2012 |
20130111061 | Method and Apparatus for Processing Network Congestion and Core Network Entity | 05-02-2013 |

20130117466 | SPLITTING A NETWORK TRAFFIC FLOW - Systems and methods for splitting a network traffic flow in a data network are described. A flow of traffic between a source node and a destination node in the data network is split into a set of data paths. A data path includes one or more data links between nodes in the data network. A submap of the data network that excludes at least one data link is used to determine the set of flow paths. | 05-09-2013 |
20130124752 | DIAGNOSTIC HEARTBEAT THROTTLING - A method, system, and computer program product for diagnostic heartbeat throttling are provided in the illustrative embodiments. A component, executing using a processor and a memory in a data processing system, sends diagnostic heartbeat packets over a communication link at a first rate, wherein a diagnostic heartbeat packet is a packet that comprises a header, a set of heartbeat parameters, and a set of diagnostic attributes. The component detects a change in data traffic over the communication link. The component changes a rate of sending diagnostic heartbeat packets from the first rate to a second rate responsive to the change in the data traffic over the communication link. | 05-16-2013 |
20130124753 | FAIR QUANTIZED CONGESTION NOTIFICATION (FQCN) TO MITIGATE TRANSPORT CONTROL PROTOCOL (TCP) THROUGHPUT COLLAPSE IN DATA CENTER NETWORKS - Technologies are generally described for an enhanced Quantized Congestion Notification (QCN) congestion control approach, referred to as Fair QCN (FQCN) for enhancing fairness of multiple flows sharing link capacity in a high bandwidth, low latency data center network. QCN messages may be fed back to flow sources (e.g., servers) which send packets with a sending rate over their share of the bottleneck link capacity. By enabling the flow sources to regulate their data traffic based on the QCN messages from a congestion control component, the queue length at the bottleneck link may converge to an equilibrium queue length rapidly and TCP throughput performance may be enhanced substantially in a TCP incast circumstance. | 05-16-2013 |
20130124754 | FRAMEWORK OF AN EFFICIENT CONGESTION EXPOSURE AUDIT FUNCTION - A method of operating a packet dropper in a congestion exposure-enabled network, wherein sending hosts and receiving hosts communicate with each other by sending flows of packets over network paths via intermediate routers, which, upon detecting congestion, mark packets of the flows as congestion packets, wherein congestion is indicated to the sending hosts by way of a congestion feedback mechanism, and wherein the sending hosts, upon receiving congestion indications, declare a subset of the packets they send as congestion response packets, thereby producing either conformant flows or non-conformant flows, depending on whether the amount of congestion response packets is balanced with the indicated congestion level or not, is characterized in that the packet dropper carries out in succession a series of traffic analyzing steps for identifying the non-conformant flows. Furthermore, a corresponding packet dropper for use in a congestion exposure-enabled network is described. | 05-16-2013 |
20130138831 | METHODS AND APPARATUS TO CHANGE PEER DISCOVERY TRANSMISSION FREQUENCY BASED ON CONGESTION IN PEER-TO-PEER NETWORKS - A method, a computer program product, and an apparatus are provided. The apparatus determines a resource congestion level based on signals received on a plurality of resources of a peer discovery channel. In addition, the apparatus adjusts a duty cycle of a peer discovery transmission based on the determined congestion level. Furthermore, the apparatus transmits peer discovery signals at the adjusted duty cycle. | 05-30-2013 |
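The duty-cycle adjustment in 20130138831 can be sketched as a multiplicative controller; the congestion thresholds, the halving/doubling factors, and the clamping bounds are illustrative assumptions:

```python
def adjust_duty_cycle(duty, congestion_level,
                      low=0.2, high=0.6, min_duty=0.05, max_duty=1.0):
    """Back off peer discovery transmissions when the channel is
    congested, ramp up when it is lightly used (all numbers assumed).
    """
    if congestion_level > high:
        duty *= 0.5
    elif congestion_level < low:
        duty *= 2.0
    return min(max(duty, min_duty), max_duty)
```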
20130185454 | TECHNIQUE FOR OBTAINING, VIA A FIRST NODE, INFORMATION RELATING TO PATH CONGESTION - A method for obtaining, by a first node, information relating to a congestion on a path allowing the routing of packets from said first node destined for a second node in a packet communications network, said congestion potentially degrading said routing. | 07-18-2013 |
20130205038 | LOSSLESS SOCKET-BASED LAYER 4 TRANSPORT (RELIABILITY) SYSTEM FOR A CONVERGED ETHERNET NETWORK - A reliability system for a Converged Enhanced Ethernet network may include a plurality of end points each comprising a layer 4 transport layer, where each end point is connected to a data center bridging (DCB) layer 2 network. The system may also include an adaptor between the layer 4 transport layer and the DCB layer 2 network to translate at least one of flow and congestion control feedback signals, provided by at least one of the DCB network and the transport layer, to consolidated feedback signals for controlling transmission by the transport layer. | 08-08-2013 |
20130205039 | LOSSLESS SOCKET-BASED LAYER 4 TRANSPORT (RELIABILITY) SYSTEM FOR A CONVERGED ETHERNET NETWORK - A reliability system for a Converged Enhanced Ethernet network may include a plurality of end points each comprising a layer | 08-08-2013 |
20130212295 | APPLICATION LAYER NETWORK TRAFFIC PRIORITIZATION - Layer-7 application layer message (“message”) classification is disclosed. A network traffic management device (“NTMD”) receives incoming messages over a first TCP/IP connection from a first network for transmission to a second network. Before transmitting the incoming messages onto the second network, however, the NTMD classifies the incoming messages according to some criteria, such as by assigning one or more priorities to the messages. The NTMD transmits the classified messages in the order of their message classification. Where the classification is priority based, first priority messages are transmitted over second priority messages, and so forth, for example. | 08-15-2013 |
20130227164 | METHOD AND SYSTEM FOR DISTRIBUTED LAYER SEVEN TRAFFIC SHAPING AND SCHEDULING - A computer-implemented method of providing distributed layer seven traffic shaping includes receiving one or more service requests from one or more clients. The one or more service requests include network usage information associated with the one or more clients. The computer-implemented method also includes aggregating the network usage information from the one or more clients across one or more data centers. Further, the computer-implemented method includes computing a delay required for throttling the service request based on data transferred. Furthermore, the computer-implemented method includes communicating the delay to the one or more clients. Moreover, the computer-implemented method includes throttling the one or more service requests based on the delay across the one or more data centers. | 08-29-2013 |
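The delay computation in 20130227164 could be sketched as follows; this collapses the distributed, multi-data-center aggregation into a single usage figure, and the quota model is an assumption:

```python
def throttle_delay(bytes_used, quota_bytes_per_sec, window_seconds):
    """Delay (seconds) that brings a client back under its usage quota
    for the current window. Single-limit simplification of the
    distributed layer seven shaping scheme.
    """
    allowed = quota_bytes_per_sec * window_seconds
    excess = bytes_used - allowed
    return max(0.0, excess / quota_bytes_per_sec)
```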
20130246649 | NETWORK ASSESSMENT AND SHORT-TERM PLANNING PROCEDURE - A method for relieving network node congestion includes determining an average of an aggregated load on a network node that routes network traffic, projecting a demand on the network node based on extrapolating the average of the aggregated load to a future period, determining a current level of congestion on the network node, and projecting a future level of congestion on the network node based on the projected demand and the current level of congestion. An available capacity of other network nodes in a portion of the communication network that includes the network node is determined, as well as whether the projected future level of congestion on the network node can be relieved using the determined available capacity of the other network nodes. | 09-19-2013 |
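The projection step in 20130246649 amounts to extrapolating an average load forward; a linear growth model is used here as a deliberately simple assumption:

```python
def projected_utilization(load_samples, capacity, growth_per_period, periods_ahead):
    """Linear extrapolation of a node's average aggregated load.

    Returns projected load as a fraction of capacity; a value above 1.0
    means the node is projected to be congested.
    """
    average = sum(load_samples) / len(load_samples)
    projected = average + growth_per_period * periods_ahead
    return projected / capacity
```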
20130254421 | METHOD AND APPARATUS FOR DEEP PACKET INSPECTION FOR NETWORK INTRUSION DETECTION - A method for processing packets in a network device includes receiving a packet at the network device, identifying a flow with which the packet is associated, and, based at least in part on the identified flow, selectively causing the packet, or a packet descriptor associated with the packet, to bypass at least a first packet processing unit of the network device. | 09-26-2013 |
20130297821 | CLIENT BASED CONGESTION MANAGEMENT - Systems and methods for network congestion management are provided. More particularly, network congestion management is performed by client devices, on the edge of the network. A client device can execute a client agent that is responsible for determining whether an item of content can be delivered to a network for delivery to a recipient system or device. The client can apply a category assigned to an item of content according to a taxonomy, against a profile, to determine whether the item of content can be sent immediately, or whether the content needs to be queued for sending at a later time. | 11-07-2013 |
20140019638 | NETWORK CONGESTION REDUCTION - Technologies and implementations for reducing congestion in a network are generally disclosed. | 01-16-2014 |
20140047127 | COORDINATED ENFORCEMENT OF TRAFFIC SHAPING LIMITS IN A NETWORK SYSTEM - Methods and protocols coordinate enforcement of application traffic shaping limits within clusters of middleware appliance information handling systems (MA IHSs). The protocols dynamically set the local traffic shaping requirements at each entry point of an MA IHS. Each MA IHS receives from other MA IHSs runtime statistics containing local shaping requirements and rates of requests. The method uses runtime statistics to measure performance against specified traffic shaping goals, and based on this comparison uses unique protocols to dynamically adjust the local shaping requirements in each MA IHS. The method may eliminate the need to statically bind service domains to particular MA IHSs. Additional MA IHSs activate and/or deactivate service domains to accommodate service domain (server farm) CPU resource demands. | 02-13-2014 |
20140068098 | REDUCING NETWORK LATENCY RESULTING FROM NON-ACCESS STRATUM (NAS) AUTHENTICATION FOR HIGH PERFORMANCE CONTENT APPLICATIONS - Aspects relating to reducing network latency in systems that use NAS Authentication/Security procedures are disclosed. For example, a method for reducing latency due to NAS authentication can include determining a number (n) or time (t) of service requests from an idle state that trigger a NAS authentication. A penultimate service request is detected before the nth service request or after time (t). A gratuitous service request is sent after the penultimate service request. | 03-06-2014 |
20140082213 | METHOD AND APPARATUS OF SETTING DATA TRANSMISSION AND RECEPTION PERIOD - A method and an apparatus for setting a data transmission and reception period are provided. The method includes determining an average margin threshold for data communication with a second terminal, determining a data rate for the data communication with the second terminal, setting an active period and an idle period based on a ratio of the average margin threshold to the data rate, and synchronizing the active period and the idle period with the second terminal. | 03-20-2014 |
20140101332 | METHOD AND SYSTEM FOR ACCESS POINT CONGESTION DETECTION AND REDUCTION - A method and system for detecting and reducing data transfer congestion in a wireless access point includes determining a round-trip-time value for an internet control message protocol (ICMP) packet transmitted from a source computing device to a first computing device of a plurality of computing devices via the wireless access point. A data rate for data transmissions from the source computing device is increased to a value no greater than a peak data rate value if the round-trip-time is less than a first threshold value. The data rate is decreased if the round-trip-time value is greater than a second threshold value. Additionally, the peak data rate value may also be decreased if the round-trip-time value is greater than the second threshold value. | 04-10-2014 |
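One control step of the RTT-probing scheme in 20140101332 might look like this; the RTT thresholds, the additive step, and the multiplicative decrease factor are illustrative assumptions:

```python
def update_rate(rtt_ms, rate_mbps, peak_mbps,
                low_ms=20.0, high_ms=100.0, step=1.0, decrease_factor=0.5):
    """Returns the new (rate, peak) pair after one round-trip-time probe."""
    if rtt_ms < low_ms:
        # Uncongested path: additively increase, never beyond the peak rate.
        return min(rate_mbps + step, peak_mbps), peak_mbps
    if rtt_ms > high_ms:
        # Congested path: cut the rate and also lower the peak ceiling.
        new_peak = peak_mbps * decrease_factor
        return min(rate_mbps * decrease_factor, new_peak), new_peak
    return rate_mbps, peak_mbps
```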
20140115186 | RECEIVE WINDOW AUTO-TUNING - Methods of tuning a receive window. A receiving device and a sending device may be in communication over a network. The receiving device may advertise a receive window to the sending device. The size of the receive window may be adjusted over time based on one or more connection parameters, application parameters and/or operating system parameters. | 04-24-2014 |
20140164640 | SMALL PACKET PRIORITY CONGESTION CONTROL FOR DATA CENTER TRAFFIC - Network congestion management techniques are applied in a communication network. Network characteristics and target thresholds can be determined. A transmission mode can be determined. Further, a sending rate can be determined based on the transmission mode and network characteristics. In one aspect, network characteristics at a recent time can be determined to alter sending rates in a network to manage network congestion. | 06-12-2014 |
20140164641 | CONGESTION CONTROL FOR DATA CENTER TRAFFIC - Network congestion management techniques are applied in a communication network. Network characteristics and target thresholds can be determined. A transmission mode can be determined. Further, a sending rate can be determined based on the transmission mode and network characteristics. In one aspect, network characteristics at a recent time can be determined to alter sending rates in a network to manage network congestion. | 06-12-2014 |
20140215089 | SYSTEMS AND METHODS FOR DYNAMIC DATA TRANSFER MANAGEMENT ON A PER SUBSCRIBER BASIS IN A COMMUNICATIONS NETWORK - A method of dynamically managing transmission of packets is disclosed. The method, in some embodiments, may comprise establishing a network session over a communication link between a network and a user device of a user and associating a data transmission parameter with the user device. The method may further comprise receiving a packet and calculating a delay period associated with the packet based on the data transmission parameter and delaying transmission of the packet based on the delay period. | 07-31-2014 |
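The per-packet delay calculation in 20140215089 can be read as pacing against a per-subscriber rate; interpreting the "data transmission parameter" as a bit rate is an assumption, and this is only one plausible reading of the abstract:

```python
def packet_delay_seconds(packet_bytes, subscriber_rate_bps):
    """Pacing delay for one packet so that a subscriber's long-run
    throughput matches its configured rate."""
    return packet_bytes * 8 / subscriber_rate_bps
```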
20140223026 | FLOW CONTROL MECHANISM FOR A STORAGE SERVER - Generally, this disclosure relates to a method of flow control. The method may include determining a server load in response to a request from a client; selecting a type of credit based at least in part on server load; and sending a credit to the client based at least in part on server load, wherein server load corresponds to a utilization level of a server and wherein the credit corresponds to an amount of data that may be transferred between the server and the client and the credit is configured to decrease over time if the credit is unused by the client. | 08-07-2014 |
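The decaying-credit idea in 20140223026 can be sketched as follows; the linear decay model and the load-based sizing policy are assumptions, since the abstract only says the credit type depends on server load and the credit decreases over time if unused:

```python
class Credit:
    """A data-transfer credit whose unused value decays over time."""

    def __init__(self, grant_bytes, decay_bytes_per_sec):
        self.grant = grant_bytes
        self.decay = decay_bytes_per_sec

    def remaining(self, elapsed_seconds):
        # Unused credit shrinks linearly; it never goes negative.
        return max(0, self.grant - self.decay * elapsed_seconds)

def grant_credit(server_load):
    # Illustrative policy: a lightly loaded server grants a large,
    # slowly decaying credit; a busy server grants a small one that
    # decays quickly.
    if server_load < 0.5:
        return Credit(64 * 1024, decay_bytes_per_sec=1024)
    return Credit(8 * 1024, decay_bytes_per_sec=4096)
```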
20140281018 | Dynamic Optimization of TCP Connections - Transport control protocol (TCP) parameters can be dynamically selected to increase communication network performance. The TCP parameters may be selected before usage or at start-up such that a TCP connection is dynamically configured/re-configured prior to transporting the traffic flow over the network. The TCP connection parameters may be selected in accordance with a traffic characteristic, a network characteristic, a history of traffic activity, expected loads, desired throughput and latency or some other selection criteria. TCP parameters may also be selected after beginning to transport traffic flows over the network. More specifically, transportation of a traffic flow over the network may begin immediately using default TCP parameters, with the TCP parameters being updated or selected only upon the occurrence of a congestion or triggering condition. Further, multiple clients may share a set of persistent time-shared TCP connections. | 09-18-2014 |
20140281019 | Network Transmission Adjustment Based On Application-Provided Transmission Metadata - Application-provided transmission metadata is utilized, in conjunction with current network information, to adjust network transmissions. An interface between applications seeking to transmit data and networking components enables the application to provide destination information, communication type information, information regarding the quantity of data to be transferred, timeliness information, data location information, cost information, and other like transmission metadata. Current network information can be obtained by the networking components themselves, or can be provided by, or enhanced by, a centralized controller. The networking components can then optimize both the routing and the protocol settings in the form of adjustments to error control settings, flow control settings, receiver control settings, segmentation settings, and other like protocol settings. | 09-18-2014 |
20140281020 | SOURCE-DRIVEN SWITCH PROBING WITH FEEDBACK REQUEST - Embodiments relate to proactively probing the packet queues of elements in a physical or virtual network to predict and prevent the occurrence of congestion points. An aspect includes receiving a first feedback request at a central controller connected to a plurality of switches in a network. The first feedback request includes a request to periodically probe a status of queues of switches in the network. A second feedback request is then transmitted to one or all of the switches in a path leading to a designated destination. Responses to the second feedback request are received at the central controller from a designated proxy switch, which aggregated the responses into a single data packet. Accordingly, the responses extracted from the single data packet at the central controller are used to prevent future congestion points. | 09-18-2014 |
20140281021 | ADAPTIVE SETTING OF THE QUANTIZED CONGESTION NOTIFICATION EQUILIBRIUM SETPOINT IN CONVERGED ENHANCED ETHERNET NETWORKS - Embodiments relate to controlling workload flow on converged Ethernet links. An aspect includes coupling, by a processing device, a first control loop to a second control loop. The second control loop monitors the operation of the first control loop. An equilibrium set point is initialized for the second control loop prior to commencing operation of the first control loop. Accordingly, the equilibrium set point value is adjusted in the second control loop continuously based on a rate of operation of the first control loop. | 09-18-2014 |
20140281022 | DATA TRANSMISSION SCHEDULING - A scheduler is disclosed. The scheduler can include a time-wheel structure configured to hold scheduling elements, an enqueuer configured to place a scheduling element on the time-wheel structure, and a delay manager configured to direct the scheduling element through the time-wheel structure and remove the scheduling element from the time-wheel structure. The time-wheel structure can include a plurality of decades that can rotate, and each of the plurality of decades can rotate respectively at one or more different rates of rotation. Multiple scheduling elements can be on the time-wheel structure at least partially during the same time. The scheduling elements can be on different decades or on the same decade. One of the plurality of decades can comprise an entry configured to hold a plurality of scheduling elements. | 09-18-2014 |
20140281023 | QUALITY OF SERVICE MANAGEMENT SERVER AND METHOD OF MANAGING QUALITY OF SERVICE - A quality of service (QoS) management server and a method of managing QoS. One embodiment of the QoS management server includes: (1) a network interface controller (NIC) configured to receive QoS statistics indicative of conditions of a network over which rendered video is transmitted, the rendered video having a fidelity and a latency, and (2) a graphics processing unit (GPU) operable to employ said QoS statistics to tune said fidelity to affect said latency. | 09-18-2014 |
20140304425 | SYSTEMS AND METHODS FOR TCP WESTWOOD HYBRID APPROACH - Methods and systems for providing congestion control to a transport control protocol implementation are described. A device detects that there is a congestion event on a transport control protocol (TCP) connection of the device. The device determines that a bandwidth estimate is lower than half a current value of a slow start threshold for the TCP connection. In response to the determination, the device changes the slow start threshold to half of the current value of the slow start threshold for the TCP connection. The bandwidth estimate can be the product of the eligible rate estimate and the minimum round trip time. In some implementations, the transport control protocol implementation is a TCP Westwood implementation. | 10-09-2014 |
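The hybrid ssthresh rule in 20140304425 can be sketched as below. The halving rule when the bandwidth estimate is under half the current threshold comes from the abstract; expressing the bandwidth estimate as a segment count via the bandwidth-delay product is the usual Westwood formulation and is an assumption here:

```python
def westwood_ssthresh(ssthresh_segments, bw_estimate_bps, min_rtt_seconds, mss_bytes):
    """Hybrid ssthresh update on a congestion event (sketch)."""
    # Bandwidth-delay product, converted from bits to MSS-sized segments.
    bdp_segments = bw_estimate_bps * min_rtt_seconds / (8 * mss_bytes)
    if bdp_segments < ssthresh_segments / 2:
        # Estimate too low: fall back to NewReno-style halving.
        return ssthresh_segments / 2
    return bdp_segments
```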
20140310426 | Devices and Methods Using Network Load Data in Mobile Cloud Accelerator Context to Optimize Network Usage by Selectively Deferring Content Delivery - Network devices, servers, and modules operating within a mobile cloud accelerator (MCA) capable of selectively deferring delivery of non-time-sensitive content are provided. | 10-16-2014 |
20140359159 | FACILITATING UTILIZATION OF DATAGRAM-BASED PROTOCOLS - Methods, systems, and computer-storage media for performing a method of facilitating utilization of datagram-based protocols are provided. In embodiments, the method includes initiating a connection with a datagram socket to establish a pathway using a datagram-based protocol. Thereafter, the datagram-based protocol can be used to communicate data to a virtual private network server. Upon recognizing that a virtual private network interface has been idle for a predetermined period of time, a connection with a connection socket is initiated to establish a pathway using a connection-based protocol. | 12-04-2014 |
20140365681 | DATA MANAGEMENT METHOD, DATA MANAGEMENT SYSTEM, AND DATA MANAGEMENT APPARATUS - A data management method includes acquiring, by a management computer, information of an amount of resource load from a plurality of computers; when a first computer having a higher amount of load than a threshold value is detected in a first area to which the first computer belongs, generating, by the management computer, a second identification range of identifier values by adding a first identification range of the first area to which the detected first computer belongs to a first identification range of a second area different from the first area; calculating, by the first computer, a first target identification of a second computer in the second area corresponding to the first data, based on the first identification ranges and the second identification range, when an operation request for first data is received; and transferring, by the first computer, the operation request for the first data to the second computer. | 12-11-2014 |
20150019752 | ADAPTIVE SETTING OF THE QUANTIZED CONGESTION NOTIFICATION EQUILIBRIUM SETPOINT IN CONVERGED ENHANCED ETHERNET NETWORKS - Embodiments relate to controlling workload flow on converged Ethernet links. An aspect includes coupling, by a processing device, a first control loop to a second control loop. The second control loop monitors the operation of the first control loop. An equilibrium set point is initialized for the second control loop prior to commencing operation of the first control loop. Accordingly, the equilibrium set point value is adjusted in the second control loop continuously based on a rate of operation of the first control loop. | 01-15-2015 |
20150089080 | METHOD OF PROCESSING BUS DATA - A method is provided for operating a communication controller coupling a device comprising a processor with a bus. The method comprises: receiving a plurality of types of data packets via the bus and processing received data packets before making available said received data packets to the device processor. The processing of received data packets comprises: evaluating each received data packet in accordance with predetermined criteria; rejecting any of the received data packets that fails to meet the predetermined criteria; identifying non-rejected data packets having high priority; identifying non-rejected data packets having lower priority; providing a high priority data path to the processor for the high priority data packets; providing at least one additional data path to the processor for the lower priority data packets; and alerting the device processor to the presence of high priority data packets on the high priority data path. | 03-26-2015 |
20150127849 | TCP DATA TRANSMISSION METHOD, TCP OFFLOAD ENGINE, AND SYSTEM - Embodiments of the present invention provide a transmission control protocol (TCP) data transmission method, a TCP offload engine, and a system, which relate to the field of communications, and can reduce data migration between the TCP offload engine and a CPU while reducing parsing work on data by the CPU, so as to achieve the effects of reducing resources of the CPU for processing TCP/IP data and reducing transmission delay. The method includes: a TCP offload engine receives TCP data sent by a remote device; performs TCP offloading on the TCP data; identifies the TCP offloaded data; and sends, according to an identification result, the TCP offloaded data to a CPU or a storage device corresponding to storage position information issued by the CPU. The embodiments of the present invention are applicable to TCP data transmission. | 05-07-2015 |
20150142989 | SHAPING VIRTUAL MACHINE COMMUNICATION TRAFFIC - Cloud computing platforms having computer-readable media that perform methods to shape virtual machine communication traffic. The cloud computing platform includes virtual machines and a controller. The controller limits the traffic associated with the virtual machines to enable the virtual machines to achieve desired communication rates, especially when a network servicing the virtual machines is congested. The controller may drop communication messages associated with the virtual machines based on a drop probability evaluated for the virtual machines. | 05-21-2015 |
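The drop-probability shaping in 20150142989 can be sketched in a few lines; how the platform controller evaluates the drop probability for each virtual machine is out of scope here:

```python
import random

def shape_traffic(messages, drop_probability, rng=None):
    """Drop a VM's outbound messages with the probability the platform
    controller evaluated for that VM (sketch of the shaping step only)."""
    rng = rng or random.Random()
    return [m for m in messages if rng.random() >= drop_probability]
```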
20150358234 | Probabilistic Message Filtering and Grouping - Systems and methods for generating and using probabilistic filters are provided. One example method includes obtaining a plurality of beacon identifiers respectively associated with a plurality of beacon devices. The operations include determining a plurality of filter shards for each beacon identifier by applying a plurality of hash functions to each beacon identifier. The operations include providing the plurality of filter shards for each beacon identifier for local storage in a probabilistic filter at an observing entity, such that the observing entity can query the probabilistic filter to receive an indication of whether a received identifier is a member of a set that includes the plurality of beacon identifiers. One example system includes a plurality of beacon devices, at least one observing entity, and at least one verifying entity. | 12-10-2015 |
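The shard-based probabilistic filter in 20150358234 resembles a Bloom filter; the SHA-256-based shard construction and the parameter defaults below are assumptions, since the publication only says a plurality of hash functions is applied to each beacon identifier:

```python
import hashlib

def filter_shards(beacon_id, num_hashes=4, filter_bits=1024):
    """Derive the bit positions ("filter shards") for one beacon ID."""
    shards = []
    for i in range(num_hashes):
        digest = hashlib.sha256(f"{i}:{beacon_id}".encode()).digest()
        shards.append(int.from_bytes(digest[:8], "big") % filter_bits)
    return shards

class ProbabilisticFilter:
    """Bloom-filter-style membership test built from stored shards."""

    def __init__(self, filter_bits=1024):
        self.bits = [False] * filter_bits

    def add(self, shards):
        for s in shards:
            self.bits[s] = True

    def maybe_contains(self, shards):
        # May report false positives, never false negatives.
        return all(self.bits[s] for s in shards)
```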
20160065475 | NETWORK LOAD BALANCING AND OVERLOAD CONTROL - Load balancing and overload control techniques are disclosed for use in a SIP-based network or other type of network comprising a plurality of servers. In a load balancing technique, a first server receives feedback information from at least first and second downstream servers associated with respective first and second paths between the first server and a target server, the feedback information comprising congestion measures for the respective downstream servers. The first server dynamically adjusts a message routing process based on the received feedback information to compensate for imbalance among the congestion measures of the downstream servers. In an overload control technique, the first server utilizes feedback information received from at least one downstream server to generate a blocking message for delivery to a user agent. | 03-03-2016 |
20160072713 | LOAD BALANCING AND MIGRATION OF TRANSPORT CONNECTIONS - Aspects of the subject disclosure may include, for example, a server comprising a memory to store instructions and a controller coupled to the memory, in which the controller, responsive to executing the instructions, performs operations. The operations include detecting a condition requiring a migration of an active transport connection at a source server to a target server without interrupting communications occurring in the active transport connection. The source server is directed to transmit to the target server a migration command with state information from the source server to enable migrating the active transport connection to the target server without interrupting communications occurring in the active transport connection. A message is then received from the source server indicating the source server has received from the target server an acknowledgment that the migrating has been performed. Other embodiments are disclosed. | 03-10-2016 |
20160164784 | DATA TRANSMISSION METHOD AND APPARATUS - A disclosed data transmission method includes: detecting that congestion has occurred in a network between a first information processing apparatus and a second information processing apparatus that is a transmission destination of one or more data blocks; identifying a first data block that satisfies a condition that a time period from a transmission time to a time limit of delivery is longer than a predetermined time period, based on data stored in a data storage unit that stores a transmission time and a time limit of delivery for each of the one or more data blocks; transmitting, to the second information processing apparatus, a request that includes a time limit of delivery of the first data block and requests to reset a transmission time of the first data block; and receiving, from the second information processing apparatus, a transmission time that is set by the second information processing apparatus. | 06-09-2016 |