Class / Patent application number | Description | Number of patent applications / Date published |
709234000 | Data flow compensating | 60 |
20080270625 | SYSTEM AND METHOD FOR ACCESSING DATA AND APPLICATIONS ON A HOST WHEN THE HOST IS IN A DORMANT STATE - A system for enabling the viewing, distributing and/or manipulation of stored data or applications on a host device when the host device is in a power-save, dormant or other semi-active state. In one embodiment, a peripheral device is provided which runs its own native operating system. When the peripheral device is connected to the host device, which is running a different native operating system, the peripheral device is capable of sharing files that are stored on the host device, thereby permitting the viewing, editing, transferring or other manipulation of the files. In another embodiment, a wireless modem or similar device is integrated into the host device. When the host device enters a power-save or dormant state, a hand-off occurs to the wireless modem such that the files contained on the host device remain accessible. | 10-30-2008 |
20090070483 | GROUP JUDGMENT DEVICE - In a server, an echo-request transmitting unit | 03-12-2009 |
20090172184 | TCP burst avoidance - A computer-implemented method including providing, at a first device, a packet scheduler layer between a network layer and a transport layer; receiving, at the packet scheduler layer, one or more transmission control protocol (TCP) packets from a sending layer on the first device, wherein the sending layer is one of the network layer or the transport layer; smoothing delivery of at least one of the one or more TCP packets by delaying the delivery; and sending the one or more TCP packets to a receiving layer, wherein the receiving layer is one of the network layer or the transport layer that is not the sending layer. | 07-02-2009 |
20090216899 | REDUCTION OF MESSAGE FLOW BETWEEN BUS-CONNECTED CONSUMERS AND PRODUCERS - A system, method, and computer readable medium for reducing message flow on a message bus are disclosed. The method includes determining if at least one logical operator in a plurality of logical operators requires processing on a given physical processing node in a group of physical nodes. In response to determining that the logical operator requires processing on the given physical processing node, the logical operator is pinned to the given physical processing node. Each logical operator in the plurality of logical operators is assigned to an initial physical processing node in the group of physical processing nodes on a message bus. | 08-27-2009 |
20090240832 | RECEIVING APPARATUS, TRANSMITTING APPARATUS, COMMUNICATION SYSTEM, AND METHOD OF DETECTING BUFFER SETTING OF RELAY SERVER - A receiving apparatus of the present invention includes: a relayed dummy data receiving unit for receiving relayed dummy data including dummy data of n bytes (n≧1) and/or dummy data of N bytes (N≧n) sequentially and repetitively transmitted from a transmitting apparatus to a relay server from the relay server; and a buffer setting detecting unit for detecting a buffer setting of the relay server based on a first size value indicative of data size of relayed dummy data received for the first time by the relayed dummy data receiving unit and a second size value indicative of not larger data size of relayed dummy data received for the second time and relayed dummy data received for the third time. | 09-24-2009 |
20090248891 | DATA RECEIVING APPARATUS, DATA RECEIVING METHOD, AND PROGRAM STORAGE MEDIUM - An apparatus includes: a receiver which receives a data sequence; a specifying unit which specifies a temporary buffer area in a data storage and specifies a destination buffer area in the data storage; a first identifying unit which identifies a destination number range depending on a size of the specified destination buffer area so that the range follows a range that was last identified; a writing unit which writes data that falls within one of the ranges in an area in the destination buffer area that corresponds to the sequence number of that data, and writes data that does not fall within it in the temporary buffer area; a copying unit which reads out data falling within the identified range from the temporary buffer area and writes the read-out data in an area in the destination buffer area that is associated with the sequence number of that data. | 10-01-2009 |
20090300209 | METHOD AND SYSTEM FOR PATH BASED NETWORK CONGESTION MANAGEMENT - Aspects of a method and system for path based network congestion management are provided. In this regard, an indication of conditions, such as congestion, in a network may be utilized to determine which data flows may be affected by congestion in a network. A path table may be maintained to associate conditions in the network with flows affected by the conditions. Flows which are determined as being affected by a condition may be paused or flagged and transmission of data belonging to those flows may be deferred. Flows affected by a condition such as congestion may be identified based on a class of service with which they are associated. Transmission of one or more of the plurality of flows may be scheduled based on the determination. The determination may be based on one or both of a forwarding table and a forwarding algorithm of the downstream network device. | 12-03-2009 |
20100070647 | FLOW RECORD RESTRICTION APPARATUS AND THE METHOD - A Flow Record restriction apparatus is provided for restricting a transmission number of Flow Records while maintaining measurement information of the whole traffic. The Flow Record restriction apparatus includes: a flow generation unit | 03-18-2010 |
20100077100 | METHOD AND RELATED DEVICE OF A TRIGGER MECHANISM OF BUFFER STATUS REPORT AND SCHEDULING REQUEST IN A WIRELESS COMMUNICATION SYSTEM - A method of a trigger mechanism of buffer status report (BSR) and scheduling request (SR) for a media access control layer of a user equipment in a wireless communication system, the method including receiving a first data, identifying a state of semi-persistent scheduling (SPS) resource configuration and a type of the first data when the first data arrives at a transmission buffer and deciding a state of a BSR-SR triggering according to the state of SPS resource configuration, the type of the first data and a comparison between a size of a second data in the transmission buffer and a threshold. | 03-25-2010 |
20100115123 | APPARATUS AND METHODS FOR BROADCASTING - A computer system for media content broadcasting comprising: a content generator operable to construct content for output by dividing the content into a plurality of content segments; a template generator for generating a template comprising an order of play of the plurality of content segments; and a communication interface operable to transmit the template to one or more user devices, and upon receipt of a request for at least one of the plurality of content segments indicated in the template, transmitting the requested content segments to a content buffer of the one or more user devices. | 05-06-2010 |
20100161830 | METHODS FOR AUTOMATIC CATEGORIZATION OF INTERNAL AND EXTERNAL COMMUNICATION FOR PREVENTING DATA LOSS - Disclosed are methods for automatic categorization of internal and external communication, the method including the steps of: defining groups of entities that transmit data; monitoring data flow of the groups; extracting the data, from the data flow, for learning traffic-flow characteristics of the groups; classifying the data into group flows; upon the data being transmitted, checking the data to determine whether the data is designated as group-internal; and blocking data traffic for data that is group-internal. Preferably, the step of monitoring includes assigning data weights to the data using Bayesian methods. Most preferably, the step of classifying includes classifying the data using Bayesian methods for evaluating the data weights. Preferably, the step of blocking includes blocking data traffic between members of two or more groups. Preferably, the method further includes the step of: enabling an authorized entity to unblock the data traffic. | 06-24-2010 |
20100180046 | DELTACASTING - Methods, apparatuses, and systems for improving utilization of a communications system (e.g., a satellite communications system) are provided, using techniques referred to herein as “deltacasting.” Embodiments operate in a client-server context, in which the server-side of the communication link intercepts requests and responses using a client-server optimizer (e.g., a transparent proxy or in-line optimizer between a client web browser and an Internet content provider). The optimizer uses techniques, such as dictionary coding techniques, to create fingerprints of content traversing the links of the communications system. These fingerprints are used to identify and exploit multicasting and/or other opportunities for increased utilization of the communications links. | 07-15-2010 |
20100198984 | Method and System for Transparent TCP Offload with Best Effort Direct Placement of Incoming Traffic - Certain aspects of a method and system for transparent transmission control protocol (TCP) offload with best effort direct placement of incoming traffic are disclosed. Aspects of a method may include collecting TCP segments in a network interface card (NIC) processor without transferring state information to a host processor every time a TCP segment is received. When an event occurs that terminates the collection of TCP segments, the NIC processor may generate a new aggregated TCP segment based on the collected TCP segments. If a placement sequence number corresponding to the generated new TCP segment for the particular network flow is received before the TCP segment is received, the generated new TCP segment may be transferred directly from the memory to the user buffer instead of transferring the data to a kernel buffer, which would require further copy by the host stack from kernel buffer to user buffer. | 08-05-2010 |
20100217888 | TRANSMISSION DEVICE, RECEPTION DEVICE, RATE CONTROL DEVICE, TRANSMISSION METHOD, AND RECEPTION METHOD | 08-26-2010 |
20100223396 | INTELLIGENT STATUS POLLING - Systems, methods, and computer program products are described that intelligently determine the status of a process. The process is performed with respect to a creative asset that may be included in an online ad, for example. The status of the process is requested at a poll time that is calculated based on at least one attribute of the creative asset. For example, the calculated poll time may be based on a duration of a video associated with the creative asset, a weight (i.e., bitsize) of the creative asset, etc. | 09-02-2010 |
20100262714 | Transmitting and receiving data - The present invention provides a method of transmitting data in a network of interconnectable end-user nodes comprising a source node, a recipient node and a plurality of further nodes, wherein each of the end-user nodes is executing an instance of a communication client application. The invention also provides a corresponding method of receiving data, and corresponding computer programs and user terminals. The method of transmitting data comprises: the communication client of the source node receiving a command to transmit the data to the recipient node; the source node selecting from the plurality of further nodes at least one storage node to store the data from the source node before being retrieved by the recipient node; transmitting the data from the source node to the at least one storage node; and the source node providing a network identity for each of the at least one storage nodes to the recipient node. | 10-14-2010 |
20100274921 | Technique for coordinated RLC and PDCP processing - A technique for performing layer | 10-28-2010 |
20100281180 | Initiating Peer-to-Peer Tunnels - Initiating peer-to-peer tunnels between clients in a mobility domain. Client traffic in a mobility domain normally passes from the initiating client to an access node, and from the access node through a tunnel to a controller, and then through another tunnel from the controller to the destination access node, and the destination client. When initiated by the controller, the access nodes establish a peer-to-peer tunnel for suitable client traffic, bypassing the “slow” tunnels through the controller with a “fast” peer-to-peer tunnel. Traffic through this “fast” tunnel may be initiated once the tunnel is established, or traffic for the “fast” tunnel may be queued up until traffic has completed passing through the “slow” tunnel. This queue and release process may be bidirectional or unidirectional depending on the traffic. Completion of slow tunnel traffic may be sensed in a number of ways. Slow tunnel traffic may be timed out, and queued traffic released after a preset time since the last packet was sent through the slow tunnel. The identity of the last packet sent through the slow tunnel may be retained, and queued traffic released when an acknowledgement for that packet is received. A special packet may be sent through the slow tunnel and queued traffic released when an acknowledgement for that packet is received. | 11-04-2010 |
20100299448 | DEVICE FOR THE STREAMING RECEPTION OF AUDIO AND/OR VIDEO DATA PACKETS - A device for the streaming reception of audio and/or video data packets transmitted over a network from a source server, the device including a network buffer memory in which the packets may be stored, the network buffer memory exhibiting a variable buffering time; and a program memory encoded with instructions configured to adapt the buffering time of the packets for the purpose of improved playback performance of the packets and to locally determine the value of at least one quality of service indicator, the instructions configured to adapt the buffering time adapting the time as a function of the value of the indicator. | 11-25-2010 |
20100306405 | Prefetch Optimization of the Communication of Data Using Descriptor Lists - The size and location of an envelope of a data block are included in the posting to a second device of a descriptor list entry for the data block, thus allowing the second device to read the data block without having to first read the descriptor list entry. This envelope may be the same size and location of the data block, or this envelope may be larger than the data block. For example, as the size of the posted register may not be large enough to also store all of the bits required to specify the exact size and position of the data block, a larger data block envelope is defined without specifying the exact low order bits of the size and/or location of the data block envelope. | 12-02-2010 |
20100306406 | SYSTEM AND METHOD FOR ACCESSING A REMOTE DESKTOP VIA A DOCUMENT PROCESSING DEVICE INTERFACE - The subject application is directed to a system and method for accessing a remote desktop via a document processing device interface. A thin client interface is generated on a graphical display associated with a document processing device, and data communication is established between the thin client interface and an associated remote frame buffer server disposed on a workstation via a document processing device network interface associated with a document processing device. A user interface associated with the workstation is generated on the graphical display via the thin client in accordance with an established data communication, user input is received via the thin client, and remote operation of the workstation is enabled via received user input. | 12-02-2010 |
20100318674 | SYSTEM AND METHOD FOR PROCESSING LARGE AMOUNTS OF TRANSACTIONAL DATA - A system including a reference data server that stores a first set of data used in the plurality of processes, a load balancer that reconfigures the transactional data, a first stage processing system and a second stage processing system. The first stage processing system includes one or more first processing modules that execute at least one process of a first set of the plurality of processes on the reconfigured transactional data to generate first stage processed transactional data, each of the one or more first processing modules comprising an in-memory cache that stores a second set of data used in the at least one process, and a first stage data storage system that stores the first stage processed transactional data. The second stage processing system includes one or more second processing modules that execute at least one process of a second set of the plurality of processes on the first stage processed transactional data to generate second stage processed transactional data, each of the one or more second processing modules comprising an in-memory cache that dynamically stores a third set of data related to the at least one process, and a second stage data storage system that stores the second stage processed transactional data. | 12-16-2010 |
20100325307 | AUDIO-VISUAL NAVIGATION AND COMMUNICATION DYNAMIC MEMORY ARCHITECTURES - Buffering data associated with a spatial publishing object data store at a buffer distance proximate a user presence in a spatial publishing object space. The buffer distance comprises a measure based at least in part on: capacity of a communications path between the data store and the user platform; availability of memory at the user platform; movement of a user presence through the space; traffic in the communications path; processing resources available; amount of objects within a distance of the user presence; amount of objects in the space; type of objects; proximity of objects to the user presence; and rate of the user presence movement in the space. Movement of the user presence in the space buffers data such that data for a proximate object is available in the buffer for presenting to the user when the user's presence is a predetermined distance from the proximate object. | 12-23-2010 |
20110016225 | DIGITAL CONTENT DISTRIBUTION SYSTEM AND METHOD - One embodiment of the present invention sets forth a technique for selecting a content distribution network (CDN) comprising at least one content server, from a plurality of CDNs, and playing a digital content file from the CDN on a content player. Selecting the CDN is based on a rank order of CDNs, an assigned weight value for each CDN, and a bandwidth measured between the content player and each CDN. Advantageously, a given content player may select a CDN based on prevailing network and CDN loading conditions, thereby increasing overall robustness and reliability when downloading a digital content file from a CDN. | 01-20-2011 |
20110082948 | SYSTEM FOR HIGH-SPEED LOW-LATENCY INTER-SYSTEM COMMUNICATIONS VIA CHANNEL-TO-CHANNEL ADAPTERS OR SIMILAR DEVICES - In one embodiment, a system includes at least one outgoing transmission engine implemented in hardware, wherein the at least one outgoing transmission engine is for transmitting data in the plurality of buffers queued to the at least one outgoing transmission engine to the intersystem transmission medium, and a memory for storing the plurality of buffers, wherein each of the buffers queued to the at least one outgoing transmission engine is dequeued after the data is transmitted therefrom and requeued to an available buffer queue. In another embodiment, a system includes the above, except that it includes one or more incoming reception engines instead of outgoing transmission engines. In another embodiment, a method includes buffering data to be sent out by executing a loop of commands on an intersystem communication device and disconnecting the buffers after data has been transferred. | 04-07-2011 |
20110087798 | METHOD AND APPARATUS TO REPORT RESOURCE VALUES IN A MOBILE NETWORK - In current systems, a typical way to collect application statistics includes sending requests to a resource manager that can access the resource hardware via a device driver. Current systems require multiple synchronous transactions between the processes, which results in the systems consuming large amounts of central processing unit resources that lead to sub-optimal rates of information retrieval. A method and apparatus are provided that use asynchronous messaging across all modules and return hardware statistics directly from the hardware to an application process, thereby bypassing transactions between the application and the resource manager, and bypassing similar transactions between the resource manager and a device driver. Embodiments of the invention are provided for minimizing the power consumed by the memory and minimizing the amount of dedicated memory necessary to perform. | 04-14-2011 |
20110099289 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR ACCESSING DATA FROM A SOURCE BY A VARIETY OF METHODS TO IMPROVE PERFORMANCE AND ENSURE DATA INTEGRITY - According to one embodiment, a system includes a data storage device having data stored therein and a native computer system having resident thereon a controlling operating system in communication with the data storage device. The system also includes a primary computer system having resident thereon a primary operating system in communication with the native computer system via a first connection, the primary computer system being in communication with the data storage device via a second connection that is not in communication with the native computer system, the primary computer system having a processor executing a primary application. A volume on the data storage device is under logical control of the controlling operating system of the native computer system, and the primary computer system reads or writes data to the volume directly via the second connection. Other systems, methods and computer program products are also described relating to accessing data. | 04-28-2011 |
20110153862 | Sender-Specific Counter-Based Anti-Replay for Multicast Traffic - Techniques are provided for more robust counter-based anti-replay protection with respect to packets sent between network devices. A network device receives packets sent over a network from another network device. Each packet contains a source identifier that identifies a device that is the source of the packet, a destination identifier that identifies a device that is the intended destination of the packet, a sender identifier that identifies a network device that encrypted and sent the packet and a sequence number associated with the packet. The network device stores data indicating source identifier, destination identifier, sender identifier and sequence number for packets received over time. The network device rejects a newly received packet when it is determined that the sequence number of the newly received packet is less than the last sequence number stored for a matching packet flow (same source identifier, destination identifier and sender identifier) and falls outside of the counter-based window with respect to the last sequence number stored for the matching packet flow. | 06-23-2011 |
20110185080 | STREAM GENERATING DEVICE, METHOD FOR CALCULATING INPUT BUFFER PADDING LEVEL INSIDE THE DEVICE AND STREAM CONTROL METHOD - The present invention concerns a method of calculating a filtered filling level of the input buffer of a gateway generating a data stream from a received data stream resisting the jitter of the stream received. It applies more particularly to a gateway receiving an MPEG (Moving Picture Experts Group) transport stream received according to the IP protocol (Internet Protocol) and retransmitted over an ASI interface (Asynchronous Serial Interface). | 07-28-2011 |
20110208873 | ARCHITECTURE-AWARE ALLOCATION OF NETWORK BUFFERS - A computer readable medium comprising software instructions for: obtaining an allocation policy by a MAC layer executing on a host; receiving, a request for a transmit kernel buffer (TxKB) by a sending application executing on at least one processor of the host; obtaining a location of a plurality of available TxKBs on the host; obtaining a location of at least one available network interface on the host; obtaining a location of the sending application; allocating one of the plurality of available TxKBs to obtain an allocated TxKB, wherein the one of the plurality of available TxKBs is selected according to the allocation policy using the location of the plurality of available TxKB, the location of the at least one available network interface, and the location of the sending application, to obtain an allocated TxKB; and providing, to the sending application, the location of the allocated TxKB. | 08-25-2011 |
20110231569 | ENHANCED BLOCK-REQUEST STREAMING USING BLOCK PARTITIONING OR REQUEST CONTROLS FOR IMPROVED CLIENT-SIDE HANDLING - A block-request streaming system provides for improvements in the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or the like), wherein the ingestion system intakes content and prepares it as files or data elements to be served by the file server. A client device can be adapted to take advantage of the ingestion process. The client device might be configured to optimize use of resources, given the information available to it from the ingestion system. This may include configurations to determine the sequence, timing and construction of block requests based on monitoring buffer size and rate of change of buffer size, use of variable sized requests, mapping of block requests to underlying transport connections, flexible pipelining of requests, and/or use of whole file requests based on statistical considerations. | 09-22-2011 |
20110252158 | ETHERNET-BASED DATA TRANSMISSION METHOD, ETHERNET NODES AND CONTROL SYSTEM - The invention provides an Ethernet-based data transmission method, which is applied to a control system with a plurality of nodes. The method comprises: a first node caches the data needed to be sent (S | 10-13-2011 |
20110271006 | PIPELINING PROTOCOLS IN MISALIGNED BUFFER CASES - Systems, methods and articles of manufacture are disclosed for effecting a desired collective operation on a parallel computing system that includes multiple compute nodes. The compute nodes may pipeline multiple collective operations to effect the desired collective operation. To select protocols suitable for the multiple collective operations, the compute nodes may also perform additional collective operations. The compute nodes may pipeline the multiple collective operations and/or the additional collective operations to effect the desired collective operation more efficiently. | 11-03-2011 |
20120011271 | System and Method for Content and Application Acceleration in a Wireless Communications System - A system and method for content and application acceleration in a wireless communications system are provided. A method for transmitting data includes receiving a block of data from a content provider, generating a signature from the block of data, and determining if the signature exists in a content cache. The content cache includes previously transmitted signatures and blocks of data associated with the previously transmitted signatures. The method also includes if the signature exists in the content cache, saving the signature but not the block of data in a buffer. The method further includes if the signature does not exist in the content cache, saving the block of data in the buffer. The method additionally includes transmitting contents of the buffer over a backhaul link. | 01-12-2012 |
20120030369 | RELIABLE DATA MESSAGE EXCHANGE - Various embodiments of systems and methods for data message exchange in a client server network are described herein. In various embodiments, a client and a server network may implement a data message protocol for message exchanges. A method of an embodiment ensures message delivery, acknowledge message delivery, message delivery in a specific order, resending of lost data messages, and the like. In various embodiments, such data exchange may optimize data transmission and resource consumption in a client server network. A server can store data messages in a buffer and resend them only when requested by the client, as in the case of lost or out of sequence data message. A client with limited storage space need not concern itself with storing data messages and processing them at a later point in time. Furthermore, a client may optimize data transmission by acknowledging bulk data messages, rather than acknowledging them individually. | 02-02-2012 |
20120030370 | Administering Connection Identifiers For Collective Operations In A Parallel Computer - Administering connection identifiers for collective operations in a parallel computer, including prior to calling a collective operation, determining, by a first compute node of a communicator to receive an instruction to execute the collective operation, whether a value stored in a global connection identifier utilization buffer exceeds a predetermined threshold; if the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold: calling the collective operation with a next available ConnID including retrieving, from an element of a ConnID buffer, the next available ConnID and locking the element of the ConnID buffer from access by other compute nodes; and if the value stored in the global ConnID utilization buffer exceeds the predetermined threshold: repeatedly determining whether the value stored in the global ConnID utilization buffer exceeds the predetermined threshold until the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold. | 02-02-2012 |
20120084457 | RELAY DEVICE, RELAY METHOD AND RELAY SYSTEM - A relay device includes a screen information receiving unit that receives screen information from the application server, a cycle screen storage unit that stores screen information, a cycle detecting unit that detects a cycle of a change as first cycle information when the screen information cyclically changes, a cycle converting unit that converts the first cycle information into second cycle information, and a screen information transmitting unit that acquires screen information from the cycle screen storage unit and transmits the screen information to the client terminal at a timing based on the second cycle information. | 04-05-2012 |
20120144060 | SHARED BUFFER FOR CONNECTIONLESS TRANSFER PROTOCOLS - Described herein are various principles for operating a connectionless content unit transfer protocol to transmit content of a content unit to multiple clients using a shared buffer. A server may transfer content of one or more content units to each of multiple clients upon request from the client using individual buffers. For each content unit being transferred, the server may maintain a count of the aggregate size of buffers for transferring content of that content unit. If the server determines that the aggregate size of the buffers transmitting a particular content unit is larger than the content unit itself, the server may establish a shared buffer for transferring that content unit to clients. A server using a shared buffer in this manner may transfer content of the content unit to clients using the shared buffer until all requesting clients have received the content unit. | 06-07-2012 |
20120158988 | Media Requests to Counter Latency and Minimize Network Bursts - A client media application sends a first request for a first chunk of a particular media stream. In response to the request, the client media application begins receiving data packets associated with the requested first chunk of the particular media stream. The data packets are received through a socket having a buffer. Rather than waiting until all of the data packets associated with the first chunk of the particular media stream have been read from the buffer by the client media application before sending a request for a second chunk of the particular media stream, the client media application monitors the amount of data that has been received compared to an expected amount of data, and sends the second request when it determines that the amount of data remaining to be received is less than the size of the buffer. | 06-21-2012 |
20120233349 | METHODS AND APPARATUS FOR PATH SELECTION WITHIN A NETWORK BASED ON FLOW DURATION - In some embodiments, an apparatus includes a forwarding module that is configured to receive a group of first data packets. The forwarding module is configured to modify a data flow value in response to receiving each first data packet. The forwarding module is also configured to store each first data packet in a first output queue based on the data flow value not crossing a data flow threshold after being modified. Furthermore, the forwarding module is configured to receive a second data packet. The forwarding module is configured to modify the data flow value in response to receiving the second data packet, such that the data flow value crosses the data flow threshold. The forwarding module is configured to store the second data packet in a second output queue based on the data flow value having crossed the data flow threshold. | 09-13-2012 |
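The two-queue scheme in the abstract above can be sketched as a per-flow counter that is modified on every packet: packets are stored in the first output queue until the counter crosses a threshold, after which packets of that (now long-lived) flow go to the second queue. Class and attribute names are illustrative.

```python
class ForwardingModule:
    """Sketch of flow-duration-based queue selection: a per-flow value is
    updated on each packet; short flows stay in queue1, and once the
    value crosses the threshold, subsequent packets land in queue2."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.flow_value = 0
        self.queue1, self.queue2 = [], []

    def receive(self, packet):
        self.flow_value += 1                   # modify flow value per packet
        if self.flow_value <= self.threshold:  # threshold not yet crossed
            self.queue1.append(packet)
        else:                                  # flow is long-lived
            self.queue2.append(packet)
```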
20120297085 | REDUCTION OF MESSAGE FLOW BETWEEN BUS-CONNECTED CONSUMERS AND PRODUCERS - A system, method, and computer readable medium for reducing message flow on a message bus are disclosed. The method includes determining if at least one logical operator in a plurality of logical operators requires processing on a given physical processing node in a group of physical nodes. The logical operator is pinned to the given physical processing node. The pinning prevents any subsequent reassignment of the logical operator to another physical processing node. Each logical operator in the plurality of logical operators is assigned to an initial physical processing node in the group of physical processing nodes on a message bus. A determination is made as to whether at least one logical operator in the plurality of logical operators needs to be reassigned to a different physical processing node. The at least one logical operator is reassigned to the different physical processing node. | 11-22-2012 |
20120297086 | METHOD FOR IMPLEMENTING COMMUNICATION BETWEEN DIFFERENT NETWORKS AND APPARATUS - Embodiments of the present invention disclose a method for implementing communication between different networks, where the method includes: receiving a multicast data obtaining request supporting a first network protocol, and determining multicast data identity information (MDID) of multicast data that needs to be obtained; obtaining, according to the MDID, in a multicast manner and from a network device supporting a second network protocol, the multicast data that needs to be obtained, and buffering the multicast data that needs to be obtained; establishing, for the multicast data that needs to be obtained, a multicast group supporting the first network protocol; and sending the multicast data that needs to be obtained by a user apparatus to the user apparatus which joins the multicast group supporting the first network protocol. | 11-22-2012 |
20120311178 | SELF-DISRUPTING NETWORK ELEMENT - A method, apparatus, and machine readable storage medium are disclosed for establishing a test protocol processor which identifies and removes messages from a network element port buffer. Subsequent to removal, the test protocol processor may perform one of several actions, including allowing the message to drop, replacing the message after a delay, replacing the message after altering the payload of the message, and replacing the message after altering the message type. The disclosed self-disrupting network element is particularly useful for providing a means to perform in situ field testing of network performance indicators. | 12-06-2012 |
20120311179 | METHOD FOR PROCESSING TCP RE-TRANSMISSION BASED ON FIXED ADDRESS - The present invention relates to a technology in which start pointers for n transmit buffers, each of Ethernet frame size, are fixedly declared so that fixed addresses are assigned to the transmit buffers; packets can then be stably retransmitted using the fixed addresses of the transmit buffers, without executing dynamic pointer operations during re-transmission. | 12-06-2012 |
20120331172 | Method And System For Improved Performance Of Network Communications With Interface Specific Buffers - Network adapter use of an interface specific buffer is managed so that its combined use with non-interface specific buffers has a reduced impact, such as when an interface specific buffer becomes full. If an attempt by a protocol stack of an operating system to buffer information for a packet in an interface specific buffer fails, an offset marks the end of the use of the interface specific buffer for the packet and a non-interface specific buffer is used to store the remaining information for the packet. During transmission of the packet, the offset is read by a network adapter driver to take advantage of reduced processing for sending information from the interface specific buffer and to identify information that needs additional processing for transmission from the non-interface specific buffer. | 12-27-2012 |
20130080657 | DATA TRANSMISSION DEVICE AND DATA TRANSMISSION METHOD - A data transmission device ( | 03-28-2013 |
20130166772 | Buffering Media Content - Media content is downloaded on a media device. Portions of the media content are buffered successively during the download in a buffer on the device. During the buffering, the buffered portions are read for playback. In the buffer, a non-write buffer region trails behind a current playback read position. Upon the buffering reaching an end of the buffer, the buffering of media content is continued between a buffer beginning and the non-write buffer region. | 06-27-2013 |
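The circular-buffer write rule in the abstract above can be sketched as a guard: a region of bytes trailing behind the current playback read position is protected from overwriting, and the downloader wraps around it when it reaches the end of the buffer. All names and the modular-arithmetic layout are assumptions for illustration.

```python
def can_write(write_pos, read_pos, trail, size):
    """Return True if the downloader may write at write_pos in a circular
    buffer of `size` bytes. The `trail` bytes immediately behind the
    playback read position form a non-write region, so recently played
    data is preserved (e.g. for a short rewind)."""
    protected_start = (read_pos - trail) % size
    if protected_start <= read_pos:
        # protected region is contiguous: [protected_start, read_pos)
        blocked = protected_start <= write_pos < read_pos
    else:
        # protected region wraps around the end of the buffer
        blocked = write_pos >= protected_start or write_pos < read_pos
    return not blocked
```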
20130166773 | Flexible and scalable data link layer flow control for network fabrics - A network fabric may divide a physical connection into a plurality of VLANs as defined by IEEE 802.1Q. Moreover, many network fabrics use Priority Flow Control to identify and segregate network traffic based on different traffic classes or priorities. Current routing protocols define only eight traffic classes. In contrast, a network fabric may contain thousands of unique VLANs. When network congestion occurs, network devices (e.g., switches, bridges, routers, servers, etc.) can negotiate to pause the network traffic associated with one of the different traffic classes. Pausing the data packets associated with a single traffic class may also stop the data packets associated with thousands of VLANs. The embodiments disclosed herein permit a network fabric to individually pause VLANs rather than entire traffic classes. | 06-27-2013 |
20130332623 | SYSTEM AND METHOD FOR PREVENTING OVERESTIMATION OF AVAILABLE BANDWIDTH IN ADAPTIVE BITRATE STREAMING CLIENTS - A method is provided in one example embodiment and includes generating a bandwidth estimation for an adaptive bitrate (ABR) client; evaluating a current state of a buffer of the ABR client; and determining an encoding rate to be used for the ABR client based, at least, on the bandwidth estimation and the current state of the buffer. A fetch interval for the ABR client increases as the buffer becomes more full, while not reaching a level at which the ABR client is consuming data at a same rate at which it is downloading the data. | 12-12-2013 |
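One plausible shape for the rate decision described in the abstract above is to budget the request against the bandwidth estimate, loosening the budget as the client buffer fills. The 0.8 safety factor and the linear scaling rule below are assumptions, not taken from the patent.

```python
def select_encoding_rate(bandwidth_estimate, buffer_fill, rates, safety=0.8):
    """Pick the highest encoding rate within a budget derived from the
    bandwidth estimation and the current buffer state (buffer_fill in
    [0, 1]). A fuller buffer tolerates a rate closer to the raw estimate,
    guarding against overestimation when the buffer is low. Sketch only."""
    budget = bandwidth_estimate * (safety + 0.2 * buffer_fill)
    eligible = [r for r in rates if r <= budget]
    return max(eligible) if eligible else min(rates)
```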
20140040501 | OPPORTUNISTIC NETWORK UPDATES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing opportunistic network updates. In one aspect, a method includes determining, by a queue manager on a mobile device that has a network interface, to fulfill one or more requests to upload or download data through the network interface, and in response to determining to fulfill the requests, applying, by the queue manager, one or more rules to the requests, to classify each request as a request that is to be fulfilled, or a request that is not to be fulfilled. The method also includes causing, by the queue manager, the requests that are classified as to be fulfilled to be fulfilled, or the requests that are classified as not to be fulfilled to not be fulfilled. | 02-06-2014 |
20140052874 | METHOD AND APPARATUS FOR RECOVERING MEMORY OF USER PLANE BUFFER - The present invention discloses a method and an apparatus for recovering a memory of a user plane buffer and relates to the communication field. The method and apparatus are used to recover the memory of the user plane buffer immediately and quickly. The method for recovering a memory of a user plane buffer includes: monitoring memory usage of a buffer in real time; when the memory usage of the buffer is greater than or equal to a preset threshold, releasing the memory of the buffer, where the preset threshold is smaller than a memory capacity of the buffer. The solution of the present invention is applicable to any scenario where the memory of the buffer needs to be recovered. | 02-20-2014 |
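The recovery rule in the abstract above is a simple monitored threshold: release the buffer's memory as soon as usage reaches a preset threshold that is below full capacity, rather than waiting for overflow. A minimal sketch with illustrative names:

```python
class UserPlaneBuffer:
    """Sketch of threshold-triggered memory recovery: `threshold` is
    strictly below `capacity`, and the memory is released immediately
    when monitored usage reaches it."""
    def __init__(self, capacity, threshold):
        assert threshold < capacity
        self.capacity, self.threshold = capacity, threshold
        self.used = 0

    def write(self, nbytes):
        self.used += nbytes
        if self.used >= self.threshold:  # usage monitored in real time
            self.release()

    def release(self):
        self.used = 0                    # recover the memory immediately
```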
20140059247 | NETWORK TRAFFIC MANAGEMENT USING SOCKET-SPECIFIC SYN REQUEST CACHES - Methods, systems, and devices are described for managing network communications at a traffic manager module serving as a proxy to at least one network service for at least one client device. The traffic manager module may maintain a SYN request cache for a socket implemented by the traffic manager module. Active SYN request messages at the socket may be stored in the SYN request cache. The traffic manager module may determine a status of the SYN request cache and ignore additional SYN request messages at the socket based on the determined status of the SYN request cache. | 02-27-2014 |
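The per-socket cache behavior in the abstract above can be sketched as a bounded set of active SYN requests: once the cache is full, further SYNs are ignored rather than queued. Class and method names are illustrative.

```python
class SynRequestCache:
    """Sketch of a socket-specific SYN request cache: active SYN
    requests are stored up to `capacity`; additional SYNs are ignored
    based on the cache's status, shielding the proxied service."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.active = set()

    def handle_syn(self, conn_id):
        if len(self.active) >= self.capacity:
            return "ignored"          # cache full: drop the SYN
        self.active.add(conn_id)
        return "accepted"

    def complete(self, conn_id):
        self.active.discard(conn_id)  # handshake done, free a slot
```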
20140173127 | SYSTEMS AND METHODS FOR REAL-TIME ENDPOINT APPLICATION FLOW CONTROL WITH NETWORK STRUCTURE COMPONENT - The present solution is directed towards systems and methods to more efficiently control a flow of a data stream traversing at least one intermediary on a network between a client and a server. A sender transmits a first message, comprising a first value of a bandwidth between the first intermediary and a second intermediary determined by the sender, to a first intermediary. The first intermediary establishes a next value of the bandwidth between the first intermediary and the second intermediary. The sender receives from the first intermediary responsive to the first message a second message comprising the established next value of the bandwidth between the first intermediary and the second intermediary. A data transfer manager of the sender, responsive to the second message determines a size of a portion of data queued for transmission to transmit to the first intermediary and a time for transmitting the portion of data queued. | 06-19-2014 |
20140181319 | COMMUNICATION TRAFFIC PROCESSING ARCHITECTURES AND METHODS - Communication traffic processing architectures and methods are disclosed. Processing load on main Central Processing Units (CPUs) can be alleviated by offloading data processing tasks to separate hardware. | 06-26-2014 |
20140281016 | Transmission Device Load Balancing - A content providing service may employ a variety of streaming devices such as servers, and may associate or assign the servers to different content duration time ranges. Each server may then be responsible for servicing requests for content whose duration lies within the server's assigned content duration time range. In response to overload or underload status of a server, the server's time range may be adjusted. The time range of other servers in the system may also be adjusted to compensate. | 09-18-2014 |
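The assignment scheme in the abstract above maps each server to a content-duration range and routes a request by where its content's duration falls. A minimal sketch, with a hypothetical dict-based range table:

```python
def pick_server(duration, ranges):
    """Route a request to the server whose assigned half-open duration
    range [lo, hi) contains the content's duration. `ranges` maps server
    name -> (lo, hi) in seconds; structure and names are assumptions."""
    for server, (lo, hi) in ranges.items():
        if lo <= duration < hi:
            return server
    return None  # no server currently owns this duration
```

Adjusting a server's range on overload or underload then amounts to rewriting its (lo, hi) entry and compensating in the neighbors' entries.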
20140281017 | JITTER BUFFERING SYSTEM AND METHOD OF JITTER BUFFERING - A jitter buffering system and a method of jitter buffering. The jitter buffering system may be embodied in a quality of service (QoS) management server, including: (1) a network interface controller (NIC) configured to receive one-way-delay statistics regarding a video stream transmitted to a client, and (2) a processor configured to employ the one-way-delay statistics to calculate and recognize jitter and subsequently generate a command for the client to enable jitter buffering. | 09-18-2014 |
20150026361 | Ingress Based Headroom Buffering For Switch Architectures - A network device performs ingress based headroom buffering. The network device may be configured as an output queue switch and include a main packet buffer that stores packet data according to a destination egress port. The network device may implement one or more ingress buffers associated with ingress data ports in the network device. The ingress buffers may be separate from the main packet buffer. The network device may identify a flow control condition triggered by an ingress data port, such as when an amount of data stored in the main packet buffer received through the ingress data port exceeds a fill threshold. In response, the network device may send a flow control message to a link partner to cease sending network traffic through the ingress data port. The network device may store in-flight data from the link partner in an ingress buffer instead of the main packet buffer. | 01-22-2015 |
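The ingress-side logic in the abstract above can be sketched as follows: when the main buffer holds more data from a given ingress port than a fill threshold allows, the port signals flow control to its link partner and absorbs any packets still in flight into a small per-port headroom buffer instead of the main buffer. Names are illustrative.

```python
class IngressPort:
    """Sketch of ingress-based headroom buffering: exceeding the fill
    threshold in the main packet buffer pauses the link partner, and
    in-flight packets land in a separate per-port headroom buffer."""
    def __init__(self, fill_threshold):
        self.fill_threshold = fill_threshold
        self.main_buffer_bytes = 0
        self.headroom = []
        self.paused = False

    def receive(self, packet):
        if self.paused:
            self.headroom.append(packet)  # absorb in-flight data
            return
        self.main_buffer_bytes += len(packet)
        if self.main_buffer_bytes > self.fill_threshold:
            self.paused = True            # send flow control (PAUSE) to partner
```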
20150134847 | DYNAMIC ADJUSTMENT TO MULTIPLE BITRATE ALGORITHM BASED ON BUFFER LENGTH - In one embodiment, a method determines thresholds for a multiple bitrate algorithm that adjusts which bitrates for a media program are requested. A first threshold is associated with a first buffer length and a first direction of adjustment and a second threshold is associated with a second buffer length greater than the first buffer length and a second direction of adjustment. The method then determines which threshold applies to a buffer length of a buffer buffering the media program. An adjustment to the multiple bitrate algorithm in the first direction or the second direction based on the threshold that applies where the adjustment in the first direction increases an aggressiveness used by the multiple bitrate algorithm to increase the bitrate requested and the adjustment in the second direction decreases the aggressiveness used by the multiple bitrate algorithm to increase the bitrate requested. | 05-14-2015 |
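The two-threshold adjustment in the abstract above can be sketched directly: the first threshold (associated with the smaller buffer length) triggers an adjustment in the first direction, which increases the aggressiveness of bitrate upswitching; the second threshold (larger buffer length) triggers the second direction, which decreases it. The concrete threshold values and step size below are assumptions.

```python
def adjust_aggressiveness(buffer_len, aggressiveness,
                          first_threshold=5.0, second_threshold=20.0, step=0.1):
    """Sketch of buffer-length-based tuning of a multiple bitrate
    algorithm: crossing the first (smaller) threshold adjusts in the
    first direction (more aggressive upswitching); crossing the second
    (larger) threshold adjusts in the second direction (less aggressive)."""
    if buffer_len <= first_threshold:
        return aggressiveness + step  # first direction
    if buffer_len >= second_threshold:
        return aggressiveness - step  # second direction
    return aggressiveness
```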
20160006663 | METHOD AND SYSTEM FOR COMPRESSING FORWARD STATE OF A DATA NETWORK - A method implemented in a network device of a software-defined network (SDN) system for compressing network state of the SDN system is disclosed. The method comprises combining flow tables for a set of the network elements into a combined table, wherein each row within the combined table is keyed off a common field of each flow table. The method continues with selecting a set of columns of the combined table for compression, wherein the selection is based at least partially on topology of the set of the network elements within the network, and compressing the set of columns of the combined table into a set of compressed entries and a set of classification and regression trees (CaRTs), wherein upon a request for an entity within a predicted subset, an entity within a predicting subset and a corresponding CaRT are used to restore the entity within the predicted subset. | 01-07-2016 |
20160134550 | Lossless Time Based Data Acquisition and Control in a Distributed System - Systems and methods for mapping an iterative time-based data acquisition (DAQ) operation to an isochronous data transfer channel of a network. A time-sensitive buffer (TSB) associated with the isochronous data transfer channel of the network may be configured. A data rate clock and a local buffer may be configured. A functional unit may be configured to initiate continuous performance of the iterative time-based DAQ operation, transfer data to the local buffer, initiate transfer of the data between the local buffer and the TSB at a configured start time, and repeat the transferring and initiating transfer in an iterative manner, thereby transferring data between the local buffer and the TSB. The TSB may be configured to communicate data over the isochronous data transfer channel of the network, thereby mapping the iterative time-based DAQ operation to the isochronous data transfer channel of the network. | 05-12-2016 |