AKAMAI TECHNOLOGIES, INC.

AKAMAI TECHNOLOGIES, INC. Patent applications
Patent application number | Title | Published
20160065576SYSTEM AND METHODS FOR LEVERAGING AN OBJECT CACHE TO MONITOR NETWORK TRAFFIC - According to non-limiting embodiments disclosed herein, the functionality of an object cache in a server can be extended to monitor and track web traffic, and in particular to perform rate accounting on selected web traffic. As the server communicates with clients (e.g., receiving HTTP requests and responding to those requests), the server can use its existing object cache storage and existing object cache services to monitor web traffic by recording how often a client makes a particular request in the object cache and/or other data about the requests. Preferably, the object cache is still used for conventional caching of objects, the object cache thus providing a dual role by storing both web objects and rate accounting data.03-03-2016
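A minimal sketch of the dual-role cache idea described in 20160065576, assuming an in-memory store that holds both cached objects and per-client request counters; all names and limits here are illustrative, not taken from the application.

```python
import time
from collections import defaultdict

class DualRoleCache:
    """Toy object cache that also stores rate-accounting entries."""

    def __init__(self, window_seconds=60, rate_limit=100):
        self.objects = {}                      # url -> cached response body
        self.counters = defaultdict(list)      # (client_ip, url) -> request timestamps
        self.window = window_seconds
        self.rate_limit = rate_limit

    def record_request(self, client_ip, url):
        """Track how often this client asks for this URL (rate accounting)."""
        now = time.time()
        key = (client_ip, url)
        # Drop timestamps that fell out of the accounting window.
        self.counters[key] = [t for t in self.counters[key] if now - t < self.window]
        self.counters[key].append(now)
        return len(self.counters[key])

    def handle(self, client_ip, url, fetch_from_origin):
        """Serve from cache when possible, while monitoring request rates."""
        rate = self.record_request(client_ip, url)
        if rate > self.rate_limit:
            return 429, b"rate limit exceeded"
        if url not in self.objects:            # conventional caching role
            self.objects[url] = fetch_from_origin(url)
        return 200, self.objects[url]
```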
20160057163VALIDATING AND ENFORCING END-USER WORKFLOW FOR A WEB APPLICATION - Described herein, without limitation, are methods and systems to defend web applications against abuse and attack from bots, scrapers, and agents, by validating and enforcing a workflow for web application users. Described herein, without limitation, are methods and systems that enforce and validate workflows in a way that enables web application owners to flexibly define and control workflows, even for complex website topologies.02-25-2016
20150358343DETECTION AND CLASSIFICATION OF MALICIOUS CLIENTS BASED ON MESSAGE ALPHABET ANALYSIS - Described herein are systems, methods and apparatus for detecting and classifying malicious agents on a computer network. Many attacks require that the malicious message or messages employ certain characters. Such sets of characters can be indicative of an attack and referred to as a “malicious alphabet.” All clients on a network are likely to use characters from malicious alphabets in legitimate and valid network messages. However, malicious clients are likely to use characters from malicious alphabets in different ways than legitimate clients. According to the teachings hereof, a particular client's use of a malicious alphabet can be tracked and used to identify it as a potential attacker. Such tracking may take place across the applications and/or websites to which the traffic is directed. Based on the nature and extent of the client's use of the malicious alphabet, a reputation score for the client can be developed.12-10-2015
20150341285METADATA TRANSPORT BETWEEN MOBILE NETWORK CORE AND EXTERNAL DATA NETWORK - Described herein are systems, methods, and apparatus for processing network packets in a computer network. According to the teachings hereof, distributed computing resources can be organized into a service platform to provide certain value-add services—such as deep packet inspection, transcoding, lawful intercept, or otherwise—using a service function chaining model. The platform can be used to operate on traffic coming from or going to a mobile network (or other target network) to the public Internet. The platform may send to the mobile network various kinds of metadata related to or reflecting the services it is performing and/or the traffic that is flowing to or from the mobile network, among other things.11-26-2015
20150334094DISTRIBUTED COMPUTING SERVICE PLATFORM FOR MOBILE NETWORK TRAFFIC - Described herein are systems, methods, and apparatus for processing network packet data in a distributed computing platform, such as a content delivery network, to provide services to mobile network operators and/or their mobile subscribers. According to the teachings hereof, distributed computing resources can be organized into a service platform to provide certain value-add services—such as deep packet inspection, transcoding, lawful intercept, or otherwise—using a service function chaining model. The platform resources are preferably located external to the mobile network, on the public Internet. The platform preferably operates on and processes traffic entering or exiting the mobile network. In some embodiments, the service platform is able to establish an encrypted channel between itself and the mobile client through the mobile network, e.g., using content provider key and certificate information available to the platform (but which may not be available to the mobile network operator).11-19-2015
20150333930DYNAMIC SERVICE FUNCTION CHAINING - Described herein are systems, methods, and apparatus for processing network packets in a computer network, including in particular the processing of subscriber traffic in a mobile network. According to the teachings hereof, distributed computing resources can be organized into a service platform to provide certain value-add services—such as deep packet inspection, transcoding, lawful intercept, or otherwise—using a service function chaining model. The platform may operate on traffic egressing or ingressing to a mobile network (or other target network) to the public Internet. The service platform can alternatively be deployed wholly or partially within a target network. Service function chains may be built dynamically based on configured platform policies, packet contents, computing resource status, load, network location, current network conditions, and the like. The teachings hereof support dynamic modification of service function chains, including service function chain re-ordering, service level modification, and dynamic insertion/deletion of service functions.11-19-2015
20150310126CREATION AND DELIVERY OF PRE-RENDERED WEB PAGES FOR ACCELERATED BROWSING - The process of rendering web pages can be significantly improved with a content delivery system that pre-renders web content for a client device. A web page “program” can be pre-executed and the result delivered to a requesting client device, rather than or before sending a traditional set of web page components, such as a markup language document, cascading style sheets, and embedded objects. This pre-execution can relieve the client device of the burden of rendering the web page, saving resources and decreasing latency before the web page is ready, and can reduce the number of network requests that the client device must make before being able to display the page. Disclosed herein are methods, systems, and devices for creating and delivering pre-rendered web pages for accelerated browsing.10-29-2015
20150281367MULTIPATH TCP TECHNIQUES FOR DISTRIBUTED COMPUTING SYSTEMS - In non-limiting embodiments described herein, multipath TCP can be implemented between clients and servers, the servers being in a distributed computing system. Multipath TCP can be used in a variety of ways to increase reliability, efficiency, capacity, flexibility, and performance of the distributed computing system. Examples include achieving path redundancy, connection migration between servers and between points-of-presence, end-user mapping (or -remapping), migration or path redundancy for special object delivery, and others.10-01-2015
20150281331SERVER INITIATED MULTIPATH CONTENT DELIVERY - Described herein are—among other things—systems, methods, and apparatus for accelerating and increasing the reliability of content delivery by serving objects redundantly over multiple paths from multiple servers. In preferred embodiments, the decision to use such multipath delivery is made on the server side. A content server can modify or generate a given web page so as to invoke multipath, e.g., by injecting markup language directives and/or script instructions that will cause the client device to make multiple requests for a given object on the page. Preferably the multiple requests are made to separate content servers in separate points of presence. The teachings hereof may be advantageously implemented, without limitation, in intermediary servers such as caching proxy servers and/or in origin servers.10-01-2015
20150281114SYSTEMS AND METHODS FOR ALLOCATING WORK FOR VARIOUS TYPES OF SERVICES AMONG NODES IN A DISTRIBUTED COMPUTING SYSTEM - In a distributed computing system, the allocation of workers to tasks can be challenging. In embodiments described herein, nodes in such a system can execute takeover algorithms that provide efficient, automated, and stable allocation of workers to tasks.10-01-2015
20150278324QUARANTINE AND REPAIR OF REPLICAS IN A QUORUM-BASED DATA STORAGE SYSTEM - A data storage system with quorum-based commits sometimes experiences replica failure, due to unavailability of a replica-hosting node, for example. In embodiments described herein, such failed replicas can be quarantined rather than deleted, and subsequently such quarantined replicas can be recovered. The teachings hereof provide data storage with improved fault-tolerance, resiliency, and data availability.10-01-2015
20150207897SYSTEMS AND METHODS FOR CONTROLLING CACHEABILITY AND PRIVACY OF OBJECTS - Described herein are systems, devices, and methods for content delivery on the Internet. In certain non-limiting embodiments, a caching model is provided that can support caching for indefinite time periods, potentially with infinite or relatively long time-to-live values, yet provide prompt updates when the underlying origin content changes. Origin-generated tokens can drive the process of caching, and can be used as handles for later invalidating origin responses within caching proxy servers delivering the content. Tokens can also be used to control object caching behavior at a server, and in particular to control how an object is indexed in cache and who it may be served to. Tokens may indicate, for example, that responses to certain requested URL paths are public, or may be used to map user-id in a client request to a group for purposes of locating valid cache entries in response to subsequent client requests.07-23-2015
20150200868DISTRIBUTED ON-DEMAND RFID APPLICATION PLATFORM - A method and mechanism for a distributed on-demand computing system. The system automatically provisions distributed computing servers with customer application programs. The parameters of each customer application program are taken into account when a server is selected for hosting the program. The system monitors the status and performance of each distributed computing server. The system provisions additional servers when traffic levels exceed a predetermined level for a customer's application program and, as traffic demand decreases to a predetermined level, servers can be un-provisioned and returned back to a server pool for later provisioning. The system tries to fill up one server at a time with customer application programs before dispatching new requests to another server. The customer is charged a fee based on the usage of the distributed computing servers.07-16-2015
20150189225FRAME-RATE CONVERSION IN A DISTRIBUTED COMPUTING SYSTEM - Described herein are, among other things, distributed processing methods and systems for frame rate conversion. In an embodiment, a transcoding management machine manages a distributed transcoding process, creating a plurality of video segments and assigning the video segments across a set of distributed transcoding resources for frame rate conversion. The management machine typically sends a given segment to a given transcoding resource along with instructions to convert the frame rate to a specified output frame rate. In addition, the management machine can send certain transcoding assistance information that preferably facilitates the frame rate change process and helps the transcoding resource to create a more accurate output segment. Hence, in some embodiments, each transcoding resource can perform its transcode job independently, but with reference to the input segment it is responsible for transcoding and the assistance information provided by the management machine.07-02-2015
20150180892COUNTERING SECURITY THREATS WITH THE DOMAIN NAME SYSTEM - Described herein are methods, systems, and apparatus in which the functionality of a DNS server is modified to take into account security intelligence when determining an answer to return in response to a requesting client. Such a DNS server may consider a variety of security characteristics about the client and/or the client's request, as described more fully herein. Such a DNS server can react to clients in a variety of ways based on the threat assessment, preferably in a way that proactively counters or mitigates the perceived threat.06-25-2015
20150120821DYNAMICALLY POPULATED MANIFESTS AND MANIFEST-BASED PREFETCHING - Described herein are, among other things, systems and methods for generating and using manifests in delivering web content, and for using such manifests for prefetching. Manual and automated generation of manifests are disclosed. Such manifests preferably have placeholders or variables that can be populated at the time of the client request, based on data known from the request and other contextual information. Preferably, though without limitation, an intermediary device such as a proxy server, which may be part of a content delivery network (CDN), performs the function of populating the manifest given a client request for a page. An intermediary or other computer device with a populated manifest can utilize that completed manifest to make anticipatory forward requests to an origin to obtain web resources specified on the manifest, before receiving the client's requests for them. In this way, many kinds of content may be prefetched based on the manifest.04-30-2015
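A minimal sketch of the manifest-population idea in 20150120821, assuming a manifest whose entries contain placeholders that are filled from the incoming request before anticipatory forward requests are issued; the placeholder syntax, URLs, and field names are invented for illustration.

```python
from string import Template
from urllib.request import urlopen  # stand-in for the forward-request machinery

# A manifest authored (manually or automatically) for a page, with variables
# that are only known at request time.
MANIFEST = [
    "https://origin.example.com/css/site-$locale.css",
    "https://origin.example.com/api/user/$user_id/recommendations",
]

def populate_manifest(manifest, request_context):
    """Fill manifest placeholders from data known at the time of the client request."""
    return [Template(entry).substitute(request_context) for entry in manifest]

def prefetch(urls):
    """Issue anticipatory forward requests before the client asks for these resources."""
    results = {}
    for url in urls:
        try:
            results[url] = urlopen(url, timeout=2).read()
        except OSError:
            results[url] = None   # a miss just means there is no prefetched copy
    return results

# Example: an intermediary handling a page request for a German-locale user.
ctx = {"locale": "de", "user_id": "12345"}
prefetched = prefetch(populate_manifest(MANIFEST, ctx))
```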
20150100664SYSTEMS AND METHODS FOR CACHING CONTENT WITH NOTIFICATION-BASED INVALIDATION WITH EXTENSION TO CLIENTS - Described herein are systems, devices, and methods for content delivery on the Internet. In certain non-limiting embodiments, a caching model is provided that can support caching for indefinite time periods, potentially with infinite or relatively long time-to-live values, yet provide prompt updates when the underlying origin content changes. In one approach, an origin server can annotate its responses to content requests with tokens, e.g., placing them in an appended HTTP header or otherwise. The tokens can drive the process of caching, and can be used as handles for later invalidating the responses within caching proxy servers delivering the content. This caching and invalidation model can be extended out to clients, such that clients may be notified of invalid data and obtain timely updates.04-09-2015
20150100660SYSTEMS AND METHODS FOR CACHING CONTENT WITH NOTIFICATION-BASED INVALIDATION - Described herein are systems, devices, and methods for content delivery on the Internet. In certain non-limiting embodiments, a caching model is provided that can support caching for indefinite time periods, potentially with infinite or relatively long time-to-live values, yet provide prompt updates when the underlying origin content changes. In one approach, an origin server can annotate its responses to content requests with tokens, e.g., placing them in an appended HTTP header or otherwise. The tokens can drive the process of caching, and can be used as handles for later invalidating the responses within caching proxy servers delivering the content. Tokens may be used to represent a variety of kinds of dependencies expressed in the response, including without limitation data, data ranges, or logic that was a basis for the construction of the response.04-09-2015
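A minimal sketch of token-driven caching and notification-based invalidation as described in the two applications above, assuming the origin annotates its responses with tokens in a hypothetical X-Cache-Token header; invalidating a token then removes every cached response that carried it.

```python
from collections import defaultdict

class TokenCache:
    """Cache that indexes responses by origin-supplied tokens for later invalidation."""

    def __init__(self):
        self.responses = {}                  # url -> response body
        self.token_index = defaultdict(set)  # token -> set of urls that depend on it

    def store(self, url, body, headers):
        self.responses[url] = body
        # Hypothetical header name; the origin lists every token the response depends on.
        for token in headers.get("X-Cache-Token", "").split(","):
            token = token.strip()
            if token:
                self.token_index[token].add(url)

    def invalidate(self, token):
        """Origin content changed: purge everything annotated with this token."""
        for url in self.token_index.pop(token, set()):
            self.responses.pop(url, None)

cache = TokenCache()
cache.store("/products/42", b"<html>...</html>",
            {"X-Cache-Token": "product-42, catalog"})
cache.invalidate("product-42")   # prompt update when the underlying product changes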
20150089582Cloud Based Firewall System And Service - A cloud-based firewall system and service is provided to protect customer sites from attacks, leakage of confidential information, and other security threats. In various embodiments, such a firewall system and service can be implemented in conjunction with a content delivery network (CDN) having a plurality of distributed content servers. The CDN servers receive requests for content identified by the customer for delivery via the CDN. The CDN servers include firewalls that examine those requests and take action against security threats, so as to prevent them from reaching the customer site. The CDN provider implements the firewall system as a managed firewall service, with the operation of the firewalls for given customer content being defined by that customer, independently of other customers. In some embodiments, a customer may define different firewall configurations for different categories of that customer's content identified for delivery via the CDN.03-26-2015
20150067185SERVER-SIDE SYSTEMS AND METHODS FOR REPORTING STREAM DATA - According to the disclosure hereof, the functionality of a server can be extended to collect data on content streams that the server is delivering to clients, and to beacon certain data back to an analytics system to facilitate monitoring of, reporting on, and analysis of the delivery of content streams. At various stages of the streaming process, a server can read and update state information (for example cookie data) on the requesting client reflecting, for example, its status in playing a particular stream. Based on the client's requests and the state information at each stage, the server can beacon appropriate information about the stream and its playback status back to the analytics system. The teachings hereof are particularly useful, without limitation, in streaming media analytics and for segment-based streaming approaches, including over HTTP.03-05-2015
20150058455APPARATUS AND METHOD FOR SERVING COMPRESSED CONTENT IN A CONTENT DELIVERY NETWORK - A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides “on-the-fly” compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags.02-26-2015
20150058439APPARATUS AND METHOD FOR CACHING OF COMPRESSED CONTENT IN A CONTENT DELIVERY NETWORK - A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides “on-the-fly” compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags.02-26-2015
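A minimal sketch of the on-the-fly compression behavior described in the two applications above, assuming a proxy that caches the uncompressed body and gzips it only for clients that advertise support; the boolean flag stands in for the customer-specific metadata tag that enables the feature.

```python
import gzip

def serve(body: bytes, accept_encoding: str, compression_enabled: bool):
    """Return (headers, payload) for an end-user request.

    `compression_enabled` plays the role of the per-customer metadata tag that
    turns the compression feature on for a given content provider.
    """
    if compression_enabled and "gzip" in accept_encoding.lower():
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body

# Cached, uncompressed copy fetched earlier from the origin server.
cached_body = b"<html>hello world</html>" * 100

headers, payload = serve(cached_body, "gzip, deflate", compression_enabled=True)
assert len(payload) < len(cached_body)   # most end-user browsers can decompress gzip
```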
20150052349Splicing into an active TLS session without a certificate or private key - An origin server selectively enables an intermediary (e.g., an edge server) to shunt into and out of an active TLS session that is on-going between a client and the origin server. The technique allows for selective pieces of a data stream to be delegated from an origin to the edge server for the transmission (by the edge server) of authentic cached content, but without the edge server having the ability to obtain control of the entire stream or to decrypt arbitrary data after that point. The technique enables an origin to authorize the edge server to inject cached data at certain points in a TLS session, as well as to mathematically and cryptographically revoke any further access to the stream until the origin deems appropriate.02-19-2015
20150040221SERVER WITH MECHANISM FOR CHANGING TREATMENT OF CLIENT CONNECTIONS DETERMINED TO BE RELATED TO ATTACKS - According to certain non-limiting embodiments disclosed herein, the functionality of a server is extended with a mechanism for identifying connections with clients that have exhibited attack characteristics (for example, characteristics indicating a DoS attack), and for transitioning internal ownership of those connections such that server resources consumed by the connection are reduced, while keeping the connection open. The connection thus moves from a state of relatively high resource use to a state of relatively low server resource use. According to certain non-limiting embodiments disclosed herein, the functionality of a server is extended by enabling the server to determine that any of a client and a connection exhibits one or more attack characteristics (e.g., based on at least one of client attributes, connection attributes, and client behavior during the connection, or otherwise). As a result of the determination, the server changes its treatment of the connection.02-05-2015
20150019633METHODS AND APPARATUS FOR MAKING BYTE-SPECIFIC MODIFICATIONS TO REQUESTED CONTENT - According to this disclosure, a proxy server is enhanced to be able to interpret instructions that specify how to modify an input object to create an output object to serve to a requesting client. Typically the instructions operate on binary data. For example, the instructions can be interpreted in a byte-based interpreter that directs the proxy as to what order, and from which source, to fill an output buffer that is served to the client. The instructions specify what changes to make to a generic input file. This functionality extends the capability of the proxy server in an open-ended fashion and enables it to efficiently create a wide variety of outputs for a given generic input file. The generic input file and/or the instructions may be cached at the proxy. The teachings hereof have applications in, among other things, the delivery of web content, streaming media, and the like.01-15-2015
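A minimal sketch of the byte-based interpreter idea in 20150019633: a list of instructions tells the proxy, in order, whether the next bytes of the output buffer come from the generic input file or from literal data carried in the instructions themselves. The instruction format here is invented for illustration.

```python
def assemble(generic_input: bytes, instructions) -> bytes:
    """Build a client-specific output by following byte-level instructions.

    Each instruction is either ("copy", offset, length), which copies bytes from
    the generic input file, or ("insert", data), which appends literal bytes.
    """
    out = bytearray()
    for ins in instructions:
        if ins[0] == "copy":
            _, offset, length = ins
            out += generic_input[offset:offset + length]
        elif ins[0] == "insert":
            out += ins[1]
        else:
            raise ValueError(f"unknown instruction: {ins[0]!r}")
    return bytes(out)

# A generic (cacheable) input file and per-request instructions that customize it.
generic = b"HELLO ............ WORLD"
program = [("copy", 0, 6), ("insert", b"customer-42"), ("copy", 18, 6)]
print(assemble(generic, program))   # b'HELLO customer-42 WORLD'
```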
20140379840PREDICTIVE PREFETCHING OF WEB CONTENT - This disclosure describes systems and methods for predictive prefetching. A server can be modified in accordance with the teachings hereof to predictively prefetch a second object for a client (referred to herein as the dependent object), given a request from the client for a first object (referred to herein as the parent object). When enough information about a parent object request is available, the predictive prefetching techniques disclosed herein can be used to calculate the likelihood that one or more dependent objects might be requested. This enables a server to prefetch them from a local or remote storage device, from an origin server, or from another source.12-25-2014
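A minimal sketch of the predictive-prefetching idea in 20140379840, assuming the server can learn, from observed traffic, how often a dependent object follows a given parent object; the threshold and URLs are illustrative.

```python
from collections import Counter, defaultdict

class DependencyModel:
    """Estimate P(dependent requested | parent requested) from observed traffic."""

    def __init__(self):
        self.parent_counts = Counter()
        self.pair_counts = defaultdict(Counter)

    def observe(self, parent_url, dependent_urls):
        self.parent_counts[parent_url] += 1
        for dep in dependent_urls:
            self.pair_counts[parent_url][dep] += 1

    def candidates(self, parent_url, threshold=0.8):
        """Dependent objects likely enough to be worth prefetching."""
        total = self.parent_counts[parent_url]
        if total == 0:
            return []
        return [dep for dep, n in self.pair_counts[parent_url].items()
                if n / total >= threshold]

model = DependencyModel()
model.observe("/index.html", ["/app.js", "/logo.png"])
model.observe("/index.html", ["/app.js"])
print(model.candidates("/index.html"))   # ['/app.js'] -> prefetch from cache or origin
```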
20140337958SECURITY FRAMEWORK FOR HTTP STREAMING ARCHITECTURE - Methods and apparatus for preventing unauthorized access to online content, including in particular streaming video and other media, are provided. In various embodiments, techniques are provided to authorize users and to authenticate clients (e.g., client media players) to a content delivery system. The content delivery system may comprise a content delivery network with one or more content or “edge” servers therein. The requesting client is sent a program at the time of content delivery. The program may be embedded in the content stream, or sent outside of the stream. The program contains instructions that are executed by the client and cause it to return identifying information to the content delivery system, which can then determine whether the client player is recognized and, if so, authorized to view the content. Unrecognized and/or altered players may be prevented from viewing the content.11-13-2014
20140317177Methods And Apparatus For Image Delivery With Time Limits - A dynamic image delivery system receives a client request for an image at an image caching server. The image caching server measures the client's network access speed and looks for an appropriate pre-rendered copy of the requested image that is rendered for the client's network access speed in local storage. If the appropriate rendered copy is found, then the image caching server sends the rendered image to the client. If it is not found, then the image caching server dynamically renders a copy of the image and sends it to the client.10-23-2014
20140195653Connected-media end user experience using an overlay network - An Internet infrastructure delivery platform (e.g., operated by a service provider) provides an overlay network (a server infrastructure) that is used to facilitate “second screen” end user media experiences. In this approach, first media content, which is typically either live or on-demand, is being rendered on a first content device (e.g., a television, Blu-Ray disk or other source). That first media content may be delivered by servers in the overlay network. One or multiple end user second content devices are then adapted to be associated with the first content source, preferably via the overlay network, to facilitate second screen end user experiences (on the second content devices).07-10-2014
20140189071Stream-based data deduplication with peer node prediction - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.07-03-2014
20140189070Stream-based data deduplication using directed cyclic graphs to facilitate on-the-wire compression - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.07-03-2014
20140189069MECHANISM FOR DISTINGUISHING BETWEEN CONTENT TO BE SERVED THROUGH FIRST OR SECOND DELIVERY CHANNELS - Described herein are methods, apparatus and systems for selectively delivering content through one of two communication channels, one being origin to client and the other being from or through a CDN to client. Thus a client may choose to request content from a CDN and/or from an origin server. This disclosure sets forth techniques for, among other things, distinguishing which channel to use for a given object, using the CDN-client channel to obtain the performance benefit of doing so, and reverting to the origin-client channel where content may be private, sensitive, corrupted, or otherwise considered to be unsuitable for delivery from and/or through the CDN.07-03-2014
20140189040Stream-based data deduplication with cache synchronization - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.07-03-2014
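A minimal sketch of the on-demand dictionary backfill described in the stream-based deduplication applications above: the encoded stream carries chunk fingerprints, and a decoding peer that is missing a chunk simply fetches it (e.g., via the normal content delivery path) rather than relying on a synchronized dictionary. The fixed-size chunking and the fetch callback are simplifications for illustration.

```python
import hashlib

CHUNK = 1024  # fixed-size chunking; real systems typically use content-defined chunks

def encode(data: bytes, dictionary: dict) -> list:
    """Replace each chunk with its fingerprint, populating the sender's dictionary."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        dictionary[fp] = chunk
        refs.append(fp)
    return refs

def decode(refs: list, dictionary: dict, fetch_chunk) -> bytes:
    """Rebuild the stream; backfill any chunk the receiving peer does not have."""
    out = bytearray()
    for fp in refs:
        if fp not in dictionary:          # dictionaries are allowed to be out of sync
            dictionary[fp] = fetch_chunk(fp)
        out += dictionary[fp]
    return bytes(out)

sender_dict, receiver_dict = {}, {}
refs = encode(b"x" * 3000, sender_dict)
data = decode(refs, receiver_dict, fetch_chunk=lambda fp: sender_dict[fp])
assert data == b"x" * 3000
```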
20140181285SCALABLE CONTENT DELIVERY NETWORK REQUEST HANDLING MECHANISM WITH USAGE-BASED BILLING - Described herein are improved systems, methods, and devices for delivering and managing metadata in a distributed computing platform such as a content delivery network (CDN) so as to configure content servers to handle client requests. The teachings hereof provide, among other things, scalable and configurable solutions for delivering and managing metadata, preferably by leveraging dynamically obtained control information. For example, in one embodiment, a given content server may store metadata, e.g., in a configuration file, that references dynamic, late-bound control information for use in satisfying dependencies. This dynamic control information can be requested by the CDN content server, typically from a remote host, when needed to parse and execute the metadata.06-26-2014
20140181268SCALABLE CONTENT DELIVERY NETWORK REQUEST HANDLING MECHANISM - Described herein are improved systems, methods, and devices for delivering and managing metadata in a distributed computing platform such as a content delivery network (CDN) so as to configure content servers to handle client requests. The teachings hereof provide, among other things, scalable and configurable solutions for delivering and managing metadata, preferably by leveraging dynamically obtained control information. For example, in one embodiment, a given content server may store metadata, e.g., in a configuration file, that references dynamic, late-bound control information for use in satisfying dependencies. This dynamic control information can be requested by the CDN content server, typically from a remote host, when needed to parse and execute the metadata.06-26-2014
20140181187SCALABLE CONTENT DELIVERY NETWORK REQUEST HANDLING MECHANISM TO SUPPORT A REQUEST PROCESSING LAYER - Described herein are improved systems, methods, and devices for delivering and managing metadata in a distributed computing platform such as a content delivery network (CDN) so as to configure content servers to handle client requests. The teachings hereof provide, among other things, scalable and configurable solutions for delivering and managing metadata, preferably by leveraging dynamically obtained control information. For example, in one embodiment, a given content server may store metadata, e.g., in a configuration file, that references dynamic, late-bound control information for use in satisfying dependencies. This dynamic control information can be requested by the CDN content server, typically from a remote host, when needed to parse and execute the metadata.06-26-2014
20140181186SCALABLE CONTENT DELIVERY NETWORK REQUEST HANDLING MECHANISM WITH SUPPORT FOR DYNAMICALLY-OBTAINED CONTENT POLICIES - Described herein are improved systems, methods, and devices for delivering and managing metadata in a distributed computing platform such as a content delivery network (CDN) so as to configure content servers to handle client requests. The teachings hereof provide, among other things, scalable and configurable solutions for delivering and managing metadata, preferably by leveraging dynamically obtained control information. For example, in one embodiment, a given content server may store metadata, e.g., in a configuration file, that references dynamic, late-bound control information for use in satisfying dependencies. This dynamic control information can be requested by the CDN content server, typically from a remote host, when needed to parse and execute the metadata.06-26-2014
20140164447COOKIE SYNCHRONIZATION AND ACCELERATION OF THIRD-PARTY CONTENT IN A WEB PAGE - Described herein are, among other things, systems and methods for synchronizing cookies across different domains, and leveraging such systems and methods for content delivery. For example, two parties hosting content under different domain names from one another may desire to synchronize identification or ‘ID’ cookies that hold identifiers for a given client and/or end-user, so that one or both of the parties can map a given identifier from one domain to the identifier used in the other domain. Without limitation, some techniques described herein leverage one or more proxy servers that may be part of a distributed computing platform known as a content delivery network. Further, by way of example, some of the techniques for cookie synchronization can be leveraged to accelerate the delivery of content on a website with content from multiple domains.06-12-2014
20140156839METHOD FOR DETERMINING METRICS OF A CONTENT DELIVERY AND GLOBAL TRAFFIC MANAGEMENT NETWORK - A method for determining metrics of a content delivery and global traffic management network provides service metric probes that determine the service availability and metric measurements of types of services provided by a content delivery machine. Latency probes are also provided for determining the latency of various servers within a network. Service metric probes consult a configuration file containing each DNS name in its area and the set of services. Each server in the network has a metric test associated with each service it supports; the service metric probes periodically perform these tests and record the results, which are periodically sent to all of the DNS servers in the network. DNS servers use the test result updates to determine the best server to return for a given DNS name. The latency probe calculates the latency from its location to a client's location using the round trip time for sending a packet to the client to obtain the latency value for that client. The latency probe updates the DNS servers with the clients' latency data. The DNS server uses the latency test data updates to determine the closest server to a client.06-05-2014
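A minimal sketch of the latency-driven selection described in 20140156839, assuming the DNS server has been fed per-server service-metric results and round-trip-time measurements for the requesting client's network; it filters on availability and then picks the lowest-latency server. All data here is illustrative.

```python
def pick_server(servers, metrics, latencies):
    """Pick the best server for a DNS answer.

    servers   : list of server names
    metrics   : server -> True/False from the service metric probes (is the service up?)
    latencies : server -> round-trip time in ms reported by the latency probes
    """
    available = [s for s in servers if metrics.get(s, False)]
    if not available:
        return None
    return min(available, key=lambda s: latencies.get(s, float("inf")))

servers = ["nyc-1", "lon-1", "sfo-1"]
metrics = {"nyc-1": True, "lon-1": True, "sfo-1": False}   # sfo-1 failed its metric test
latencies = {"nyc-1": 12.0, "lon-1": 85.0, "sfo-1": 3.0}
print(pick_server(servers, metrics, latencies))   # 'nyc-1'
```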
20140149844PROGRESSIVE CONSOLIDATION OF WEB PAGE RESOURCES - Described herein are systems, methods and devices for modifying web pages to enhance their performance. In certain non-limiting embodiments, improved resource consolidation techniques are described, which are sometimes referred to herein as ‘progressive’ consolidation. Such techniques can be used to consolidate page resources in a way that allows a client browser or other application to process each of the consolidated resources after it arrives, even if the client has not yet fully retrieved all of the consolidated resources. The teachings hereof can be used, for example, to modify a markup language document (HTML) to consolidate CSS, JavaScript, images, or other resources referenced therein.05-29-2014
20140115119Enforcing single stream per sign-on from a content delivery network (CDN) media server - An apparatus for enforcing a media stream delivery restriction uses a stream control service (SCS). The SCS is implemented in a distributed network, such as a CDN, in which a given media stream is delivered to authorized end users from multiple delivery servers, but where an authorized end user is associated with a single log-in identifier that is not intended to be shared with other end users. According to the method, an enforcement server of the SCS identifies first and second copies of the given media stream associated with the single log-in identifier being delivered from multiple delivery servers. It then issues a message to terminate delivery of the given media stream from at least one of the multiple delivery servers.04-24-2014
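A minimal sketch of the enforcement logic described in 20140115119 (and its related application later in this list), assuming an enforcement server that periodically sees which delivery servers are streaming which log-in identifiers and selects extra sessions to terminate; the tuple format and the one-stream limit are illustrative.

```python
from collections import defaultdict

def find_violations(active_streams, max_streams_per_login=1):
    """active_streams: list of (login_id, delivery_server, session_id) tuples
    reported by the delivery servers. Returns the sessions to terminate,
    keeping the first-seen session for each log-in identifier."""
    by_login = defaultdict(list)
    for login, server, session in active_streams:
        by_login[login].append((server, session))
    to_terminate = []
    for login, sessions in by_login.items():
        for server, session in sessions[max_streams_per_login:]:
            to_terminate.append((login, server, session))
    return to_terminate

streams = [
    ("alice", "edge-7", "s1"),
    ("alice", "edge-2", "s2"),   # second copy under the same sign-on
    ("bob",   "edge-7", "s3"),
]
for login, server, session in find_violations(streams):
    print(f"terminate session {session} for {login} on {server}")
```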
20140101758SERVER WITH MECHANISM FOR REDUCING INTERNAL RESOURCES ASSOCIATED WITH A SELECTED CLIENT CONNECTION - According to certain non-limiting embodiments disclosed herein, the functionality of a server is extended with a mechanism for identifying connections with clients that have exhibited attack characteristics (for example, characteristics indicating a DoS attack), and for transitioning internal ownership of those connections such that server resources consumed by the connection are reduced, while keeping the connection open. The connection thus moves from a state of relatively high resource use to a state of relatively low server resource use, and the server is able to free resources such as memory and processing cycles previously allocated to the connection. In some cases, the server maintains the connection for at least some time and uses it to keep the client occupied so that it cannot launch—or has fewer resources to launch—further attacks, and possibly to gather information about the attacking client.04-10-2014
20140082217Peer-to-peer connection establishment using TURN - A relay service enables two peers attempting to communicate with one another to each connect to a publicly-accessible relay server, which servers are associated with an overlay network and are selected by a directory service. After end-to-end connectivity is established, preferably the hosts communicate with each other by relaying data packets via the overlay network relay servers. Communications (both connection control messages and data being relayed) between a host and a relay server occur at an application layer using a modified version of the TURN protocol.03-20-2014
20140059168Hybrid HTTP and UDP content delivery - A hybrid HTTP/UDP delivery protocol provides significant improvements for delivery of video and other content over a network, such as an overlay. The approach is especially useful to address problems (e.g., slow startup times, rebuffering, and low bitrates) for HTTP-based streaming. In general, the protocol has two phases: an HTTP phase, and a UDP phase. In the HTTP phase, the client sends an HTTP GET request to a server. The GET request contains a transport header informing the server that the client would like to use UDP-based transfer over the protocol. The server may refuse this mode and continue in ordinary HTTP mode, or the server may respond by sending an empty response with header information informing the client how to make the connection to enter the UDP phase. In the UDP phase, the client initiates a connection and receives the originally-requested content over UDP.02-27-2014
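A minimal sketch of the client side of the HTTP phase described in 20140059168, assuming a hypothetical request header through which the client offers UDP-based transfer and a hypothetical response header through which the server says where to connect for the UDP phase; the header names and host are invented, since the abstract does not specify them.

```python
import http.client

def negotiate_udp(host, path):
    """HTTP phase: request the object, offering to switch to UDP-based transfer."""
    conn = http.client.HTTPConnection(host, timeout=5)
    # Hypothetical header advertising the client's willingness to use UDP.
    conn.request("GET", path, headers={"X-Transport": "udp"})
    resp = conn.getresponse()
    body = resp.read()
    udp_endpoint = resp.getheader("X-UDP-Endpoint")   # hypothetical header
    if udp_endpoint:
        # Empty HTTP response: the server accepted; enter the UDP phase.
        udp_host, port = udp_endpoint.rsplit(":", 1)
        return ("udp", udp_host, int(port))
    # Server refused (or lacks support): the body already arrived over ordinary HTTP.
    return ("http", body, None)

# mode, *details = negotiate_udp("media.example.com", "/video/seg1.ts")
```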
20140056136Preventing TCP from becoming too conservative too quickly - A technique that addresses the problem of a TCP connection's throughput being very vulnerable to early losses implements a pair of controls around ssthresh. A first control is a loss forgiveness mechanism that applies to the first n-loss events by the TCP connection. Generally, this mechanism prevents new TCP connections from ending slow-start and becoming conservative on window growth too early (which would otherwise occur due to the early losses). The second control is a self-decay mechanism that is applied beyond the first n-losses that are handled by the first control. This mechanism decouples the ssthresh drop from cwnd and is thus useful in arresting otherwise steep ssthresh drops. The self-decay mechanism also enables TCP to enter or remain in slow-start even after fast recovery from a loss event.02-27-2014
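A minimal sketch, in simulation form, of the two controls described in 20140056136: the first few loss events are forgiven (ssthresh is not collapsed onto the current cwnd), and later losses decay ssthresh gently instead of tying it to the halved cwnd. The constants are illustrative, not taken from the application.

```python
def on_loss(cwnd, ssthresh, loss_count, n_forgiven=3, decay=0.9):
    """Update (cwnd, ssthresh) after a loss event.

    Standard TCP would set ssthresh = cwnd / 2, which makes early losses very
    costly. Here the first n_forgiven losses leave ssthresh alone (loss
    forgiveness), and later losses decay ssthresh independently of cwnd
    (self-decay), arresting otherwise steep ssthresh drops.
    """
    cwnd = max(cwnd / 2.0, 1.0)                  # cwnd still reacts to the loss
    if loss_count <= n_forgiven:
        return cwnd, ssthresh                    # forgiveness: keep slow-start alive
    return cwnd, max(ssthresh * decay, cwnd)     # decouple the ssthresh drop from cwnd

cwnd, ssthresh = 10.0, 64.0
for loss in range(1, 6):
    cwnd, ssthresh = on_loss(cwnd, ssthresh, loss)
    print(f"loss {loss}: cwnd={cwnd:.1f} ssthresh={ssthresh:.1f}")
```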
20140052811Dynamic content assembly on edge-of-network servers in a content delivery network - Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching objects that comprise dynamically-generated pages at the edge of the Internet, close to the end user. Instead of the page being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server, where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page, the edge server examines its cache for the included fragments and assembles the page on-the-fly.02-20-2014
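A minimal sketch of edge-side assembly in the spirit of 20140052811: a cached page template contains ESI-style include tags, and the edge server replaces each one with the corresponding fragment, fetching a fragment forward only when it is not already cached. The template, fragment paths, and fetch callback are illustrative.

```python
import re

INCLUDE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

def assemble_page(template: str, fragment_cache: dict, fetch_fragment) -> str:
    """Replace each <esi:include src="..."/> with its fragment, on-the-fly."""
    def resolve(match):
        src = match.group(1)
        if src not in fragment_cache:        # each fragment has its own cacheability
            fragment_cache[src] = fetch_fragment(src)
        return fragment_cache[src]
    return INCLUDE.sub(resolve, template)

template = '<html><body><esi:include src="/frag/header"/>Hello' \
           '<esi:include src="/frag/footer"/></body></html>'
cache = {"/frag/header": "<h1>Site</h1>"}
page = assemble_page(template, cache, fetch_fragment=lambda src: "<footer/>")
print(page)   # header served from cache, footer fetched forward and now cached
```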
20130311433Stream-based data deduplication in a multi-tenant shared infrastructure using asynchronous data dictionaries - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. In this approach, data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. Because the compressed objects are treated as just objects, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network (CDN) procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly.11-21-2013
20130254343SERVER WITH MESSAGE EXCHANGE ACCOUNTING - A server has a firewall module that performs accounting of traffic seen at the server. The traffic includes message exchanges, such as HTTP requests and HTTP responses. The server tests the message exchanges to determine if they match any of several message exchange categories. The server keeps statistics on matching traffic, for example the rate of matching traffic generated by a particular requesting client. Typically, the server is a proxy server that is part of a content delivery network (CDN), and the message exchanges occur between a client requesting content, the proxy server, other servers in the CDN, and/or an origin server from which the proxy server retrieves requested content. Using the message exchange model and the statistics generated thereby, the server can flag particular traffic or clients, and take protective action (e.g., deny, alert). In an alternate embodiment, a central control system gathers statistics from multiple servers for analysis.09-26-2013
20130254260NETWORK THREAT ASSESSMENT SYSTEM WITH SERVERS PERFORMING MESSAGE EXCHANGE ACCOUNTING - A server has a firewall module that performs accounting of traffic seen at the server. The traffic includes message exchanges, such as HTTP requests and HTTP responses. The server tests the message exchanges to determine if they match any of several message exchange categories. The server keeps statistics on matching traffic, for example the rate of matching traffic generated by a particular requesting client. Typically, the server is a proxy server that is part of a content delivery network (CDN), and the message exchanges occur between a client requesting content, the proxy server, other servers in the CDN, and/or an origin server from which the proxy server retrieves requested content. Using the message exchange model and the statistics generated thereby, the server can flag particular traffic or clients, and take protective action (e.g., deny, alert). In an alternate embodiment, a central control system gathers statistics from multiple servers for analysis.09-26-2013
20130254247Scalable, high performance and highly available distributed storage system for Internet content - A method for content storage on behalf of participating content providers begins by having a given content provider identify content for storage. The content provider then uploads the content to a given storage site selected from a set of storage sites. Following upload, the content is replicated from the given storage site to at least one other storage site in the set. Upon request from a given entity, a given storage site from which the given entity may retrieve the content is then identified. The content is then downloaded from the identified given storage site to the given entity. In an illustrative embodiment, the given entity is an edge server of a content delivery network (CDN).09-26-2013
20130246612HTML delivery from edge-of-network servers in a content delivery network (CDN) - A content delivery network provides delivery of cacheable content files, such as HTML. To support HTML delivery, the content provider provides the CDNSP with an association of the content provider's domain name to an origin server domain name at which default HTML files are published. The CDNSP provides its customer with a CDNSP-specific domain name. The content provider then implements DNS entry aliasing so that domain name requests for the host cue the CDN DNS request routing mechanism. This mechanism identifies a content server to respond to a request directed to the customer's domain. The CDN content server returns a default HTML file if such file is cached; otherwise, the content server directs a request for the file to the origin server to retrieve the file, after which the file is cached on the content server for subsequent use.09-19-2013
20130232249Forward request queuing in a distributed edge processing environment - A server in a distributed environment includes a process that manages incoming client requests and selectively forwards service requests to other servers in the network. The server includes storage in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the requests, or some of them, to another server. The queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, etc. In one embodiment, the server generates an estimate of a current load on an origin server (to which it is sending forwarding requests) and instantiates the forward request queuing when that current load is reached.09-05-2013
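A minimal sketch of the metadata-controlled forwarding queue described in 20130232249, assuming metadata that specifies the queue capacity, how long a request may wait, and what to do when the queue is full; the metadata field names and values are invented for illustration.

```python
import time
from collections import deque

class ForwardQueue:
    """Aggregate forward (origin-bound) service requests and release them selectively."""

    def __init__(self, metadata):
        self.max_size = metadata["max_queue_size"]
        self.max_wait = metadata["max_wait_seconds"]
        self.on_full = metadata["on_full"]          # e.g. "reject" or "serve-stale"
        self.queue = deque()

    def enqueue(self, request):
        if len(self.queue) >= self.max_size:
            return self.on_full                      # action taken toward the client
        self.queue.append((time.time(), request))
        return "queued"

    def release(self, batch_size):
        """Release up to batch_size requests, dropping ones that waited too long."""
        now, released = time.time(), []
        while self.queue and len(released) < batch_size:
            queued_at, request = self.queue.popleft()
            if now - queued_at <= self.max_wait:
                released.append(request)
        return released

q = ForwardQueue({"max_queue_size": 100, "max_wait_seconds": 5, "on_full": "serve-stale"})
q.enqueue({"path": "/api/data"})
print(q.release(batch_size=10))
```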
20130219024METHODS AND APPARATUS FOR ACCELERATING CONTENT AUTHORED FOR MULTIPLE DEVICES - Disclosed herein are systems, methods, and apparatus for improving the delivery of web content that has been authored for multiple devices. In certain embodiments, an intermediary device such as a proxy server determines the characteristics of a client device requesting multi-device content, obtains and examines the multi-device content, and in view of the particular requesting client device removes portions that are irrelevant for that device. Doing so can accelerate delivery of the content by reducing payload and relieving the client device of the processing burden associated with parsing the content to make that determination itself, among other things.08-22-2013
20130198387SYSTEMS AND METHODS FOR DETERMINING METRICS OF MACHINES PROVIDING SERVICES TO REQUESTING CLIENTS - A method for determining metrics of a content delivery and global traffic management network provides service metric probes that determine the service availability and metric measurements of types of services provided by a content delivery machine. Latency probes are also provided for determining the latency of various servers within a network. Service metric probes consult a configuration file containing each DNS name in its area and the set of services. Each server in the network has a metric test associated with each service it supports; the service metric probes periodically perform these tests and record the results, which are periodically sent to all of the DNS servers in the network. DNS servers use the test result updates to determine the best server to return for a given DNS name. The latency probe calculates the latency from its location to a client's location using the round trip time for sending a packet to the client to obtain the latency value for that client. The latency probe updates the DNS servers with the clients' latency data. The DNS server uses the latency test data updates to determine the closest server to a client.08-01-2013
20130191499Multi-domain configuration handling in an edge network server - An Internet infrastructure delivery platform operated by a provider enables HTTP-based service to identified third parties at large scale. The platform provides this service to one or more cloud providers. The approach enables the CDN platform provider (the first party) to service third party traffic on behalf of the cloud provider (the second party). In operation, an edge server handling mechanism leverages DNS to determine if a request with an unknown host header should be serviced. Before serving a response, and assuming the host header includes an unrecognized name, the edge server resolves the host header and obtains an intermediate response, typically a list of aliases (e.g., DNS CNAMEs). The edge server checks the returned CNAME list to determine how to respond to the original request. Using just a single edge configuration, the CDN service provider can support instant provisioning of a cloud provider's identified third party traffic.07-25-2013
20130185387Host/path-based data differencing in an overlay network using a compression and differencing engine - A data differencing technique enables a response from a server to the request of a client to be composed of data differences from previous versions of the requested resource. To this end, data differencing-aware processes are positioned, one at or near the origin server (on the sending side) and the other at the edge closest to the end user (on the receiving side), and these processes maintain object dictionaries. The data differencing-aware processes each execute a compression and differencing engine. Whenever requested objects flow through the sending end, the engine replaces the object data with pointers into the object dictionary. On the receiving end of the connection, when the data arrives, the engine reassembles the data using the same object dictionary. The approach is used for version changes within a same host/path, using the data differencing-aware processes to compress data being sent from the sending peer to the receiving peer.07-18-2013
20130179567Network performance monitoring in a content delivery system - A method for Internet delivery in a delivery network established at network locations, the delivery network comprising a plurality of content servers for serving resources. The servers include a plurality of subsets, each subset being located at one of a plurality of Internet data centers. For each Internet Protocol (IP) address block from which requests for content resources are expected to be received, the method generates a candidate list of data centers to be used to service the requests. For the IP address block, the method selects at least one of the data centers from the candidate list. The selected Internet data center for the IP address block is written into a network map. In response to a DNS query, the map is used to identify one of the Internet data centers from the candidate list to be used to service a request for a content resource.07-11-2013
20130167193Security policy editor - A shared computing infrastructure has associated therewith a portal application through which users access the infrastructure and provision one or more services, such as content storage and delivery. The portal comprises a security policy editor, a web-based configuration tool that is intended for use by customers to generate and apply security policies to their media content. The security policy editor provides the user the ability to create and manage security policies, to assign policies so created to desired media content and/or player components, and to view information regarding all of the customer's current policy assignments. The editor provides a unified interface to configure all media security services that are available to the CDN customer from a single interface, and to enable the configured security features to be promptly propagated and enforced throughout the overlay network infrastructure. The editor advantageously enables security features to be configured independently of a delivery configuration.06-27-2013
20130166634ASSESSMENT OF CONTENT DELIVERY SERVICES USING PERFORMANCE MEASUREMENTS FROM WITHIN AN END USER CLIENT APPLICATION - A system for measuring and monitoring performance of online content is provided. In one embodiment, the system includes an intermediary device, such as a web proxy, that receives client requests for content, such as requests for web pages. The device obtains the requested content, modifies it by applying one or more performance optimizations, and serves it to the client. The device also inserts code into the content for execution by the client to gather and report data reflecting, e.g., how quickly the client is able to get and process the content. The code includes information identifying the modifications the device made, and this is reported with the timing data, so that the effect on performance can be analyzed. In other embodiments, the device selects one of multiple versions of content, and the inserted code contains information identifying the selected version. The foregoing are merely examples; other embodiments are described herein.06-27-2013
20130159469Methods and apparatus for image delivery - A dynamic image delivery system receives a client request for an image at an image caching server. The image caching server measures the client's network access speed and looks for an appropriate pre-rendered copy of the requested image that is rendered for the client's network access speed in local storage. If the appropriate rendered copy is found, then the image caching server sends the rendered image to the client. If it is not found, then the image caching server dynamically renders a copy of the image and sends it to the client.06-20-2013
20130156189Terminating SSL connections without locally-accessible private keys - An Internet infrastructure delivery platform (e.g., operated by a service provider) provides an RSA proxy “service” as an enhancement to the SSL protocol that off-loads the decryption of the encrypted pre-master secret (ePMS) to an external server. Using this service, instead of decrypting the ePMS “locally,” the SSL server proxies (forwards) the ePMS to an RSA proxy server component and receives, in response, the decrypted pre-master secret. In this manner, the decryption key does not need to be stored in association with the SSL server.06-20-2013
20130144817Parallel training of a Support Vector Machine (SVM) with distributed block minimization - A method to solve large scale linear SVM that is efficient in terms of computation, data storage and communication requirements. The approach works efficiently over very large datasets, and it does not require any master node to keep any examples in its memory. The algorithm assumes that the dataset is partitioned over several nodes on a cluster, and it performs “distributed block minimization” to achieve the desired results. Using the described approach, the communication complexity of the algorithm is independent of the number of training examples.06-06-2013
20130117418HYBRID PLATFORM FOR CONTENT DELIVERY AND TRANSCODING - The subject matter herein generally relates to transcoding content, typically audio/video files though not limited to such, from one version to another in preparation for online streaming or other delivery to end users. Such transcoding may involve converting from one format to another (e.g., changing codecs or container formats), or creating multiple versions of an original source file in different bitrates, frame-sizes, or otherwise, to support distribution to a wide array of devices and to utilize performance-enhancing technologies like adaptive bitrate streaming. A transcoding platform is described herein that, in certain embodiments, leverages distributed computing techniques to transcode content in parallel across a platform of machines that are preferably idle or low-utilization resources of a content delivery network. The transcoding system also utilizes, in certain embodiments, improved techniques for segmenting the original source file so as to enable different segments to be sent to different machines for parallel transcodes.05-09-2013
20130114744SEGMENTED PARALLEL ENCODING WITH FRAME-AWARE, VARIABLE-SIZE CHUNKING - The subject matter herein generally relates to transcoding content, typically audio/video files though not limited to such, from one version to another in preparation for online streaming or other delivery to end users. Such transcoding may involve converting from one format to another (e.g., changing codecs or container formats), or creating multiple versions of an original source file in different bitrates, frame-sizes, or otherwise, to support distribution to a wide array of devices and to utilize performance-enhancing technologies like adaptive bitrate streaming. A transcoding platform is described herein that, in certain embodiments, leverages distributed computing techniques to transcode content in parallel across a platform of machines that are preferably idle or low-utilization resources of a content delivery network. The transcoding system also utilizes, in certain embodiments, improved techniques for segmenting the original source file so as to enable different segments to be sent to different machines for parallel transcodes.05-09-2013
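
A small Python sketch of frame-aware, variable-size chunking, under the assumption that a keyframe byte-offset index is available from a prior pass over the source container: chunk boundaries land only on keyframes so each piece can be transcoded independently, while chunk sizes float around a target rather than being fixed.

    # Cut the source only at keyframe offsets, aiming for a target chunk size.
    def chunk_at_keyframes(keyframe_offsets, file_size, target_bytes=64 * 1024 * 1024):
        cuts = [0]
        for off in keyframe_offsets:
            if off - cuts[-1] >= target_bytes:      # close the chunk at the next keyframe
                cuts.append(off)
        if cuts[-1] != file_size:
            cuts.append(file_size)                  # final (possibly short) chunk
        return list(zip(cuts, cuts[1:]))            # (start, end) byte ranges per worker

    # Example: keyframes roughly every 10 MB in a 300 MB file -> chunks near 70 MB.
    ranges = chunk_at_keyframes([i * 10_000_000 for i in range(1, 30)], 300_000_000)
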
20130111004File manager having an HTTP-based user interface05-02-2013
20130103782APPARATUS AND METHOD FOR CACHING OF COMPRESSED CONTENT IN A CONTENT DELIVERY NETWORK - A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides “on-the-fly” compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags.04-25-2013
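
In Python terms, the dual path might look like the sketch below, with a plain dict standing in for the edge server's object cache and gzip as the compression routine; the cache-entry shape is an assumption made for illustration.

    import gzip

    cache = {}   # url -> (body, is_gzipped); stand-in for the edge object cache

    def serve(url: str, accept_encoding: str):
        body, is_gzipped = cache[url]
        wants_gzip = "gzip" in accept_encoding
        if wants_gzip and is_gzipped:
            return body, {"Content-Encoding": "gzip"}                  # cached in compressed form
        if wants_gzip and not is_gzipped:
            return gzip.compress(body), {"Content-Encoding": "gzip"}   # compressed "on-the-fly"
        if is_gzipped:
            return gzip.decompress(body), {}                           # client cannot accept gzip
        return body, {}
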
20130097291Hybrid content delivery network (CDN) and peer-to-peer (P2P) network - A content delivery network (CDN) typically includes a mapping system for directing requests to CDN servers. One or more peer machines become associated with the CDN, and the CDN mapping system is then used to enable a given peer to locate another peer in the P2P network, and/or a CDN server. Using this hybrid approach, CDN customer content may be delivered from the CDN edge network, from the P2P network, or from both networks. In one embodiment, customer content is uploaded to the CDN and stored in the edge network, or in a storage network associated therewith. The CDN edge network is then used to prime the P2P network, which may be used to take over some of the content delivery requirements for the customer content. The decision of whether to use edge network or peer network resources for delivery may be based on load and traffic conditions.04-18-2013
20130042328Enforcing single stream per sign-on from a content delivery network (CDN) media server - An apparatus for enforcing a media stream delivery restriction uses a stream control service (SCS). The SCS is implemented in a distributed network, such as a CDN, in which a given media stream is delivered to authorized end users from multiple delivery servers, but where an authorized end user is associated with a single log-in identifier that is not intended to be shared with other end users. According to the method, an enforcement server of the SCS identifies first and second copies of the given media stream associated with the single log-in identifier being delivered from multiple delivery servers. It then issues a message to terminate delivery of the given media stream from at least one of the multiple delivery servers.02-14-2013
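
One way to picture the enforcement server's bookkeeping is the Python sketch below, with an in-memory dict standing in for whatever shared state the stream control service keeps; the report/terminate message shapes are assumptions for illustration.

    active = {}   # log-in identifier -> (delivery server, stream id)

    def report_stream(login_id, server, stream_id):
        # Called when a delivery server begins serving a stream for this log-in.
        prior = active.get(login_id)
        active[login_id] = (server, stream_id)
        if prior and prior[0] != server:
            # Same sign-on seen on a second delivery server: terminate one copy.
            return {"action": "terminate", "server": prior[0], "stream": prior[1]}
        return {"action": "allow"}
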
20130024503Using virtual domain name service (DNS) zones for enterprise content delivery - A domain to be published to an enterprise ECDN is associated with a set of one or more enterprise zones configurable in a hierarchy. When a DNS query arrives for a hostname known to be associated with given content within the control of the ECDN, a DNS server responds by handing back an IP address, by executing a zone referral to a next (lower) level name server in a zone hierarchy, or by CNAMing to another hostname, thereby restarting the lookup procedure. At any level in the zone hierarchy, there is an associated zone server that executes logic that applies the requested hostname against a map. A name query to ECDN-managed content may be serviced in coordination with various sources of distributed network intelligence.01-24-2013
20130019311Method and system for handling computer network attacks - A method and apparatus for serving content requests using global and local load balancing techniques is provided. Web site content is cached using two or more point of presences (POPs), wherein each POP has at least one DNS server. Each DNS server is associated with the same anycast IP address. A domain name resolution request is transmitted to the POP in closest network proximity for resolution based on the anycast IP address. Once the domain name resolution request is received at a particular POP, local load balancing techniques are performed to dynamically select the appropriate Web server at the POP for use in resolving the domain name resolution request. Approaches are described for handling bursts of traffic at a particular POP, security, and recovering from the failure of various components of the system.01-17-2013
20130007282Method of load balancing edge-enabled applications in a content delivery network (CDN) - A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprise a region of a content delivery network. Each server in the set typically includes a server manager process, and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.01-03-2013
20130007228Method and system for purging content from a content delivery network - A content file purge mechanism for a content delivery network (CDN) is described. A Web-enabled portal is used by CDN customers to enter purge requests securely. A purge request identifies one or more content files to be purged. The purge request is pushed over a secure link from the portal to a purge server, which validates purge requests from multiple CDN customers and batches the requests into an aggregate purge request. The aggregate purge request is pushed from the purge server to a set of staging servers. Periodically, CDN content servers poll the staging servers to determine whether an aggregate purge request exists. If so, the CDN content servers obtain the aggregate purge request and process the request to remove the identified content files from their local storage.01-03-2013
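
The content-server side of that cycle could be sketched in Python as below, assuming the staging servers expose the latest aggregate purge request as JSON with a monotonically increasing serial; the URL, payload shape, and local_cache mapping are illustrative assumptions.

    import json, urllib.request

    STAGING_URL = "http://staging.internal.example/purge/latest"   # hypothetical
    last_seen_serial = 0

    def poll_and_purge(local_cache: dict) -> int:
        global last_seen_serial
        with urllib.request.urlopen(STAGING_URL) as resp:
            batch = json.load(resp)                  # e.g. {"serial": 17, "urls": [...]}
        if batch["serial"] <= last_seen_serial:
            return 0                                 # no new aggregate purge request
        removed = sum(local_cache.pop(u, None) is not None for u in batch["urls"])
        last_seen_serial = batch["serial"]
        return removed                               # objects evicted from local storage
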
20120324227System For Generating Fingerprints Based On Information Extracted By A Content Delivery Network Server - A dynamic multimedia fingerprinting system is provided. A user requests multimedia content from a Web cache server that verifies that the user is authorized to download the content. A custom fingerprint specific to the user is generated and dynamically inserted into the content as the content is delivered to the user. The custom fingerprint can be generated on the Web cache server or at the content provider's server. The system allows a content provider to specify where the custom fingerprint is inserted into the content or where the fingerprint is to replace a placeholder within the content.12-20-2012
20120324060Method of data collection among participating content providers in a distributed network - A content delivery network (CDN) service provider extends a content delivery network to gather information on atomically identifiable web clients (called “user agents”) as such computer-implemented entities interact with the CDN across different domains being managed by the CDN service provider. The data system tracks user agents, preferably via cookies, although one or more passive techniques may be used. A user agent may be a cookie-able device having a cookie store. As the user agent navigates across sites, a CDN-specific unique identifier used by the system to correlate user agents is generated. Preferably, the unique identifier is stored as an encrypted cookie. The unique identifier represents one user agent (and, thus, one cookie-able device's store). The system tracks user agent behavior on and across customer sites that are served by the CDN, and these behaviors are classified into identifiable “segments” that may be used to create a profile.12-20-2012
20120311648Automatic migration of data via a distributed computer network - A method and apparatus for the automatic migration of data via a distributed computer network allows a customer to select content files that are to be transferred to a group of edge servers. Origin sites store all of a customer's available content files. An edge server maintains a dynamic number of popular files in its memory for the customer. The files are ranked from most popular to least popular and when a file has been requested from an edge server a sufficient number of times to become more popular than the least popular stored file, the file is obtained from an origin site. The edge servers are grouped into two service levels: regional and global. The customer is charged a higher fee to store its popular files on the global edge servers compared to a regional set of edge servers because of greater coverage.12-06-2012
20120303804Method and system for providing on-demand content delivery for an origin server - An infrastructure “insurance” mechanism enables a Web site to fail over to a content delivery network (CDN) upon a given occurrence at the site. Upon such occurrence, at least some portion of the site's content is served preferentially from the CDN so that end users that desire the content can still get it, even if the content is not then available from the origin site. In operation, content requests are serviced from the site in the usual manner, e.g., by resolving DNS queries to the site's IP address, until detection of the given occurrence. Thereafter, DNS queries are managed by a CDN dynamic DNS-based request routing mechanism so that such queries are resolved to optimal CDN edge servers. After the event that caused the occurrence has passed, control of the site's DNS may be returned from the CDN back to the origin server's DNS mechanism.11-29-2012
20120290737Method and system for enhancing live stream delivery quality using prebursting - A method accelerates the delivery of a portion of a data stream across nodes of a stream transport network. A portion of a live stream is forwarded from a first node to a second node in a transport network at a high bitrate as compared to the stream's encoded bitrate, and thereafter, the stream continues to be forwarded from the first node to the second node at or near the encoded bitrate. This technique provides significant advantages in that it reduces stream startup time, reduces unrecoverable stream packet loss, and reduces stream rebuffers as the stream is viewed by a requesting end user that has been mapped to a media server in a distributed computer network such as a content delivery network.11-15-2012
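
The pacing decision behind this "prebursting" technique can be pictured with the Python sketch below; the 4x burst factor and 10-second burst window are illustrative parameters, not values taken from the patent.

    import time

    def forward_stream(chunks, encoded_bps, send, burst_factor=4, burst_seconds=10):
        # chunks: iterable of byte strings; send: ships a chunk to the next node.
        start = time.monotonic()
        for chunk in chunks:
            send(chunk)
            bursting = (time.monotonic() - start) < burst_seconds
            rate = encoded_bps * (burst_factor if bursting else 1)
            time.sleep(len(chunk) * 8 / rate)        # pace the next chunk at `rate`
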
20120275597Extending data confidentiality into a player application - In a content protection scheme, and in response to a request for a content segment received by a server, the server generates and associates with the segment a message that confers entitlement to a session-specific key from which one or more decryption keys may be derived. The decryption keys are useful to decrypt the segment at runtime as it is about to be rendered by a player. Before delivery, the server encrypts the segment to generate an encrypted fragment, and it then serves the encrypted fragment (and the message) in response to the request. At the client, information in the message is used to obtain the session-specific key. Using that key, the decryption keys are derived, and those keys are then used to decrypt the received encrypted fragment. The decryption occurs at runtime. The approach protects content while in transit to and at rest in the client browser environment.11-01-2012
20120265853FORMAT-AGNOSTIC STREAMING ARCHITECTURE USING AN HTTP NETWORK FOR STREAMING - This patent document describes, among other things, distributed computer platforms for online delivery of multimedia, including HD video, at broadcast audience scale to a variety of runtime environments and client devices in both fixed line and mobile environments. The teachings hereof can be applied to deliver live and on-demand content streams via computer networks. The teachings also relate to the ingestion of content streams in a given source format and the serving of the stream in a given target format. For example, a system might have machines in a content delivery network that ingest live streams in a source format, use an intermediate format to transport the stream within the system, and output the stream in a target format to clients that have requested (e.g., with an HTTP request) the stream. The streams may be archived for later playback.10-18-2012
20120259942Proxy server with byte-based include interpreter - According to this disclosure, a proxy server is enhanced to be able to interpret instructions that specify how to modify an input object to create an output object to serve to a requesting client. Typically the instructions operate on binary data. For example, the instructions can be interpreted in a byte-based interpreter that directs the proxy as to what order, and from which source, to fill an output buffer that is served to the client. The instructions specify what changes to make to a generic input file. This functionality extends the capability of the proxy server in an open-ended fashion and enables it to efficiently create a wide variety of outputs for a given generic input file. The generic input file and/or the instructions may be cached at the proxy. The teachings hereof have applications in, among other things, the delivery of web content, streaming media, and the like.10-11-2012
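
A minimal Python sketch of a byte-based include interpreter: an instruction list says, in order, which bytes fill the output buffer, either literal bytes carried in the instructions or (offset, length) slices of the cached generic input file. The two-tuple instruction encoding is an assumption made for illustration.

    def assemble(instructions, generic_input: bytes) -> bytes:
        out = bytearray()
        for op, arg in instructions:
            if op == "literal":                      # bytes supplied by the instructions
                out += arg
            elif op == "copy":                       # slice of the generic input file
                offset, length = arg
                out += generic_input[offset:offset + length]
            else:
                raise ValueError(f"unknown instruction {op!r}")
        return bytes(out)

    # Example: prepend a per-request header, then reuse two ranges of the cached file.
    output = assemble(
        [("literal", b"X-Variant: 7\r\n"), ("copy", (0, 100)), ("copy", (4096, 2048))],
        generic_input=b"\x00" * 8192,
    )
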
20120246273Optimal route selection in a content delivery network - A routing mechanism operable in a distributed networking environment, such as a content delivery network (CDN), provides improved connectivity back to an origin server, especially for HTTP traffic. The technique enables an edge server operating within a given edge region to retrieve content (cacheable, non-cacheable and the like) from an origin server more efficiently by selectively routing through the network's own nodes, thereby avoiding network congestion and hot spots. The technique enables an edge server to fetch content from an origin server through an intermediate edge server or, more generally, enables an edge server within a given first region to fetch content from the origin server through an intermediate edge region.09-27-2012
20120226649Content delivery network (CDN) cold content handling - A method of content delivery in a content delivery network (CDN), where the CDN is deployed, operated and managed by a content delivery network service provider (CDNSP). The CDN comprises a set of content servers and a domain name system (DNS). For a given content provider, a determination is first made whether the content provider has “cold content” delivery requirements by evaluating one or more factors that include: total content size, size of content objects expected to be served, uniqueness of content, total number of content objects, and a percentage of the total content size that is expected to account for a given percentage of traffic. Upon a determination that the content provider has cold content delivery requirements, a subset of the CDN content servers are configured to implement a set of one or more handling rules for managing delivery of the cold content from the CDN content servers.09-06-2012
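
The determination step might reduce to something like the Python sketch below, combining the listed factors into a single provisioning decision; every threshold shown is a made-up illustration, not a value from the patent.

    def has_cold_content(total_size_tb, avg_object_mb, unique_objects,
                         pct_bytes_from_top_decile):
        indicators = [
            total_size_tb > 50,               # very large total content size
            avg_object_mb < 5,                # many small objects expected to be served
            unique_objects > 10_000_000,      # huge, highly unique catalog
            pct_bytes_from_top_decile < 50,   # traffic not concentrated in a few objects
        ]
        return sum(indicators) >= 3           # majority of cold-content indicators present

    if has_cold_content(120, 2, 40_000_000, 35):
        pass  # configure a subset of content servers with cold-content handling rules
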
20120215938Reliable, high-throughput, high-performance transport and routing mechanism for arbitrary data flows - The present invention leverages an existing content delivery network infrastructure to provide a system that enhances performance for any application that uses the Internet Protocol (IP) as its underlying transport mechanism. An overlay network comprises a set of edge nodes, intermediate nodes, and gateway nodes. This network provides optimized routing of IP packets. Internet application users can use the overlay to obtain improved performance during normal network conditions, to obtain or maintain good performance where normal default BGP routing would otherwise force the user over congested or poorly performing paths, or to enable the user to maintain communications to a target server application even during network outages.08-23-2012
20120204025SYSTEM AND METHOD FOR CLIENT-SIDE AUTHENTICATION FOR SECURE INTERNET COMMUNICATIONS - A system and method for client-side authentication for secure Internet communications is disclosed. In one embodiment, an intermediate device receives a web browser secure socket layer certificate from a web browser, authenticates the web browser using the secure socket layer certificate, and then re-signs the secure socket layer certificate with an intermediate device public key and an intermediate device certificate authority signature. The intermediate device sends the re-signed secure socket layer certificate to a web server and the web server authenticates the intermediate device using the re-signed secure socket layer certificate. In another embodiment, an intermediate device receives a web browser secure socket layer certificate from a web browser, inserts the web browser secure socket layer certificate into a HTTP header of a packet, and sends the packet to a web server.08-09-2012
20120203873Dynamic content assembly on edge-of-network servers in a content delivery network - Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching objects that comprise dynamically-generated pages at the edge of the Internet, close to the end user. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page, the edge server examines its cache for the included fragments and assembles the page on-the-fly.08-09-2012
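
A toy Python sketch of the assembly step, assuming fragments are referenced with ESI include tags and that each fragment carries its own freshness flag in the cache; the regex parsing and the fetch_from_origin stub are simplifications for illustration.

    import re

    ESI_INCLUDE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')
    fragment_cache = {}   # src -> (body, is_fresh); per-fragment cacheability profile

    def fetch_from_origin(src: str) -> str:
        raise NotImplementedError("origin fetch omitted from this sketch")

    def assemble_page(template: str) -> str:
        def splice(match):
            src = match.group(1)
            entry = fragment_cache.get(src)
            if entry and entry[1]:              # fragment cached and still fresh
                return entry[0]
            body = fetch_from_origin(src)       # refresh on a miss or stale entry
            fragment_cache[src] = (body, True)
            return body
        return ESI_INCLUDE.sub(splice, template)   # page assembled on-the-fly at the edge
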
20120203861METHODS AND SYSTEMS FOR DELIVERING CONTENT TO DIFFERENTIATED CLIENT DEVICES - Methods and systems are disclosed for delivery of tailored content to differentiated devices, such as desktop, mobile, and tablet devices, over a computer network. In one embodiment, a proxy cache server has a content cache for storing previously retrieved objects like web pages or multimedia files. For at least some objects, several versions are stored, each version representing an object suited for a given set of client device characteristics. A device-equivalency data structure maintained at the proxy facilitates a determination of whether such cached versions can be used to service a current request. The versions might represent, for example, modified versions created using, e.g., mobile device transcoding techniques, in response to prior requests. They may also represent a set of alternate content created by a content provider and available from an origin server. Such methods and systems may be implemented in distributed computing networks, e.g., a content delivery network.08-09-2012
20120179814Determination and use of metrics in a domain name service (DNS) system - A method for determining metrics of a content delivery and global traffic management network provides service metric probes that determine the service availability and metric measurements of types of services provided by a content delivery machine. Latency probes are also provided for determining the latency of various servers within a network. The latency probe calculates, for example, the latency from its location to a client's location using the round trip time for sending a packet to the client to obtain the latency value for that client. DNS servers use the latency test results, along with traffic weightings, to determine a server to return for a given DNS name.07-12-2012
20120166650Method of load balancing edge-enabled applications in a content delivery network (CDN) - A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprise a region of a content delivery network. Each server in the set typically includes a server manager process, and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.06-28-2012
20120166589CONTENT DELIVERY NETWORK FOR RFID DEVICES - A method and mechanism for a distributed on-demand computing system. The system automatically provisions distributed computing servers with customer application programs. The parameters of each customer application program are taken into account when a server is selected for hosting the program. The system monitors the status and performance of each distributed computing server. The system provisions additional servers when traffic levels exceed a predetermined level for a customer's application program and, as traffic demand decreases to a predetermined level, servers can be un-provisioned and returned back to a server pool for later provisioning. The system tries to fill up one server at a time with customer application programs before dispatching new requests to another server. The customer is charged a fee based on the usage of the distributed computing servers.06-28-2012
20120151016Content delivery network (CDN) content server request handling mechanism with metadata framework support - To serve content through a content delivery network (CDN), the CDN must have some information about the identity, characteristics and state of its target objects. Such additional information is provided in the form of object metadata, which according to the invention can be located in the request string itself, in the response headers from the origin server, in a metadata configuration file distributed to CDN servers, or in a per-customer metadata configuration file. CDN content servers execute a request identification and parsing process to locate object metadata and to handle the request in accordance therewith. Where different types of metadata exist for a particular object, metadata in a configuration file is overridden by metadata in a response header or request string, with metadata in the request string taking precedence.06-14-2012
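
The precedence rule at the end of that abstract can be written out directly; in the Python sketch below the key names are illustrative, and the only point is the override order: configuration file, then response header, then request string.

    def effective_metadata(config_file_md, response_header_md, request_string_md):
        merged = dict(config_file_md)         # lowest precedence: per-customer config file
        merged.update(response_header_md)     # overridden by origin response headers
        merged.update(request_string_md)      # request string takes precedence overall
        return merged

    md = effective_metadata(
        {"ttl": "1d", "gzip": "on"},          # metadata configuration file
        {"ttl": "4h"},                        # origin response header
        {"gzip": "off"},                      # metadata in the request string
    )
    # md == {"ttl": "4h", "gzip": "off"}
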
20120150993ASSISTED DELIVERY OF CONTENT ADAPTED FOR A REQUESTING CLIENT - Disclosed herein are methods and apparatus facilitating delivery of web content that has been adapted for particular client devices, such as mobile devices. Doing so may involve assisting a server that lacks the adaptation logic necessary to deliver adapted content to a particular client device. For example, a given web server may adapt content and serve website content to a requesting client, but another server may take over when the client desires to make a purchase at the site. That other server, while perhaps qualified to process payment information, may not be able to provide adapted content. The content adaptation web server can assist that other server to do so. In other embodiments, such a content adapting server may provide such services to a range of other servers, and itself may not serve content directly to the client. The teachings herein may be implemented within a content delivery network.06-14-2012
20120130871Content delivery network service provider (CDNSP)-managed content delivery network (CDN) for network service provider (NSP) - A CDN service provider shares its CDN infrastructure with a network to enable a network service provider (NSP) to offer a private-labeled network content delivery network (NCDN or “private CDN”) to participating content providers. The CDNSP preferably provides the hardware, software and services required to build, deploy, operate and manage the CDN for the NCDN customer. Thus, the NCDN customer has access to and can make available to participating content providers one or more of the content delivery services (e.g., HTTP delivery, streaming media delivery, application delivery, and the like) available from the global CDN without having to provide the large capital investment, R&D expense and labor necessary to successfully deploy and operate the network itself. Rather, the global CDN service provider simply operates the private CDN for the network as a managed service.05-24-2012
20120124372Protecting Websites and Website Users By Obscuring URLs - Websites and website users are subject to an increasing array of online threats and attacks. Disclosed herein are, among other things, approaches for protecting websites and website users from online threats. For example, a content server, such as a proxying content delivery network (CDN) server that is delivering content on behalf of an origin server, can modify URLs as they pass through the content server to obscured values that are given to the end-user client browser. The end-user browser can use the obscured URL to obtain content from the content server, but the URL may be valid only for a limited time, and may be invalid for obtaining content from the origin. Hence, information is hidden from the client, making attacks against the website more difficult and frustrating client-end malware that leverages knowledge of browsed URLs.05-17-2012
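
One concrete way to build such a time-limited obscured URL is sketched below in Python, using an HMAC over the real path plus an expiry; the token layout, path prefix, and shared-secret handling are assumptions for illustration rather than the patent's actual scheme.

    import base64, hashlib, hmac, time

    SECRET = b"edge-shared-secret"            # hypothetical per-site key

    def obscure(path: str, ttl_seconds: int = 300) -> str:
        expires = int(time.time()) + ttl_seconds
        msg = f"{path}|{expires}"
        sig = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()[:24]
        return "/o/" + base64.urlsafe_b64encode(f"{msg}|{sig}".encode()).decode()

    def resolve(obscured: str):
        decoded = base64.urlsafe_b64decode(obscured.removeprefix("/o/")).decode()
        msg, _, sig = decoded.rpartition("|")
        path, _, expires = msg.rpartition("|")
        expected = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()[:24]
        if not hmac.compare_digest(sig, expected) or time.time() > int(expires):
            return None                        # expired or tampered: refuse to serve
        return path                            # real URL, never revealed to the client
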
20120110148DOMAIN NAME RESOLUTION USING A DISTRIBUTED DNS NETWORK - A distributed DNS network includes a central origin server that actually controls the zone, and edge DNS cache servers configured to cache the DNS content of the origin server. The edge DNS cache servers are published as the authoritative servers for customer domains instead of the origin server. When a request for a DNS record results in a cache miss, the edge DNS cache servers get the information from the origin server and cache it for use in response to future requests. Multiple edge DNS cache servers can be deployed at multiple locations. Since an unlimited number of edge DNS cache servers can be deployed, the system is highly scalable. The disclosed techniques protect against DoS attacks, as DNS requests are not made to the origin server directly.05-03-2012
20120096546Edge server HTTP POST message processing - A CDN edge server process receives an HTTP message, takes a given action with respect to that message, and then forwards a modified version of the message to a target server, typically a server associated with a CDN customer. The process may include an associated intermediate processing agent (IPA) or a sub-processing thread to facilitate the given action. In one embodiment, the message is an HTTP POST, and the given action comprises the following: (i) recognizing the POST, (ii) removing given data from the POST, (iii) issuing an intermediate (or subordinate) request to another process (e.g., a third party server), passing the given data removed from the POST to the process, (iv) receiving a response to the intermediate request, (v) incorporating data received from or associated with the response into a new HTTP message, and (vi) forwarding the new HTTP message onto the target server. In this manner, the given data in the POST may be protected as the HTTP message “passes through” the edge server on its way from the client to the target (merchant) server. In an alternative embodiment, data extracted from the POST message is enhanced by passing the data to an externalized process and adding a derived value (such as a fraud risk score based on the data) back into the message.04-19-2012
20120096106Extending a content delivery network (CDN) into a mobile or wireline network - A content delivery network (CDN) comprises a set of edge servers, and a domain name service (DNS) that is authoritative for content provider domains served by the CDN. The CDN is extended into one or more mobile or wireline networks that cannot or do not otherwise support fully-managed CDN edge servers. In particular, an “Extender” is deployed in the mobile or wireline network, preferably as a passive web caching proxy that is beyond the edge of the CDN but that serves CDN-provisioned content under the control of the CDN. The Extender may also be used to transparently cache and serve non-CDN content. An information channel is established between the Extender and the CDN to facilitate the Extender functionality.04-19-2012
20120036238Method and system for providing on-demand content delivery for an origin server - An infrastructure “insurance” mechanism enables a Web site to fail over to a content delivery network (CDN) upon a given occurrence at the site. Upon such occurrence, at least some portion of the site's content is served preferentially from the CDN so that end users that desire the content can still get it, even if the content is not then available from the origin site. In operation, content requests are serviced from the site in the usual manner, e.g., by resolving DNS queries to the site's IP address, until detection of the given occurrence. Thereafter, DNS queries are managed by a CDN dynamic DNS-based request routing mechanism so that such queries are resolved to optimal CDN edge servers. After the event that caused the occurrence has passed, control of the site's DNS may be returned from the CDN back to the origin server's DNS mechanism.02-09-2012
20120016933Dynamic Image Delivery System - A dynamic image delivery system receives a client request for an image at an image caching server. The image caching server measures the client's network access speed and looks in local storage for an appropriate pre-rendered copy of the requested image, one rendered for the client's network access speed. If the appropriate rendered copy is found, then the image caching server sends the rendered image to the client. If it is not found, then the image caching server dynamically renders a copy of the image and sends it to the client.01-19-2012
20110307584HTML delivery from edge-of-network servers in a content delivery network (CDN) - A content delivery network is enhanced to provide for delivery of cacheable markup language content files such as HTML. To support HTML delivery, the content provider provides the CDNSP with an association of the content provider's domain name (e.g., www.customer.com) to an origin server domain name (e.g., html.customer.com) at which one or more default HTML files are published and hosted. The CDNSP provides its customer with a CDNSP-specific domain name. The content provider, or an entity on its behalf, then implements DNS entry aliasing (e.g., a CNAME of the host to the CDNSP-specific domain) so that domain name requests for the host cue the CDN DNS request routing mechanism. This mechanism then identifies a best content server to respond to a request directed to the customer's domain. The CDN content server returns a default HTML file if such file is cached; otherwise, the CDN content server directs a request for the file to the origin server to retrieve the file, after which the file is cached on the CDN content server for subsequent use in servicing other requests. The content provider is also provided with log files of CDNSP-delivered HTML.12-15-2011
20110296048Method and system for stream handling using an intermediate format - A method of delivering a live stream is implemented within a content delivery network (CDN) and includes the high level functions of recording the stream using a recording tier, and playing the stream using a player tier. The step of recording the stream includes a set of sub-steps that begins when the stream is received at a CDN entry point in a source format. The stream is then converted into an intermediate format (IF), which is an internal format for delivering the stream within the CDN and comprises a stream manifest, a set of one or more fragment indexes (FI), and a set of IF fragments. The player process begins when a requesting client is associated with a CDN HTTP proxy. In response to receipt at the HTTP proxy of a request for the stream or a portion thereof, the HTTP proxy retrieves (either from the archive or the data store) the stream manifest and at least one fragment index. Using the fragment index, the IF fragments are retrieved to the HTTP proxy, converted to a target format, and then served in response to the client request. The source format may be the same or different from the target format. Preferably, all fragments are accessed, cached and served by the HTTP proxy via HTTP.12-01-2011
20110289214Content delivery network map generation using passive measurement data - A routing method operative in a content delivery network (CDN) where the CDN includes a request routing mechanism for routing clients to subsets of edge servers within the CDN. According to the routing method, TCP connection data statistics are collected at edge servers located within a CDN region. The TCP connection data statistics are collected as connections are established between requesting clients and the CDN region and requests are serviced by those edge servers. Periodically, e.g., daily, the connection data statistics are provided from the edge servers in a region back to the request routing mechanism. The TCP connection data statistics are then used by the request routing mechanism in subsequent routing decisions and, in particular, in the map generation processes. Thus, for example, the TCP connection data may be used to determine whether a given quality of service is being obtained by routing requesting clients to the CDN region. If not, the request routing mechanism generates a map that directs requesting clients away from the CDN region for a given time period or until the quality of service improves.11-24-2011
20110283018Method and apparatus for correlating nameserver IPv6 and IPv4 addresses - A method of correlating nameserver addresses is implemented in a multi-tier name server hierarchy comprising a first level authority for a domain, and one or more second level authorities to which the first level authority delegates with respect to a particular sub-domain associated with the domain. Preferably, the first level authority is IPv4-based and at least one second level authority is IPv6-based. The first level authority responds to a request issued by a client caching nameserver (a “CCNS”) and returns an answer that includes both IPv4 and IPv6 authorities for the domain. The CCNS is located at an IPv4 source address that is passed along to the first level authority with the CCNS request. According to a feature of this disclosure, the first level authority encodes the CCNS IPv4 source address in the IPv6 destination address of at least one IPv6 authority. Then, when the CCNS makes a follow-on IPv6 request (with respect to the sub-domain) directed to the IPv6 authority, the IPv6 authority knows both the IPv6 address of the CCNS (by virtue of having received it in association with the request) as well as its IPv4 address (by virtue of the encoding). The IPv6 authority maintains the IPv4-IPv6 correlation. Over time (i.e., as other CCNSs make requests), the IPv6 authority builds up a database of these CCNS IPv6-IPv4 associations.11-17-2011
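
The encoding trick is easy to make concrete. In the Python sketch below, the first-level authority embeds the CCNS IPv4 source address in the low 32 bits of the IPv6 authority address it hands back, and the IPv6 authority recovers it from the destination of the follow-on query; the /96 delegation prefix is a documentation-range placeholder, not an address from the patent.

    import ipaddress

    AUTH_PREFIX = ipaddress.IPv6Network("2001:db8:53::/96")   # hypothetical delegation prefix

    def encode(ccns_ipv4: str) -> str:
        # First-level authority: fold the CCNS IPv4 address into the IPv6 authority address.
        low32 = int(ipaddress.IPv4Address(ccns_ipv4))
        return str(ipaddress.IPv6Address(int(AUTH_PREFIX.network_address) | low32))

    def decode(ipv6_destination: str) -> str:
        # IPv6 authority: recover the CCNS IPv4 address from the query's destination address.
        low32 = int(ipaddress.IPv6Address(ipv6_destination)) & 0xFFFFFFFF
        return str(ipaddress.IPv4Address(low32))

    addr = encode("198.51.100.7")       # -> 2001:db8:53::c633:6407
    assert decode(addr) == "198.51.100.7"
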
20110282990Method and system for constraining server usage in a distributed network - A “velvet rope” mechanism that enables customers of a shared distributed network (such as a content delivery network) needing to control their costs to control the amount of traffic that is served via the shared network. A given server in the distributed network identifies when a customer is about to exceed a bandwidth quota as a rate (bursting) or for a given billing period (e.g., total megabytes (MB) served for a given period) and provides a means for taking a given action based on this information. Typically, the action taken would result in a reduction in traffic served so that the customer can constrain its usage of the shared network to a given budget value.11-17-2011
20110231515Transparent Session Persistence Management by a Server in a Content Delivery Network - A method and apparatus for establishing session persistence between a client and an origin server are provided. The session persistence can be managed by an intermediate cache server. The persistence is established by inserting an identifier and origin server address in a cookie or URL. Alternatively, the persistence is established by a table mapping a source IP address or a session ID to a specific origin server. Subsequent requests from the same client are mapped to the same origin server using these methods of establishing persistence.09-22-2011
20110225647Cloud Based Firewall System And Service - A cloud-based firewall system and service is provided to protect customer sites from attacks, leakage of confidential information, and other security threats. In various embodiments, such a firewall system and service can be implemented in conjunction with a content delivery network (CDN) having a plurality of distributed content servers. The CDN servers receive requests for content identified by the customer for delivery via the CDN. The CDN servers include firewalls that examine those requests and take action against security threats, so as to prevent them from reaching the customer site. The CDN provider implements the firewall system as a managed firewall service, with the operation of the firewalls for given customer content being defined by that customer, independently of other customers. In some embodiments, a customer may define different firewall configurations for different categories of that customer's content identified for delivery via the CDN.09-15-2011
20110219108Scalable, high performance and highly available distributed storage system for Internet content - A method for content storage on behalf of participating content providers begins by having a given content provider identify content for storage. The content provider then uploads the content to a given storage site selected from a set of storage sites. Following upload, the content is replicated from the given storage site to at least one other storage site in the set. Upon request from a given entity, a given storage site from which the given entity may retrieve the content is then identified. The content is then downloaded from the identified given storage site to the given entity. In an illustrative embodiment, the given entity is an edge server of a content delivery network (CDN).09-08-2011
20110213882Method and system for handling computer network attacks - A method and apparatus for serving content requests using global and local load balancing techniques is provided. Web site content is cached using two or more point of presences (POPs), wherein each POP has at least one DNS server. Each DNS server is associated with the same anycast IP address. A domain name resolution request is transmitted to the POP in closest network proximity for resolution based on the anycast IP address. Once the domain name resolution request is received at a particular POP, local load balancing techniques are performed to dynamically select the appropriate Web server at the POP for use in resolving the domain name resolution request. Approaches are described for handling bursts of traffic at a particular POP, security, and recovering from the failure of various components of the system.09-01-2011
20110196943Optimal route selection in a content delivery network - A routing mechanism, service or system operable in a distributed networking environment. One preferred environment is a content delivery network (CDN) wherein the present invention provides improved connectivity back to an origin server, especially for HTTP traffic. In a CDN, edge servers are typically organized into regions, with each region comprising a set of content servers that preferably operate in a peer-to-peer manner and share data across a common backbone such as a local area network (LAN). The inventive routing technique enables an edge server operating within a given CDN region to retrieve content (cacheable, non-cacheable and the like) from an origin server more efficiently by selectively routing through the CDN's own nodes, thereby avoiding network congestion and hot spots. The invention enables an edge server to fetch content from an origin server through an intermediate CDN server or, more generally, enables an edge server within a given first region to fetch content from the origin server through an intermediate CDN region.08-11-2011
20110191449Automatic migration of data via a distributed computer network - A method and apparatus for the automatic migration of data via a distributed computer network allows a customer to select content files that are to be transferred to a group of edge servers. Origin sites store all of a customer's available content files. An edge server maintains a dynamic number of popular files in its memory for the customer. The files are ranked from most popular to least popular and when a file has been requested from an edge server a sufficient number of times to become more popular than the least popular stored file, the file is obtained from an origin site. The edge servers are grouped into two service levels: regional and global. The customer is charged a higher fee to store its popular files on the global edge servers compared to a regional set of edge servers because of greater coverage.08-04-2011
20110173345Method and system for HTTP-based stream delivery - A method of delivering a live stream is implemented within a content delivery network (CDN) and includes the high level functions of recording the stream using a recording tier, and playing the stream using a player tier. The step of recording the stream includes a set of sub-steps that begins when the stream is received at a CDN entry point in a source format. The stream is then converted into an intermediate format (IF), which is an internal format for delivering the stream within the CDN and comprises a stream manifest, a set of one or more fragment indexes (FI), and a set of IF fragments. The player process begins when a requesting client is associated with a CDN HTTP proxy. In response to receipt at the HTTP proxy of a request for the stream or a portion thereof, the HTTP proxy retrieves (either from the archive or the data store) the stream manifest and at least one fragment index. Using the fragment index, the IF fragments are retrieved to the HTTP proxy, converted to a target format, and then served in response to the client request. The source format may be the same or different from the target format. Preferably, all fragments are accessed, cached and served by the HTTP proxy via HTTP. In another embodiment, a method of delivering a stream on-demand (VOD) uses a translation tier (in lieu of the recording tier) to manage the creation and/or handling of the IF components.07-14-2011
20110167111METHOD FOR OPERATING AN INTEGRATED POINT OF PRESENCE SERVER NETWORK - A method for operating a network of point of presence servers sharing a hostname includes receiving a request from a user for a web page at a first web address, determining traffic loads of a plurality of customer web servers, determining a customer web server from the plurality of customer web servers, the customer web server having a traffic load lower than traffic loads of remaining customer web servers, directing the request from the user to the customer web server, receiving a request from the user for static content on the web page at a second web address, determining the point of presence server from the network of point of presence servers that is appropriate for the request, the point of presence server having service metrics more appropriate than service metrics of remaining point of presence servers from the network.07-07-2011
20110113152Method and system for enhancing live stream delivery quality using prebursting - The subject matter herein relates to a method to “accelerate” the delivery of a portion of a data stream across nodes of a stream transport network. A portion of a live stream is forwarded from a first node to a second node in a transport network at a high bitrate as compared to the stream's encoded bitrate, and thereafter, the stream continues to be forwarded from the first node to the second node at or near the encoded bitrate. The disclosed technique of forwarding a portion of a stream at a high bitrate as compared to the encoded bitrate of the stream is sometimes referred to as “prebursting” the stream. This technique provides significant advantages in that it reduces stream startup time, reduces unrecoverable stream packet loss, and reduces stream rebuffers as the stream is viewed by a requesting end user that has been mapped to a media server in a distributed computer network such as a content delivery network.05-12-2011
20110099290METHOD FOR DETERMINING METRICS OF A CONTENT DELIVERY AND GLOBAL TRAFFIC MANAGEMENT NETWORK - A method for determining metrics of a content delivery and global traffic management network provides service metric probes that determine the service availability and metric measurements of types of services provided by a content delivery machine. Latency probes are also provided for determining the latency of various servers within a network. Service metric probes consult a configuration file containing each DNS name in its area and the set of services. Each server in the network has a metric test associated with each service supported by the server which the service metric probes periodically performs metric tests on and records the metric test results which are periodically sent to all of the DNS servers in the network. DNS servers use the test result updates to determine the best server to return for a given DNS name. The latency probe calculates the latency from its location to a client's location using the round trip time for sending a packet to the client to obtain the latency value for that client. The latency probe updates the DNS servers with the clients' latency data. The DNS server uses the latency test data updates to determine the closest server to a client.04-28-2011
20100293281Managing web tier session state objects in a content delivery network (CDN) - Business applications running on a content delivery network (CDN) having a distributed application framework can create, access and modify state for each client. Over time, a single client may desire to access a given application on different CDN edge servers within the same region and even across different regions. Each time, the application may need to access the latest “state” of the client even if the state was last modified by an application on a different server. A difficulty arises when a process or a machine that last modified the state dies or is temporarily or permanently unavailable. The present invention provides techniques for migrating session state data across CDN servers in a manner transparent to the user. A distributed application thus can access a latest “state” of a client even if the state was last modified by an application instance executing on a different CDN server, including a nearby (in-region) or a remote (out-of-region) server.11-18-2010
20100293229Highly scalable, fault tolerant file transport using vector exchange - A file transport mechanism according to the invention is responsible for accepting, storing and distributing files, such as configuration or control files, to a large number of field machines. The mechanism is comprised of a set of servers that accept, store and maintain submitted files. The file transport mechanism implements a distributed agreement protocol based on “vector exchange.” A vector exchange is a knowledge-based algorithm that works by passing around to potential participants a commitment bit vector. A participant that observes a quorum of commit bits in a vector assumes agreement. Servers use vector exchange to achieve consensus on file submissions. Once a server learns of an agreement, it persistently marks (in a local data store) the request as “agreed.” Once the submission is agreed, the server can stage the new file for download.11-18-2010
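
A toy Python sketch of the quorum rule: a commitment bit vector for a submission circulates among participants, each sets its own bit before passing it on, and any participant that observes a quorum of commit bits marks the submission as agreed. The synchronous hand-offs here stand in for the real asynchronous exchange among servers.

    N = 5
    QUORUM = N // 2 + 1

    class Participant:
        def __init__(self, idx):
            self.idx = idx
            self.agreed = set()                 # submissions persistently marked "agreed"

        def receive(self, submission_id, vector):
            vector = list(vector)
            vector[self.idx] = 1                # commit to the submission
            if sum(vector) >= QUORUM:
                self.agreed.add(submission_id)  # quorum of commit bits observed
            return vector

    nodes = [Participant(i) for i in range(N)]
    vec = [0] * N
    for node in nodes[:3]:                      # vector passed around a quorum of servers
        vec = node.receive("config-v42", vec)
    # nodes[2] has observed the quorum and can stage the new file for download
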
20100274819Dynamic content assembly on edge-of-network servers in a content delivery network - The disclosed technique enables a content provider to dynamically assemble content at the edge of the Internet, preferably on content delivery network (CDN) edge servers. Preferably, the content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching the objects that comprise dynamically generated pages at the edge of the Internet, close to the end user. The content provider designs and develops the business logic to form and assemble the pages, for example, by using the ESI language within its development environment. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page (template), the edge server examines its cache for the included fragments and assembles the page on-the-fly.10-28-2010
20100250742Global load balancing across mirrored data centers - An intelligent traffic redirection system that performs global load balancing can be used in any situation where an end-user requires access to a replicated resource. The method directs end-users to the appropriate replica so that the route to the replica is good from a network standpoint and the replica is not overloaded. The technique preferably uses a Domain Name Service (DNS) to provide IP addresses for the appropriate replica. The most common use is to direct traffic to a mirrored web site.09-30-2010
20100217801Network performance monitoring in a content delivery system - A method for Internet content delivery in a content delivery network established at network locations, the content delivery network comprising a plurality of content servers for serving content resources. The plurality of content servers includes a plurality of subsets of content servers, each subset being located at one of a plurality of Internet data centers. For each Internet Protocol (IP) address block from which requests for content resources are expected to be received, the method generates a candidate list of Internet data centers to be used to service the requests for content resources. For the IP address block, the method selects at least one of the Internet data centers from the candidate list to be used to service the requests for content resources. The selected Internet data center for the IP address block is written into a network map. The selecting step is carried out concurrently for each IP address block from which requests for content resources are expected to be received such that the network map comprises the selected Internet data center for each IP address block. The network map is then provided to a domain name service (DNS) associated with the content delivery network. In response to a DNS query received at the domain name service associated with the content delivery network, the network map is used to identify one of the Internet data centers from the candidate list to be used to service a request for a content resource.08-26-2010
20100005175DISTRIBUTED ON-DEMAND COMPUTING SYSTEM - A method and mechanism for a distributed on-demand computing system. The system automatically provisions distributed computing servers with customer application programs. The parameters of each customer application program are taken into account when a server is selected for hosting the program. The system monitors the status and performance of each distributed computing server. The system provisions additional servers when traffic levels exceed a predetermined level for a customer's application program and, as traffic demand decreases to a predetermined level, servers can be un-provisioned and returned back to a server pool for later provisioning. The system tries to fill up one server at a time with customer application programs before dispatching new requests to another server. The customer is charged a fee based on the usage of the distributed computing servers.01-07-2010
20090259853DYNAMIC MULTIMEDIA FINGERPRINTING SYSTEM - A dynamic multimedia fingerprinting system is provided. A user requests multimedia content from a Web cache server that verifies that the user is authorized to download the content. A custom fingerprint specific to the user is generated and dynamically inserted into the content as the content is delivered to the user. The custom fingerprint can be generated on the Web cache server or at the content provider's server. The system allows a content provider to specify where the custom fingerprint is inserted into the content or where the fingerprint is to replace a placeholder within the content.10-15-2009
20090210528METHOD FOR DETERMINING METRICS OF A CONTENT DELIVERY AND GLOBAL TRAFFIC MANAGEMENT NETWORK - A method for determining metrics of a content delivery and global traffic management network provides service metric probes that determine the service availability and metric measurements of types of services provided by a content delivery machine. Latency probes are also provided for determining the latency of various servers within a network. Service metric probes consult a configuration file containing each DNS name in its area and the set of services. Each server in the network has a metric test associated with each service supported by the server which the service metric probes periodically performs metric tests on and records the metric test results which are periodically sent to all of the DNS servers in the network. DNS servers use the test result updates to determine the best server to return for a given DNS name. The latency probe calculates the latency from its location to a client's location using the round trip time for sending a packet to the client to obtain the latency value for that client. The latency probe updates the DNS servers with the clients' latency data. The DNS server uses the latency test data updates to determine the closest server to a client.08-20-2009
20090119397Using virtual domain name service (DNS) zones for enterprise content delivery - A domain to be published to an enterprise ECDN is associated (either by static configuration or dynamically) with a set of one or more enterprise zones configurable in a hierarchy. When a DNS query arrives for a hostname known to be associated with given content within the control of the ECDN, a DNS server preferably responds in one of three (3) ways: (a) handing back an IP address, e.g., for an ECDN intelligent node that knows how to obtain the requested content from a surrogate or origin server; (b) executing a zone referral to a next (lower) level name server in a zone hierarchy, or (c) CNAMing to another hostname, thereby essentially restarting the lookup procedure. In the latter case, this new CNAME causes the resolution process to start back at the root and resolve a new path, probably along a different path in the hierarchy. At any particular level in the zone hierarchy, preferably there is an associated zone server. That server preferably executes logic that applies the requested hostname against a map, which, using known techniques, may be generated from given (static, dynamic, internally-generated or third party-sourced) performance metrics. Thus, a given name query to ECDN-managed content may be serviced in coordination with various sources of distributed network intelligence. As a result, the invention provides for a distributed, dynamic globally load balanced name service.05-07-2009
20090106411SCALABLE, HIGH PERFORMANCE AND HIGHLY AVAILABLE DISTRIBUTED STORAGE SYSTEM FOR INTERNET CONTENT - A method for content storage on behalf of participating content providers begins by having a given content provider identify content for storage. The content provider then uploads the content to a given storage site selected from a set of storage sites. Following upload, the content is replicated from the given storage site to at least one other storage site in the set. Upon request from a given entity, a given storage site from which the given entity may retrieve the content is then identified. The content is then downloaded from the identified given storage site to the given entity. In an illustrative embodiment, the given entity is an edge server of a content delivery network (CDN).04-23-2009
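The upload, replicate, locate, and download steps form a simple pipeline, sketched below with in-memory "storage sites" standing in for real storage clusters. The class names, the random choice of upload site, and the locate rule are hypothetical simplifications.

```python
# Very small sketch of the upload -> replicate -> locate -> download flow.
# All class names and site names are illustrative assumptions.
import random

class StorageSite:
    def __init__(self, name):
        self.name = name
        self.objects = {}

class StorageService:
    def __init__(self, sites):
        self.sites = sites

    def upload(self, key: str, data: bytes) -> StorageSite:
        """Content provider uploads the content to one selected storage site."""
        site = random.choice(self.sites)
        site.objects[key] = data
        return site

    def replicate(self, key: str, source: StorageSite) -> None:
        """Replicate the object from the upload site to at least one other site."""
        for site in self.sites:
            if site is not source:
                site.objects[key] = source.objects[key]

    def locate(self, key: str) -> StorageSite:
        """Identify a site from which a requesting entity (e.g., a CDN edge server) may retrieve."""
        return next(s for s in self.sites if key in s.objects)

sites = [StorageSite("us-east"), StorageSite("eu-west")]
svc = StorageService(sites)
origin = svc.upload("video/clip.mp4", b"...bytes...")
svc.replicate("video/clip.mp4", origin)
print(svc.locate("video/clip.mp4").name)
```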
20080320160Method and system for enhancing live stream delivery quality using prebursting - The subject matter herein relates to a method to “accelerate” the delivery of a portion of a data stream across nodes of a stream transport network. A portion of a live stream is forwarded from a first node to a second node in a transport network at a high bitrate as compared to the stream's encoded bitrate, and thereafter, the stream continues to be forwarded from the first node to the second node at or near the encoded bitrate. The disclosed technique of forwarding a portion of a stream at a high bitrate as compared to the encoded bitrate of the stream is sometimes referred to as “prebursting” the stream. This technique provides significant advantages in that it reduces stream startup time, reduces unrecoverable stream packet loss, and reduces stream rebuffers as the stream is viewed by a requesting end user that has been mapped to a media server in a distributed computer network such as a content delivery network.12-25-2008
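The prebursting schedule described here, a short burst above the encoded bitrate followed by steady forwarding at roughly the encoded rate, can be shown schematically. The burst factor and durations below are invented for illustration; they are not values from the patent.

```python
# Schematic sketch of "prebursting": forward the first portion of a live
# stream at a multiple of its encoded bitrate, then drop back to roughly
# the encoded rate. Burst factor and durations are illustrative assumptions.
def forwarding_schedule(encoded_kbps: float, burst_seconds: float,
                        total_seconds: float, burst_factor: float = 4.0):
    """Yield (second, kbps) pairs describing the forwarding rate over time."""
    for t in range(int(total_seconds)):
        if t < burst_seconds:
            yield t, encoded_kbps * burst_factor   # preburst: well above encoded bitrate
        else:
            yield t, encoded_kbps                  # steady state: at/near encoded bitrate

# A 600 kbps live stream, preburst for its first 5 seconds:
for second, rate in forwarding_schedule(encoded_kbps=600, burst_seconds=5, total_seconds=10):
    print(f"t={second}s forward at {rate:.0f} kbps")
```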
20080282112Method and apparatus for testing request-response service using live connection traffic - The subject matter herein provides for a method and apparatus for comparison of network systems using live traffic in real-time. The inventive technique presents real-world workload in real-time with no external impact (i.e., no impact on the system under test), and it enables comparison against a production system for correctness verification. A preferred embodiment of the invention is a testing tool for the pseudo-live testing of CDN content staging servers. According to the invention, traffic between clients and the live production CDN servers is monitored by a simulator device, which then replicates this workload onto a system under test (SUT). The simulator detects divergences between the outputs from the SUT and live production servers, allowing detection of erroneous behavior. To the extent possible, the SUT is completely isolated from the outside world so that errors or crashes by this system do not affect either the CDN customers or the end users. Thus, the SUT does not interact with end users (i.e., their web browsers). Consequently, the simulator serves as a proxy for the clients. By basing its behavior on the packet stream sent between client and the live production system, the simulator can simulate most of the oddities of real-world client behavior, including malformed packets, timeouts, dropped traffic and reset connections, among others.11-13-2008
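The core comparison step, replaying observed traffic against the system under test and flagging divergences from the production responses, is easy to sketch. The request/response shapes below are assumptions; a real simulator would work from captured packet streams rather than a simple list of tuples.

```python
# Sketch of divergence detection between a system under test (SUT) and the
# live production responses. Request/response shapes are illustrative only.
def compare(observed, sut):
    """observed: list of (request, production_response); sut: callable request -> response."""
    divergences = []
    for request, prod_response in observed:
        sut_response = sut(request)
        if sut_response != prod_response:
            divergences.append((request, prod_response, sut_response))
    return divergences

# Production answered both requests with 200; the SUT mishandles the second.
observed_traffic = [("GET /index.html", 200), ("GET /logo.png", 200)]
sut_under_test = lambda req: 200 if req == "GET /index.html" else 500
print(compare(observed_traffic, sut_under_test))   # -> the diverging request
```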
20080281946Automatic migration of data via a distributed computer network - A method and apparatus for the automatic migration of data via a distributed computer network allows a customer to select content files that are to be transferred to a group of edge servers. Origin sites store all of a customer's available content files. An edge server maintains a dynamic number of popular files in its memory for the customer. The files are ranked from most popular to least popular and, when a file has been requested from an edge server a sufficient number of times to become more popular than the least popular stored file, the file is obtained from an origin site. The edge servers are grouped into two service levels: regional and global. The customer is charged a higher fee to store its popular files on the global edge servers than on a regional set of edge servers because of their greater coverage.11-13-2008
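The popularity rule, fetch a file from the origin only once it has been requested more often than the least popular file the edge server currently holds, can be modeled with a tiny cache sketch. The capacity, counters, and origin-fetch callback below are illustrative assumptions, not the patent's mechanism.

```python
# Sketch of popularity-driven migration: an edge server keeps a fixed number
# of files; an uncached file that becomes more popular than the least popular
# cached file is fetched from the origin and replaces it.
class PopularityCache:
    def __init__(self, capacity: int, fetch_from_origin):
        self.capacity = capacity
        self.fetch_from_origin = fetch_from_origin
        self.counts = {}   # file -> request count (all files seen)
        self.stored = {}   # file -> content currently held on this edge server

    def request(self, name: str):
        self.counts[name] = self.counts.get(name, 0) + 1
        if name in self.stored:
            return self.stored[name]
        if len(self.stored) < self.capacity:
            self.stored[name] = self.fetch_from_origin(name)
            return self.stored[name]
        least = min(self.stored, key=lambda f: self.counts[f])
        if self.counts[name] > self.counts[least]:
            del self.stored[least]                       # evict the least popular file
            self.stored[name] = self.fetch_from_origin(name)
            return self.stored[name]
        return self.fetch_from_origin(name)              # serve without storing

cache = PopularityCache(capacity=2, fetch_from_origin=lambda f: f.encode())
for f in ["a", "a", "b", "c", "c", "c"]:
    cache.request(f)
print(sorted(cache.stored))   # -> ['a', 'c']: "c" has displaced the less popular "b"
```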
20080222243Client-side method for identifying an optimal server - A client player performs a query to a nameserver against a network map of Internet traffic conditions. The query is made asking for a particular service (e.g., RTSP) via a particular protocol (TCP) in a particular domain. In response, the nameserver returns a set of one or more tokens, with each token defining a machine or, in the preferred embodiment, a group of machines, from which the player should seek to obtain the stream. The player may then optionally perform one or more tests to determine which of a set of servers provides a best quality of service for the stream. That server is then used to retrieve the stream. Periodically, the client player code repeats the query during stream playback to determine whether there is a better source for the stream. If a better source exists, the player performs a switch to the better stream source “on the fly” if appropriate to maintain and/or enhance the quality of service. Preferably, the client player publishes data identifying why it selected a particular server, and such data may be used to augment the network map used for subsequent request routing determinations.09-11-2008
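The client-side loop, query a nameserver-like mapping service for candidate servers, probe them, use the best one, and periodically re-check and switch, is sketched below. The mapping function, probe, and server names are stand-ins invented for the example, not a real API.

```python
# Sketch of the client-side selection loop: fetch candidates for a
# service/protocol/domain, probe each, keep the best, re-check periodically.
def pick_server(candidates, probe):
    """Return the candidate with the best (lowest) probe score, e.g., startup latency."""
    return min(candidates, key=probe)

def playback(query_nameserver, probe, rechecks: int):
    current = None
    for _ in range(rechecks + 1):
        candidates = query_nameserver("rtsp", "tcp", "stream.example.com")
        best = pick_server(candidates, probe)
        if best != current:
            current = best            # switch to the better source "on the fly"
        yield current

# Fake map and probe for illustration: server quality changes between checks.
scores = {"sv1.example.net": 30, "sv2.example.net": 80}
fake_map = lambda service, proto, domain: list(scores)
fake_probe = lambda server: scores[server]
gen = playback(fake_map, fake_probe, rechecks=1)
print(next(gen))              # sv1 is initially the better source
scores["sv1.example.net"] = 200
print(next(gen))              # after re-checking, the player switches to sv2
```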
