Entries
Document | Title and Abstract | Date |
20080215701 | Modified machine architecture with advanced synchronization - A multiple computer environment is disclosed in which an application program executes simultaneously on a plurality of computers (M | 09-04-2008 |
20080228897 | LAYERING SERIAL ATTACHED SMALL COMPUTER SYSTEM INTERFACE (SAS) OVER ETHERNET - Disclosed are embodiments of a storage area network (SAN), a network interface card and a method of managing data transfers. These embodiments overcome the distance limitation of the Serial Attached Small Computer System Interface (SAS) physical layer so that SAS storage protocol may be used for communication between host systems and storage controllers. Host systems and storage controllers are connected via an Ethernet interface (e.g., a legacy Ethernet or enhanced Ethernet for datacenter (EED) fabric). SAS storage protocol is layered over this Ethernet interface, providing commands and transport protocol for information exchange. Since the Ethernet interface has its own physical layer, the SAS physical layer is unnecessary and, thus, so is the SAS distance limitation. If legacy Ethernet is used, over-provisioning is used to avoid packet drops, or alternatively, TCP/IP is supported in order to recover from packet drops. If EED is used, congestion management as well as priority of service functions are provided by the EED protocols. | 09-18-2008 |
20080244031 | On-Demand Memory Sharing - A method for sharing memory resources in a data network is provided. The method comprises monitoring first memory space available to a first system; transferring data to a second system, in response to determining that the first memory space has fallen below a first threshold level; and transferring instructions to the second system to perform a first operation on the data. | 10-02-2008 |
20080250116 | METHOD AND APPARATUS FOR REDUCING POOL STARVATION IN A SHARED MEMORY SWITCH - Reducing pool starvation in a switch is disclosed. The switch includes a plurality of egress ports, and a reserved pool of buffers in a shared memory. The reserved pool of buffers is one of a number of reserved pools of buffers, and the reserved pool of buffers is reserved for one of the egress ports. A shared pool of buffers and a multicast pool of buffers are in the shared memory. The shared pool of buffers is shared by the egress ports. | 10-09-2008 |
20080270565 | Method and system for arbitrating computer access to a shared storage medium - A method of arbitrating access to a storage medium that is shared by M first computers operating on a Windows™ operating system, comprising (1) determining if the SCSI PR-flag has been set; (2) if yes, preventing the N second computers from writing to the storage medium; and (3) setting the SCSI MC-flag for each of said M first computers after one of the second computers writes to the storage medium to notify the M first computers that the contents of the storage medium may have changed. | 10-30-2008 |
20080281939 | DECOUPLED LOGICAL AND PHYSICAL DATA STORAGE WITHIN A DATABASE MANAGEMENT SYSTEM - The subject matter herein relates to database management systems and, more particularly, to decoupled logical and physical data storage within a database management system. Various embodiments provide systems, methods, and software that separate physical storage from logical storage of data. These embodiments include a mapping of logical storage to physical storage to allow data to be moved within the physical storage to increase database responsiveness. | 11-13-2008 |
20080288608 | Method and System for Correlating Transactions and Messages - A method is presented for correlating related transactions, such as a parent transaction that invokes a child transaction within a distributed data processing system, using a particular format for the correlation tokens. Each transaction is associated with a correlation token containing a hierarchical, three-layer identifier that includes a local transaction identifier and a local system identifier, which are associated with the local system of the child transaction, along with a root transaction identifier, a root system identifier, and a registry identifier. The local transaction identifier is unique within the local system, and the local system identifier is unique within a registry that contains a set of system identifiers. The registry is associated with a domain in which the local systems operate, and multiple domains exist within a transaction space of entities that use these correlation tokens. Correlation token pairs are analyzed to construct a call graph of related transactions. | 11-20-2008 |
20080307065 | Method for starting up file sharing system and file sharing device - The file sharing system of the present invention is capable of starting up a file sharing device and preventing the connection of an external storage medium to an erroneous host using information that is saved in the external storage medium. In cases where the maintenance exchange work for a NAS device is performed, the collection section collects information that is required in order to start up the NAS system section. The saving section stores the collected information in the USB memory as startup information. In cases where the NAS device is returned after the maintenance exchange is complete, the USB memory is attached to the NAS device. The setting section reads the startup information that is stored in the USB memory and sets the communication control section in accordance with an instruction from the startup control section. As a result, the NAS-OS is read from the logical volume in the storage device and the NAS system section starts up. | 12-11-2008 |
20090037554 | MIGRATING WORKLOADS USING NETWORKED ATTACHED MEMORY - A network system comprising a plurality of servers communicatively-coupled on a network, a network-attached memory coupled between a first server and a second server of the server plurality, and a memory management logic that executes on selected servers of the server plurality and migrates a virtual machine from the first server to the second server with memory for the virtual machine residing on the network-attached memory. | 02-05-2009 |
20090037555 | Storage system that transfers system information elements - A first storage system that has a first storage device comprises a first interface device that is connected to a second interface device that a second storage system has. A first controller of the first storage system reads system information elements of first system information (information relating to the constitution and control of the first storage system) from a first system area (a storage area that is not provided for the host of the first storage device) and transfers the system information elements or modified system information elements to the second storage system via the first interface device. The system information elements are recorded in a second system area in a second storage device that the second storage system has. | 02-05-2009 |
20090043863 | SYSTEM USING VIRTUAL REPLICATED TABLES IN A CLUSTER DATABASE MANAGEMENT SYSTEM - A system for improved data sharing within a cluster of nodes having a database management system. The system defines a virtual replicated table as being useable in a hybrid of a shared-cache and shared-nothing architecture. The virtual replicated table is a physically single table sharable among a plurality of cluster nodes for data read operations and not sharable with other cluster nodes for data modification operations. A default owner node is assigned for each virtual replicated table to ensure page validity and to provide requested pages to the requesting node. | 02-12-2009 |
20090063652 | Localized Media Content Delivery - Improved approaches to make data available locally at business establishments are disclosed. In one embodiment, data anticipated to soon be requested by patrons of a particular business establishment can be pre-loaded to a local server provided at the particular business establishment. By pre-loading data that is anticipated to soon be requested by patrons of the particular business establishment, local network access traffic and congestion at the retail establishment can be reduced. The improved approaches are particularly well suited for media content data that is likely to be requested by patrons at business (e.g., retail) establishments. Advantageously, patrons can get rapid download of media content data associated with one or more media items that the patrons have purchased from an online media store. | 03-05-2009 |
20090063653 | Grid computing space - A method and apparatus for using a tree-structured cluster as a library for a computing grid. In one embodiment, a request for computation is received at a cache node of the cluster. The computation requires data from another cache node of the cluster that is not present in the cache node receiving the request. The other cache nodes of the cluster are polled for the required data. An instance of the required data stored in another cache node of the cluster is replicated to the cache node receiving the computation request. | 03-05-2009 |
20090077194 | Data input terminal, method, and computer readable storage medium storing program thereof - The input unit stores data input by the user in the data storage. The status determiner determines reception status of the screen data to be one of three statuses of “abnormal”, “normal”, and “recovery” from “abnormal” to “normal” on the basis of frame losses. In a case of the “abnormal” status, a transmission controller does not read the input data stored in the data storage. In a case of the “normal” status, the transmission controller reads the input data stored in the data storage, transmits the input data via the transmitter, and deletes the input data stored in the data storage. In a case of the “recovery” status, the transmission confirmer instructs the output unit to output the input data stored in the data storage to ask the user whether to transmit the input data to the server. | 03-19-2009 |
20090144388 | NETWORK WITH DISTRIBUTED SHARED MEMORY - A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The clustered memory cache is accessible by a plurality of clients on the computer network and is configured to perform page caching of data items accessed by the clients. The network also includes a policy engine operatively coupled with the clustered memory cache, where the policy engine is configured to control where data items are cached in the clustered memory cache. | 06-04-2009 |
20090150510 | SYSTEM AND METHOD FOR USING REMOTE MODULE ON VIOS TO MANAGE BACKUPS TO REMOTE BACKUP SERVERS - A system, method, and program product is provided that receives a backup request at a virtual input/output server (VIOS) from a client of the VIOS. The backup request corresponds to a virtual nonvolatile storage that is used by the client. The VIOS retrieves data from the nonvolatile storage device where the virtual nonvolatile storage is stored. The VIOS transmits the retrieved data to a backup server via a computer network, such as the Internet. In one embodiment, a backup software application runs on the VIOS client and a backup proxy software application runs on the VIOS. | 06-11-2009 |
20090150511 | NETWORK WITH DISTRIBUTED SHARED MEMORY - A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The network also includes a plurality of local cache managers, each of which are associated with a different portion of the clustered memory cache, and a metadata service operatively coupled with the local cache managers. Also, a plurality of clients are operatively coupled with the metadata service and the local cache managers. In response to a request issuing from any of the clients for a data item present in the clustered memory cache, the metadata service is configured to respond with identification of the local cache manager associated with the portion of the clustered memory cache containing such data item. | 06-11-2009 |
20090157840 | Controlling Shared Access Of A Media Tray - Methods, apparatus, and products for controlling shared access of a media tray are disclosed that include monitoring communications between a virtualized media tray and a computing device currently connected to the virtualized media tray; receiving an access request from a requesting computing device not currently connected to the virtualized media tray; determining, in dependence upon the monitored communications between the virtualized media tray and the computing device currently connected to the virtualized media tray, to switch connection of the virtualized media tray from the computing device currently connected to the virtualized media tray to the requesting computing device; and switching connection of the virtualized media tray from the computing device currently connected to the virtualized media tray to the requesting computing device. | 06-18-2009 |
20090182835 | Non-disruptive storage caching using spliced cache appliances with packet inspection intelligence - A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using spliced cache appliances with packet inspection intelligence. A cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group). | 07-16-2009 |
20090182836 | System and method for populating a cache using behavioral adaptive policies - A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies that are based on analysis of clients-filers transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications. | 07-16-2009 |
20090198789 | VIDEOGAME LOCALIZATION USING LANGUAGE PACKS - A code library, or “language interface pack” library, is provided that can be integrated into a video game to detect new localizations of the video game dynamically, and to locate and load the most appropriate localized resources depending on user preferences and available localized game content. If no localized content is available in the preferred language, a fallback system ensures that the game always receives the location of existing game content in another language. | 08-06-2009 |
20090198790 | METHOD AND SYSTEM FOR AN EFFICIENT DISTRIBUTED CACHE WITH A SHARED CACHE REPOSITORY - Network cache systems are used to improve network performance and reduce network traffic. An improved network cache system that uses a centralized shared cache system is disclosed. Each cache device that shares the centralized shared cache system maintains its own catalog, database or metadata index of the content stored on the centralized shared cache system. When one of the cache devices that shares the centralized shared cache system stores a new content resource to the centralized shared cache system, that cache device transmits a broadcast message to all of the peer cache devices. The other cache devices that receive the broadcast message will then update their own local catalog, database or metadata index of the centralized shared cache system with the information about the new content resource. | 08-06-2009 |
20090234933 | DATA FORWARDING STORAGE - Methods and apparatus, including computer program products, for data forwarding storage. A network includes a group of interconnected computer system nodes each adapted to receive data and continuously forward the data from computer memory to computer memory without storing on any physical storage device in response to a request to store data from a requesting system and retrieve data being continuously forwarded from computer memory to computer memory in response to a request to retrieve data from the requesting system. | 09-17-2009 |
20090271493 | System and Apparatus for Managing Social Networking and Loyalty Program Data - A system for managing and sharing social networking and loyalty program data that includes a quick-transfer device. The quick-transfer device can be in any number of forms that are readily portable such as a keychain, wristwatch, accessory for a mobile phone or music player or similar form. The quick-transfer device provides a set of interactive features for managing and transferring the social networking and loyalty program data through a ‘quick-touch’ or ‘quick-click’ transfer mechanism. The loyalty program and social networking data structure is managed by the quick-transfer device. The data structure is accessible to general purpose applications such as web browsers. The quick-transfer device can communicate with other quick-transfer devices as well as computers and external sensors to update and modify the contents of the stored data structure. | 10-29-2009 |
20090276502 | Network Switch with Shared Memory - A network switch that incorporates memory that can be shared by computers or processors connected to the network switch is provided. The network switch of the present invention is particularly suitable for use in a computer cluster, such as a Beowulf cluster, in which each computer in the cluster can use the shared memory resident in at least one of the network switches. | 11-05-2009 |
20090292789 | Computer System, Management Server and Configuration Information Acquisition Method - The management server includes an acquisition unit for acquiring the configuration information and performance information of the storage apparatus and the host computer respectively at different timings, and a comparison unit for comparing, when a configuration change of the storage apparatus is commanded externally, a performance value of components in the storage apparatus subject to the configuration change and a performance value of components in a connection relationship with the components. The acquisition unit determines that an unknown component has been added in the storage apparatus when the difference in the performance values compared with the comparison unit is of a certain level or greater, and reacquires configuration information from the storage apparatus. | 11-26-2009 |
20090327445 | CONTINUOUS DATA PROTECTION AND REMOTE BLOCK-LEVEL STORAGE FOR A DATA VOLUME - A system and method for writing and reading blocks of a data volume are disclosed. The method provides continuous data protection (CDP) for a data volume by backing up blocks of the data volume in real time to a local CDP log and transmitting the blocks over the Internet for storage in a remote CDP log on a server computer system in response to write requests that change the blocks of the data volume. In response to a read request for a particular block, the method attempts to read the block from the data volume. If the block is not present in the data volume, the method attempts to read the block from the local CDP log. If the block is not present in the local CDP log, the method requests the server computer system to read the block from the remote CDP log and return the block. | 12-31-2009 |
20090327446 | Software Application Striping - A distributed computing system comprising networking infrastructure and methods of executing an application on the distributed computing system is presented. Interconnected networking nodes offering available computing resources form a network fabric. The computing resources can be allocated from the networking nodes, including available processing cores or memory elements located on the networking nodes. A software application can be stored in a system memory comprising memory elements allocated from the nodes. The software application can be disaggregated into a plurality of executable portions that are striped across the allocated processing cores by assigning each core a portion to execute. When the cores are authenticated with respect to their portions, the cores are allowed to execute the portions by accessing the system memory over the fabric. While executing the software application, the networking nodes having the allocated cores concurrently forward packets through the fabric. | 12-31-2009 |
20100023596 | File-system based data store for a workgroup server - A system and method for storing workgroup objects on a file-system based data store in a workgroup server is disclosed. The present invention implements a file-system based workgroup system in which a workgroup object is stored in one or more files. The present invention further includes a workgroup object list comprising object identifiers, each object identifier uniquely mapping to a workgroup object and each object identifier including a property of the workgroup object based on which the workgroup object list is sorted. | 01-28-2010 |
20100049822 | Network, storage appliance, and method for externalizing an external I/O link between a server and a storage controller integrated within the storage appliance chassis - A network storage appliance is disclosed. The storage appliance includes a port combiner that provides data communication between at least first, second, and third I/O ports; a storage controller that controls storage devices and includes the first I/O port; a server having the second I/O port; and an I/O connector for networking the third I/O port to the port combiner. A single chassis encloses the port combiner, storage controller, and server, and the I/O connector is affixed on the storage appliance. The third I/O port is external to the chassis and is not enclosed therein. In various embodiments, the port combiner comprises a FibreChannel hub comprising a series of loop resiliency circuits, or a FibreChannel, Ethernet, or Infiniband switch. In one embodiment, the port combiner, I/O ports, and server are all comprised in a single blade module for plugging into a backplane of the chassis. | 02-25-2010 |
20100064023 | HOST DISCOVERY IN MULTI-BLADE SERVER CHASSIS - A method for discovering hosts on a multi-blade server chassis is provided. A switch, operational in the multi-blade server, is queried for first world-wide name (WWN) information of the hosts. The first WWN information is known to the switch. The first WWN information is saved on a redundant array of independent disks (RAID) subsystem of the multi-blade server chassis. A system location for each of the hosts is mapped to the RAID subsystem. | 03-11-2010 |
20100070605 | Dynamic Load Management of Network Memory - A system for managing network memory comprises a communication interface and a processor. The communication interface receives a status message from another appliance. The status message indicates an activity level of a faster memory and a slower memory associated with the other appliance. The communication interface also receives a data packet. The processor processes the status message to determine the activity level of the faster memory and the slower memory. The processor also processes the data packet to identify any matching data in the other appliance and estimate whether the matching data is stored in the faster memory based on the activity level. Based on the estimate, the processor determines whether to generate an instruction to retrieve the matching data. | 03-18-2010 |
20100077055 | Remote user interface in a terminal server environment - Methods, apparatus, systems and computer program product for updating a user session in a terminal server environment. Transfer of display data corresponding to an updated user interface can occur via a memory shared between an agent server and an agent client in a terminal server environment. Access to the shared memory can be synchronized via token passing or other operation to prevent simultaneous access to the shared memory. Token sharing and synchronized input/output can be performed using FIFOs, sockets, files, semaphores and the like, allowing communications between the agent server and the agent client to adapt to different operating system architectures. | 03-25-2010 |
20100082764 | COMMUNITY CACHING NETWORKS - A system for sharing data within a network, the system including a first peer device coupled with the network that comprises local cache storage configured to store data comprising at least one entry designated as network accessible cache data and a cache control module operative to control access to the data stored in the local cache storage. The system further includes a second peer device coupled with the first peer device via the network where the second peer device is configured to request network accessible cache data stored in the local cache storage of the first peer device. Furthermore, the cache control module of the first peer device is configured to transmit at least a portion of the requested network accessible cache data to the second peer device in response to the request for network accessible data from the second peer device. | 04-01-2010 |
20100082765 | SYSTEM AND METHOD FOR CHUNK BASED TIERED STORAGE VOLUME MIGRATION - System and method for reducing costs of moving data between two or more of multi-tiered storage devices. Specifically, the system operates by moving only high tier portion of data and merely remapping the low tier data to migration target device, which eliminates a large amount of data movement (low tier) while maintaining the SLA of high tier data. Specifically, when a command to migrate a thin provisioned volume is received from a source primary storage device to another target primary storage device, the system doesn't copy all of the tier | 04-01-2010 |
20100094949 | Method of Backing Up Library Virtual Private Database Using a Web Browser - A library uses a web server to store library vital product data (VPD) to a user's computer. In certain embodiments, the library uses web type cookies to save library VPD as name-value pairs. After an action, such as a service action, that results in a loss of VPD, the library can automatically retrieve the VPD from the web browser of the user's computer. This approach has several advantages. No user intervention is required to back up or restore the library VPD. Simply using the web user interface of the library accomplishes the necessary connection to the user's computer storage. If the user does not connect to the web browser then it is likely that library VPD is not being changed. No additional hardware or software is required. Additionally, the library already has a web server and the customer already uses web browsers to access the library. No cost, installation, or setup is required. In certain embodiments, library firmware can use the existing operator panel and web user interface for prompting the user through any decisions that may be required, as it relates to backing up or restoring library VPD. | 04-15-2010 |
20100094950 | Methods and systems for controlling fragment load on shared links - Controlling fragment load on shared links, including a large number of fractional-storage CDN servers storing erasure-coded fragments encoded with a redundancy factor greater than one from contents, and a large number of assembling devices configured to obtain the fragments from sub-sets of the servers. At least some of the servers share their Internet communication link with other Internet traffic, and the fragment traffic via the shared link is determined by the number of sub-sets in which the servers accessed via the shared link participate. Wherein the maximum number of sub-sets in which the servers accessed via the shared link are allowed to participate is approximately a decreasing function of the throughput of the other Internet traffic via the shared link. | 04-15-2010 |
20100100604 | CACHE CONFIGURATION SYSTEM, MANAGEMENT SERVER AND CACHE CONFIGURATION MANAGEMENT METHOD - A cache configuration management system capable of lightening workloads of estimation of a cache capacity in a virtualization apparatus and/or cache assignment is provided. In a storage system having application servers, storage devices, a virtualization apparatus for letting the storage devices be distinctly recognizable as virtualized storages, and a storage management server, the storage management server predicts a response time of the virtualization apparatus with respect to an application server from cache configurations and access performances of the virtualization apparatus and storage device and then evaluates the presence or absence of the assignment to a virtual volume of internal cache and a predictive performance value based on a to-be-assigned capacity to thereby perform judgment of the cache capacity within the virtualization apparatus and estimation of an optimal cache capacity, thus enabling preparation of an internal cache configuration change plan. | 04-22-2010 |
20100115048 | DATA TRANSMISSION SCHEDULER - A method of co-ordinating the time of execution of a plurality of applications all hosted by the same communications device, each application requiring a network connection for completion of a predetermined task, the method comprising for each task: determining one or more task completion conditions including one or more network conditions for said network connection required to complete said task; retrieving stored data indicating for a predetermined period of time, one or more network characteristics for an available network connection; processing said task completion conditions to determine if said one or more network characteristics retrieved for said predetermined period of time match said one or more network conditions for said network connection required to complete said task; and in the event of a match in between the network characteristics of a connection available for a predetermined period of time and the network conditions required for said network connection to complete said task, scheduling said task for execution in said predetermined period of time; and reducing the predetermined period of time by the duration of the network connection required to complete a scheduled task. | 05-06-2010 |
20100138513 | SYSTEM AND METHOD FOR SELECTIVELY TRANSFERRING BLOCK DATA OVER A NETWORK - A system for sharing block data includes a non-removable device for storing block data (e.g. a hard drive) that is networked with a plurality of computers. Each computer can initiate discovery commands and read/write commands, and transmit these commands over the network to the non-removable storage device. Computer commands are intercepted and processed by a logical algorithm program at the storage device. One function of the logical algorithm program is to instruct each computer to treat the non-removable block storage device as a removable block device. Because the computers treat the storage device as a removable block device, they relinquish control of the device (after use) to other computers on the network. The logical algorithm program also functions to allocate temporary ownership of the block storage device to one of the computers on the network and passes temporary ownership from computer to computer on the network. | 06-03-2010 |
20100153514 | Non-disruptive, reliable live migration of virtual machines with network data reception directly into virtual machines' memory - Techniques are disclosed for the non-disruptive and reliable live migration of a virtual machine (VM) from a source host to a target host, where network data is placed directly into the VM's memory. When a live migration begins, a network interface card (NIC) of the source stops placing newly received packets into the VM's memory. A virtual server driver (VSP) on the source stores the packets being processed and forces a return of the memory where the packets are stored to the NIC. When the VM has been migrated to the target, and the source VSP has transferred the stored packets to the target host, the VM resumes processing the packets, and when the VM sends messages to the target NIC that the memory associated with a processed packet is free, a VSP on the target intercepts that message, blocking the target NIC from receiving it. | 06-17-2010 |
20100161751 | METHOD AND SYSTEM FOR ACCESSING DATA - A method and system for distributing and accessing data over multiple storage controllers wherein data is broken down into one or more fragments over the multiple storage controllers, each storage controller owning a fragment of the data, receiving a request for data in a first storage controller from one of a plurality of hosts, responding to the host by the first storage controller with the requested data if the first storage controller contains the requested data, forwarding the request to a second storage controller from the first storage controller if the first storage controller does not contain the requested data, responding to the first storage controller from the second storage controller with the requested data, and responding to the host from the first storage controller with the requested data. | 06-24-2010 |
20100180005 | CACHE CYCLING - The present invention relates to methods, apparatus, and systems for implementing cache cycling. The system includes a gateway in communication with a satellite. The gateway includes a gateway accelerator module which further includes a proxy server. The proxy server is configured to receive the request for the new copy of the requested content and forward the request. Furthermore, the system includes a content provider in communication with the gateway. The content provider is configured to receive the content request and transmit the new copy of the requested content to the gateway. The gateway is configured to transmit the new copy of the content to the subscriber terminal via the satellite, and wherein the subscriber terminal is further configured to replace the requested content stored in the terminal cache module with the new copy of the requested content. The content stored in the terminal cache module is updated for subsequent requests. | 07-15-2010 |
20100180006 | Network Access Device with Shared Memory - A technique for providing network access in accordance with at least one layered network access technology comprising layer | 07-15-2010 |
20100185744 | MANAGEMENT OF A RESERVE FOREVER DEVICE - A host reserves a device controlled by a controller that is coupled to the host. The controller starts a first timer, in response to a completion of input/output (I/O) operations on the device by the host, wherein the host continues to reserve the device after the completion of the I/O operations. The controller sends a notification to the host after an expiry of the first timer, wherein the notification requests the host to determine whether the device should continue to be reserved by the host. The controller starts a second timer, in response to receiving an acknowledgement from the host that the notification has been received by the host, wherein reservation status of the device reserved by the host is determined by the controller on or prior to an expiry of the second timer. | 07-22-2010 |
20100191822 | Broadcasting Data In A Hybrid Computing Environment - Methods, apparatus, and products for broadcasting data in a hybrid computing environment that includes a host computer, a number of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, the host computer having local memory shared remotely with the accelerators, the accelerators having local memory for the accelerators shared remotely with the host computer, where broadcasting data according to embodiments of the present invention includes: writing, by the host computer remotely to the shared local memory for the accelerators, the data to be broadcast; reading, by each of the accelerators from the shared local memory for the accelerators, the data; and notifying the host computer, by the accelerators, that the accelerators have read the data. | 07-29-2010 |
20100191823 | Data Processing In A Hybrid Computing Environment - Data processing in a hybrid computing environment that includes a host computer, a plurality of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, the host computer having local memory shared remotely with the accelerators, the accelerators having local memory for the plurality of accelerators shared remotely with the host computer, where data processing according to embodiments of the present invention includes performing, by the plurality of accelerators, a local reduction operation with the local shared memory for the accelerators; writing remotely, by one of the plurality of accelerators to the shared memory local to the host computer, a result of the local reduction operation; and reading, by the host computer from shared memory local to the host computer, the result of the local reduction operation. | 07-29-2010 |
20100235461 | NETWORK DEVICE AND METHOD OF SHARING EXTERNAL STORAGE DEVICE - When an external storage device is connected to the USB connector of the router, through the process of the OS which has detected this event, it will be determined whether the device is a USB mass storage device; and if it is found to be a USB mass storage device, internal software is started up using the Hotplug function, and it is further determined whether the file system is recognizable; and if the file system is recognizable, CIFS is configured to allow sharing and enable GUEST access. As a result, no laborious operation is needed to share a memory device such as a hard disk among users on a network. | 09-16-2010 |
20100250698 | AUTOMATED TAPE DRIVE SHARING IN A HETEROGENEOUS SERVER AND APPLICATION ENVIRONMENT - A method and system for automatically sharing a tape drive in a heterogeneous computing environment that includes a first computer and second computer. The first computer receives a message that includes a shared tape drive identifier, a source port identifier of the second computer, and a reservation status change for the tape drive. Based on the tape drive identifier, the first computer determines that the tape drive is connected to the first computer. The source port identifier is determined to not identify any host bus adapter installed in the first computer. In response to the first computer determining that the reservation status change indicates a reservation or a release of the tape drive for the second computer, the first computer sets the tape drive offline or online, respectively, in an application executing in the first computer. | 09-30-2010 |
20100250699 | METHOD AND APPARATUS FOR REDUCING POOL STARVATION IN A SHARED MEMORY SWITCH - A switch includes a reserved pool of buffers in a shared memory. The reserved pool of buffers is reserved for exclusive use by an egress port. The switch includes pool select logic which selects a free buffer from the reserved pool for storing data received from an ingress port to be forwarded to the egress port. The shared memory also includes a shared pool of buffers. The shared pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the shared pool upon detecting no free buffer in the reserved pool. The shared memory may also include a multicast pool of buffers. The multicast pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the multicast pool upon detecting an IP Multicast data packet received from an ingress port. | 09-30-2010 |
20100268788 | Remote Asynchronous Data Mover - A distributed data processing system executes multiple tasks within a parallel job, including a first local task on a local node and at least one task executing on a remote node, with a remote memory having real address (RA) locations mapped to one or more of the source effective addresses (EA) and destination EA of a data move operation initiated by a task executing on the local node. On initiation of the data move operation, remote asynchronous data move (RADM) logic identifies that the operation moves data to/from a first EA that is memory mapped to an RA of the remote memory. The local processor/RADM logic initiates a RADM operation that moves a copy of the data directly from/to the first remote memory by completing the RADM operation using the network interface cards (NICs) of the source and destination processing nodes, determined by accessing a data center for the node IDs of remote memory. | 10-21-2010 |
20100281131 | User Interface Between a Flexray Communications Module and a Flexray User, and Method for Transmitting Messages Over Such an Interface - A user interface between a FlexRay communications module, which is connected to a FlexRay communications connection via which messages are transmitted, and which includes a message memory for the temporary storage of messages from the FlexRay communications connection or for the FlexRay communications connection, and a FlexRay user assigned to the FlexRay communications module. In order to make possible a particularly resource-saving and resource-conserving connection of the user to the FlexRay communications module, it is provided that the user interface has a device for the temporary storage of the messages. The device includes at least one message memory which has a first connection to the FlexRay communications module and a second connection to the user. The message memory may be implemented as a dual-ported RAM. | 11-04-2010 |
20100281132 | MULTISTAGE ONLINE TRANSACTION SYSTEM, SERVER, MULTISTAGE ONLINE TRANSACTION PROCESSING METHOD AND PROGRAM - Provided is a system in which a plurality of nodes including a plurality of servers are connected with at least one NAS shared among the plurality of nodes. At least one of the nodes includes a shared memory from/to which each server belonging to the same node can read and write data. Each of at least two of the servers belonging to the node having the shared memory includes: a node judging device which judges whether the output destination of output data obtained by processing the input data is a server belonging to the same node as the server itself; a data storage memory acquiring device which secures a storage region for the output data on the shared memory if the output destination is a server belonging to the same node; and a data processor which processes the input data and stores the output data to the storage region. | 11-04-2010 |
20100281133 | STORING LOSSY HASHES OF FILE NAMES AND PARENT HANDLES RATHER THAN FULL NAMES USING A COMPACT TABLE FOR NETWORK-ATTACHED-STORAGE (NAS) - Multiple Network Attached Storage (NAS) appliances are pooled together by a virtual NAS translator, forming one common name space visible to clients. Clients send messages to the virtual NAS translator with a file name and a virtual handle of the parent directory that are concatenated to a full file-path name and compressed by a cryptographic hash function to generate a hashed-name key. The hashed-name key is matched to a storage key in a table. The full file-path name is not stored, reducing the table size. A unique entry number is returned to the client as the virtual file handle that is also stored in another table with one or more native file handles, allowing virtual handles to be translated to native handles that the NAS appliance servers use to retrieve files. File movement among NAS servers alters native file handles but not virtual handles, hiding NAS details from clients. | 11-04-2010 |
20100299402 | CONFIGURING CHANNELS FOR SHARING MEDIA - A user interface for sharing media items with others. From a sender's perspective, embodiments of the invention allow for an easy-to-use drag-and-drop technique that is more user-friendly than conventional techniques. From the recipient's perspective, embodiments of the invention allow media items from multiple sources to be aggregated into a single viewport, providing a cohesive and unified approach to media items received from others. | 11-25-2010 |
20100306337 | SYSTEMS AND METHODS FOR CLONING TARGET MACHINES IN A SOFTWARE PROVISIONING ENVIRONMENT - A provisioning server can provide and interact with a cloner agent on target machines. The cloner agent can execute on a source target machine and copy the contents of storage on the source target machine to a storage location of the provisioning server. Once copied, the provisioning server can provide the cloner agent to destination target machines. The cloner agent can copy the contents of the source target machine, stored at the provisioning server, to the destination target machines. | 12-02-2010 |
20100306338 | WRITING OPERATING DATA INTO A PORTABLE DATA CARRIER - In a method for writing (S | 12-02-2010 |
20100306339 | P2P CONTENT CACHING SYSTEM AND METHOD - A P2P content caching system, method, and computer program product for a P2P application on a computer network device. The system includes: a content analyzer; and a content manager. The method includes: determining P2P hotspot downloading contents of the P2P application on the computer network device; downloading the determined P2P hotspot downloading contents into a local memory, and requesting a directory server of the P2P application to register a P2P content caching system as a P2P content provider of the downloaded P2P hotspot downloading contents; and providing the downloaded P2P hotspot downloading contents to a P2P participant in response to a request from the P2P participant for the downloaded P2P hotspot downloading contents. | 12-02-2010 |
20100318625 | METHOD AND APPARATUS FOR STORAGE-SERVICE-PROVIDER-AWARE STORAGE SYSTEM - A storage system includes a virtual volume configured on a storage controller and mapping to a physical storage capacity maintained at a remote location by a storage service provider (SSP). The storage controller receives an I/O command in a block-based protocol specifying a logical block address (LBA). The storage controller correlates the LBA with a file name of a file stored by the SSP, translates the I/O command to an IP-supported protocol, and forwards the translated I/O command with the file name to the SSP for processing. In the case of a write command, the SSP stores the write data using the specified file name. In the case of a read command, the SSP enables download of data from the specified file name. In an alternative embodiment, a NAS head may replace the storage controller for correlating the LBA with a file name and translating the I/O command. | 12-16-2010 |
20110010427 | Quality of Service in Virtual Computing Environments - Methods and apparatus facilitate the management of input/output (I/O) subsystems in virtual I/O servers to provide appropriate quality of service (QoS). A hierarchical QoS scheme based on partitioning of network interfaces and I/O subsystem transaction types is used to classify virtual I/O communications. This multi-tier QoS method allows virtual I/O servers to be scalable and provide appropriate QoS granularity. | 01-13-2011 |
20110010428 | PEER-TO-PEER STREAMING AND API SERVICES FOR PLURAL APPLICATIONS - Embodiments of apparatuses with a universal P2P service platform are disclosed herein. A unified infrastructure is built in such apparatuses and a unified P2P network may be established with such apparatuses. In various embodiments, such an apparatus comprises a P2P operating system (OS) virtual machine (VM) | 01-13-2011 |
20110022677 | Media Fusion Remote Access System - The present invention is a system that receives data in different formats from different devices/applications in the format native to the devices/applications and fuses the data into a common shared audio/video collaborative environment including a composite display showing the data from the different sources in different areas of the display and composite audio. The common environment is presented to users who can be at remote locations. The users are allowed to supply a control input for the different device data sources and the control input is mapped back to the source, thereby controlling the source. The location of the control input on the remote display is mapped to the storage area for that portion of the display and the control data is transmitted to the corresponding device/application. The fusion system converts the data from the different sources/applications into a common format and stores the converted data from the different sources in a shared memory with each source allocated a different area in the memory. A combined window like composite representation of the data is produced and also stored in the memory. The combined representation is transmitted to and can be controlled by the users. | 01-27-2011 |
20110035460 | METHOD AND APPARATUS FOR MANAGING SHARED DATA AT A PORTABLE ELECTRONIC DEVICE OF A FIRST ENTITY - A method and apparatus for managing shared data at a portable electronic device of a first entity is provided. A message is received advising that data associated with a second entity is being shared. A request is transmitted to a server for a list of shared folders associated with the second entity, in response to an option to view shared folders associated with the second entity being selected. The list is received. An initialize command is transmitted to the server, the initialize command identifying at least one folder in the list. The data associated with the second entity is received, responsive to the transmitting the initialize command. The data is stored in association with a second entity identifier. | 02-10-2011 |
20110035461 | Protocol adapter for transferring diagnostic signals between in-vehicle networks and a computer - A protocol adapter for simultaneously communicating with one or more remote computers over any one of a plurality of protocols. The adapter includes a motherboard having an integrated CPU, a plurality of interface modules, a plurality of device drivers and a plurality of daughter-board module slots. The protocol adapter further includes at least one daughter-board interface module mounted in one of the plurality of daughter-board slots. The at least one daughter-board modules expands the number of protocols of the adapter beyond those protocols being run by the motherboard. | 02-10-2011 |
20110040848 | PUSH PULL CACHING FOR SOCIAL NETWORK INFORMATION - Embodiments are directed towards modifying a distribution of writers as either a push writer or a pull writer based on a cost model that decides for a given content reader whether it is more effective for the writer to be a pull writer or a push writer. A cache is maintained for each content reader for caching content items pushed by a push writer in the content reader's push list of writers when the content is generated. At query time, content items are pulled by the content reader based on writers in the content reader's pull list. One embodiment of the cost model employs data about a previous number of requests for content items for a given writer for a number of previous blended display results of content items. When a writer is determined to be popular, mechanisms are proposed for pushing content items to a plurality of content readers. | 02-17-2011 |
20110040849 | RELAY DEVICE, MAC ADDRESS SEARCH METHOD - A relay device includes: memories, each memory being operable to store at least a data pair formed of a MAC address and a port number; a search unit to search only amongst ones of the memories having valid data pairs when searching for a port number based upon a MAC address; a data moving unit to move valid data pairs to different locations within the plurality of memories in order to reduce a total number of memories, amongst the plurality thereof, having valid data pairs; and a power supply controller to selectively stop supplying power to ones of the memories storing only invalid data. | 02-17-2011 |
20110040850 | MESH-MANAGING DATA ACROSS A DISTRIBUTED SET OF DEVICES - Data files, applications and/or corresponding user interfaces may be accessed at a device that collaborates in a mesh. The mesh may include any number or type of devices that collaborate in a network. Data, applications and/or corresponding user interfaces may be stored within a core object that may be shared over the mesh. Information in the core object may be identified with a corresponding user such that a user may use any collaborating device in the mesh to access the information. In one example, the information is stored remotely from a device used to access the information. A remote source may store the desired information or may determine the storage location of the desired information in the mesh and may further provide the desired information to a corresponding user. | 02-17-2011 |
20110060806 | USING IN-THE-CLOUD STORAGE FOR COMPUTER HEALTH DATA - A policy enforcement point (PEP) controls access to a network in accordance with one or more policy statements that specify conditions for compliant devices. The PEP receives current health data from a device seeking to access the network, and stores this health data in local volatile memory. If the health data stored in local volatile memory complies with the policy statements, the device is permitted to access the network. Otherwise, the device is denied access to the network, or permitted only limited access to the network in order to resolve its compliance issues. The PEP occasionally stores the health data in local persistent memory and on an online service (OLS). During reboot, the PEP accesses the OLS to confirm that it has the most recent health data. If more recent health data is available from the OLS, the OLS provides this more recent data to the PEP. | 03-10-2011 |
20110066696 | COMMUNICATION PROTOCOL - One aspect relates to a communication protocol for communicating between one or more entities, such as devices, hosts or any other system capable of communicating over a network. A protocol is provided that allows communication between entities without a priori knowledge of the communication protocol. In such a protocol, for example, information describing a data structure of the communication protocol is transferred between communicating entities. Further, an authentication protocol is provided for providing bidirectional authentication between communicating entities. In one specific example, the entities include a master device and a slave device coupled by a serial link. In another specific example, the communication protocol may be used for performing unbalanced transmission between communicating entities. | 03-17-2011 |
20110082908 | DYNAMIC CACHING OF NODES - A replication count of a data element of a node of a cache cluster is defined. The data element has a key-value pair where the node is selected based on a hash of the key and a size of the cache cluster. The data element is replicated to at least one other node of the cache cluster based on the replication count. | 04-07-2011 |
20110106907 | SYSTEM AND METHOD FOR SEQUENTIAL RECORDING AND ARCHIVING LARGE VOLUMES OF VIDEO DATA - The invention relates to a data storage system comprising a plurality of arrays, each comprising a server and a number of data recording devices capable of sequentially recording supplied data at an input rate below a given maximum input rate. The system further comprises a network switch as an interface between the arrays of data recording devices and a network of data capturing devices with a variable overall data capturing rate. The servers are each provided with monitoring means for monitoring the input rate of the respective array. The servers are communicatively linked to each other, and at least one of the servers is provided for functioning as a controller for controlling at least one other of the servers and assigning part of the stream of captured data to the at least one other server in response to its monitoring means. | 05-05-2011 |
20110113115 | STORAGE SYSTEM WITH A MEMORY BLADE THAT GENERATES A COMPUTATIONAL RESULT FOR A STORAGE DEVICE - One embodiment is a storage system having one or more compute blades to generate and use data and one or more memory blades to generate a computational result. The computational result is generated by a computational function that transforms the data generated and used by the one or more compute blades. One or more storage devices are in communication with and remotely located from the one or more compute blades. The one or more storage devices store and serve the data for the one or more compute blades. | 05-12-2011 |
20110119344 | Apparatus And Method For Using Distributed Servers As Mainframe Class Computers - The invention consists of a switch or bank of switches that gives hundreds or thousands of servers the ability to share memory efficiently. It supports improving distributed server utilization from 10% on average to 100%. The invention consists of connecting distributed servers via a cross-point switch to a back-plane shared random-access memory (RAM), thereby achieving a mainframe class computer. The distributed servers may be Windows PCs or Linux standalone computers. They may be clustered or virtualized. This use of cross-point switches provides shared memory across servers, improving performance. | 05-19-2011 |
20110131289 | METHOD AND APPARATUS FOR SWITCHING COMMUNICATION CHANNEL IN SHARED MEMORY COMMUNICATION ENVIRONMENT - A method for switching a communication channel in a shared memory communication environment which sets up a TCP/IP (Transmission Control Protocol/Internet Protocol) communication channel and a shared memory communication channel from a first virtual machine to a second virtual machine, the method includes: transmitting a channel switching message to the first virtual machine when the first virtual machine moves to another physical machine; transmitting the channel switching message from the first virtual machine to the second virtual machine; and switching a channel state between the first virtual machine and the second virtual machine. | 06-02-2011 |
20110145357 | STORAGE REPLICATION SYSTEMS AND METHODS - Systems and methods for information storage replication are presented. In one embodiment a storage flow control method includes estimating, in a primary data server, an outstanding request backlog trend for a remote secondary data server; determining a relationship of the outstanding request backlog trend to a threshold; and notifying a client that the primary data server cannot service additional requests if the trend exceeds the threshold. In one embodiment the estimating comprises: sampling a number of outstanding messages at a plurality of fixed time intervals; and determining if there is a trend in the number of outstanding messages over the plurality of fixed time intervals. It is appreciated the estimating can be performed in a variety of ways (e.g., utilizing an average, a moving average, etc.). Determining the trend can include determining if values monotonically increase. The estimating in the primary server can be performed without intruding on operations of the remote secondary data server. The primary data server and the secondary data server can have a variety of configurations (e.g., a mirrored configuration, a RAID5 configuration, etc.). | 06-16-2011 |
20110179132 | Provisioning Server Resources in a Cloud Resource - Systems and methods to manage workloads and hardware resources in a data center or cloud. In one embodiment, a method includes a data center having a plurality of servers in a network. The data center provides a virtual machine for each of a plurality of users, each virtual machine to use a portion of hardware resources of the data center. The hardware resources include storage and processing resources distributed onto each of the plurality of servers. The method further includes sending messages amongst the servers, some of the messages being sent from a server including status information regarding a hardware resource utilization status of that server. The method further includes detecting a request from the virtual machine to handle a workload requiring increased use of the hardware resources, and provisioning the servers to temporarily allocate additional resources to the virtual machine, wherein the provisioning is based on status information provided by one or more of the messages. | 07-21-2011 |
20110179133 | CONNECTION MANAGER CAPABLE OF SUPPORTING BOTH DISTRIBUTED COMPUTING SESSIONS AND NON DISTRIBUTED COMPUTING SESSIONS - A method is described that involves establishing a connection over a shared memory between a connection manager and a worker node. The shared memory is accessible to multiple worker nodes. Then sending, from the connection manager to the worker node over the connection, a first request containing a method call to a remote object on the worker node. Also sending, from the connection manager to the worker node over the connection, a second request containing a second method call to a second remote object on the worker node. | 07-21-2011 |
20110191436 | Method and System for Protocol Offload in Paravirtualized Systems - Certain aspects of a method and system for protocol offload in paravirtualized systems may be disclosed. Exemplary aspects of the method may include preposting of application buffers to a front-end driver rather than to a NIC in a paravirtualized system. The NIC may be enabled to place the received offloaded data packets into a received data buffer corresponding to a particular GOS. A back-end driver may be enabled to acknowledge the placed offloaded data packets. The back-end driver may be enabled to forward the received data buffer corresponding to the particular GOS to the front-end driver. The front-end driver may be enabled to copy offloaded data packets from a received data buffer corresponding to a particular guest operating system (GOS) to the preposted application buffers. | 08-04-2011 |
20110202625 | Storage device, system and method for data share - The present invention is a storage device for data sharing which comprises a device body with a USB communications interface unit, a memory unit, and a control unit, wherein the memory unit has an executive file/program comprising a group management module used to manage a group/peer list, and the group list has at least a group ID and a peer ID. Accordingly, storage devices with the same group ID can be referred to as "peers" inside the group and mutually share files saved in the respective storage devices when at least two storage devices with the same group ID are separately plugged into computers and complete login on the central server via the Internet. | 08-18-2011 |
20110231510 | PROCESSING DATA FLOWS WITH A DATA FLOW PROCESSOR - An apparatus and method to distribute applications and services in and throughout a network and to secure the network includes the functionality of a switch with the ability to apply applications and services to received data according to respective subscriber profiles. Front-end processors, or Network Processor Modules (NPMs), receive and recognize data flows from subscribers, extract profile information for the respective subscribers, utilize flow scheduling techniques to forward the data to applications processors, or Flow Processor Modules (FPMs). The FPMs utilize resident applications to process data received from the NPMs. A Control Processor Module (CPM) facilitates applications processing and maintains connections to the NPMs, FPMs, local and remote storage devices, and a Management Server (MS) module that can monitor the health and maintenance of the various modules. | 09-22-2011 |
20110238775 | Virtualized Data Storage Applications and Optimizations - Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage, but actually store data at the data center. Virtual storage arrays may prioritize storage client and prefetching requests for communication over the WAN and/or SAN based on their associated clients, servers, storage clients, and/or applications. A virtual storage array may transfer large data sets from a data center to a branch location while providing branch location users with immediate access to the data set stored at the data center. Virtual storage arrays may be migrated by disabling a virtual storage array interface at a first branch location and then configuring another branch virtual storage array interface at a second branch location to provide its storage clients with access to storage array data stored at the data center. | 09-29-2011 |
20110238776 | METHOD AND SYSTEM FOR THE VIRTUALIZED STORAGE OF A DIGITAL DATA SET - This method of virtualized storage of a digital data set … | 09-29-2011 |
20110238777 | PIPELINE SYSTEMS AND METHOD FOR TRANSFERRING DATA IN A NETWORK ENVIRONMENT - A communications system having a data transfer pipeline apparatus for transferring data in a sequence of N stages from an origination device to a destination device. The apparatus comprises dedicated memory having buffers dedicated for carrying data and a master control for registering and controlling processes associated with the apparatus for participation in the N stage data transfer sequence. The processes include a first stage process for initiating the data transfer and a last Nth stage process for completing data transfer. The first stage process allocates a buffer from a predetermined number of buffers available within the memory for collection, processing, and sending of the data from the origination device to a next stage process. The Nth stage process receives, from the (N−1)th stage, the buffer allocated by the first stage process and frees the buffer upon processing completion to permit its reallocation. | 09-29-2011 |
20110246600 | MEMORY SHARING APPARATUS - A memory sharing apparatus includes a server, a host and a client. The server includes a shared page which is an entity of a shared memory, a share setting page which is data in which an index value of each shared page is collected, and a grant table in which a page frame number of each share setting page and the index value are stored so as to correspond to each other. The host includes a database in which the index value in the grant table is managed. The client includes the shared page and a shared page area to which the shared page is mapped, and a share setting page area to which the share setting page is mapped. | 10-06-2011 |
20110258283 | MESSAGE COMMUNICATION TECHNIQUES - A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message engine and a queue identifier, as opposed to a memory address. | 10-20-2011 |
20110270945 | COMPUTER SYSTEM AND CONTROL METHOD FOR THE SAME - A computer system with a plurality of storage systems connected to each other via a network, each storage system including a virtual machine whose data is stored in hierarchized storage areas. When a virtual machine of a first storage system is migrated from the first storage system to a second storage system, the second storage system stores data of the virtual machine of the first storage system as well as data of its own virtual machine, in the hierarchized storage areas in the second storage system. | 11-03-2011 |
20110270946 | SERVICE PROVIDING APPARATUS, SERVICE PROVIDING SYSTEM, SERVICE PROVIDING METHOD, AND STORAGE MEDIUM - A service providing apparatus … | 11-03-2011 |
20110289178 | Host Device and Method For Accessing a Virtual File in a Storage Device by Bypassing a Cache in the Host Device - A host device is provided comprising an interface configured to communicate with a storage device having a public memory area and a private memory area, wherein the public memory area stores a virtual file that is associated with content stored in the private memory area. The host device also comprises a cache, a host application, and a server. The server is configured to receive a request for the virtual file from the host application, send a request to the storage device for the virtual file, receive the content associated with the virtual file from the private memory area of the storage device, wherein the content is received by bypassing the cache, generate a response to the request from the host application, the response including the content, and send the response to the host application. In one embodiment, the server is a hypertext transfer protocol (HTTP) server. In another embodiment, the server can determine if a request is associated with a normal user permission or a super user permission, and send a response to the host application only if it is determined that the request is associated with the normal user permission. | 11-24-2011 |
20110289179 | DYNAMIC CONFIGURATION OF PROCESSING MODULES IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a method of updating configuration data of a network processor having one or more processing modules and a shared memory. A control processor of the network processor writes updated configuration data to the shared memory and sends a configuration update request to a configuration manager. The configuration update request corresponds to the updated configuration data. The configuration manager determines whether the configuration update request corresponds to settings of a given one of the processing modules. If the configuration update request corresponds to settings of a given one of the one or more processing modules, the configuration manager sends one or more configuration operations to a destination one of the processing modules corresponding to the configuration update request and updated configuration data. The destination processing module updates one or more register values corresponding to configuration settings of the processing module with the corresponding updated configuration data. | 11-24-2011 |
20110289180 | DATA CACHING IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory. | 11-24-2011 |
20110289181 | Method and System for Detecting Changes in a Network Using Simple Network Management Protocol Polling - In an embodiment, methods and systems have been provided for detecting changes in a network using improved Simple Network Management Protocol (SNMP) polling that reduces network traffic. Examples of changes in the network include, but are not limited to, configuration and behavioral changes in a network device, and response of a network device to a network change. A Network Management Station (NMS) periodically polls Management Information Base (MIB) groups instead of periodically polling individual MIB object instances. The NMS receives the Aggregate Change Identifiers (ACIs) of MIB groups in response to polling, from an SNMP agent. The changes in the received ACIs represent the changes in the MIB groups. A change in an MIB group represents changes in the MIB object instances of the MIB group. The ACIs can be a checksum, a timestamp, or a combination of the number of MIB object instances in a group and the checksum of the MIB group. | 11-24-2011 |
20110295968 | DATA PROCESSING METHOD AND COMPUTER SYSTEM - A technique for increasing the speed of data entry into a distributed processing platform is provided. According to a computer system of the present invention, when data is entered into each node in a distributed manner, the most efficient entry method (a method with the highest processing speed) is selected from among a plurality of entry methods, so that the data is entered into each node with no overlaps in accordance with the selected method. | 12-01-2011 |
20110314119 | MASSIVELY SCALABLE MULTILAYERED LOAD BALANCING BASED ON INTEGRATED CONTROL AND DATA PLANE - Method and system for load balancing in providing a service. A request for a service, represented by a single IP address, is first received by a router in the network. The router accesses information received from one or more advertising routers in the network. Each of the advertising routers advertises, via the single IP address, the service provided by at least one server in a server pool associated with the advertising router. The advertisement includes metrics indicating a health condition of the associated server pool. The router selects a target router based on, at least in part, the metrics of the server pools associated with the advertising routers to achieve a first level of load balancing and forwards the request for the service to the target router. A local server load balancer (SLB) connected with the target router then identifies a target server from the associated server pool to provide the requested service, thereby achieving a second level of load balancing. | 12-22-2011 |
20110314120 | SYSTEM AND METHOD FOR PERFORMING MULTISTREAM STORAGE OPERATIONS - Systems and methods for performing storage operations over multi-stream data paths are provided. Prior to performing a storage operation, it may be determined whether multi-streaming resources are available to perform a multi-stream storage operation. Availability of multi-streaming resources may be related to network pathways capable of supporting multi-stream storage operations, existing network load related to other storage operations being or to be performed, availability of components capable of supporting multi-stream storage operation, and other factors. If system resources to support multi-stream storage operations are not available, the system may optionally perform a traditional storage operation that does not incorporate multiple data streams. | 12-22-2011 |
20110320556 | Techniques For Migrating A Virtual Machine Using Shared Storage - Techniques for providing the ability to live migrate a virtual machine from one physical host to another physical host employ shared storage as the transfer medium for the state of the virtual machine. In addition, the ability for a virtualization module to use second-level paging functionality is employed, paging-out the virtual machine memory content from one physical host to the shared storage. The content of the memory file can be restored on another physical host by employing on-demand paging and optionally low-priority background paging from the shared storage to the other physical host. | 12-29-2011 |
20110320557 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information-processing apparatus includes a communication unit that transmits a first command to register in a memory a service provided by an application using a first communicative method. The communication unit transmits a second command to register in the memory a service indicator of the service using a second communicative method different from the first communicative method. | 12-29-2011 |
20110320558 | Network with Distributed Shared Memory - A computer network with distributed shared memory, including a clustered memory cache aggregated from physical memory locations on a plurality of physically distinct computing systems. The network also includes a plurality of local cache managers, each of which is associated with a different portion of the clustered memory cache, and a metadata service operatively coupled with the local cache managers. Also, a plurality of clients are operatively coupled with the metadata service and the local cache managers. In response to a request issuing from any of the clients for a data item present in the clustered memory cache, the metadata service is configured to respond with identification of the local cache manager associated with the portion of the clustered memory cache containing such data item. | 12-29-2011 |
20120005301 | Sharing an image - Method, server, network and computer program product for sharing an image between a first terminal and a second terminal. An original version of the image is received at the server from the first terminal. Tiles are then received at the server from the first terminal, each tile representing at least a section of the image and including a change made to the image at the first terminal. An image state is maintained at the server identifying which tiles are required for forming a latest version of the image. On determining that the latest version of the image is to be formed at the second terminal, tiles based on the image state for forming the latest version of the image are transmitted from the server to the second terminal. | 01-05-2012 |
20120016950 | METHOD AND APPARATUS FOR DYNAMICALLY MANAGING BANDWIDTH FOR CLIENTS IN A STORAGE AREA NETWORK - A method for managing bandwidth allocation in a storage area network includes receiving a plurality of Input/Output (I/O) requests from a plurality of client devices, determining a priority of each of the client devices relative to other client devices, and dynamically allocating bandwidth resources to each client device based on the priority assigned to that client device. | 01-19-2012 |
20120023187 | Multi-Tenant Universal Storage Manager - In one aspect, a universal storage manager in a multi-tenant computing system receives at least one message requesting a change to a storage infrastructure of the multi-tenant computing system. Thereafter, the universal storage manager associates the requested change with one of a plurality of operations changing the storage infrastructure. Once this association is made, the universal storage manager initiates the associated operation to change the storage infrastructure. Related apparatus, systems, techniques and articles are also described. | 01-26-2012 |
20120030305 | METHOD AND SYSTEM FOR DELIVERING EMBEDDED OBJECTS IN A WEBPAGE TO A USER AGENT USING A NETWORK DEVICE - A method and system for delivering embedded objects in a webpage to a user agent using a network device are described. In one embodiment, the method involves intercepting a webpage at a network device, where the webpage is transmitted from a web server and is destined to a user agent, scanning the webpage at the network device to discover links that are embedded in the webpage, obtaining an object that is identified by one of the links at the network device, and transmitting the object from the network device to the user agent as soon as the object is obtained at the network device. Other embodiments are also described. | 02-02-2012 |
20120030306 | RAPID MOVEMENT SYSTEM FOR VIRTUAL DEVICES IN A COMPUTING SYSTEM, MANAGEMENT DEVICE, AND METHOD AND PROGRAM THEREFOR - In a virtualized computer system having at least two computers connected via a network, the service suspension period while a virtual device is dynamically migrated from a first computer to a second computer is shortened. | 02-02-2012 |
20120047220 | DATA TRANSFER DEVICE AND DATA TRANSFER SYSTEM - According to one embodiment, a data transfer device is provided. The data transfer device is configured to transfer data between a plurality of data transceivers and at least one memory having a first memory area. When one of the data transceivers has acquired an exclusive access right to the first memory area of the memory, the data transfer device stores address information corresponding to the first memory area. | 02-23-2012 |
20120047221 | METHODS AND APPARATUS FACILITATING ACCESS TO STORAGE AMONG MULTIPLE COMPUTERS - Multiple computers in a cluster maintain respective sets of identifiers of neighbor computers in the cluster for each of multiple named resources. A combination of the respective sets of identifiers defines a respective tree formed by the respective sets of identifiers for a respective named resource in the set of named resources. Upon origination and detection of a request at a given computer in the cluster, the given computer forwards the request over a network to successive computers in the hierarchical tree leading to the computers relevant to handling the request, based on use of identifiers of neighbor computers. Thus, a combination of identifiers of neighbor computers identifies potential paths to related computers in the tree. | 02-23-2012 |
20120059900 | PERSISTENT PERSONAL MESSAGING IN A DISTRIBUTED SYSTEM - A persistent personal messaging system provides shared memory space functionality supporting a user changing between a plurality of client devices, even within a loosely coupled, distributed system for persistent personal messaging. A user, irrespective of which messaging client they are using, logs on to the system. The act of logging on places user data, representing the user, into the shared memory space. A “contacts” service agent finds the friends and groups that the user belongs to and notifies other users that the user has logged on. Given the on-line status of other users and groups, a “history” service agent will retrieve previous messages from the shared memory space that formed the user's conversations with users and groups, as if the user had never logged off or switched devices. When the user adds a new message to any conversation, the message is added to the shared memory space. | 03-08-2012 |
20120066335 | DYNAMIC ADDRESS MAPPING - To permit communications between devices using different communication protocols, a mapping device is connected to one or more communication networks, and stores associations between communication addresses as dynamic address mappings. A dynamic address mapping is associated with an initiator address (from which the communication is initiated) and a recipient address (to which a communication is initially addressed) and minimally contains a final address (to which a communication is finally routed). A new dynamic address mapping can be created in response to a request, typically from a communication initiator. Communications from the initiator address to the recipient address are routed to the final address, with appropriate format conversion if the protocol of the final address is different from that of the initiator address. A reply address may also be stored in a dynamic address mapping for return communications, and a reply mapping may be automatically generated to map the reply address to the initiator address. | 03-15-2012 |
20120066336 | APPARATUS AND METHOD FOR AGGREGATING DISPARATE STORAGE ON CONSUMER ELECTRONICS DEVICES - A method includes determining whether a requesting device includes sufficient available memory to store a media file. The method further includes determining whether a best fit memory block is available in a particular device of a plurality of devices in response to a determination that the requesting device includes insufficient available memory. | 03-15-2012 |
20120072524 | SYSTEM AND METHOD FOR RECORDING DATA IN A NETWORK ENVIRONMENT - A method is provided in one example embodiment and includes receiving a signal to record a media stream, and recording the media stream in a first file that has a preconfigured length. If the media stream being recorded exceeds the preconfigured length then a second file is used to continue recording the media stream. The second file can have the same preconfigured length as the first file. The method also includes receiving a signal to stop recording the media stream, and storing metadata associated with the media stream in a database. In specific implementations, the metadata can include a unique file name associated with the media stream, a directory name of a disk directory, a first time indicative of when the recording started, and a second time indicative of when the recording ended. | 03-22-2012 |
20120072525 | Extending Caching Network Functionality To An Existing Streaming Media Server - A content delivery network (CDN) includes multiple cluster sites, including sites with streaming media servers, caching servers and storage devices accessible to the caching servers for storing streaming content. Interface software is configured to initiate retrieval, by a caching server, of electronic streaming resources from the one or more storage devices in response to requests for the electronic streaming resource received by the streaming media server. | 03-22-2012 |
20120072526 | METHOD AND NODE FOR DISTRIBUTING ELECTRONIC CONTENT IN A CONTENT DISTRIBUTION NETWORK - The present invention relates to a method and node for efficiently distributing electronic content in a content distribution network (CDN) comprising a plurality of cache nodes. | 03-22-2012 |
20120079054 | Automatic Memory Management for a Home Transcoding Device - A content moving device enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and assigns a priority level to the content. The content moving device transcodes the candidate content in order of highest priority first and lowest priority last. The content moving device may also use the priority level to manage deletion of the transcoded content from the storage on the content moving device. The lowest priority level content may be deleted first as storage space is needed. | 03-29-2012 |
20120079055 | REVERSE DNS LOOKUP WITH MODIFIED REVERSE MAPPINGS - In accordance with the invention, embodiments of a DNS server, a DNS proxy process, and an intermediate server (IMS) are described. The DNS server, DNS proxy process, and intermediate server (IMS) described herein utilize a source IP address for a client device, in combination with a destination IP address for a host server, in reverse mapping operations in order to accurately provide a hostname originally requested by the client device. | 03-29-2012 |
20120079056 | Network Cache Architecture - There is described a method and apparatus for sending data through one or more packet data networks. A stripped-down packet is sent from a packet sending node towards a cache node, the stripped down packet including in its payload a pointer to a payload data segment stored in a file at the cache node. When the stripped-down packet is received at the cache node, the pointer is used to identify the payload data segment from data stored at the cache node. The payload data segment is inserted into the stripped-down packet in place of the pointer so as to generate a full size packet, which is sent from the cache node towards a client. | 03-29-2012 |
20120084381 | Virtual Desktop Configuration And Operation Techniques - Techniques for configuring and operating a virtual desktop session are disclosed herein. In an exemplary embodiment, an inter-partition communication channel can be established between a virtualization platform and a virtual machine. The inter-partition communication channel can be used to configure a guest operating system to conduct virtual desktop sessions and manage running virtual desktop sessions. In addition to the foregoing, other techniques are described in the claims, the detailed description, and the figures. | 04-05-2012 |
20120084382 | ON-THE-FLY REVERSE MAPPING - In accordance with the invention, embodiments of a DNS server, a DNS proxy process, and an intermediate server (IMS) are described. The DNS server, DNS proxy process, and intermediate server (IMS) described herein utilize a destination IP address for a destination device in on-the-fly reverse mapping operations in order to accurately provide a hostname originally requested by the client device. | 04-05-2012 |
20120084383 | Distributed Data Storage - The present invention relates to a distributed data storage system comprising a plurality of storage nodes. Using unicast and multicast transmission, a server application may write data in the storage system. When writing data, at least two storage nodes are selected based in part on a randomized function, which ensures that data is sufficiently spread to provide efficient and reliable replication of data in case a storage node malfunctions. | 04-05-2012 |
20120084384 | DISTRIBUTED CACHE FOR STATE TRANSFER OPERATIONS - A network arrangement that employs a cache having copies distributed among a plurality of different locations. The cache stores state information for a session with any of the server devices so that it is accessible to at least one other server device. Using this arrangement, when a client device switches from a connection with a first server device to a connection with a second server device, the second server device can retrieve state information from the cache corresponding to the session between the client device and the first server device. The second server device can then use the retrieved state information to accept a session with the client device. | 04-05-2012 |
20120084385 | Network Cache Architecture - There is described a method and apparatus for sending data through one or more packet data networks. A reduced size packet is sent from a packet sending node towards a cache node, the reduced size packet including in its payload a pointer to a payload data segment stored in a file at the cache node. When the reduced size packet is received at the cache node, the pointer is used to identify the payload data segment from data stored at the cache node. The payload data segment is inserted into the reduced size packet in place of the pointer so as to generate a full size packet, which is sent from the cache node towards a client. | 04-05-2012 |
20120089695 | ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests. | 04-12-2012 |
20120089696 | METHOD AND APPARATUS FOR MANAGING SHARED DATA AT A PORTABLE ELECTRONIC DEVICE OF A FIRST ENTITY - A method and apparatus for managing shared data at a portable electronic device of a first entity is provided. A message is received advising that data associated with a second entity is being shared. A request is transmitted to a server for a list of shared folders associated with the second entity, in response to an option to view shared folders associated with the second entity being selected. The list is received. An initialize command is transmitted to the server, the initialize command identifying at least one folder in the list. The data associated with the second entity is received, responsive to the transmitting the initialize command. The data is stored in association with a second entity identifier. | 04-12-2012 |
20120096106 | Extending a content delivery network (CDN) into a mobile or wireline network - A content delivery network (CDN) comprises a set of edge servers, and a domain name service (DNS) that is authoritative for content provider domains served by the CDN. The CDN is extended into one or more mobile or wireline networks that cannot or do not otherwise support fully-managed CDN edge servers. In particular, an “Extender” is deployed in the mobile or wireline network, preferably as a passive web caching proxy that is beyond the edge of the CDN but that serves CDN-provisioned content under the control of the CDN. The Extender may also be used to transparently cache and serve non-CDN content. An information channel is established between the Extender and the CDN to facilitate the Extender functionality. | 04-19-2012 |
20120096107 | HOME APPLIANCE MANAGING SYSTEM - The home appliance managing system includes a plurality of central managing devices and a center server. The center server is connected to the plurality of the central managing devices, and stores plural data used at home appliances. When the central managing device stores predetermined data requested by the home appliance, the central managing device sends the predetermined data to the home appliance. When the central managing device does not store the predetermined data, the central managing device requests the predetermined data from the center server. The center server sends the predetermined data to the central managing device in response to the request from the central managing device. The central managing device sends the predetermined data received from the center server to the home appliance and stores the same data. The center server selects the cache data from the plural data on the basis of the data previously sent to the central managing device, and sends the cache data to the central managing device. The central managing device stores the cache data received from the center server. | 04-19-2012 |
20120096108 | MANAGING APPLICATION INTERACTIONS USING DISTRIBUTED MODALITY COMPONENTS - A method for managing multimodal interactions can include the step of registering a multitude of modality components with a modality component server, wherein each modality component handles an interface modality for an application. The modality component can be connected to a device. A user interaction can be conveyed from the device to the modality component for processing. Results from the user interaction can be placed on a shared memory area of the modality component server. | 04-19-2012 |
20120096109 | Hierarchical Pre-fetch Pipelining in a Hybrid Memory Server - A method, hybrid server system, and computer program product prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system are received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request. | 04-19-2012 |
20120096110 | Registering, Transferring, and Acting on Event Metadata - A technique and associated mechanism is described for registering event metadata at a first site, transferring the event metadata to a second site using a portable module, and processing the event metadata at the second site. A user can register the event metadata at the first site in the course of consuming broadcast content. Namely, when the user encounters an interesting portion of the broadcast content, the user activates an input mechanism, resulting in the storage of event metadata associated with the interesting portion on the portable module. The second site can upload the event metadata from the portable module and, in response, provide content associated with the event metadata, including recommended content associated with the event metadata. | 04-19-2012 |
20120102134 | CACHE SHARING AMONG BRANCH PROXY SERVERS VIA A MASTER PROXY SERVER AT A DATA CENTER - A method, system and computer program product for cache sharing among branch proxy servers. A branch proxy server receives a request for accessing a resource at a data center. The branch proxy server creates a cache entry in its cache to store the requested resource if the branch proxy server does not store the requested resource. Upon creating the cache entry, the branch proxy server sends the cache entry to a master proxy server at the data center to transfer ownership of the cache entry if the master proxy server did not store the resource in its cache. When the resource becomes invalid or expired, the master proxy server informs the appropriate branch proxy servers storing the resource to purge the cache entry containing this resource. In this manner, the master proxy server ensures that the cached resource is synchronized across the branch proxy servers storing this resource. | 04-26-2012 |
20120102135 | SEAMLESS TAKEOVER OF A STATEFUL PROTOCOL SESSION IN A VIRTUAL MACHINE ENVIRONMENT - The disclosed technique uses virtual machines in solving a problem of persistent state for storage protocols. The technique provides for seamless, persistent, storage protocol session state management on a server, for higher availability. A first virtual server is operated in an active role in a host system to serve a client, by using a stateful protocol between the first virtual server and the client. A second, substantially identical virtual server is maintained in a passive role. In response to a predetermined event, the second virtual server takes over for the first virtual server, while preserving state for a pending client request sent to the first virtual server in the stateful protocol. The method can further include causing the second virtual server to respond to the request before a timeout which is specific to the stateful protocol can occur. | 04-26-2012 |
20120102136 | DATA CACHING SYSTEM - Provided herein are systems, uses, and processes relating to network communications. For example, provided herein are systems, uses, and processes for increasing transmission efficiency by removing redundancy from single source multiple destination transfers. | 04-26-2012 |
20120102137 | CLUSTER CACHE COHERENCY PROTOCOL - Systems, methods, and other embodiments associated with a cluster cache coherency protocol are described. According to one embodiment, an apparatus includes non-transitory storage media configured as a cache associated with a computing machine. The computing machine is a member of a cluster of computing machines that share access to a storage device. A cluster caching logic is associated with the computing machine. The cluster caching logic is configured to communicate with cluster caching logics associated with the other computing machines to determine an operational status of a clique of cluster caching logics performing caching operations on data in the storage device. The cluster caching logic is also configured to selectively enable caching of data from the storage device in the cache based, at least in part, on a membership status of the cluster caching logic in the clique. | 04-26-2012 |
20120102138 | Multiplexing Users and Enabling Virtualization on a Hybrid System - A method, hybrid server system, and computer program product support multiple users in an out-of-core processing environment. At least one accelerator system in a plurality of accelerator systems is partitioned into a plurality of virtualized accelerator systems. A private client cache is configured on each virtualized accelerator system in the plurality of virtualized accelerator systems. The private client cache of each virtualized accelerator system stores data that is one of accessible by only the private client cache and accessible by other private client caches associated with a common data set. Each user in a plurality of users is assigned to a virtualized accelerator system from the plurality of virtualized accelerator systems. | 04-26-2012 |
20120102139 | MANAGING DATA DELIVERY BASED ON DEVICE STATE - Managing power-consuming resources on a first computing device by adjusting data delivery from a plurality of second computing devices based on a state of the first computing device. The state of the first computing device is provided to the second computing devices to alter the data delivery. In some embodiments, the first computing device provides the second computing devices with actions or commands relating to data delivery based on the device state. For example, the second computing devices are instructed to store the data, forward the data, forward only high priority data, or perform other actions. Managing the data delivery from the second computing devices preserves battery life of the first computing device. | 04-26-2012 |
20120102140 | METHOD FOR EFFICIENT UTILISATION OF THE THROUGHPUT CAPACITY OF AN ENB BY USING A CACHE - Method and apparatus for enabling optimisation of the utilisation of the throughput capacity of a first and a second interface of an eNB, where the first and the second interface alternate in having the lowest throughput capacity, and thereby take turns in limiting the combined data throughput over the two interfaces. In the method, data is received over the first interface and then cached in one of the higher layers of the Internet Protocol stack. The output from the cache of data to be sent over the second interface is controlled, based on the available throughput capacity of the second interface. Thereby, the alternating limiting effect of the interfaces is levelled out. | 04-26-2012 |
20120110108 | Computer System with Cooperative Cache - A server receives information that identifies which chunks are stored in local caches at client computers and receives a request to evict a chunk from a local cache of a first one of the client computers. The server determines whether the chunk stored at the local cache of the first one of the client computers is globally oldest among the chunks stored in the local caches at the client computers, and authorizes the first one of the client computers to evict the chunk when the chunk is the globally oldest among the chunks stored in the local caches at the client computers. | 05-03-2012 |
20120110109 | CACHING ADAPTED FOR MOBILE APPLICATION BEHAVIOR AND NETWORK CONDITIONS - Systems and methods for caching adapted for mobile application behavior and network conditions are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of determining cacheability of content received for a client on a mobile device by tracking requests generated by the client at the mobile device to detect periodicity of the requests generated by the client, tracking responses received for requests generated by the client to detect repeatability in content of the responses, and/or determining whether the content received for the client is cacheable on the mobile device based on one or more of the periodicity in the requests and the repeatability in the content of the responses. | 05-03-2012 |
20120110110 | REQUEST AND RESPONSE CHARACTERISTICS BASED ADAPTATION OF DISTRIBUTED CACHING IN A MOBILE NETWORK - Systems and methods of request and response characteristics based adaptation of distributed caching in a mobile network are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of collecting information about a request or information about the response received for the request, the request being initiated at the mobile device, using the information about the request or the response, determining cacheability of the response, caching the response by storing the response a cache entry in a cache on the mobile device in response to determining the cacheability of the response, and/or serving the response from the cache to satisfy a subsequent request. The response in the cache entry can be verified by an entity physically separate from the mobile device to determine whether the response stored in the local cache still matches a current response at a source which sent the response. | 05-03-2012 |
20120110111 | CACHE DEFEAT DETECTION AND CACHING OF CONTENT ADDRESSED BY IDENTIFIERS INTENDED TO DEFEAT CACHE - Systems and methods for cache defeat detection are disclosed. Moreover, systems and methods for caching of content addressed by identifiers intended to defeat cache are further disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of resource management in a wireless network by caching content on a mobile device. The method can include detecting a data request to a content source for which content received is stored as cache elements in a local cache on the mobile device, determining, from an identifier of the data request, that a cache defeating mechanism is used by the content source, and/or retrieving content from the cache elements in the local cache to respond to the data request. | 05-03-2012 |
20120110112 | DISTRIBUTED SYSTEM FOR CACHE DEFEAT DETECTION AND CACHING OF CONTENT ADDRESSED BY IDENTIFIERS INTENDED TO DEFEAT CACHE - Systems and methods for cache defeat detection are disclosed. Moreover, systems and methods for caching of content addressed by identifiers intended to defeat cache are further disclosed. In one aspect, embodiments of the present disclosure include a system for optimizing resources in a mobile network by, for example, performing one or more of: identifying a parameter in an identifier used in multiple polling requests to a given content source; detecting that the parameter in the identifier changes for each of the polling requests; determining whether responses received from the given content source are the same for each of the multiple polling requests; and/or caching the responses on the mobile device in response to determining that the responses received for the given content source are the same. | 05-03-2012 |
20120124158 | FILE TRANSFER PROTOCOL FOR MOBILE COMPUTER - A method is disclosed for communicating using a device having a Palm OS. SMB is preferentially used to communicate with a node, and if use of SMB is not possible, FTP is used, and if use of FTP is not possible, Bluetooth is used. If FTP or Bluetooth is selected as the protocol, file sharing between the device and node that entails a read or write is executed by temporarily copying a file to an internal Palm OS memory of the device, performing the read or write on the file, and then copying the file back to the node to overwrite a previous version of the file at the node. For non-Palm OS file transfer to the internal memory, the file is wrapped in a Palm OS stream in the internal memory for executing reads or writes. For file transfer to an expansion Palm OS memory card, byte-to-byte copying of the file is executed using the FAT of the expansion memory, with the file being transferred through an internal Palm OS memory of the device. | 05-17-2012 |
20120124159 | CONTENT DELIVERY SYSTEM, CONTENT DELIVERY METHOD AND CONTENT DELIVERY PROGRAM - In order to stably deliver content data over a network, a content delivery system is provided with: a content retention module for storing content data consisting of hierarchically encoded hierarchical data; a cache retention module for caching content data; a hierarchical score determination module for calculating an access requirement frequency for each piece of cached hierarchical data; a hierarchical arrangement determination module for replacing hierarchical data having an access requirement frequency lower than a fixed value with the hierarchical data stored in the content retention module; and a content delivery module for delivering content data in response to requests from a client device. | 05-17-2012 |
20120131126 | Mirroring Solution in Cloud Storage Environment - A system configured to provide access to shared storage includes a first network node configured to provide access to the shared storage to a first plurality of client stations. The first network node includes a first cache memory module configured to store first data corresponding to the first plurality of client stations, and a first cache control module configured to transfer the first data from the first cache memory module to the shared storage. A second network node is configured to provide access to the shared storage to a second plurality of client stations. The second network node includes a second cache memory module configured to store second data corresponding to the second plurality of client stations and store the first data, and a second cache control module configured to transfer the second data from the second cache memory module to the shared storage. | 05-24-2012 |
20120131127 | ADVANCED CONTENTION DETECTION - A multiple computer system is disclosed in which n computers (M | 05-24-2012 |
20120131128 | SYSTEM AND METHOD FOR GENERATING A CONSISTENT USER NAME-SPACE ON NETWORKED DEVICES - Implementing a consistent user name-space on networked computing devices includes various components and methods. When a network connection between a local or host computing device and one or more remote computing devices is present, remote items are represented using the same methodology as items located on the host computing device. To the user, remote and local items are indistinguishable. When the network connection is lost or items located on a remote computer are otherwise unavailable, the unavailable items remain represented on the host computing device. Unavailable items are represented in a way that informs the user that the items may not be fully accessed. | 05-24-2012 |
20120143979 | PROTOCOL STACK USING SHARED MEMORY - There are disclosed processes and systems relating to optimized network traffic generation and reception. Application programs and a protocol stack may share a memory space. The protocol stack may designate available bandwidth for use by an application program. The application programs may store descriptors from which the protocol stack may form payload data for data units. | 06-07-2012 |
20120150987 | TRANSMISSION SYSTEM AND APPARATUS, AND METHOD - In a transmission system, a first transmitting apparatus (server node) acquires distribution data, which includes multiple update data sets and attribute information of the update data sets, from a file server via a second network. The first transmitting apparatus (server node) stores the attribute information so as to allow a second transmitting apparatus (client node) connected to the first transmitting apparatus (server node) via a first network to acquire the attribute information and determine necessity of acquisition with respect to each of the update data sets. The first transmitting apparatus (server node) also stores the update data sets to be acquired by the second transmitting apparatus (client node). | 06-14-2012 |
20120158882 | HIGHLY SCALABLE AND DISTRIBUTED DATA SHARING AND STORAGE - Embodiments of the disclosure relate to storing and sharing data in a scalable distributed storing system using parallel file systems. An exemplary embodiment may comprise a network, a storage node coupled to the network for storing data, a plurality of application nodes in device and system modalities coupled to the network, and a parallel file structure disposed across the storage node and the application nodes to allow data storage, access and sharing through the parallel file structure. Other embodiments may comprise interface nodes for accessing data through various file access protocols, a storage management node for managing and archiving data, and a system management node for managing nodes in the system. | 06-21-2012 |
20120158883 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM - A reading instruction receiving unit ( | 06-21-2012 |
20120158884 | CONTENT DISTRIBUTION DEVICE, CONTENT DISTRIBUTION METHOD, AND PROGRAM - A content distribution device of the present invention includes: a content holding unit that holds a plurality of distribution contents that can be distributed, the distribution content including a first content that has a first bit rate and a second content that has a second bit rate lower than the first bit rate; a cache holding unit that temporarily holds at least one of the first and second contents; a cache control unit that reads out the first or second content to be distributed from the content holding unit, and stores the read out content in the cache holding unit; and a content distribution unit that reads out and distributes the first or second content that is temporarily held in the cache holding unit, or the first or second content that is held in the content holding unit, in distribution of a specified content specified by a distribution request among the plurality of distribution contents. The content distribution unit reads out and distributes the second content of the specified content that is held in the content holding unit, in a case of the specified content not being stored in the cache holding unit, and an available capacity of the cache holding unit being less than a first available capacity threshold value. | 06-21-2012 |
20120166571 | APPARATUS AND METHOD FOR PROVIDING MOBILE SERVICE IN A MOBILE COMMUNICATION NETWORK - Apparatus, system, and method for providing a mobile service to a mobile node in a mobile communication network. In order to provide the mobile service, a request may be received from a mobile node for connecting to a mobile router. When the mobile node is authorized to access the mobile router, the authorized mobile node may be connected to a file server in the mobile router. Then, a storage service may be provided to the authorized mobile node. | 06-28-2012 |
20120166572 | CACHE SHARING AMONG BRANCH PROXY SERVERS VIA A MASTER PROXY SERVER AT A DATA CENTER - A method for cache sharing among branch proxy servers. A branch proxy server receives a request for accessing a resource at a data center. The branch proxy server creates a cache entry in its cache to store the requested resource if the branch proxy server does not store the requested resource. Upon creating the cache entry, the branch proxy server sends the cache entry to a master proxy server at the data center to transfer ownership of the cache entry if the master proxy server did not store the resource in its cache. When the resource becomes invalid or expired, the master proxy server informs the appropriate branch proxy servers storing the resource to purge the cache entry containing this resource. In this manner, the master proxy server ensures that the cached resource is synchronized across the branch proxy servers storing this resource. | 06-28-2012 |
20120166573 | CENTRALIZED FEED MANAGER - A method for delivering content from a plurality of sources to a plurality of end servers through a central manager is provided. The method includes receiving the content from the plurality of sources at the central manager, formatting the content to a form usable by the plurality of end servers, creating a transaction generic to the plurality of end servers where the transaction includes a reference to a set of instructions for storing the formatted content, sending the transaction to an end server in the plurality of end servers, and calling the reference to execute the set of instructions where the set of instructions store the formatted content into the memory of the end server. | 06-28-2012 |
20120173653 | VIRTUAL MACHINE MIGRATION IN FABRIC ATTACHED MEMORY - A computer program product and computer implemented method are provided for migrating a virtual machine between servers. The virtual machine is initially operated on a first server, wherein the first server accesses the virtual machine image over a network at a memory location within fabric attached memory. The virtual machine is migrated from the first server to a second server by flushing data to the virtual machine image from cache memory associated with the virtual machine on the first server and providing the state and memory location of the virtual machine to the second server. The virtual machine may then operate on the second server, wherein the second server accesses the virtual machine image over the network at the same memory location within the fabric attached memory without copying the virtual machine image. | 07-05-2012 |
20120173654 | METHOD AND APPARATUS FOR IDENTIFYING VIRTUAL CONTENT CANDIDATES TO ENSURE DELIVERY OF VIRTUAL CONTENT - An apparatus and method are provided that assure virtual content providers such as advertisers that their virtual content will reach every mobile device, every application within each mobile device and/or every user. Such functionality is referred to herein as a “guaranteed reach”. Guaranteed reach parameters including reach type parameters (mobile devices, applications and/or users) are specified in a memory. A server receives a virtual content request and a received target identification uniquely identifying, for example, the requesting device via a network. The server identifies virtual content candidates from the memory by comparing the received target identification to the stored target identification associated with the virtual content. The guaranteed reach parameters may also include frequency-based criteria that guarantee a frequency of impression(s) for particular virtual content and guaranteed priority criteria to ensure the guarantee will be met. | 07-05-2012 |
20120179771 | SUPPORTING AUTONOMOUS LIVE PARTITION MOBILITY DURING A CLUSTER SPLIT-BRAINED CONDITION - A method, data processing system, and computer program product autonomously migrate clients serviced by a first VIOS to other VIOSes in the event of a VIOS cluster “split-brain” scenario generating a primary sub-cluster and a secondary sub-cluster, where the first VIOS is in the secondary sub-cluster. The VIOSes in the cluster continually exchange keep-alive information to provide each VIOS with an up-to-date status of other VIOSes within the cluster and to notify the VIOSes when one or more nodes lose connection to or are no longer communicating with other nodes within the cluster, as occurs with a cluster split-brain event/condition. When this event is detected, a first sub-cluster assumes a primary sub-cluster role and one or more clients served by one or more VIOSes within the secondary sub-cluster are autonomously migrated to other VIOSes in the primary sub-cluster, thus minimizing downtime for clients previously served by the unavailable/uncommunicative VIOSes. | 07-12-2012 |
20120179772 | SYSTEM AND METHOD TO IMPROVE FITNESS TRAINING - A method for creating a personalized exercise routine with at least one user interface used in connection with forming machine-readable instructions protected as private to a user subsequently carrying out the exercise routine on an exercise machine, the method including providing the user with at least one user interface to define the personalized exercise routine; forming machine-readable instructions to control the exercise machine to carry out the exercise routine on the exercise machine, said machine instructions protected as private to the user; storing the personalized exercise routine formed in the machine-readable instructions in a memory device; and user-triggered engaging of the machine-readable instructions to control the exercise machine in carrying out the personalized exercise routine. The method can include associating the exercise routine with a first exercise machine to produce a first set of signals; and subsequently translating the first set of signals into the machine-readable instructions. | 07-12-2012 |
20120179773 | METHOD AND SYSTEM FOR COMMUNITY DATA CACHING - A cache module ( | 07-12-2012 |
20120191801 | Utilizing Removable Virtual Volumes for Sharing Data on Storage Area Network - The present disclosure provides data sharing through virtual removable volumes. A virtual volume of a SAN (storage area network) is presented to clients as a virtual removable volume. A controlling application controls access of clients connected to the SAN to the virtual removable volume. The controlling application allows only one client at a time to access the virtual removable volume. The controlling application allows a first client to mount the virtual removable volume as a removable volume. The controlling application then causes the first client to unmount the virtual removable volume and allows a second client to mount the virtual removable volume as a removable volume. In this way, the first client and second client are able to share data via the virtual removable volume without causing corruption of data and without requiring a shared file system or physical transfer of removable media. | 07-26-2012 |
20120209942 | SYSTEM COMBINING A CDN REVERSE PROXY AND AN EDGE FORWARD PROXY WITH SECURE CONNECTIONS - A proxy system is provided to receive an HTTP request for content accessible over the Internet, comprising: cache storage; and a computer system configured to implement a CDN proxy module and an edge forward proxy module, each having access to the cache storage to cache and to retrieve content; a selector to select either the CDN proxy module or the edge forward proxy module depending upon contents of a header of the HTTP request received from the user device; and an HTTP client to forward the request from the CDN proxy or from the edge forward proxy over the Internet to a server to serve the requested content. | 08-16-2012 |
20120209943 | APPARATUS AND METHOD FOR CONTROLLING DISTRIBUTED MEMORY CLUSTER - Provided are an apparatus and method for controlling a distributed memory cluster. A distributed computing system may include a computing node cluster, a distributed memory cluster, and a controlling node. The computing node cluster may include a plurality of computing nodes including first computing nodes that each generates associated data. The distributed memory cluster may be configured to store the associated data of the first computing nodes. The controlling node may be configured to select memory blocks of the associated data for distribution on the distributed memory cluster based on a node selection rule and memory cluster structure information, and to select second computing nodes from the computing node cluster based on a location selection rule and the memory cluster structure information. | 08-16-2012 |
20120209944 | Software Pipelining On A Network On Chip - Memory sharing in a software pipeline on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, including segmenting a computer software application into stages of a software pipeline, the software pipeline comprising one or more paths of execution; allocating memory to be shared among at least two stages including creating a smart pointer, the smart pointer including data elements for determining when the shared memory can be deallocated; determining, in dependence upon the data elements for determining when the shared memory can be deallocated, that the shared memory can be deallocated; and deallocating the shared memory. | 08-16-2012 |
20120215878 | CONTENT DELIVERY PLATFORM APPARATUSES, METHODS AND SYSTEMS - The CONTENT DELIVERY PLATFORM APPARATUSES, METHODS AND SYSTEMS (“CDP”) transform content seed selections and recommendations via CDP components such as discovery and gurus into events and discovery of other contents for users and revenue for right-holders. In one embodiment, the CDP may provide facilities for obtaining a universally resolvable list of content items on a local client and identifying a non-local item from the list that is absent on the local client. The CDP may generate a local cache request for the identified non-local item having an associated universally resolvable content identifier and transmit the generated local cache request to a universally resolvable content server. The CDP may then receive, in response to the transmitted request, a universally resolvable content item corresponding to the local cache request and may mark the requested item as temporary and locally available upon receiving the content item. | 08-23-2012 |
20120221670 | METHODS, CIRCUITS, DEVICES, SYSTEMS AND ASSOCIATED COMPUTER EXECUTABLE CODE FOR CACHING CONTENT - Disclosed are methods, circuits, devices, systems and associated computer executable code for caching content. According to embodiments, a client device may be connected to the internet or other distributed data network through a gateway network. As initial portions of client requested content enters the gateway network, the requested content may be characterized and compared to content previously cached on a cache integral or otherwise functionally associated with the gateway network. In the event a match is found, a routing logic, mechanism, circuitry or module may replace the content source server with the cache as the source of content being routed to the client device. In the event the comparison does not produce a match, as content enters the network a caching routine running on processing circuitry associated with the gateway network may passively cache the requested content while routing the content to the client device. | 08-30-2012 |
20120221671 | Controlling Shared Memory - In view of the characteristics of distributed applications, the present invention proposes a technical solution for applying a shared memory on an NIC comprising: a shared memory configured to provide shared storage space for a task of a distributed application, and a microcontroller. Furthermore, the present invention provides a computer device that includes the above-mentioned NIC, a method for controlling a read/write operation on a shared memory of a NIC, and a method for invoking the NIC. The use of the technical solution provided in the present invention bypasses the processing of the network protocol stack and avoids the time delay introduced by it. The present invention does not need to perform TCP/IP encapsulation on the data packet, thus greatly saving the additional packet header and packet tail overheads generated from TCP/IP-layer data encapsulation. | 08-30-2012 |
20120221672 | COMMUNICATION DEVICES, METHODS AND COMPUTER READABLE STORAGE MEDIA - A communication device includes a memory that has a first storage area that stores an identifier of a first communication device, which is in a communication session with the communication device, and a second storage area that stores an identifier of a second communication device, which established a communication session with the communication device. The communication device performs the steps of: notifying the identifier stored in the first storage area to the first communication device, receiving an identifier stored in a first storage area of the first communication device from the first communication device, determining whether the identifier received from the first communication device is stored in the second storage area of the communication device, restricting re-establishment of the communication session with the first communication device when the identifier received from the first communication device is stored in the second storage area of the communication device. | 08-30-2012 |
20120221673 | METHOD FOR PROVIDING VIRTUALIZATION INFORMATION - Virtualization information on a first user terminal is generated and is stored in a data storage device through a mobile communication system. When a user with a second user terminal requests virtualization information while the second user terminal provides a first identification number of the first user terminal, the mobile communication system provides virtualization information corresponding to the identification number to the second user terminal. The second user terminal operates the virtualization information corresponding to the first identification number. | 08-30-2012 |
20120226765 | DATA RECEPTION MANAGEMENT APPARATUS, SYSTEMS, AND METHODS - Apparatus, systems, and methods to manage networks may operate to receive a packet into an element of an array contained in a memory while a low resource state exists, and to truncate the array at the element responsive to at least one of an indication that the array is full, or an indication that no more packets are available to be received after receiving at least the packet. The receiving and the truncating may be executed by a processor. Additional apparatus, systems, and methods are disclosed. | 09-06-2012 |
20120226766 | SYSTEMS AND METHODS THERETO FOR ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING TECHNIQUES - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests. | 09-06-2012 |
20120233284 | METHODS AND SYSTEMS FOR CACHING DATA COMMUNICATIONS OVER COMPUTER NETWORKS - A computer-implemented method and system for caching multi-session data communications in a computer network. | 09-13-2012 |
20120239774 | UNOBTRUSIVE METHODS AND SYSTEMS FOR COLLECTING INFORMATION TRANSMITTED OVER A NETWORK - The present invention relates generally to unobtrusive methods and systems for collecting information transmitted over a network utilizing a data collection system residing between an originator system and a responding system. In one embodiment the Originator System can be a web browser and the Responding System can be a web server. In another embodiment the Originator System can be a local computer and the Responding System can be another computer on the network. Both these and other configurations are considered to be within the domain of this invention. The Data Collection System acts in a hybrid peer-to-peer/client-server manner in responding to the Originating System as a Responding System while acting as an Originating System to the Responding System. This configuration enables real-time acquisition and storage of network traffic information in a completely unobtrusive manner without requiring any server- or client-side code. | 09-20-2012 |
20120246257 | Pre-Caching Web Content For A Mobile Device - A web service for pre-caching web content on a mobile device includes receiving a request from the mobile device for first web content, fetching the first web content, determining second web content to pre-fetch based upon the first web content, fetching the second web content, and causing the second web content to be stored in a content cache on the mobile device responsive to the request for the first web content. Pre-caching web content in this manner provides web content to the mobile device that the user of the mobile device is likely to access. Pre-caching of additional web content prior to receiving an explicit request improves web browsing performance of the mobile device. | 09-27-2012 |
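The pre-caching flow in the abstract above (fetch the requested content, determine related content, push it to the device cache before it is requested) can be sketched as follows. This is an illustrative reduction, not the patented method: the link-extraction regex, the `fetch` callable, and the plain-dict cache are all assumptions.

```python
import re

def precache(first_url, fetch, cache):
    """Fetch the requested page, then pre-fetch pages it links to.

    `fetch(url) -> str` is a hypothetical network call; discovered links
    are stored in `cache` before the user explicitly requests them.
    """
    body = fetch(first_url)
    cache[first_url] = body
    # Naive href extraction stands in for "determining second web content".
    for link in re.findall(r'href="([^"]+)"', body):
        if link not in cache:
            cache[link] = fetch(link)   # pushed to the content cache early
    return body
```

A subsequent request for a pre-fetched link is then served from the local cache without a network round trip.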
20120246258 | HTTP-BASED SYNCHRONIZATION METHOD AND APPARATUS - An HTTP-based synchronization method includes obtaining a first response sent by a source server or a cache in response to an HTTP request for obtaining a file; determining time when the first response is sent in local time at server, according to a value of a Date field and a value of an Age field in the first response; determining time when the first response is sent in local time at client, according to the client time of an event related to the first response; and determining time offset between the server time and the client time according to the time when the first response is sent in local time at server and the time when the first response is sent in local time at client, and setting up a synchronization relationship between the client time and the server time. | 09-27-2012 |
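The offset computation described in the abstract above can be sketched directly: the response's send time in server-local time is the `Date` header value plus the accumulated `Age`, and the offset is that value minus the client-local timestamp of the corresponding event. Function and parameter names are illustrative.

```python
from email.utils import parsedate_to_datetime

def clock_offset(date_header: str, age_header: str, client_event_ts: float) -> float:
    """Estimate (server time - client time) from one HTTP response.

    date_header: value of the response's Date field (RFC 1123 format).
    age_header: value of the response's Age field, in seconds.
    client_event_ts: client-local epoch timestamp of the event related
    to the response (e.g. its arrival at the client).
    """
    # Send time in server-local time = Date + Age accumulated in caches.
    server_sent = parsedate_to_datetime(date_header).timestamp() + float(age_header)
    return server_sent - client_event_ts
```

The client can then map any server timestamp into its own clock by subtracting the offset, establishing the synchronization relationship the abstract describes.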
20120246259 | EXCHANGING STREAMING INFORMATION - Intermediate devices ( | 09-27-2012 |
20120254340 | Local Storage Linked to Networked Storage System - Disclosed are various embodiments for storage of files. A portable memory device is configured to couple to a computing device, and a storage management application is stored in the portable memory device, the storage management application being executable by a processor circuit. The storage management application is configured to send a plurality of files for storage in a networked storage system, the networked storage system being remote to the computing device. The storage management application caches a subset of the files on the portable memory device and maintains a local file directory in the portable memory device. The local file directory lists the files stored in the networked storage system in association with an account linked to the portable memory device. | 10-04-2012 |
20120254341 | METHOD AND SYSTEM FOR DYNAMIC DISTRIBUTED DATA CACHING - A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to a peer member selected by the master member to become the new master member. | 10-04-2012 |
20120259942 | Proxy server with byte-based include interpreter - According to this disclosure, a proxy server is enhanced to be able to interpret instructions that specify how to modify an input object to create an output object to serve to a requesting client. Typically the instructions operate on binary data. For example, the instructions can be interpreted in a byte-based interpreter that directs the proxy as to what order, and from which source, to fill an output buffer that is served to the client. The instructions specify what changes to make to a generic input file. This functionality extends the capability of the proxy server in an open-ended fashion and enables it to efficiently create a wide variety of outputs for a given generic input file. The generic input file and/or the instructions may be cached at the proxy. The teachings hereof have applications in, among other things, the delivery of web content, streaming media, and the like. | 10-11-2012 |
20120271903 | SHARED RESOURCE AND VIRTUAL RESOURCE MANAGEMENT IN A NETWORKED ENVIRONMENT - Systems and methods for shared resource or virtual resource management in a networked environment are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, includes, creating a virtual memory pool from an aggregation of the physical memory of the devices and/or allocating portions of the virtual memory pool to a given device among the devices. Further, the portions of the virtual memory pool allocated to the given device are in part accessible over a wireless connection for data retrieval and storage by the given device. | 10-25-2012 |
20120271904 | Method and Apparatus for Caching in a Networked Environment - In general, methods and apparatus according to the invention mitigate these and other issues by implementing caching techniques described herein. So when one device in a home network downloads and plays a particular content (e.g., a video, song) from a given site, the content is cached within the network such that the same content is available to be re-played on another device without re-downloading the same content from the Internet. | 10-25-2012 |
20120271905 | PROXY CACHING IN A PHOTOSHARING PEER-TO-PEER NETWORK TO IMPROVE GUEST IMAGE VIEWING PERFORMANCE - The present invention provides a method and system for serving an image stored in the peer computer to a requesting computer in a network photosharing system in which the peer computer is coupled to a photosharing system server. Aspects of the invention include caching copy of the image in the photosharing server; and in response to the photosharing server receiving a request from the requesting computer to view the image stored in the peer computer, transmitting the cached image from the photosharing server to the requesting computer, thereby avoiding the need to transfer the image from the peer computer to the photosharing server for each request to view the image. | 10-25-2012 |
20120271906 | System and Method for Selectively Caching Hot Content in a Content Delivery System - A method includes altering a request interval threshold when a cache-hit ratio falls below a target, receiving a request for content, providing the content when the content is in the cache, when the content is not in the cache and the time since a previous request for the content is less than the request interval threshold, retrieving and storing the content, and providing the content to the client, when the elapsed time is greater than the request interval threshold, and when another elapsed time since another previous request for the content is less than another request interval threshold, retrieving and storing the content, and providing the content to the client, and when the other elapsed time is greater than the other request interval threshold, rerouting the request to the content server without caching the content. | 10-25-2012 |
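The interval-based cache-admission test above can be sketched as follows. This is a simplification under stated assumptions: it collapses the abstract's two-threshold scheme into a single threshold, and the class and method names are mine. Content is admitted to the cache only when a repeat request arrives within the request interval threshold.

```python
import time

class IntervalCache:
    """Cache content only when it is requested again within a threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold      # request interval threshold (seconds)
        self.cache = {}                 # content id -> content
        self.last_request = {}          # content id -> time of previous request

    def handle(self, cid, fetch, now=None):
        """Serve one request; `fetch(cid)` stands in for the content server."""
        now = time.monotonic() if now is None else now
        if cid in self.cache:
            return self.cache[cid], "hit"
        prev = self.last_request.get(cid)
        self.last_request[cid] = now
        content = fetch(cid)
        if prev is not None and (now - prev) < self.threshold:
            self.cache[cid] = content   # hot enough: admit to the cache
            return content, "cached"
        return content, "forwarded"     # cold: pass through without caching
```

Lowering `threshold` when the hit ratio falls below target makes admission stricter, matching the altering step in the abstract.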
20120278423 | Method for Transmitting Data by Means of Storage Area Network and System Thereof - In the technical field of data storage and access, the invention relates to the technique of data transmission using a storage area network (SAN) in a magnetic disk storage device environment, including a method for transmitting data over a SAN in such an environment, including: determining a logical volume accessible to a server of the magnetic disk storage device; obtaining information on a logical volume accessible to a client of the magnetic disk storage device, which is determined by the client; establishing a corresponding relationship between the logical volume accessible to the server and the logical volume accessible to the client; receiving a request for using the logical volume of the magnetic disk storage device from the client; and informing the client of an available logical volume by utilizing the corresponding relationship so that a data access to the available logical volume is performed by the client over the SAN. | 11-01-2012 |
20120278424 | SYSTEM, A METHOD, AND A COMPUTER PROGRAM PRODUCT FOR COMPUTER COMMUNICATION - A system, a method and a computer program product for transmission over a network, the method includes: receiving, by an intermediate system coupled to the network, a portion of a data structure that is aimed to a recipient computer; generating a stamp that is responsive to a content of a segment of the data structure and is indifferent to transfer information about a transmission of the data structure; wherein the portion may include the segment or equals the segment; determining, by the intermediate system, whether to cache the portion, in response to at least a comparison between the stamp and stamps of cached portions of data structures; selectively caching the portion in response to the determination; and transmitting to the recipient computer either one of the portion of the transmitted data structure and a cached version of the portion of the transmitted data structure. | 11-01-2012 |
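A stamp that is "responsive to the content of a segment and indifferent to transfer information" behaves like a content hash, so the caching decision in the abstract above can be sketched as content-addressed storage. This is an illustrative reading, not the filing's definition of a stamp; SHA-256 is an assumption.

```python
import hashlib

def stamp(segment: bytes) -> str:
    """A stamp responsive only to the segment's content, indifferent to
    transfer information (sender, timestamps, sequence numbers)."""
    return hashlib.sha256(segment).hexdigest()

class StampCache:
    """Intermediate-system cache keyed by content stamps."""

    def __init__(self):
        self.by_stamp = {}              # stamp -> cached portion

    def receive(self, segment: bytes):
        """Return (bytes to transmit to the recipient, cached copy used?)."""
        s = stamp(segment)
        if s in self.by_stamp:
            return self.by_stamp[s], True   # identical content already cached
        self.by_stamp[s] = segment          # selectively cache the new portion
        return segment, False
```

Because the stamp ignores transfer metadata, the same payload sent by different senders or at different times still matches the cached copy.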
20120284356 | WIRELESS TRAFFIC MANAGEMENT SYSTEM CACHE OPTIMIZATION USING HTTP HEADERS - Wireless traffic management system cache optimization using HTTP headers is disclosed. In one embodiment, the method can include, for example: storing the web content from a web server as cached elements in a local cache on the mobile device and retrieving the cached elements from the local cache to respond to a request made at the mobile device, regardless of expiration indicated in headers of the web content that is cached. The cached elements can be retrieved from the local cache and used to respond to the request at the mobile device even if the expiration in the headers has passed, using a tag that a proxy server remote from the mobile device uses to determine whether the cached elements for the web content on the local proxy are still valid. | 11-08-2012 |
20120290676 | System and Method for Managing Information Retrievals for Integrated Digital and Analog Archives on a Global Basis - A system and method for managing information retrievals from all of an enterprise's archives across all operating locations. The archives include both digital and analog archives. A single "virtual archive" is provided which links all of the archives of the enterprise, regardless of the location or configuration of the archive. The virtual archive allows for data aggregation (regardless of location) so that a user can have data from multiple physical locations on a single screen in a single view. A single, consistent and user-friendly interface is provided through which users are able to access multiple applications through a single sign-on and password. Logical tables are used to direct information retrieval requests to the physical archives. The retrieved information is reformatted and repackaged to resolve any incompatibility between the format of the stored information and the distribution media. | 11-15-2012 |
20120297008 | CACHING PROVENANCE INFORMATION - Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device, comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server. | 11-22-2012 |
20120297009 | METHOD AND SYSTEM FOR CACHING IN MOBILE RAN - A non-transitory computer readable medium and a method that may include receiving, at a first level cache that is coupled to a radio access network (RAN) component, a data entity that comprises an address; wherein each cache of the hierarchical group of caches is coupled to a component of the RAN or to a component of a core network that is coupled between the RAN and the Internet; identifying the data entity as comprising a request to receive information from a requesting entity that is wirelessly coupled to the RAN—if the address belongs to a root cache address range; providing the information, by the first level cache, to the requesting entity if the content is stored in the first level cache; and sending to an intermediate level cache the data entity if the information is not stored in the first level cache. | 11-22-2012 |
20120297010 | Distributed Caching and Cache Analysis - In a distributed caching system, a Web server may receive, from a user device, a request for a Web service. The Web server may parse the request to identify a cookie included in the request and determine whether the cookie includes allocation information. The allocation information may indicate that multiple cache servers temporarily store certain data associated with the Web service. The Web server may request the certain data from the cache servers and then transmit the certain data to the user device. If one of the cache servers fails to respond to the request, the Web server may reallocate the cached data and update the cookie by overwriting the allocation information stored in the cookie. | 11-22-2012 |
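The cookie-driven lookup above can be sketched as follows. The cookie layout is a guess for illustration only (a JSON mapping of cache-server name to the key of the shard it holds); the filing does not specify an encoding. The sketch gathers the shards from live servers and rewrites the cookie to drop any failed server's allocation, ready for reallocation.

```python
import json

def get_cached(cookie: str, cache_servers: dict):
    """Read allocation info from the cookie and gather the cached shards.

    cache_servers maps server name -> its key/value store (dicts stand in
    for real cache servers). Returns (shards, updated cookie) where the
    updated cookie overwrites stale allocation information.
    """
    allocation = json.loads(cookie)     # parse allocation info from cookie
    shards, surviving = {}, {}
    for server, key in allocation.items():
        store = cache_servers.get(server)
        if store is not None and key in store:
            shards[key] = store[key]    # server responded: keep its shard
            surviving[server] = key
    return shards, json.dumps(surviving)
```

A real deployment would re-place the missing shard on a live server before writing the new cookie; the sketch only shows the detection-and-overwrite step.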
20120297011 | Intelligent Reception of Broadcasted Information Items - A method comprising: receiving a plurality of broadcasted information items in a client device; determining fondness of the information items to the user of the client device according to predefined criteria; and selecting a subset of the information items to be stored in a memory of the client device at least partly based on the determined fondness of the information items. | 11-22-2012 |
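The selection step above reduces to ranking broadcast items by a fondness score and keeping those that fit in device memory. The scoring criteria are left open by the abstract, so the sketch takes fondness as an arbitrary callable; names are illustrative.

```python
def select_items(items, fondness, memory_slots: int):
    """Keep the `memory_slots` broadcast items the user is fondest of.

    `fondness(item) -> score` implements the predefined criteria, which
    the abstract does not specify; any scoring function can be plugged in.
    """
    ranked = sorted(items, key=fondness, reverse=True)
    return ranked[:memory_slots]
```

Items falling outside the top slots are simply not committed to the client's memory.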
20120297012 | UPDATING MULTIPLE COMPUTING DEVICES - A system includes a server site that includes a memory for storing update data sets that correspond to data sets stored on multiple computing devices of a user. The system also includes a synchronization manager for determining that one computing device associated with the user and another computing device associated with the user are absent one or more data updates stored in the memory at the server site. The synchronization manager is configured to send in parallel, absent establishing a data transfer lock, the one or more data updates to both computing devices of the user for updating the corresponding data stored on each computing device. | 11-22-2012 |
20120303736 | Method And Apparatus For Achieving Data Security In A Distributed Cloud Computing Environment - A distributed cloud storage system includes a cloud storage broker logically residing between a client platform and a plurality of remote cloud storage platforms. The cloud storage broker mediates execution of a cloud storage process that involves dividing a data item into multiple portions and allocating the portions to multiple selected cloud storage platforms according to first and second rules defining a key known only to the cloud storage broker or to the client. At some later time when it is desired to retrieve the data item, the key is retrieved from storage and the rules are executed in a reverse fashion to retrieve and reassemble the data item. | 11-29-2012 |
20120303737 | System and Method for Storing Data in Clusters Located Remotely From Each Other - A system for storing data includes a plurality of clusters located remotely from each other in which the data is stored. Each cluster has a token server that controls access to the data with only one token server responsible for any piece of data. Each cluster has a plurality of Cache appliances. Each cluster has at least one backend file server in which the data is stored. The system includes a communication network through which the servers and appliances communicate with each other. A Cache Appliance cluster in which data is stored in back-end servers within each of a plurality of clusters located remotely from each other. A method for storing data. | 11-29-2012 |
20120311064 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR CACHING CALL SESSION CONTROL FUNCTION (CSCF) DATA AT A DIAMETER SIGNALING ROUTER (DSR) - According to one aspect, the subject matter described herein includes a method for caching call session control function (CSCF) data at a Diameter signaling router (DSR). The method includes steps occurring at a DSR network node comprising a communication interface, a processor, and a memory. The steps include receiving, via the communication interface, a Diameter message associated with a network subscriber. The steps also include identifying, by the processor, a CSCF associated with the network subscriber based on the Diameter message. The steps further include storing, in the memory, a record associating the CSCF and the network subscriber. | 12-06-2012 |
20120311065 | ASYNCHRONOUS FILE OPERATIONS IN A SCALABLE MULTI-NODE FILE SYSTEM CACHE FOR A REMOTE CLUSTER FILE SYSTEM - Asynchronous file operations in a scalable multi-node file system cache for a remote cluster file system, is provided. One implementation involves maintaining a scalable multi-node file system cache in a local cluster file system, and caching local file data in the cache by fetching file data on demand from the remote cluster file system into the cache over the network. The local file data corresponds to file data in the remote cluster file system. Local file information is asynchronously committed from the cache to the remote cluster file system over the network. | 12-06-2012 |
20120311066 | System for the Delivery and Dynamic Presentation of Large Media Assets over Bandwidth Constrained Networks - Media content from a content provider is delivered, based on a predetermined set of constraints, to a local cache of a user device before the media is viewed. A client asset manager process resides in the user device, an asset list at the content provider site, and the media assets at a remote site. | 12-06-2012 |
20120311067 | Data Communication Efficiency - To reduce repetitive data transfers, data content of an outgoing message is stored within cache storage of an intermediate node of a data communications network. A token for identifying the cached data content is stored at the intermediate node and the sender. When a subsequent outgoing message is to be routed from a first network node to a target destination via the intermediate node, a process running at the first node checks whether the content of the message matches data cached at the intermediate node. If there is a match, a copy of the token is sent from the first node to the intermediate node instead of the data content. The token is used at the intermediate node to identify the cached data, and the cached data is retrieved from the cache and forwarded to the target destination as an outgoing message. | 12-06-2012 |
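The token substitution described above can be sketched with two cooperating classes. This is a minimal model under stated assumptions: a SHA-256 digest serves as the token, and in-process method calls stand in for the network hops between sender and intermediate node.

```python
import hashlib

class Intermediate:
    """Intermediate node that caches message content under a token."""

    def __init__(self):
        self.store = {}                 # token -> cached data content

    def forward(self, payload: bytes) -> str:
        """First transfer: cache the content and return its token."""
        token = hashlib.sha256(payload).hexdigest()
        self.store[token] = payload
        return token

    def forward_token(self, token: str) -> bytes:
        """Later transfers: expand a token back into the cached content."""
        return self.store[token]

class Sender:
    def __init__(self, node: Intermediate):
        self.node = node
        self.tokens = {}                # payload -> token held at the sender

    def send(self, payload: bytes):
        """Return (content delivered to the destination, token path used?)."""
        token = self.tokens.get(payload)
        if token is not None:
            # Only the short token crosses the first hop; the intermediate
            # retrieves the cached data and forwards it to the destination.
            return self.node.forward_token(token), True
        self.tokens[payload] = self.node.forward(payload)
        return payload, False
```

After the first transfer, every repeat of the same payload costs one token's worth of bandwidth on the sender-to-intermediate hop.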
20120324035 | SHARED NETWORK RESPONSE CACHE - An apparatus and system are disclosed for reducing network traffic using a shared network response cache. A request filter module intercepts a network request to prevent the network request from entering a data network. The network request is sent by a client and is intended for one or more recipients on the data network. A cache check module checks a shared response cache for an entry matching the network request. A local response module sends a local response to the client in response to an entry in the shared response cache matching the network request. The local response satisfies the network request based on information from the matching entry in the shared response cache. | 12-20-2012 |
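The intercept-check-respond pipeline above (request filter, cache check, local response) can be sketched in one class. Names and the string-keyed cache are illustrative assumptions; the point is that a matching entry keeps the request off the data network entirely.

```python
class SharedResponseCache:
    """Intercept outgoing requests; answer locally when a prior response
    to an identical request is already in the shared cache."""

    def __init__(self, network_send):
        self.network_send = network_send    # fallback: the real network call
        self.responses = {}                 # request -> cached response

    def request(self, req: str):
        cached = self.responses.get(req)    # cache check module
        if cached is not None:
            return cached, "local"          # local response, no network traffic
        resp = self.network_send(req)       # request enters the data network
        self.responses[req] = resp          # populate shared cache for others
        return resp, "network"
```

With the cache shared among clients, one client's response can satisfy another client's identical request without a second network round trip.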
20120324036 | System And Method For Acceleration Of A Secure Transmission Over Satellite - A broadband communication system with improved latency is disclosed. The system employs acceleration of secure web-based communications over a satellite communication network. In accordance with aspects of the invention, secure protocol acceleration is employed such that required protocol signals transmitted from a computer employing a web browser may be intercepted by a remote terminal. To ensure that the browser will continue transmitting data, the remote terminal generates required acknowledgment and security signals to continue the secure communication, which may then be transmitted back to the computer. Meanwhile, the received protocol signals may be converted by the remote terminal for transmission through the satellite communications system in a format appropriate for that communication medium. Aspects of the invention further include a hub or similar device for communicating with the satellite communications system. | 12-20-2012 |
20120324037 | FLOW CONTROL METHOD AND APPARATUS FOR ENHANCING THE PERFORMANCE OF WEB BROWSERS OVER BANDWIDTH CONSTRAINED LINKS - Flow control is applied to increase the performance of a browser pre-fetching Web objects over bandwidth constrained links, raising the level of concurrency while reducing contention for limited bandwidth resources. The browser uses an agent or a gateway to speed up its Internet transactions over bandwidth constrained connections to source servers, assisting the browser in fetching objects in such a way that an object is ready and available locally before the browser requires it, without suffering congestion on any bandwidth constrained link. This provides seemingly instantaneous availability of objects to the browser, enabling it to complete processing one object and request the next without much wait. | 12-20-2012 |
20120324038 | Controlling Shared Memory - In view of the characteristics of distributed applications, the present invention proposes a technical solution for applying a shared memory on an NIC, comprising: a shared memory configured to provide shared storage space for a task of a distributed application, and a microcontroller. Furthermore, the present invention provides a computer device that includes the above-mentioned NIC, a method for controlling a read/write operation on a shared memory of an NIC, and a method for invoking the NIC. The technical solution provided in the present invention bypasses the processing of the network protocol stack and avoids the time delay introduced by the network protocol stack. The present invention does not need to perform TCP/IP encapsulation on the data packet, thus greatly saving the additional packet header and packet tail overheads generated by TCP/IP-layer data encapsulation. | 12-20-2012 |
20120331084 | Method and System for Operation of Memory System Having Multiple Storage Devices - Systems and methods for operation of a memory system are disclosed. In some example embodiments, a system for storing or retrieving data in response to one or more signals provided from one or more clients includes a plurality of memcached-type memory devices arranged in a cluster, and a proxy module configured to communicate at least indirectly with each of the memcached-type memory devices and further configured to receive the one or more signals. The proxy module is configured to perform a determination of how to proceed in communicating with the memcached-type memory devices for the purpose of the storing or retrieving of data at or from one or more of the memcached-type memory devices in response to the one or more signals. In additional example embodiments, the proxy module is a centralized proxy and makes selections among the memory devices based upon performing of a memcache selection/fail-over algorithm (MSFOA). | 12-27-2012 |
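The proxy's device-selection step above can be sketched as a hash-ordered probe with fail-over. The abstract names a "memcache selection/fail-over algorithm (MSFOA)" without detailing it, so the hash-and-probe scheme below is purely an assumed stand-in, as are the class and method names.

```python
import hashlib

class MemcacheProxy:
    """Centralized proxy that picks a memcached-type device per key and
    fails over to the next live device when the first choice is down."""

    def __init__(self, devices):
        self.devices = list(devices)
        self.down = set()               # devices currently marked failed

    def select(self, key: str) -> str:
        # Hash the key to a starting index, then probe devices in order,
        # skipping any that are marked down (the fail-over step).
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        n = len(self.devices)
        for i in range(n):
            device = self.devices[(h + i) % n]
            if device not in self.down:
                return device           # first live device in hash order
        raise RuntimeError("no live memcached devices")
```

Selection is deterministic for a given key and failure set, so repeated requests for the same key reach the same device until the failure state changes.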
20120331085 | LOAD BALANCING BASED UPON DATA USAGE - A method of load balancing can include segmenting data from a plurality of servers into usage patterns determined from accesses to the data. Items of the data can be cached in one or more servers of the plurality of servers according to the usage patterns. Each of the plurality of servers can be designated to cache items of the data of a particular usage pattern. A reference to an item of the data cached in one of the plurality of servers can be updated to specify the server of the plurality of servers within which the item is cached. | 12-27-2012 |
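The designation scheme above (each server caches items of one usage pattern, and references are updated to point at the caching server) can be sketched as follows. Round-robin assignment of patterns to servers is my simplification; the abstract only requires that each server be designated a particular pattern.

```python
class UsageLoadBalancer:
    """Designate one server per usage pattern and route items accordingly."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.pattern_to_server = {}     # usage pattern -> designated server
        self.references = {}            # item id -> server caching the item

    def designate(self, pattern: str) -> str:
        if pattern not in self.pattern_to_server:
            # Round-robin designation (an assumption for the sketch).
            idx = len(self.pattern_to_server) % len(self.servers)
            self.pattern_to_server[pattern] = self.servers[idx]
        return self.pattern_to_server[pattern]

    def cache_item(self, item_id: str, pattern: str) -> str:
        server = self.designate(pattern)
        self.references[item_id] = server   # update reference to cached copy
        return server
```

Updating `references` is the abstract's final step: lookups for an item go straight to the server designated for that item's usage pattern.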
20120331086 | Clustered Storage Network - A data storage network is provided. The network includes a client connected to the data storage network; a plurality of nodes on the data storage network, wherein each data node has two or more RAID controllers, wherein a first RAID controller of a first node is configured to receive a data storage request from the client and to generate RAID parity data on a data set received from the client, and to store all of the generated RAID parity data on a single node of the plurality of nodes. | 12-27-2012 |
20120331087 | TIMING OF KEEP-ALIVE MESSAGES USED IN A SYSTEM FOR MOBILE NETWORK RESOURCE CONSERVATION AND OPTIMIZATION - Systems and methods for timing of keep-alive messages used in a system for mobile network resource conservation and optimization are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of detecting a rate of content change at the content source and adjusting the timing of keep-alive messages sent to the mobile device based on the rate of content change. The timing of the keep-alive messages can further be determined using different polling rates for the content polls of the multiple applications on the mobile device detected by the local proxy. | 12-27-2012 |
20130007182 | FACILITATING COMMUNICATION BETWEEN ISOLATED MEMORY SPACES OF A COMMUNICATIONS ENVIRONMENT - Automatically converting a synchronous data transfer to an asynchronous data transfer. Data to be transferred from a sender to a receiver is initiated using a synchronous data transfer protocol. Responsive to a determination that the data is to be sent asynchronously, the data transfer is automatically converted from the synchronous data transfer to the asynchronous data transfer. | 01-03-2013 |
20130007183 | Methods And Apparatus For Remotely Updating Executing Processes - Methods, apparatus, and computer-accessible storage media for remotely updating an executing process that receives I/O requests on I/O port(s), stores write data to a write log on local storage, and uploads data from the write log to remote storage. An update for the process is detected and downloaded, and an updated process is instantiated from the update. The current process is directed to perform a shutdown for update during an update window. In response, the current process saves its current configuration, flushes an in-memory portion of the write log to local storage, and releases its I/O port(s). The updated process loads the saved configuration, detects that the port(s) have been released, and starts accepting I/O requests on the ports. During flushing, the current process flushes current data in memory while continuing to append new write data, stops accepting new write requests, and then flushes the new write data. | 01-03-2013 |
20130007184 | MESSAGE ORIENTED MIDDLEWARE WITH INTEGRATED RULES ENGINE - Embodiments of the present invention provide a method, system and computer program product for the integration of a rules engine with message oriented middleware. In an embodiment of the invention, a method for managing a messaging component in message oriented middleware has been provided. The method includes creating shared memory in the memory of a computer and adding or deleting tokens in the shared memory corresponding to objects such as messages and message queues, created in and removed from, respectively, in a messaging component of message oriented middleware. The method additionally includes applying rules in a rules engine to the tokens in the shared memory. Finally, the method includes directing management operations in the messaging component responsive to the applied rules by the rules engine. | 01-03-2013 |
20130007185 | METHOD FOR CATALOGUING AND ACCESSING DIGITAL CINEMA FRAME CONTENT - Systems and methods for providing remote access to a cinematic production. A server may generate and cache frames for a cinematic production while creating frame descriptors that are placed in the catalogue. A synchronization process synchronizes the catalogue with one or more clients. Using the catalogue, the client is able to select desired frames for viewing before frames are received at the client from the server. The server may receive a request for frames from the client, where the request includes an identifier component of the frame descriptor in the catalogue. The requested frames are returned by the server to the client for display at the client. | 01-03-2013 |
20130007186 | CONTROLLING CONTENT CACHING AND RETRIEVAL - A tracker application server (AS) instructs a content cache server (CCS) to join a peer-to-peer (P2P) swarm based on the status of the P2P swarm. The tracker AS determines whether to invite a CCS to join the P2P swarm based on an underlying network condition change, a peer node joining or leaving the P2P swarm, or change(s) in traffic condition, location, capability or workload of the peer node(s) in the swarm. The tracker AS sends an invitation message to the CCS, indicating the content of interest and a peer list identifying the peer nodes of the P2P swarm. Upon receiving the invitation message from the tracker AS, the CCS sends a response to the tracker AS. Upon receiving a response indicating the acceptance of the invitation, the tracker AS puts the CCS into the P2P swarm, and the CCS joins the swarm using a P2P protocol. | 01-03-2013 |
20130007187 | TOPOLOGY AWARE CACHE STORAGE - A content distribution network (CDN) comprising a hierarchy of content storage nodes (CSNs) or caches having storage space that is allocated between local space for storing locally popular content objects and federated space for storing a portion of the less popular content objects. Local space and federated space are reallocated based upon changes in content object popularity and/or other utility factors. Optionally, parent/child (upstream/downstream) communication paths are used to migrate content between CSNs or caches of the same or different hierarchical levels to avoid utilizing higher price top hierarchical level communications channels. | 01-03-2013 |
20130007188 | METHOD AND SYSTEM FOR COMMUNITY DATA CACHING - A cache module ( | 01-03-2013 |
20130007189 | METHOD AND APPARATUS FOR MANAGING SHARED DATA AT A PORTABLE ELECTRONIC DEVICE OF A FIRST ENTITY - A method and apparatus for managing shared data at a portable electronic device of a first entity is provided. A message is received advising that data associated with a second entity is being shared. A request is transmitted to a server for a list of shared folders associated with the second entity, in response to an option to view shared folders associated with the second entity being selected. The list is received. An initialize command is transmitted to the server, the initialize command identifying at least one folder in the list. The data associated with the second entity is received, responsive to the transmitting the initialize command. The data is stored in association with a second entity identifier. | 01-03-2013 |
20130013724 | METHOD, SYSTEM AND APPARATUS FOR DELIVERING WEB CONTENT - According to embodiments described in the specification, a method, system and apparatus for delivering web content are provided. The method comprises maintaining a web page in a memory of a web server identifiable by a network address, the web page including at least one reference to a foreign element maintained at a second web server identifiable by a second network address; identifying the at least one reference; transmitting a request from an interface of the web server for obtaining the second network address; receiving the second network address of the second web server and storing the second network address in the memory in association with an identifier of the web page. | 01-10-2013 |
20130013725 | SYSTEM AND METHOD FOR MANAGING PAGE VARIATIONS IN A PAGE DELIVERY CACHE - Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end. | 01-10-2013 |
20130013726 | CACHING IN MOBILE NETWORKS - A method for optimising the distribution of data objects between caches in a cache domain of a resource limited network. User requests for data objects are received at caches in the cache domain. A notification is sent from each cache at which a request is received to a cache manager. The notification reports the user request and identifies the requested data object. At the cache manager, object information including the request frequency of each requested data object and the locations of the caches at which the requests were received is collated and stored, and objects for distribution within the cache domain are identified on the basis of the object information. Instructions are sent from the cache manager to the caches to distribute data objects stored in those caches between themselves. The data objects are distributed between the caches using transmission capacity of the network that would otherwise be unused. | 01-10-2013 |
20130018976 | CACHING EMAIL UNIQUE IDENTIFIERS - Accessing, via an end user device, email messages of an external mail source. A direct access proxy is operative to reconcile the email contents of external email sources with the email contents of user devices through the use of lists of unique email identifiers (UIDs). A Partition Database returns UID lists reflective of the UIDs of email messages previously received from the external email source and forwarded to a network server of the system (forwarded UID lists). A memory cache external to the direct access proxy and its corresponding Partition Database returns forwarded UID lists. The direct access proxy determines the data reliability of the Partition Database and memory cache, and obtains forwarded UID lists from the memory cache when it determines that the memory cache is at least as reliable as the Partition Database. | 01-17-2013 |
20130018977 | DATA SHARING METHODS AND PORTABLE TERMINALS - A data sharing method and a portable terminal are provided. The portable terminal is a first terminal having a first system and a second system which have a capability of operating a shared storage area. The method comprises: starting transmitting a file in the shared storage area to a second terminal by the first system; acquiring uploaded information of the file by the second system, when detecting that the first system fulfills a predetermined condition, during the transmission of the file in the shared storage area to the second terminal by the first system; and continuing the transmission of the file to the second terminal by the second system in accordance with the uploaded information. In the transmission of shared data according to the embodiments of the present disclosure, due to the two-system hybrid architecture of the terminal, one of the two systems may continue the transmission of the shared data if the transmission is interrupted by a shutdown or fault of the other system, thereby improving the user experience in transmitting the shared data. | 01-17-2013 |
20130024538 | FAST SEQUENTIAL MESSAGE STORE - A broker may be used as an intermediary to exchange messages between producers and consumers. The broker may store and dispatch messages from a physical queue stored in a persistent memory. More specifically, the broker may enqueue messages to the physical queue that are received from producers and may dispatch messages from the physical queue to interested consumers. The broker may further utilize one or more logical queues stored in transient memory to track the status of the messages stored in persistent memory. As messages are dispatched to and acknowledged by interested consumers, the broker deletes acknowledged messages from the physical queue. The messages deleted are those preceding a physical ACKlevel pointer that specifies the first non-acknowledged message in the physical queue. The physical ACKlevel pointer is advanced in the physical queue based on the relative position of corresponding logical ACKlevel pointers maintained by the logical queues. | 01-24-2013 |
20130031197 | INTERNET CACHE SUBSCRIPTION FOR WIRELESS MOBILE USERS - A server device may receive an indication that a mobile device has enrolled in a cache subscription service. The server device may receive cache parameters associated with the cache subscription service, where the cache parameters are specific to the mobile device. Content may be retrieved from a network and stored, in a memory associated with the one or more server devices, based on the received cache parameters. The server device may receive, from the mobile device, a request for particular content, determine whether the request for particular content corresponds to content that is stored in the memory, and provide, when determining that the requested particular content corresponds to content that is stored in the memory, the corresponding stored content to the mobile device. | 01-31-2013 |
20130031198 | TAILORING CONTENT TO BE DELIVERED TO MOBILE DEVICE BASED UPON FEATURES OF MOBILE DEVICE - A system and computer program product for delivering tailored specific content to a mobile device. A shim application is provided to the mobile device by a content server after the mobile device visits the content server for the first time. The shim application detects the capabilities of the mobile device, such as the screen size, screen resolution, memory size, browser capabilities, etc. The shim application then includes such information in the header of the requests, such as a request for content, sent from the mobile device to the content server. The content server then generates the requested content in the appropriate format based on the information provided in the header. In this manner, the content server will now be able to ensure that the content provided by the content server for a particular mobile device will be appropriately displayed on the mobile device. | 01-31-2013 |
20130031199 | TRANSMITTING DATA INCLUDING PIECES OF DATA - A method and system for transmitting data including pieces of data. The method includes the steps of: placing a piece of data on at least one cache memory; and sending a signal indicating a presence of the piece of data on the cache memory to at least one client, where at least one of the steps is carried out by a computer device. | 01-31-2013 |
20130031200 | QUALITY OF SERVICE MANAGEMENT - A method for managing an amount of IO requests transmitted from a host computer to a storage system is described. A current latency value of an IO request most recently removed from an issue queue maintained by the host computer in order to transmit IO requests from the host computer to the storage system is periodically determined. An average latency value is then calculated based on the current latency value, and a size limit of the issue queue is adjusted based in part on the average latency value. Upon receiving an IO request from one of a plurality of client applications running on the host computer, it can then be determined whether the number of pending IO requests in the issue queue has reached the size limit, and the IO request can be transmitted to the issue queue if the number of pending IO requests falls within the size limit. | 01-31-2013 |
20130031201 | INTELLIGENT ELECTRONIC DEVICE COMMUNICATION SOLUTIONS FOR NETWORK TOPOLOGIES - Systems and methods for communicating data from an IED on an internal network to a server, a client or device on an external network through a firewall are provided. | 01-31-2013 |
20130036186 | CACHING REMOTE SWITCH INFORMATION IN A FIBRE CHANNEL SWITCH - A network of switches with a distributed name server configuration and caching of remote node device information is disclosed. The network preferably comprises a first switch coupled to a second switch. Each of the switches directly couples to respective node devices. The first switch maintains a name server database about its local node devices, as does the second switch. The second switch further maintains an information cache about remote node devices. The name server preferably notifies other switches of changes to the database, and the cache manager preferably uses the notifications from other switches to maintain the cache. The name server accesses the cache to respond to queries about remote node devices. The cache manager may also aggregate notification messages from other switches when notifying local devices of state changes. Traffic overhead and peak traffic loads may advantageously be reduced. | 02-07-2013 |
20130041970 | CLIENT SIDE CACHING - A method for client side caching includes, with a client system, running a proxy caching application designed for execution on a proxy server, with a content presentation application running on the client system, accessing content from a server communicatively coupled to the client system, and with said proxy caching application, transparently caching said content into a cache system of said client system. | 02-14-2013 |
20130041971 | TECHNIQUE FOR IMPROVING REPLICATION PERSISTANCE IN A CACHING APPLICANCE STRUCTURE - A method for improving replication persistence in a caching appliance structure can begin when a primary catalog service receives a command to instantiate a data partition. The primary catalog service can manage a collective of caching appliances in a networked computing environment. The data partition can include a primary shard and at least one replica shard. The primary shard of the data partition can be stored within a memory space of a first caching appliance. The at least one replica shard of the data partition can be stored within a non-volatile storage space of a second caching appliance. The first and the second caching appliances can be separate physical devices. The memory space of the second caching appliance that could have been used to store the at least one replica shard can be available for storing primary shards for other data partitions, increasing the capacity of the collective. | 02-14-2013 |
20130041972 | Content Delivery Network Routing Using Border Gateway Protocol - An announcement protocol may allow disparate, and previously incompatible, content delivery network caches to exchange information and cache content for one another. Announcement data may be stored by the respective caches, and used to determine whether a cache is able to service an incoming request. URL prefixes may be included in the announcements to identify the content, and longest-match lookups may be used to help determine a secondary option when a first cache determines that it lacks the requested content. | 02-14-2013 |
20130041973 | Method and System for Sharing Audio and/or Video - The disclosure discloses a method for sharing audio and/or video. The method includes the steps that: a first terminal writes audio and/or video from an audio-video providing module into a cache space according to a play request of a second terminal, and transmits the audio and/or video stored in the cache space to the second terminal. | 02-14-2013 |
20130041974 | APPLICATION AND NETWORK-BASED LONG POLL REQUEST DETECTION AND CACHEABILITY ASSESSMENT THEREFOR - Systems and methods for application and network-based long poll request detection and cacheability assessment therefor are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a distributed proxy and cache system, including determining relative timings between a first request initiated by the application, a response received responsive to the first request, and a second request initiated subsequent to the first request also by the application. The relative timings can be compared to request-response timing characteristics for other applications to determine whether the requests of the application are long poll requests. | 02-14-2013 |
20130046845 | STORAGE SYSTEM, CONTROL METHOD FOR STORAGE SYSTEM, AND COMPUTER PROGRAM - A control method for a storage system, whereby a plurality of storage nodes included in the storage system are grouped into a first group composed of storage nodes with a network distance in the storage system within a predetermined distance range, and second groups composed of storage nodes that share position information for the storage nodes that store data. A logical spatial identifier that identifies the second groups is allocated for each of the second groups, to calculate a logical spatial position using a data identifier as an input value for a distributed function, and store data corresponding to the data identifier in the storage node that belongs to the second group to which the identifier corresponding to the calculated position is allocated. | 02-21-2013 |
20130054728 | SYSTEM AND METHOD FOR EFFICIENT CACHING AND DELIVERY OF ADAPTIVE BITRATE STREAMING - A non-transitory computer readable medium, a system and a method for streaming, the method may include: intercepting, by a redirector, a request from a streaming application, to receive metadata indicative of location of multiple video file segments; sending to the streaming application metadata that points to locations of cached video file segments within a single streaming cache or multiple streaming caches and points to locations outside the streaming cache of other video file segments that are not cached; receiving, by the streaming cache, a request from the streaming application to receive a cached video file segment; sending from the streaming cache the cached video file segment. | 02-28-2013 |
20130054729 | SYSTEM AND METHOD FOR PRE-FETCHING AND CACHING CONTENT - A system and method for caching and pre-fetching content is disclosed. This invention relates to mobile devices and, more particularly but not exclusively, to delivering content to a mobile device. Existing systems employ different mechanisms for delivering content such as multimedia and the like to users of mobile devices. Mechanisms such as broadcast services, delivery from the internet, Wi-Fi hotspots, Bluetooth kiosks, etc. face problems in offering innovative services to users due to insufficient network capacity and high costs to consumers. The disclosed system delivers content such as multimedia, data and the like using pre-fetching and caching techniques. The content preferred by a user is identified and pre-fetched to access points located in the vicinity of the user. The user can access the content from the access points via a short range communication means such as Bluetooth, Infrared and so on. | 02-28-2013 |
20130054730 | PORT CIRCUIT FOR HARD DISK BACKPLANE AND SERVER SYSTEM - A port circuit for a hard disk backplane of a server system includes a control microchip and at least one selecting microchip. The hard disk backplane includes a number of ports. The server system includes a number of servers connected to a portion of the ports. The at least one selecting microchip is connected to the control microchip and to the other portion of the ports of the hard disk backplane. When the control microchip detects that one or more standby servers form part of the server system, the control microchip selects the one or more standby servers to connect to the other portion of the ports. When the control microchip does not detect that the one or more standby servers form part of the server system, the control microchip selects the servers to connect to the other portion of the ports. | 02-28-2013 |
20130054731 | CUT/COPY AND PASTE FUNCTIONALITY - An apparatus including a clipboard monitor at a first device is described. The clipboard monitor is operatively coupled to a data management module. The clipboard monitor is configured to receive metadata associated with data acquired in an acquire operation at the first device. The clipboard monitor is configured to send the metadata to the data management module in response to the acquire operation. The clipboard monitor is configured to receive a request associated with a paste operation at a second device. The clipboard monitor is configured to provide the data to the second device in response to the request. | 02-28-2013 |
20130054732 | METHOD AND SYSTEM FOR SEAMLESSLY ACCESSING REMOTELY STORED FILES - A system and method by which users via programs on one computer may seamlessly access files remotely stored on other computers that run a well known file access protocol. An operating system extension and an application level network access program are provided. The operating system extension receives file system requests for remote files from the operating system that were issued according to a well known application program interface. The operating system extension forwards the remote file system request to the network access program. The network access program reformats the request according to a well known application level network protocol extension and sends it over a network to a remote computer system. | 02-28-2013 |
20130060881 | COMMUNICATION DEVICE AND METHOD FOR RECEIVING MEDIA DATA - Communication devices are provided comprising a receiver configured to receive a data stream including data for reconstructing media data at a first quality level; a memory for storing data for reconstructing the media data at a second quality level wherein the first quality level is higher than the second quality level; a determiner configured to determine whether the reception rate of the data included in the data stream fulfills a predetermined criterion; and a processing circuit configured to reconstruct the media data from the data included in the data stream if it has been determined that the reception rate of the data included in the data stream fulfills the predetermined criterion and to reconstruct the media data from the data stored in the memory if it has been determined that the reception rate of the data included in the data stream does not fulfill the predetermined criterion. | 03-07-2013 |
20130060882 | TRANSMITTING DATA INCLUDING PIECES OF DATA - A method and system for transmitting data including pieces of data. The method includes the steps of: placing a piece of data on at least one cache memory; and sending a signal indicating a presence of the piece of data on the cache memory to at least one client, where at least one of the steps is carried out by a computer device. | 03-07-2013 |
20130060883 | MULTIMEDIA PLAYBACK CALIBRATION METHODS, DEVICES AND SYSTEMS - A multimedia playback calibration method includes a calibration module operating on a mobile communications device to cause it to: introduce test data at a first end, in the mobile device, of a playback path and receive data, played back by a playback device at a second end of the playback path, at a sensor integral to the mobile device; compare the received data against the test data to determine a characteristic of the playback path; and configure the mobile device to compensate for this characteristic. The mobile device may comprise a handheld casing enclosing a central processing unit, a multimedia player module for initiating playback of at least one data stream on a playback device, communication capability for forwarding the at least one data stream from the mobile device to the playback device along a playback path and the calibration module. | 03-07-2013 |
20130067019 | SELECTIVE USE OF SHARED MEMORY FOR REMOTE DESKTOP APPLICATION - A method includes determining if a server supporting an application and a client having remote desktop access to the server are on a same physical computing device. Upon determining that the server and the client are on the same physical computing device, graphics data related to the application is stored from the server to shared memory that is accessible by the server and by the client. Information to enable the client to retrieve the graphics data stored by the server in the shared memory is communicated from the server to the client. | 03-14-2013 |
20130067020 | METHOD AND APPARATUS FOR SERVER SIDE REMOTE DESKTOP RECORDATION AND PLAYBACK - Various methods for server-side recordation and playback of a remote desktop session are provided. One example method may comprise receiving data related to a remote desktop protocol session. The method of this example embodiment may further comprise providing for storage of the data at a location other than the device associated with the remote desktop protocol client of the remote desktop protocol session. Furthermore, the method of this example embodiment may comprise receiving a request to reproduce the remote desktop protocol session. The method of this example embodiment may also comprise retrieving the data from storage. Additionally, the method of this example embodiment may comprise facilitating reproduction of at least a portion of the remote desktop protocol session based at least in part on the retrieved data. Similar and related example methods, apparatuses, systems, and computer program products are also provided. | 03-14-2013 |
20130073666 | DISTRIBUTED CACHE CONTROL TECHNIQUE - A disclosed method includes: receiving an identifier of a user, an identifier of contents associated with the user and identification data concerning a sensor that read the identifier of the user; reading an identifier of a node associated with the received identification data or a combination of the received identification data and the received identifier of the user, from a data storage unit storing an identifier of a node that will cache contents to be outputted to a display device provided at a different position from a position of a sensor in association with identification data concerning the sensor or a combination of identification data concerning the sensor and an identifier of a user; and transmitting the received identifier of the user and an identifier of contents associated with the user to a node whose identifier was read. | 03-21-2013 |
20130073667 | TECHNIQUES FOR ADMINISTERING AND MONITORING MULTI-TENANT STORAGE - Techniques for managing and monitoring multi-tenant storage in a cloud environment are presented. Storage resources are monitored on a per-tenant basis and as a whole for the cloud environment. New and existing administrative types can be dynamically created and managed within the cloud environment. | 03-21-2013 |
20130073668 | SPECULATIVE AND COORDINATED DATA ACCESS IN A HYBRID MEMORY SERVER - A method, accelerator system, and computer program product, for prefetching data from a server system in an out-of-order processing environment. A plurality of prefetch requests associated with one or more given data sets residing on the server system are received from an application on the server system. Each prefetch request is stored in a prefetch request queue. A score is assigned to each prefetch request. A set of prefetch requests having scores above a given threshold is selected from the prefetch request queue. A set of data, for each prefetch request in the set of prefetch requests, is prefetched from the server system that satisfies each prefetch request, respectively. | 03-21-2013 |
20130080565 | METHOD AND APPARATUS FOR COLLABORATIVE UPLOAD OF CONTENT - A collaborative cloud DVR system (ccDVR), which includes a cloud storage system and a plurality of participating DVR client devices, acts collaboratively as a single communal entity in which community members authorize each other to upload, remotely store and download licensed content for time shifted viewing, in a manner which rigorously protects legal rights of the content owners while overcoming the potential physical obstacles of limited bandwidth, power failures, incomplete uploads/downloads of content, limited cloud storage capacity, etc. The collaborative cloud DVR community collaboratively shares bandwidth and cloud storage capacity among DVR viewer/users with each owner/user of a DVR client device authorizing his or her individual DVR client device to be utilized by a cloud storage system server and any other owner/user of a DVR client device in the respective service community, and receiving similar permission in return to promote the convenience of cloud storage in an authorized manner. | 03-28-2013 |
20130080566 | SYSTEM AND METHOD FOR DYNAMIC CACHE DATA DECOMPRESSION IN A TRAFFIC DIRECTOR ENVIRONMENT - Described herein are systems and methods for use with a load balancer or traffic director, and administration thereof, wherein the traffic director is provided as a software-based load balancer that can be used to deliver a fast, reliable, scalable, and secure platform for load-balancing Internet and other traffic to back-end origin servers, such as web servers, application servers, or other resource servers. In accordance with an embodiment, the traffic director can be configured to compress data stored in its cache, and to respond to requests from clients by serving content from origin servers either as compressed data, or by dynamically decompressing the data before serving it, should a particular client prefer to receive a non-compressed variant of the data. In accordance with an embodiment, the traffic director can be configured to make use of hardware-assisted compression primitives, to further improve the performance of its data compression and decompression. | 03-28-2013 |
20130080567 | ENCAPSULATED ACCELERATOR - A data processing system comprising: a host computer system supporting a software entity and a receive queue for the software entity; a network interface device having a controller unit configured to provide a data port for receiving data packets from a network and a data bus interface for connection to a host computer system, the network interface device being connected to the host computer system by means of the data bus interface; and an accelerator module arranged between the controller unit and a network and having a first medium access controller for connection to the network and a second medium access controller coupled to the data port of the controller unit, the accelerator module being configured to: on behalf of the software entity, process incoming data packets received from the network in one or more streams associated with a first set of one or more network endpoints; encapsulate data resulting from said processing in network data packets directed to the software entity; and deliver the network data packets to the data port of the controller unit so as to cause the network data packets to be written to the receive queue of the software entity. | 03-28-2013 |
20130080568 | SYSTEM AND METHOD FOR CACHING INQUIRY DATA ABOUT SEQUENTIAL ACCESS DEVICES - An intermediate device communicatively connected to a host device and a sequential device in a storage area network. The host device is configured to issue different kinds of commands to the sequential device, including an inquiry command. The sequential device is configured to sequentially process requests from the host device. The intermediate device is configured to cache inquiry data about the sequential device itself in a cache memory connected to the intermediate device and service inquiry commands from the host device. | 03-28-2013 |
20130086198 | APPLICATION-GUIDED BANDWIDTH-MANAGED CACHING - Methods and systems for populating a cache memory that services a media composition system. Caching priorities are based on a state of the media composition system, such as media currently within a media composition timeline, a composition playback location, media playback history, and temporal location within clips that are included in the composition. Caching may also be informed by descriptive metadata and media search results within a media composition client or a within a media asset management system accessed by the client. Additional caching priorities may be based on a project workflow phase or a client project schedule. Media may be partially written to or read from cache in order to meet media request deadlines. Caches may be local to a media composition system or remote, and may be fixed or portable. | 04-04-2013 |
20130086199 | SYSTEM AND METHOD FOR MANAGING MESSAGE QUEUES FOR MULTINODE APPLICATIONS IN A TRANSACTIONAL MIDDLEWARE MACHINE ENVIRONMENT - A middleware machine environment can manage message queues for multinode applications. The middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment. The middleware machine environment further includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages. Additionally, different processes on a client operate to use at least one proxy to communicate with the message server. Furthermore, the middleware machine environment can protect message queues for multinode applications using a security token created by the daemon process. | 04-04-2013 |
20130091237 | Aligned Data Storage for Network Attached Media Streaming Systems - Described embodiments provide a server for transferring data packets of streaming data sessions between devices. A redundant array of inexpensive disks (RAID) array having one or more stripe sector units (SSU) stores media files corresponding to the one or more data sessions. The RAID control module receives a request to perform the write operation to the RAID array beginning at a starting data storage address (DSA) and pads the data of the write operation if the amount of data is less than a full SSU of data, such that the padded data of the write operation is a full SSU of data. The RAID control module stores the full SSU of data beginning at a starting data storage address (DSA) that is aligned with a second SSU boundary, without performing a read-modify-write operation. | 04-11-2013 |
20130097275 | CLOUD-BASED STORAGE DEPROVISIONING - A device creates a first cloud storage container in a first region of cloud storage, clears a delete flag associated with the first cloud storage container, and stores a first data object in the first cloud storage container in the first region of cloud storage. The device receives a request to delete the first cloud storage container, sets a delete flag associated with the first cloud storage container based on the request to delete the first cloud storage container, and deletes the first cloud storage container if the request to delete has not been rescinded prior to expiration of a time period. | 04-18-2013 |
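The flag-and-grace-period flow described in the deprovisioning abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names, and the in-memory container map, are hypothetical.

```python
import time

class CloudStorage:
    """Sketch of deprovisioning with a delete flag and a grace period."""

    def __init__(self, grace_seconds):
        self.grace = grace_seconds
        self.containers = {}  # name -> {"objects": {}, "delete_at": None}

    def create_container(self, name):
        # Creating a container clears any delete flag.
        self.containers[name] = {"objects": {}, "delete_at": None}

    def put_object(self, container, key, data):
        self.containers[container]["objects"][key] = data

    def request_delete(self, name, now=None):
        # Set the delete flag; actual removal waits out the grace period.
        now = time.time() if now is None else now
        self.containers[name]["delete_at"] = now + self.grace

    def rescind_delete(self, name):
        self.containers[name]["delete_at"] = None

    def sweep(self, now=None):
        # Remove only containers whose flag survived the full grace period.
        now = time.time() if now is None else now
        for name in list(self.containers):
            due = self.containers[name]["delete_at"]
            if due is not None and now >= due:
                del self.containers[name]
```

The key property is that a rescind before the period expires leaves the container (and its objects) untouched.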
20130103778 | METHOD AND APPARATUS TO CHANGE TIERS - Systems and methods directed to changing tiers for a storage area that utilizes thin provisioning. Systems and methods check the area subject to a tier change command and change the tier based on the tier specified in the tier change command, and the tier presently associated with the targeted storage area. The pages of the systems and methods may be further restricted to one file per page. | 04-25-2013 |
20130103779 | METHOD AND APPARATUS FOR AUGMENTING SMARTPHONE-CENTRIC IN-CAR INFOTAINMENT SYSTEM USING VEHICLE WIFI/DSRC - A method and system for augmenting smartphone-centric in-car infotainment systems using Wi-Fi or DSRC communications between a vehicle and surrounding infrastructure. One or more smartphones or other electronic devices within a vehicle electronically communicate with the vehicle via a wireless protocol, such as Bluetooth, or a wired connection. The electronic devices run applications which submit requests for internet-based files or data, such as web pages, audio or video files. The vehicle brokers these requests and, using its own external wireless communications systems, such as Wi-Fi or DSRC, retrieves as many of the files or data as possible whenever internet access is available via an external wireless connection. The vehicle then provides the files or data to the requesting electronic devices. A token-based method for prioritizing the requests and rendering the data to the electronic devices is also disclosed. | 04-25-2013 |
20130103780 | DELAYED PUBLISHING IN PROCESS CONTROL SYSTEMS - Techniques for delaying the publication of data to a network by a device in a process control system or plant include obtaining, at the device, data to be published to the network; storing the obtained data and a corresponding timestamp in a cache; triggering a publication of cached data; and, based on the trigger, publishing the oldest cached data to the network during the publishing timeslot assigned to the device. The cached data may correspond to a sample rate of the device and may include multiple instances of data obtained over time. The device includes a network interface, a cache, and a publisher, and the device may be configured to operate in the delayed publishing mode, or to operate in an immediate publishing mode in which currently obtained data that has not been cached is published to the network during the publishing time slot assigned to the device. | 04-25-2013 |
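The delayed-publishing behavior above (cache timestamped samples, then publish the oldest cached data when the device's timeslot triggers) can be sketched as a simple FIFO. The class name and method names are hypothetical, assumed for illustration only.

```python
from collections import deque
import time

class DelayedPublisher:
    """Sketch of delayed publishing: cache samples, emit the oldest per timeslot."""

    def __init__(self):
        self.cache = deque()  # FIFO of (timestamp, data)

    def sample(self, data, timestamp=None):
        # Obtain data at the device's sample rate and cache it with a timestamp.
        ts = time.time() if timestamp is None else timestamp
        self.cache.append((ts, data))

    def on_publish_timeslot(self):
        # Trigger: publish the oldest cached data during the assigned timeslot.
        if not self.cache:
            return None
        return self.cache.popleft()
```

An immediate-publishing mode would simply bypass the deque and emit the current sample directly.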
20130103781 | MOBILE COMMUNICATION DEVICE - A mobile communication device is mounted on a vehicle, and has a reception unit for receiving a distributed cache, which is data produced by dividing information, a distributed cache restoration unit for restoring the distributed cache into the original information, a data dividing unit for producing the distributed cache by dividing the information, and a transmission unit for transmitting the distributed cache. | 04-25-2013 |

20130103782 | APPARATUS AND METHOD FOR CACHING OF COMPRESSED CONTENT IN A CONTENT DELIVERY NETWORK - A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides “on-the-fly” compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags. | 04-25-2013 |
20130110961 | CLOUD-BASED DISTRIBUTED PERSISTENCE AND CACHE DATA MODEL | 05-02-2013 |
20130110962 | Wirelessly Sending a Set of Encoded Data Slices | 05-02-2013 |
20130110963 | METHOD AND APPARATUS THAT ENABLES A WEB-BASED CLIENT-SERVER APPLICATION TO BE USED OFFLINE | 05-02-2013 |
20130110964 | DELIVERING MULTIMEDIA SERVICES | 05-02-2013 |
20130110965 | COMPUTER SYSTEM AND METHOD FOR OPERATING THE SAME | 05-02-2013 |
20130117405 | SYSTEM AND METHOD FOR MANAGING AN OBJECT CACHE - In order to optimize efficiency of serialization, a serialization cache is maintained at an object server. The serialization cache is maintained in conjunction with an object cache and stores serialized forms of objects cached within the object cache. When an object is to be sent from the server to the client, a serialization module determines if a serialized form of the object is stored in the serialization cache. If the object is already serialized within the serialization cache, the serialized form is retrieved and provided to the client. Otherwise, the object is serialized, the object is cached in the object cache and the serialized form of the object is cached in the serialization cache. | 05-09-2013 |
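The serialization-cache lookup path above (serve the cached serialized form if present, otherwise serialize once and populate both caches) can be sketched as follows. The class, the `pickle`-based serialization, and the `load_object` callback are assumptions for illustration, not the patented design.

```python
import pickle

class SerializationCache:
    """Sketch: keep serialized forms alongside an object cache."""

    def __init__(self):
        self.object_cache = {}      # key -> live object
        self.serialized_cache = {}  # key -> serialized bytes

    def get_serialized(self, key, load_object):
        # Fast path: serialized form already cached.
        if key in self.serialized_cache:
            return self.serialized_cache[key]
        # Slow path: fetch (or reuse cached) object, serialize once, cache both.
        obj = self.object_cache.get(key)
        if obj is None:
            obj = load_object(key)
            self.object_cache[key] = obj
        data = pickle.dumps(obj)
        self.serialized_cache[key] = data
        return data
```

Repeated requests for the same object then skip both the load and the serialization step.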
20130124667 | SYSTEM AND METHOD FOR MANAGING DEDICATED CACHES - A client-based computer system configured to communicate with a remote server through a network and to provide access to content or services provided by the server is provided. The system includes a processor, a storage device, a client-side cache dedicated to a set of resources specified by a configuration, and a caching manager to automatically manage the cache as directed by the configuration. The client-side cache is directed by the configuration to transparently intercept a request for one of the resources from a client application to the server, and to automatically determine when to send the request to and provide a response from the server over the network to appear to the client application as though the client application sent the request to and received the response from the server. | 05-16-2013 |
20130132503 | COMPUTER SYSTEM AND NETWORK INTERFACE SUPPORTING CLASS OF SERVICE QUEUES - A data processing system adapted for high-speed network communications, a method for managing a network interface and a network interface for such system, are provided, in which processing of packets received over the network is achieved by embedded logic at the network interface level. Incoming packets on the network interface are parsed and classified as they are stored in a buffer memory. Functional logic coupled to the buffer memory on the network interface is enabled to access any data field within a packet in a single cycle, using pointers and packet classification information produced by the parsing and classifying step. Results of operations on the data fields in the packets are available before the packets are transferred out of the buffer memory. A data processing system, a method for management of a network interface and a network interface are also provided by the present invention that include an embedded firewall at the network interface level of the system, which protects against inside and outside attacks on the security of the data processing system. Furthermore, a data processing system, a method for management of a network interface and a network interface are provided by the present invention that support class of service management for packets incoming from the network, by applying priority rules at the network interface level of the system. | 05-23-2013 |
20130132504 | ADAPTIVE NETWORK CONTENT DELIVERY SYSTEM - A method and apparatus stores media content in a variety of storage devices, with at least a portion of the storage devices having different performance characteristics. The system can deliver media to a large number of clients while maintaining a high level of viewing experience for each client by automatically adapting the bit rate of a media being delivered to a client using the client's last mile bit rate variation. The system provides clients with smooth viewing of video without buffering stops. The client does not need a custom video content player to communicate with the system. | 05-23-2013 |
20130138760 | APPLICATION-DRIVEN SHARED DEVICE QUEUE POLLING - Methods and systems for application-driven polling of shared device queues are provided. One or more applications running in non-virtualized or virtualized computing environments may be adapted to enable methods for polling shared device queues. Applications adapted to operate in a polling mode may transmit a request to initiate polling of shared device queues, wherein operating in the polling mode disables corresponding device interrupts. Applications adapted to operate in a polling mode may be regulated by one or more predefined threshold limitations. | 05-30-2013 |
20130138761 | STREAMING AND BULK DATA TRANSFER TRANSFORMATION WITH CONTEXT SWITCHING - In described embodiments, processing of a data stream, such as a packet stream or flow, associated with data streaming is improved by context switching that employs context history. For each data stream that is transformed through processing, a context is maintained that comprises a history and state information enabling the transformation for the data stream. Processing for the data transformation examines currently arriving data and then processes the data based on the context data and previously known context information for the data stream from the history stored in memory. | 05-30-2013 |
20130138762 | FACILITATING COMMUNICATION BETWEEN ISOLATED MEMORY SPACES OF A COMMUNICATIONS ENVIRONMENT - Automatically converting a synchronous data transfer to an asynchronous data transfer. Data to be transferred from a sender to a receiver is initiated using a synchronous data transfer protocol. Responsive to a determination that the data is to be sent asynchronously, the data transfer is automatically converted from the synchronous data transfer to the asynchronous data transfer. | 05-30-2013 |
20130138763 | SYSTEMS AND METHODS FOR CACHING AND SERVING DYNAMIC CONTENT - A web server and a shared caching server are described for serving dynamic content to users of at least two different types, where the different types of users receive different versions of the dynamic content. A version of the dynamic content includes a validation header, such as an ETag, that stores information indicative of the currency of the dynamic content and information indicative of a user type for which the version of the dynamic content is intended. In response to a user request for the dynamic content, the shared caching server sends a validation request to the web server with the validation header information. The web server determines, based on the user type of the requestor and/or on the currency of the cached dynamic content whether to instruct the shared caching server to send the cached content or to send updated content for serving to the user. | 05-30-2013 |
20130144967 | Scalable Queuing System - A method, an apparatus and an article of manufacture for providing queuing semantics in a distributed queuing service while maintaining service scalability. The method includes supporting at least one of an en-queue and a de-queue operation of one or more queued messages in a non-guaranteed order, maintaining the ordering of the one or more queued messages, and routing an en-queue operation to a persistent queue server and a de-queue operation to a cache manager in the maintained ordering of the one or more queued messages to provide queuing semantics in a distributed queuing service while maintaining service scalability. | 06-06-2013 |
20130151645 | METHOD AND APPARATUS FOR PRE-FETCHING PLACE PAGE DATA FOR SUBSEQUENT DISPLAY ON A MOBILE COMPUTING DEVICE - A computer-implemented method and system for pre-fetching place page data from a remote mapping system for display on a client computing device is disclosed. User preference data collected from various data sources including applications executing on the client device, online or local user profiles, and other sources may be analyzed to generate a request for place page data from the remote mapping system. The user preference data may indicate a map feature such as a place of business, park, or historic landmark having the characteristics of both a user's preferred geographic location and the user's personal interests. For example, where the user indicates a geographic preference for “Boston” and a personal interest for “home brewing” the system and method may request place page data for all home brewing or craft beer-related map features near Boston. | 06-13-2013 |
20130151646 | STORAGE TRAFFIC COMMUNICATION VIA A SWITCH FABRIC IN ACCORDANCE WITH A VLAN - A plurality of SMP modules and an IOP module communicate storage traffic via respective corresponding I/O controllers coupled to respective physical ports of a switch fabric by addressing cells to physical port addresses corresponding to the physical ports. One of the SMPs executes initiator software to partially manage the storage traffic and the IOP executes target software to partially manage the storage traffic. Storage controllers are coupled to the IOP, enabling communication with storage devices, such as disk drives, tape drives, and/or networks of same. Respective network identification registers are included in the I/O controller corresponding to the SMP executing the initiator software and in the I/O controller corresponding to the IOP. Transport of the storage traffic in accordance with a particular VLAN is enabled by writing a same particular value into each of the network identification registers. | 06-13-2013 |
20130151647 | METHOD FOR REWRITING PROGRAM, REPROGRAM APPARATUS, AND ELECTRONIC CONTROL UNIT - A reprogram apparatus does not transmit a reprogram data set as it is. The reprogram data set has a plurality of unit blocks and is used for rewriting a program in a memory of a subject electronic control unit (ECU). A consecutive range having at least a predetermined number of consecutive specified unit blocks is extracted. Range size information indicating a range size of the extracted consecutive range is transmitted to the subject ECU. The reprogram data set excluding the specified unit blocks included in the consecutive range is transmitted to the subject ECU on a unit-block-by-unit-block basis. The subject ECU restores the data corresponding to the consecutive range containing the specified unit blocks, which are not received from the reprogram apparatus, based on the range size information received. The reprogram data set is thereby restored. Rewriting of the program is executed using the reprogram data set restored. | 06-13-2013 |
20130151648 | FLEXIBLE AND DYNAMIC INTEGRATION SCHEMAS OF A TRAFFIC MANAGEMENT SYSTEM WITH VARIOUS NETWORK OPERATORS FOR NETWORK TRAFFIC ALLEVIATION - Flexible and dynamic integration schemas of a traffic management system with various network operators for network traffic alleviation are disclosed. One embodiment includes a method of integration of content caching with a network operator for traffic alleviation in a wireless network, including detecting, by an operator proxy of the network operator, a poll from an application on a mobile device which would have been served using a cache element from a local cache on the mobile device, after the cache element stored in the local cache has been invalidated, and forwarding the poll from the application on the mobile device to a proxy server. Whether the poll is sent to a service provider of the application directly by the proxy server, or by the proxy server through the operator proxy, is configurable or reconfigurable. | 06-13-2013 |
20130151649 | MOBILE DEVICE HAVING CONTENT CACHING MECHANISMS INTEGRATED WITH A NETWORK OPERATOR FOR TRAFFIC ALLEVIATION IN A WIRELESS NETWORK AND METHODS THEREFOR - Mobile device having content caching mechanisms integrated with a network operator for traffic alleviation in a wireless network and methods therefor are disclosed. One embodiment includes a method of integration of content caching with a network operator for traffic alleviation in a wireless network, which may be embodied on a mobile device, including determining whether a cache element stored in a local cache on the mobile device for an application poll on the mobile device is valid and forwarding the application poll to an external entity to service the application poll in response to determining that the cache element is no longer valid. The external entity is in part managed by the network operator of the wireless network and can be in part or in whole, a component of an infrastructure of the network operator or external to an infrastructure of the network operator. | 06-13-2013 |
20130151650 | SYSTEMS AND METHODS FOR GENERATING AND MANAGING COOKIE SIGNATURES FOR PREVENTION OF HTTP DENIAL OF SERVICE IN A MULTI-CORE SYSTEM - The present application is directed towards systems and methods for generating and maintaining cookie consistency for security protection across a plurality of cores in a multi-core system. A packet processing engine executing on one core designated as a primary packet processing engine generates and maintains a global random seed. The global random seed may be used as an initial seed for creation of cookie signatures by each of a plurality of packet processing engines executing on a plurality of cores of the multi-core system using a deterministic pseudo-random number generation function such that each core creates an identical set of cookie signatures. | 06-13-2013 |
20130159451 | SEMANTIC CACHE CLOUD SERVICES FOR CONNECTED DEVICES - Technologies are described for semantic cache for connected devices (semantic cache) as a set of next generation cloud services to primarily support the Internet of things scenario: a massive network of devices and device application services inter-communicating, facilitated by cloud-based semantic cache services. The semantic cache may be an instrumented caching reverse proxy with auto-detection of semantic web traffic, public, shadow and private namespace management and control, and real time semantic object temporal versioning, geospatial versioning, semantic contextual versioning and groupings and semantic object transformations. | 06-20-2013 |
20130159452 | Memory Server Architecture - A memory server system is provided herein. It includes a first plurality of Field Programmable Gate Array (FPGA) application server nodes that are configured to parse the location of the FPGA data server nodes; a second plurality of FPGA data server nodes that are configured as memory controllers, each of the second plurality of FPGA data server nodes being connected to a plurality of RAM memory banks; and a network connection between the first plurality of FPGA application server nodes and the second plurality of FPGA data server nodes. | 06-20-2013 |
20130166670 | NETWORKED STORAGE SYSTEM AND METHOD INCLUDING PRIVATE DATA NETWORK - A networked storage system includes a source mass storage device coupled to a client via a storage area network (SAN). A target mass storage device is coupled to the source mass storage device via a private data network. The source mass storage device stores source data which is provided to the client via the SAN in response to a request by the client to read the data. If the request is to copy or move the source data, however, the source mass storage device determines an identifier for the target mass storage device and directly provides, based on the identifier, the source data to the target mass storage device via the private data network. The transfer via the private data network bypasses the client and the SAN. | 06-27-2013 |
20130173736 | COMMUNICATIONS SYSTEM PROVIDING ENHANCED TRUSTED SERVICE MANAGER (TSM) VERIFICATION FEATURES AND RELATED METHODS - A trusted service manager (TSM) server may include at least one communications device capable of communicating with at least one application server, a verification database server, and at least one mobile communications device. The TSM server may further include a processor coupled with the at least one communications device and capable of registering the at least one application server with the verification database server, receiving a request from the at least one application server to access the memory of the mobile communications device, cooperating with the verification database server to verify the at least one application server based upon the access request and based upon registering of the at least one application server, and writing application data from the at least one application server to the memory of the at least one mobile communications device based upon verifying the at least one application server. | 07-04-2013 |
20130173737 | METHOD AND APPARATUS FOR FLEXIBLE CACHING OF DELIVERED MEDIA - Various methods are described for selecting an access method for flexible caching in DASH. One example method may comprise causing a request for at least one of a primary representation for a segment or an alternative representation for the segment to be transmitted to a caching proxy. The method of this example embodiment may further comprise causing the caching proxy to respond with at least one of the primary representation or the alternative representation based on the caching status at the caching proxy. In some example embodiments, the caching proxy is configured to determine whether the request enables an alternative representation to be included in a response. Furthermore, the method of this example embodiment may comprise receiving at least one of the primary representation and the alternative representation for the segment from the caching proxy. Similar and related example methods, apparatuses, and computer program products are also provided. | 07-04-2013 |
20130173738 | Administering Globally Accessible Memory Space In A Distributed Computing System - In a distributed computing system that includes compute nodes that include computer memory, globally accessible memory space is administered by: for each compute node: mapping a memory region of a predefined size beginning at a predefined address; executing one or more memory management operations within the memory region, including, for each memory management operation executed within the memory region: executing the operation collectively by all compute nodes, where the operation includes a specification of one or more parameters and the parameters are the same across all compute nodes; receiving, by each compute node from a deterministic memory management module in response to the memory management operation, a return value, where the return value is the same across all compute nodes; entering, by each compute node after local completion of the memory management operation, a barrier; and when all compute nodes have entered the barrier, resuming execution. | 07-04-2013 |
20130173739 | Reverse Mapping Method and Apparatus for Form Filling - In the presently preferred embodiment of the invention, every time a user submits a form, the client software tries to match the submitted information with the stored profile of that user. If a match is discovered, the program tags the field of the recognized data with a corresponding type. The resulting profile can be used after that to help all subsequent users to fill the same form. | 07-04-2013 |
20130179528 | USE OF MULTICORE PROCESSORS FOR NETWORK COMMUNICATION IN CONTROL SYSTEMS - Various embodiments of the present invention relate to use of one or more multicore processors for network communication (e.g., Ethernet-based communication) in control systems (e.g., vehicle control systems, medical control systems, hospital control systems, instrumentation control systems, test instrument control systems, energy control systems and/or industrial control systems). In one example, one or more systems may be provided with regard to use of multicore processor(s) for network communication (e.g., Ethernet-based communication) in control systems. In another example, one or more methods may be provided with regard to use of multicore processor(s) for network communication (e.g., Ethernet-based communication) in control systems. | 07-11-2013 |
20130179529 | Optimizing Multi-Hit Caching for Long Tail Content - Some embodiments provide an optimized multi-hit caching technique that minimizes the performance impact associated with caching of long-tail content while retaining much of the efficiency and minimal overhead associated with first hit caching in determining when to cache content. The optimized multi-hit caching utilizes a modified bloom filter implementation that performs flushing and state rolling to delete indices representing stale content from a bit array used to track hit counts without affecting identification of other content that may be represented with indices overlapping with those representing the stale content. Specifically, a copy of the bit array is stored prior to flushing the bit array so as to avoid losing track of previously requested and cached content when flushing the bit array, and the flushing is performed to remove the bit indices representing stale content from the bit array and to minimize the possibility of a false positive. | 07-11-2013 |
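The flush-and-roll bloom filter scheme above can be sketched as a second-hit cache decision: a key is cached only once its bits are found set in either the live bit array or the rolled copy kept from before the last flush. This is a simplified illustration under assumed parameters (array size, hash count, SHA-256-derived indices), not the patented implementation.

```python
import hashlib

class RollingSecondHitFilter:
    """Sketch of second-hit caching with a flushed/rolled bit array."""

    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.current = bytearray(nbits)   # tracks recent first hits
        self.previous = bytearray(nbits)  # rolled copy kept across a flush

    def _indices(self, key):
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def seen_before(self, key):
        # A key counts as seen if all its bits are set in either array.
        in_cur = all(self.current[i] for i in self._indices(key))
        in_prev = all(self.previous[i] for i in self._indices(key))
        return in_cur or in_prev

    def should_cache(self, key):
        # Cache only on the second (or later) request for the key.
        hit = self.seen_before(key)
        for i in self._indices(key):
            self.current[i] = 1
        return hit

    def flush(self):
        # Roll state: keep a copy so recently seen keys are not forgotten,
        # then clear the live array to age out stale entries.
        self.previous = self.current
        self.current = bytearray(self.nbits)
```

Long-tail content requested once and never again is aged out after two flushes without ever being cached.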
20130179530 | ENVIRONMENT CONSTRUCTION APPARATUS AND METHOD, ENVIRONMENT REGISTRATION APPARATUS AND METHOD, ENVIRONMENT SWITCHING APPARATUS AND METHOD - An environment construction apparatus that carries out, in a second system, acquiring a connection permission data of a first storage in a first system that was set in a second storage of the second system; and extracting identification data of a first server in the first system based on the connection permission data of the first storage of the first system, and assigning the extracted identification data of the first server in the first system as identification data stored in a connection section of a second server in the second system. | 07-11-2013 |
20130179531 | NETWORK COMMUNICATIONS APPARATUS, METHOD, AND MEDIUM - The present invention provides a novel network communications apparatus that includes a LAN interface that transmits and receives data via a network, a plurality of memory resources to transfer data to an application, an analyzing unit that divides data to be sent and received data into a control part and a content part and analyzes the control part, a storage unit that stores rules to determine resources to be used and a transfer control method in accordance with characteristics of the data to be sent and the received data, and a controller that transfers the content data to the application in accordance with a result of analyzing the control part of the data to be sent and the received data and applying the rule. | 07-11-2013 |
20130179532 | COMPUTER SYSTEM AND SYSTEM SWITCH CONTROL METHOD FOR COMPUTER SYSTEM - Disclosed is a computer system provided with an I/O processing unit comprising a buffer and a control unit, wherein the buffer is located between the first computer and a storage apparatus and between a second computer and the storage apparatus and temporarily stores an I/O output from a first computer, and the control unit outputs data stored in the buffer to the storage apparatus, and wherein a management computer functions to store the I/O output of the first computer in the buffer at a predetermined time, to separate a first storage unit and a second storage unit which are mirror volumes, to connect the buffer and the second storage unit, to connect the second computer and the first storage unit, to output data stored in the buffer to the second storage unit, and to activate the second computer using the first storage unit. | 07-11-2013 |
20130179533 | DATA STORAGE CONTROL SYSTEM, DATA STORAGE CONTROL METHOD, AND DATA STORAGE CONTROL PROGRAM - A reduction in network load as well as an increase in speed of response through caching and an increase in communication efficiency through buffering are both achieved. A data storage control system that temporarily stores and controls data exchanged between a user terminal | 07-11-2013 |
20130185376 | EFFICIENT STATE TRACKING FOR CLUSTERS - Exemplary system and computer program product embodiments for efficient state tracking for clusters are provided. In one embodiment, by way of example only, in a distributed shared memory architecture, an asynchronous calculation of deltas and the views is performed while concurrently receiving client requests and concurrently tracking the client requests times. The results of the asynchronous calculation may be applied to each of the client requests that are competing for data of the same concurrency during a certain period with currently executing client requests. Additional system and computer program product embodiments are disclosed and provide related advantages. | 07-18-2013 |
20130185377 | PIPELINE SYSTEMS AND METHOD FOR TRANSFERRING DATA IN A NETWORK ENVIRONMENT - A communications system having a data transfer pipeline apparatus for transferring data in a sequence of N stages from an origination device to a destination device. The apparatus comprises dedicated memory having buffers dedicated for carrying data and a master control for registering and controlling processes associated with the apparatus for participation in the N stage data transfer sequence. The processes include a first stage process for initiating the data transfer and a last Nth stage process for completing data transfer. The first stage process allocates a buffer from a predetermined number of buffers available within the memory for collection, processing, and sending of the data from the origination device to a next stage process. The Nth stage process receives a buffer allocated to the first stage process from the (N−1)th stage and frees the buffer upon processing completion to permit reallocation of the buffer. | 07-18-2013 |
20130185378 | CACHED HASH TABLE FOR NETWORKING - Systems, methods, and devices are provided for managing hash table lookups. In certain network devices, a hash table having multiple buckets may be allocated for network socket lookups. Network socket information for multiple open network socket connections may be distributed among the buckets of the hash table. For each of the buckets of the hash table, at least a subset of the network socket information that is most likely to be used may be identified, and the identified subset of most likely to be used network socket information may be promoted at each bucket to a position having a faster lookup time than a remaining subset of the network socket information at that bucket. | 07-18-2013 |
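The per-bucket promotion described above (moving the most-likely-used socket entries to a position with faster lookup) can be sketched as a move-to-front policy within each bucket. The class name, bucket count, and move-to-front heuristic are illustrative assumptions; the abstract does not mandate a particular promotion rule.

```python
class PromotingHashTable:
    """Sketch: per-bucket move-to-front so hot entries resolve faster."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def lookup(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, v) in enumerate(bucket):
            if k == key:
                if i > 0:
                    # Promote: move the entry to the front of its bucket,
                    # giving likely-reused entries a shorter scan next time.
                    bucket.insert(0, bucket.pop(i))
                return v
        return None
```

On a skewed access pattern (a few hot sockets among many open connections) this shortens the average chain walk for the hot entries.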
20130185379 | EFFICIENT STATE TRACKING FOR CLUSTERS - Exemplary method, system, and computer program product embodiments for efficient state tracking for clusters are provided. In one embodiment, by way of example only, in a distributed shared memory architecture, an asynchronous calculation of deltas and the views is performed while concurrently receiving client requests and concurrently tracking the client requests times. The results of the asynchronous calculation may be applied to each of the client requests that are competing for data of the same concurrency during a certain period with currently executing client requests. Additional system and computer program product embodiments are disclosed and provide related advantages. | 07-18-2013 |
20130191487 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR RECEIVING DIGITAL DATA FILES - A method, apparatus and computer program product are provided to efficiently receive digital imaging data files, regardless of their size. For a respective data packet of a digital imaging data file, the method may determine whether the portion of the digital imaging data file that has been received satisfies a first threshold. If the first threshold is not satisfied, the method may receive the respective data packet using memory, such as by appending the data packet to a linked list. However, if the first threshold is satisfied, the method may receive the respective data packet and subsequent data packet(s) of the digital imaging data file using file storage. The receipt of the respective data packet using file storage is slower than the receipt of the respective data packet using memory. | 07-25-2013 |
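The memory-then-file-storage switch in 20130191487 above amounts to a simple threshold test per packet. A rough sketch, with the threshold value and class names as assumptions and an in-memory buffer standing in for real file storage:

```python
# Illustrative sketch: append incoming packets to fast memory until a
# received-bytes threshold is crossed, then spill that packet and all
# subsequent packets to slower file storage.
import io

class FileReceiver:
    def __init__(self, threshold_bytes=1024):
        self.threshold = threshold_bytes
        self.received = 0
        self.memory_chunks = []         # fast path: memory (linked-list-like)
        self.file_store = io.BytesIO()  # stand-in for real file storage

    def receive(self, packet: bytes):
        if self.received < self.threshold:
            self.memory_chunks.append(packet)  # threshold not satisfied: memory
        else:
            self.file_store.write(packet)      # threshold satisfied: file storage
        self.received += len(packet)
```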
20130191488 | SYSTEM AND METHOD FOR EFFICIENT DELIVERY OF MULTI-UNICAST COMMUNICATION TRAFFIC - Disclosed is a system and method for the delivery of multi-unicast communication traffic. A multimedia router is adapted to analyze and identify contents which it handles and one or more access nodes are adapted to receive one or more of the identified contents, cache contents based on said identification; and use cached contents as substitutes for redundant traffic, received by the same access node. | 07-25-2013 |
20130191489 | Media Content Streaming Using Stream Message Fragments - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for media content streaming can include transacting access information associated with a media stream and transacting one or more fragments associated with the media stream to facilitate a delivery of media content associated with the media stream. Access information can include fragment sequencing information to facilitate individual retrieval of fragments associated with the media stream using a uniform resource identifier via a processing device configured to cache content. A fragment can include one or more stream messages. A stream message can include a message header and a corresponding media data sample. The message header can include a message stream identifier, a message type identifier, a timestamp, and a message length value. | 07-25-2013 |
20130191490 | SENDING DATA OF READ REQUESTS TO A CLIENT IN A NETWORKED CLIENT-SERVER ARCHITECTURE - Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. The storage system sends to the client agent the requested data by partitioning the returned data into segments for each read request. The storage system sends each segment in a separate network message. | 07-25-2013 |
20130191491 | System and Method for Optimizing Secured Internet Small Computer System Interface Storage Area Networks - A network device includes a first port coupled to a first device, a second port coupled to a second device, and an access control list with a first access control entry that causes the network device to permit log-in frames to be forwarded from the first device to the second device. The network device receives a frame addressed to the second device and determines the frame type. If the frame type is a log-in frame, then the frame is forwarded to the second device and a second access control entry is added to the access control list. The second access control entry causes the network device to permit data frames to be forwarded from the first device to the second device. If not, then the frame is dropped based upon the first access control entry. | 07-25-2013 |
20130198313 | USING ENTITY TAGS (ETAGS) IN A HIERARCHICAL HTTP PROXY CACHE TO REDUCE NETWORK TRAFFIC - Disclosed is a program for validating a web cache independent of an origin server. A computer in between a client computer and the origin server computer receives a request for a resource and an entity tag (ETag) corresponding to the request. The computer forwards the request to the origin server and subsequently receives the resource. The computer generates an ETag for the received resource and compares the generated ETag to the ETag corresponding to the request. If the ETags match, the computer sends an indication toward the client computer that the resource has not been modified. | 08-01-2013 |
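The proxy-side validation in 20130198313 above hashes the fetched body itself and compares the result against the client's ETag, answering 304 without involving the origin in validation. A minimal sketch, with MD5 as an assumed hash choice and the function names invented for illustration:

```python
# Sketch of origin-independent ETag validation at an intermediate proxy.
import hashlib

def make_etag(body: bytes) -> str:
    # Generate an ETag for a resource body by hashing its contents.
    return '"%s"' % hashlib.md5(body).hexdigest()

def validate(client_etag: str, origin_body: bytes):
    """Return (status, body): 304 with no body if ETags match, else 200."""
    if make_etag(origin_body) == client_etag:
        return 304, None        # resource unmodified; skip re-sending the body
    return 200, origin_body     # resource changed: forward it in full
```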
20130198314 | METHOD OF OPTIMIZATION OF CACHE MEMORY MANAGEMENT AND CORRESPONDING APPARATUS - In order to optimize cache memory management, the invention proposes a method and corresponding apparatus that comprises application of different cache memory management policies according to data origin, and possibly to data type, and the use of increasing levels of exclusion from adding data to cache, the exclusion levels being increasingly restrictive with regard to adding data to cache as the cache memory fill level increases. The method and device allow, among other things, keeping important information in cache memory and reducing time spent swapping information into and out of cache memory. | 08-01-2013 |
20130198315 | Method and System For Network Latency Virtualization In A Cloud Transport Environment - A cache device is disposed on a connection path between a user computer executing a software application and a network. The application exchanges data with a further computer via the network. The cache device includes a cache memory and a processor. The cache device is configured to measure, by the processor, a first latency between the user computer and the further computer. The cache device is further configured to determine an acceptable latency range based on the latency and a requirement of the software application. The cache device is further configured to measure a second latency between the user computer and the further computer. The cache device is further configured to store, in the cache memory, a set of data transmitted from the user computer to the further computer, if the second latency is not within the acceptable latency range. | 08-01-2013 |
20130198316 | SECURE RESOURCE NAME RESOLUTION USING A CACHE - Techniques for securing name resolution technologies and for ensuring that name resolution technologies can function in modern networks that have a plurality of overlay networks accessible via a single network interface. In accordance with some of the principles described herein, a set of resolution parameters may be implemented by a user to be used during a name resolution process. In some implementations, when an identifier is obtained for a network resource, the identifier may be stored in a cache with resolution parameters that were used in obtaining the identifier. When a new name resolution request is received, the cache may be examined to determine whether a corresponding second identifier is in the cache, and whether resolution parameters used to retrieve the second identifier in the cache match the resolution parameters for the new resolution request. If so, the second identifier may be returned from the cache. | 08-01-2013 |
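In 20130198316 above, a cached identifier is reused only when both the name and the resolution parameters used to obtain it match the new request. That is essentially a cache keyed by the (name, parameters) pair; a sketch under invented names:

```python
# Sketch of a name-resolution cache keyed by name plus the resolution
# parameters that were in effect when the identifier was obtained.
class ResolutionCache:
    def __init__(self):
        self._entries = {}

    def store(self, name, params: frozenset, identifier):
        self._entries[(name, params)] = identifier

    def resolve(self, name, params: frozenset):
        # Cache hit only when the name AND the resolution parameters match;
        # otherwise the caller must perform a fresh (secured) resolution.
        return self._entries.get((name, params))
```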
20130204959 | SYSTEMS AND METHODS OF REAL-TIME DATA SUBSCRIPTION AND REPORTING FOR TELECOMMUNICATIONS SYSTEMS AND DEVICES - Systems and methods of performing real-time data subscription and reporting for telecommunications systems and devices. The systems and methods employ a real-time data aggregation component that can manage subscription requests for real-time data objects stored on the telecommunications systems and devices from one or more users over a network, dynamically start and stop such subscription requests, cache the requested real-time data objects, and supply the real-time data to the respective users. By employing the real-time data aggregation component to handle such subscription requests for data from one or more users, the systems and methods can supply such data, including real-time data, to the respective users, while reducing the overhead on the telecommunications systems and devices and increasing overall system performance. | 08-08-2013 |
20130204960 | ALLOCATION AND BALANCING OF STORAGE RESOURCES - A method and technique for allocation and balancing of storage resources includes: determining, for each of a plurality of storage controllers, an input/output (I/O) latency value based on an I/O latency associated with each storage volume controlled by a respective storage controller; determining network bandwidth utilization and network latency values corresponding to each storage controller; responsive to receiving a request to allocate a new storage volume, selecting a storage controller having a desired I/O latency value; determining whether the network bandwidth utilization and network latency values for the selected storage controller are below respective network bandwidth utilization and network latency value thresholds; and responsive to determining that the network bandwidth utilization and network latency values for the selected storage controller are below the respective thresholds, allocating the new storage volume to the selected storage controller. | 08-08-2013 |
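The selection logic in 20130204960 above (pick the controller with the desired I/O latency, then check its bandwidth-utilization and network-latency thresholds before allocating) can be sketched as below; the dictionary field names and thresholds are illustrative assumptions:

```python
# Sketch: choose the controller with the lowest I/O latency, and allocate
# the new storage volume to it only if its network bandwidth utilization
# and network latency are both below their respective thresholds.
def pick_controller(controllers, max_util, max_net_latency):
    """controllers: list of dicts with io_latency, util, net_latency keys."""
    best = min(controllers, key=lambda c: c["io_latency"])
    if best["util"] < max_util and best["net_latency"] < max_net_latency:
        return best    # allocate the new volume to this controller
    return None        # desired controller is over the thresholds
```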
20130212207 | ARCHITECTURE AND METHOD FOR REMOTE MEMORY SYSTEM DIAGNOSTIC AND OPTIMIZATION - A smart memory system preferably includes a memory including one or more memory chips and a smart memory controller. The smart memory controller includes a transmitter communicatively coupled to the cloud. The transmitter securely transmits a product identification (ID) associated with the memory to the cloud. A cloud-based data center receives and stores the product ID and related information associated with the memory. A smart memory tester receives a product specific test program from the cloud-based data center. The smart memory tester may remotely test the memory via the cloud in accordance with the product specific test program. The information stored in the cloud-based data center can be accessed anywhere in the world by authorized personnel. Repair solutions can be remotely determined based on the test results and the diagnostic information. The repair solutions are transmitted to the smart memory controller, which repairs the memory. | 08-15-2013 |
20130212208 | PARTIAL OBJECT CACHING - A method of providing media at multiple bit rates using partial object caching may include receiving, from a first user device, a first request for a media object encoded at a first bit rate; providing the first portion of the media object to the first user device; and caching, in a partial object cache, the first portion of the media object. The method may additionally include receiving, from a second user device, a subsequent request for the media object encoded at the first bit rate; providing the first portion of the media object as retrieved from the partial object cache; and receiving a request for the media object encoded at a second bit rate. The method may further include modifying the request for the media object encoded at the second bit rate to instead request a second portion of the media object at the second bit rate. | 08-15-2013 |
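The request modification in 20130212209's neighbor 20130212208 above rewrites a second-bit-rate request so it asks only for the not-yet-served second portion of the object. A heavily simplified sketch, assuming byte offsets carry over between rates (which real encodings complicate) and with all names invented:

```python
# Illustrative sketch: rewrite a full-object request at a new bit rate
# into a range request for the remaining portion of the media object.
def rewrite_for_rate_switch(url, new_rate, bytes_already_served, total_len):
    start = bytes_already_served
    return {
        "url": f"{url}?rate={new_rate}",
        "range": (start, total_len - 1),  # second portion only
    }
```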
20130212209 | INFORMATION PROCESSING APPARATUS, SWITCH, STORAGE SYSTEM, AND STORAGE SYSTEM CONTROL METHOD - In an information processing apparatus, a data controller performs transmission and reception of data with a storage apparatus having a storage region allocated to the information processing apparatus by a physical port or a virtual port set at the physical port. The physical port transmits and receives data by communicating with the storage apparatus. A management controller calculates a use rate based on a storage capacity of the allocated storage region of the storage apparatus and an amount of use of the storage region and determines whether to perform allocation based on the calculated use rate. When determining to perform the allocation, the management controller allocates an unallocated storage region allocated to none of information processing apparatuses to the information processing apparatus, and also sets a virtual port and connects the information processing apparatus to the allocated storage region by the virtual port. | 08-15-2013 |
20130219006 | MULTIPLE MEDIA DEVICES THROUGH A GATEWAY SERVER OR SERVICES TO ACCESS CLOUD COMPUTING SERVICE STORAGE - A system, method, and computer program product are provided for enabling client devices to transparently access cloud computing services, service storage, and related data via a gateway server that connects to an external network such as the internet or a social network. Data requests are transmitted from at least one client device to the gateway server. The gateway server determines if the data request cannot be satisfied by data stored in its memory, and responsively transmits a second data request to the external network and stores data received in response to the second data request in its memory. The gateway server then satisfies the data request using the stored data, which may include a web computing service, an application program interface, streaming data, metadata, and/or media data. | 08-22-2013 |
20130219007 | SYSTEMS AND METHODS THERETO FOR ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING TECHNIQUES - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests. | 08-22-2013 |
20130227047 | METHODS FOR MANAGING CONTENT STORED IN CLOUD-BASED STORAGES - A server receives over a network from a client a request for accessing files stored in a plurality of heterogeneous storage devices hosted by a plurality of storage providers over the network, including a first storage device of a first storage provider and a second storage device of a second storage provider. In response, the server accesses, on behalf of a user of the client, the first storage device and the second storage device to retrieve information concerning the files. The server transmits data to the client over the network, the data representing a logical file system view of the files without exposing actual storage locations of the files stored in the first and second storage devices. | 08-29-2013 |
20130227048 | Method for Collaborative Caching for Content-Oriented Networks - A content router comprising a plurality of interfaces configured to receive and forward a plurality of interests for content and content data in a content oriented network (CON), a cache configured to store content data, and a memory component configured to maintain a forward information base (FIB) that associates content with one or more interfaces on which the interests and content data are received and forwarded, and an availability FIB (AFIB) that associates content data with one or more corresponding collaborative caching routers in the CON that cache the content data. | 08-29-2013 |
20130227049 | DISTRIBUTED CACHE SYSTEM - A disclosed system includes a first computer that stores data, a display apparatus that is capable of reading a user identifier, a second computer, and plural third computers. The second computer includes a data storage unit storing first correlation data that correlates a user identifier with at least one third computer, and a controller that refers to the first correlation data upon detecting event data, identifies a third computer correlated with a first user identifier included in the event data, and transmits the first user identifier to the identified third computer. Each third computer includes a receiver that receives the first user identifier, a storing unit that obtains, from the first computer, and stores data identified based on the received first user identifier, and a controller that transmits data corresponding to a second user identifier, which was received from the display apparatus, based on the second user identifier. | 08-29-2013 |
20130227050 | ASYMMETRIC DATA MIRRORING - Methods, systems, and products mirror data between local memory and remote storage. A write command is sent from a server to a remote storage device, and a timer is established. A current time of the timer is compared to a maximum time period. If the maximum time period expires without receipt of an acknowledgment to the write command, then a write error is assumed to exist to the remote storage device. | 08-29-2013 |
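The timer rule in 20130227050 above (assume a write error if no acknowledgment arrives within the maximum time period) maps directly onto a timed wait. A sketch using a queue as a stand-in for the transport; names and timeouts are illustrative:

```python
# Sketch of asymmetric-mirroring write handling: wait a bounded time for
# the remote storage device's acknowledgment, else report a write error.
import queue

def mirrored_write(ack_queue, max_wait):
    """Wait up to max_wait seconds for an ack; else assume a write error."""
    try:
        ack_queue.get(timeout=max_wait)   # acknowledgment arrived in time
        return "ok"
    except queue.Empty:
        return "write-error"              # maximum time period expired
```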
20130227051 | Multi-Layer Multi-Hit Caching for Long Tail Content - Some embodiments provide an optimized multi-hit caching technique that minimizes the performance impact associated with caching of long-tail content while retaining much of the efficiency and minimal overhead associated with first hit caching in determining when to cache content. The optimized multi-hit caching utilizes a modified bloom filter implementation that performs flushing and state rolling to delete indices representing stale content from a bit array used to track hit counts, without affecting identification of other content that may be represented with indices overlapping those representing the stale content. Specifically, a copy of the bit array is stored prior to flushing so as to avoid losing track of previously requested and cached content; the flushing removes the bit indices representing stale content from the bit array and minimizes the possibility of a false positive. | 08-29-2013 |
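The flush-and-roll bloom filter in 20130227051 above can be sketched with two bit arrays: requests recorded in the active array, and a rolled copy of the previous epoch kept so a flush does not lose track of recently requested content. Array size and the double-hash index scheme below are illustrative assumptions, not the patented design:

```python
# Minimal sketch of second-hit caching with a rolling bloom filter: a
# request is cache-worthy on its second hit; roll() flushes stale state
# while preserving the most recent epoch in a saved copy.
class RollingBloom:
    def __init__(self, size=1024, hashes=3):
        self.size, self.k = size, hashes
        self.active = [0] * size
        self.previous = [0] * size   # copy of the last epoch's bit array

    def _indices(self, key):
        h1, h2 = hash(key), hash(("salt", key))
        return [(h1 + i * h2) % self.size for i in range(self.k)]

    def seen_before(self, key):
        idx = self._indices(key)
        hit = all(self.active[i] for i in idx) or \
              all(self.previous[i] for i in idx)
        for i in idx:
            self.active[i] = 1       # record this request in the active array
        return hit                   # True => at least second hit => cache it

    def roll(self):
        # Flush stale indices without losing the most recent epoch.
        self.previous = self.active
        self.active = [0] * self.size
```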
20130227052 | IMAGE CONTENT BASED PREDICTION AND IMAGE CACHE CONTROLLER - Cache controller ( | 08-29-2013 |
20130232215 | VIRTUALIZED DATA STORAGE SYSTEM ARCHITECTURE USING PREFETCHING AGENT - Virtual storage arrays consolidate data storage from branch locations at data centers. The virtual storage array appears to storage clients as a local data storage; however, the virtual storage array data is actually stored at a data center. To overcome the bandwidth and latency limitations of wide area networks between branch locations and the data center, systems and methods predict, prefetch, and cache at the branch location storage blocks that are likely to be requested in the future by storage clients. When this prediction is successful, storage block requests are fulfilled from branch locations' storage block caches. Predictions may leverage an understanding of the semantics and structure of the high-level data structures associated with the storage blocks. Prefetching agents on storage clients monitor storage requests to determine the associations between requested storage blocks and the corresponding high-level data structures as well as other attributes useful for prediction. | 09-05-2013 |
20130232216 | METHOD FOR EFFICIENT USE OF CONTENT STORED IN A CACHE MEMORY OF A MOBILE DEVICE - A method for cache management of a mobile device communicatively connected to a network component via a network is provided. The method comprises receiving by the network component a request from the mobile device for a data item, the request accompanied by a unique identifier associated thereto, the data item residing in the cache; fetching the data item from at least a server communicatively connected to the network component; generating a unique identifier respective of the fetched data item; and comparing the generated unique identifier and the received unique identifier to determine whether the data item in the cache is the same as the data item fetched from the at least a server. | 09-05-2013 |
20130232217 | Signalling Gateway, Method, Computer Program And Computer Program Product For Communication Between HTTP And SIP - A signalling gateway is arranged to allow a first client using hypertext transfer protocol, HTTP, to initiate a real-time connection to a SIP, session initiation protocol, client using SIP. The signalling gateway is arranged to use a distributed shared memory to support communication between the first client and the signalling gateway regarding session information of the real-time connection. Corresponding methods, computer programs, and computer program products are also presented. | 09-05-2013 |
20130238740 | Caching of Fragmented Streaming Media - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for caching fragmented streaming media, e.g., for caching fragmented media documents streamed in accordance with HTTP, are described in this specification. In one aspect, a system including means for obtaining, locally, fragments of a media document from a remote media source based on a manifest that refers to storage locations where the fragments are stored at the remote media source. Further, the system includes means for generating index points into a locally cached media item of the obtained fragments, the generated index points being different from any index point of the manifest. Additionally, the system includes means for playing the locally cached media item based on the generated index points. | 09-12-2013 |
20130238741 | METHOD AND A CONTROL NODE IN AN OVERLAY NETWORK - A first control node and a method therein for selecting the first control node or a second control node to act as a server are provided. The first and second control nodes are comprised in an overlay network. The first control node obtains a first indication relating to a ranking of a suitability of the first control node to act as the server. Furthermore, the first control node receives a second indication from the second control node. The second indication relates to a ranking of a suitability of the second control node to act as the server. Then, the first control node selects, based on the first and second indications, one of the first and second control nodes to act as the server for managing a master representation of a distributed shared memory being accessible within the overlay network. | 09-12-2013 |
20130238742 | TIERS OF DATA STORAGE FOR WEB APPLICATIONS AND BROWSER EXTENSIONS - Access is provided to a first tier of limited persistent storage at a server. A first set of data from the first tier is synchronized across devices associated with a user account. Access is provided to a second tier of persistent storage on a local, tangible non-volatile storage medium, and to a third tier of temporary storage on a local, tangible volatile storage medium. A web browser receives a storage request from a web application or browser extension. The request includes a type of a tier of data storage associated with a feature of the web application or browser extension. The type includes at least one of the first tier of remote limited persistent storage, the second tier of local persistent storage, or the third tier of local temporary storage. At least one feature of the web application or browser extension is associated with the tier of data storage. | 09-12-2013 |
20130246553 | DATA MIGRATION - Technologies are generally described for processing data. In some examples, a method performed under control of a server may include receiving, from an end device, an instruction to migrate or move data stored in an original storage to a target storage, moving the data from the original storage to the target storage in response to the receipt of the instruction and updating meta-data stored in the server based on the movement of the data. | 09-19-2013 |
20130246554 | SYSTEM AND METHOD FOR TRANSMITTING COMPLEX STRUCTURES BASED ON A SHARED MEMORY QUEUE - A system and method can support intra-node communication based on a shared memory queue. A transactional middleware machine can provide a complex structure with a plurality of blocks in the shared memory, wherein the shared memory is associated with one or more communication peers, and wherein the communication peers include a sender and a receiver of a message that includes the complex structure. Furthermore, the sender can link a head block of the complex structure to a shared memory queue associated with the receiver, wherein the head block is selected from the plurality of blocks in the complex structure. Then, the receiver can access the complex structure based on the head block of the complex structure. | 09-19-2013 |
20130254322 | NETWORK LINKED DATA CARRIERS - A data carrier carries at least one identifier capable of being read by an electronic aid and is used by associating content in the form of a digital audio recording with an identifier coded on a data carrier to be read out loud by an aid. Such content is based on the words entered on the pages of a printed book. Such data carriers may be generated directly by the use of circuitry for recording contained in said electronic aid or generated indirectly by recording said content and storing the resultant digital files in memory on a computer, and thereafter uploading said files into memory of a server that may be accessed by the intended user of an electronic aid through a network by downloading said files into a handheld electronic aid for use with correlated data carriers on the pages of such book. | 09-26-2013 |
20130254323 | DETERMINING PRIORITIES FOR CACHED OBJECTS TO ORDER THE TRANSFER OF MODIFICATIONS OF CACHED OBJECTS BASED ON MEASURED NETWORK BANDWIDTH - Provided are a computer program product, system, and method for determining priorities for cached objects to order the transfer of modifications of cached objects based on measured network bandwidth. Objects are copied from a primary site to a secondary site to cache at the secondary site. The primary site includes a primary server and primary storage and the secondary site includes a secondary server and a secondary storage. Priorities are received from the secondary server for the objects at the secondary site based on determinations made by the secondary server with respect to the objects cached at the secondary storage. A determination is made of modifications to the objects at the primary storage that are cached at the secondary storage. The received priorities for the objects from the secondary server are used to control a transfer of the determined modifications to the objects to the secondary server. | 09-26-2013 |
20130254324 | Read-throttled input/output scheduler - In accordance with the principles of the present invention, read throttled input/output scheduler applications and methods are provided. A read-throttling input/output scheduler takes write requests for data captured from a network, provides this data to a system that persists the captured data, and takes read requests from external user systems. The rate of read and write requests is determined by maintaining two sliding windows over previous write requests, with the second window being longer than the first. The read-throttling input/output scheduler is configured such that, when write request activity exceeds a threshold as determined over the first window, the read-throttling input/output scheduler throttles the flow of read requests. A storage medium is provided onto which the read and write requests are forwarded. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. | 09-26-2013 |
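The two-sliding-window rule in 20130254324 above can be sketched by keeping timestamps of recent writes, retaining them for the long window and counting them over the short window to decide whether reads should be throttled. Window lengths and the threshold are assumptions:

```python
# Sketch of read throttling driven by write-request activity measured
# over a short sliding window, with history kept for a longer window.
from collections import deque

class ReadThrottle:
    def __init__(self, short_win=1.0, long_win=10.0, threshold=5):
        self.short_win, self.long_win = short_win, long_win
        self.threshold = threshold
        self.writes = deque()        # timestamps of recent write requests

    def record_write(self, now):
        self.writes.append(now)
        while self.writes and now - self.writes[0] > self.long_win:
            self.writes.popleft()    # drop writes outside the long window

    def allow_read(self, now):
        recent = sum(1 for t in self.writes if now - t <= self.short_win)
        return recent <= self.threshold   # throttle reads during write bursts
```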
20130254325 | CACHE SYSTEM AND CACHE SERVICE PROVIDING METHOD USING NETWORK SWITCH - A cache system configured to provide a cache service includes a network switch and a cache device. The network switch is configured to route data associated with a plurality of servers. The cache device is disposed in association with the network switch. The cache device is configured to cache, in at least one memory available to the network switch, data published by the plurality of servers via the network switch. The cache device is also configured to transmit at least some of the cached data in response to a request received from at least one of the plurality of servers. The network switch provides the cache service via the cache device. | 09-26-2013 |
20130262615 | SHARED NETWORK-AVAILABLE STORAGE THAT PERMITS CONCURRENT DATA ACCESS - Techniques for providing shared access to, e.g., a small computer system interface (SCSI) storage device in a computer network include providing an operational mode on SCSI interfaces with a first media agent and a second media agent such that, in response to inquiry messages on the SCSI interfaces, the SCSI storage device appears as a SCSI target device to the first media agent and the second media agent and mapping data operations between the first media agent and the SCSI storage device and the second media agent and the SCSI storage device to logically unique channel numbers for the first media agent and the second media agent to perform data storage operations over their respective SCSI interfaces by concurrently sharing the SCSI storage device. | 10-03-2013 |
20130268614 | CACHE MANAGEMENT - Concepts and technologies are described herein for cache management. In accordance with the concepts and technologies disclosed herein, the server computer can be configured to communicate with a client device configured to execute a cache module to maintain a cache storing data downloaded from and/or uploaded to the server computer by the client device. The server computer can be configured to receive requests for data stored at the server computer. The server computer can be configured to respond to the request with hashes that correspond to the requested data. The client device can search the cache for the hashes, obtain the data from the cache if the hashes are found, and/or download the data from the server computer if the hashes are not found. The client device also can be configured to update the cache upon uploading the data to the server computer. | 10-10-2013 |
20130268615 | COMPUTER, COMPUTER SYSTEM, AND COMPUTER SYSTEM STARTING METHOD - A first server stores start administration information identifying a second server into a storage unit which stores an operating system to be started in the second server. The second server decides whether the storage unit to which access is set has start administration information identifying the second server stored therein. If the access-set storage unit is judged to have start administration information identifying the second server stored therein, the second server starts the operating system stored in that storage unit. If the access-set storage unit is judged not to have such start administration information stored therein, the second server refrains from starting the operating system stored in that storage unit. | 10-10-2013 |
20130268616 | Discrete Mapping for Targeted Caching - Some embodiments provide systems and methods for implementing discrete mapping for targeted caching in a carrier network. In some embodiments, discrete mapping is implemented using a method that caches content from a content provider to a caching server. The method modifies a DNS entry at a particular DNS server to resolve a request that identifies either a hostname or a domain for the content provider to an address of the caching server so that the requested content is passed from the cached content of the caching server and not the source content provider. In some embodiments, the particular DNS server is a recursive DNS server, a local DNS server of the carrier network, or a DNS server that is not authoritative for the hostname or domain of the content provider. | 10-10-2013 |
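The discrete mapping in 20130268616 above modifies a DNS entry so selected hostnames resolve to the caching server's address while everything else falls through to normal resolution. A sketch with invented names and example addresses:

```python
# Sketch of a resolver with per-hostname overrides pointing at a caching
# server; unmatched names fall through to the upstream resolver.
class TargetedResolver:
    def __init__(self, upstream):
        self.upstream = upstream     # fallback: normal DNS resolution
        self.overrides = {}

    def map_to_cache(self, hostname, cache_addr):
        self.overrides[hostname] = cache_addr   # the "modified DNS entry"

    def resolve(self, hostname):
        if hostname in self.overrides:
            return self.overrides[hostname]     # served from the caching server
        return self.upstream(hostname)
```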
20130268617 | DISTRIBUTED CACHE FOR STATE TRANSFER OPERATIONS - A network arrangement that employs a cache having copies distributed among a plurality of different locations. The cache stores state information for a session with any of the server devices so that it is accessible to at least one other server device. Using this arrangement, when a client device switches from a connection with a first server device to a connection with a second server device, the second server device can retrieve state information from the cache corresponding to the session between the client device and the first server device. The second server device can then use the retrieved state information to accept a session with the client device. | 10-10-2013 |
20130268618 | DETERMINING, AT LEAST IN PART, ONE OR MORE RESPECTIVE AMOUNTS OF BUFFER MEMORY FIELD - An embodiment may include determining at least one respective amount of buffer memory to be used to store at least one respective portion of network traffic. The determining may be based at least in part upon at least one respective parameter associated with the at least one respective network traffic portion. The at least one respective amount may be sufficient to store the at least one respective portion of the network traffic. The at least one respective parameter may reflect at least one actual characteristic of the at least one respective portion of the network traffic. This embodiment also may permit at least one respective portion of the buffer memory that may correspond to the at least one respective amount to be selectively powered-on to permit the at least one portion of the buffer memory to be used to store the at least one respective network traffic portion. | 10-10-2013 |
20130268619 | SERVER INCLUDING SWITCH CIRCUITRY - An embodiment may include at least one server processor that may control, at least in part, server switch circuitry data and control plane processing. The at least one processor may include at least one cache memory that is capable of being involved in at least one data transfer that involves at least one component of the server. The at least one data transfer may be carried out in a manner that bypasses involvement of server system memory. The switch circuitry may be communicatively coupled to the at least one processor and to at least one node via communication links. The at least one processor may select, at least in part, at least one communication protocol to be used by the links. The switch circuitry may forward, at least in part, via at least one of the links at least one received packet. Many modifications are possible. | 10-10-2013 |
20130275542 | Online Game System and Method - The present disclosure relates to a method in an online game system, comprising steps of providing a plurality of data files in a file directory of a file system, generating encoded file system information by encoding the plurality of data files and file directory information, storing the encoded file system information in a central memory device accessible by a plurality of client devices, and performing an online session in one of the plurality of client devices, the online session comprising steps of receiving a request for a data file of the plurality of data files, the request comprising request information identifying the data file, receiving at least part of the encoded file system information from the central memory device in a local memory device assigned to the client device, and determining whether the data file requested is available in the local memory device. | 10-17-2013 |
20130275543 | SYSTEMS AND METHODS FOR CACHING SNMP DATA IN MULTI-CORE AND CLUSTER SYSTEMS - The SNMP cache of the present solution supports a multi-core/multi-node environment by recalculating the SNMP ordering of the entities in the response from multiple cores/nodes at insertion time. The most significant gain is achieved by prefetching or augmenting the cache, wherein, while requesting an entity and its stat information, the next few entities in SNMP order are requested from the owner processes. SNMP management systems extensively utilize repeated GETNEXT requests (such as via an SNMP WALK), and the next few responses may be served from the cache directly. Further performance improvements are obtained by introducing another level of cache on top of the existing cache. This auxiliary cache ensures a high hit ratio for repeated SNMP GETNEXT requests (SNMP WALK operation) by caching the last accessed entity within the main cache. This auxiliary cache also aids insertion into the larger main cache by maintaining pointers to the last accessed entity before a main cache miss. The cache also implements other features, such as inclusion of new stats and updating of already cached entities. | 10-17-2013 |
20130275544 | Systems and Methods for Synchronizing Content Tables Between Routers - System and method embodiments for exchanging information between a first and second content router enable the content routers to synchronize their caches with a minimal exchange of information. In an embodiment, the method includes creating a hash of contents of a cache in the first content router using a joint hash function shared with the second content router, encoding the hash of contents of the cache in the first content router with distributed source coding, and transmitting the encoded hash to the second content router. | 10-17-2013 |
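The content-table synchronization entry above rests on comparing a joint hash of each router's cache contents instead of exchanging the tables themselves. The following Python sketch illustrates only that comparison, under stated assumptions: the sorted-join digest construction and function names are illustrative, and the distributed-source-coding step from the abstract is omitted.

```python
import hashlib


def cache_digest(content_names):
    """Order-independent digest of a cache's content table, computed with a
    hash function both routers share (sorted join is an illustrative choice)."""
    joined = "\n".join(sorted(content_names))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()


def caches_in_sync(names_a, names_b) -> bool:
    """Routers compare compact digests rather than full content tables."""
    return cache_digest(names_a) == cache_digest(names_b)
```

Only when the digests differ would the routers need to exchange further information to reconcile their caches.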
20130282852 | MULTI-FUNCTIONAL PORTABLE INFORMATION SHARING MANAGEMENT DEVICE - A multifunctional portable data sharing management device for sharing data between electronic devices is introduced. The device establishes an online connection (including a direct connection or a connection via the Internet) with the electronic devices, and stores data provided by any one of the electronic devices into a local storage unit, and a processing unit transmits and broadcasts the data to the electronic devices, such that the electronic devices can share the data similar to a localized cloud sharing mode. The processing unit further performs a data conversion of the data according to a data code to achieve the effect of using the converted data freely in the electronic devices without the data code. In addition, the multifunctional portable data sharing management device can be connected to a cloud server on the Internet to share the data. | 10-24-2013 |
20130282853 | APPARATUS AND METHOD FOR PROCESSING DATA IN MIDDLEWARE FOR DATA DISTRIBUTION SERVICE - The present invention relates to an apparatus and method that are capable of optimizing the overall performance of DDS middleware for processing data by managing network threads, writer/reader threads, and memory resources. For this, an apparatus for processing data in middleware for DDS includes a network thread management module for managing, using a thread pool, a network thread which has sockets for transmitting or receiving data to or from a network in an RTPS layer. A lock-free queue management module manages a lock-free queue which has a lock-free function and which transmits or receives the data to or from the network thread. A writer/reader thread management module manages a writer thread and a reader thread so that the writer thread or the reader thread transmits or receives the data to or from the lock-free queue and performs a behavior in the RTPS layer. | 10-24-2013 |
20130282854 | NODE AND METHOD FOR GENERATING SHORTENED NAME ROBUST AGAINST CHANGE IN HIERARCHICAL NAME IN CONTENT-CENTRIC NETWORK (CCN) - A node and a method for generating a shortened name robust against a change in a hierarchical name in a Content-Centric Network (CCN) are provided. The method includes receiving a packet requesting content, the packet including a hierarchical name of the content, and determining whether a prefix of the hierarchical name is identical to the name of the node. The method further includes generating a shortened name by removing the prefix from the hierarchical name if the prefix is identical to the name of the node, and changing the hierarchical name to the shortened name. The shortened name is used to check whether the corresponding content is stored in the content cache, to check whether the same content-request packet is already under processing, and to decide the outgoing face to which the content-request packet is transmitted. | 10-24-2013 |
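The prefix-removal step in the CCN entry above amounts to string manipulation on '/'-delimited hierarchical names. A minimal sketch of that step only; the function name and name format are assumptions, not taken from the patent.

```python
def shorten_name(hierarchical_name: str, node_name: str) -> str:
    """Strip the node's own name prefix from a hierarchical content name,
    leaving the name unchanged when the prefix does not match."""
    prefix = node_name.rstrip("/") + "/"
    if hierarchical_name.startswith(prefix):
        return hierarchical_name[len(prefix):]
    return hierarchical_name  # prefix not identical to the node name
```

The shortened name can then serve as the key for content-cache and pending-request lookups at that node.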
20130282855 | CACHE DEVICE, CACHE CONTROL DEVICE, AND METHODS FOR DETECTING HANDOVER - For a user terminal connected to one of the cache devices distributed in a network and receiving contents, terminal access information including terminal address information about the user terminal and identification information of the cache device is stored and managed. If a content retransmission request message of the user terminal is detected at another cache device, this is regarded as a handover of the user terminal. This allows simple detection of a handover that occurs during content transmission. | 10-24-2013 |
20130282856 | REMOTE ACCESS OF MEDIA ITEMS - Methods and systems that facilitate the downloading of media items to a first network device from a second network device are disclosed. A plurality of media items are identified. Media item metadata associated with the plurality of media items is obtained from the second network device and stored on the first network device. Media item content data associated with a first subset of the plurality of media items is obtained from the second network device and stored on the first network device. In this manner, only media item metadata associated with a second subset of the plurality of media items is stored on the first network device. | 10-24-2013 |
20130290462 | DATA CACHING USING LOCAL AND REMOTE MEMORY - A system and method for retrieving cached data are disclosed herein. The system includes a cache server including a local memory and a table residing on the local memory, wherein the table is used to identify data objects corresponding to cached data. The system also includes the data objects residing on the local memory, wherein the data objects contain pointers to the cached data. The system further includes a remote memory communicatively coupled to the cache server through an Input-Output (I/O) connection, wherein the cached data resides on the remote memory. | 10-31-2013 |
20130290463 | STORAGE FABRIC ADDRESS BASED DATA BLOCK RETRIEVAL - Techniques for retrieving data blocks are provided. In one aspect, a storage fabric address of a controller associated with a data block is retrieved by a node. If the node is on the same storage fabric as the retrieved address, the data block may be retrieved over the storage fabric. In another aspect, a directory server maintains mappings of data blocks to storage fabric addresses of controllers associated with the data blocks. A request for the location of the data block includes the storage fabric address of the associated controller. | 10-31-2013 |
20130290464 | System and Method for Socially Organized Storage and Shared Access to Storage Appliances - In various embodiments, the present invention relates to systems and methods for managing user data in a plurality of storage appliances coupled to a wide area network. In some embodiments, the present invention relates to systems and methods that allow users to view and manipulate files in a shared virtual container. In other embodiments, the present invention also relates to systems and methods that allow users to access virtual containers located on storage appliances that are owned by other users. | 10-31-2013 |
20130290465 | SYSTEM AND METHOD FOR PROXY MEDIA CACHING - Systems and methods for proxy media caching are disclosed. A method in accordance with an embodiment of the invention includes receiving at a proxy a response to a request for media content, generating a fingerprint from a sample of media content contained in the response, searching a cache using the fingerprint, and if a cache hit occurs, causing cached media content, which is associated with the cache hit, to be sent to the client device. | 10-31-2013 |
20130290466 | METHOD OF PROVIDING CONTENT DURING HAND-OVER AND APPARATUS THEREFOR - The present invention relates to a method and apparatus for providing mobile content to seamlessly transmit content to a mobile node, even during a hand-over of the mobile node, through local caching devices distributed in a mobile network. | 10-31-2013 |
20130290467 | Balancing Caching Load In A Peer-To-Peer Based Network File System - Systems and techniques relating to network file systems for balancing caching load in peer-to-peer based network file systems are described. In one aspect, a method includes maintaining, by a cluster containing two or more computer systems, information about files cached at a network that includes three or more computer systems configured to cache data associated with a file server system. The method also includes receiving, from one of the computer systems of the network, a request to identify at least one computer system of the network that caches a specified file. Further, the method includes identifying, by the cluster in response to the received request, one or more computer systems of the network that cache the specified file based on the maintained information. Furthermore, the method includes providing, by the cluster to the requesting computer system, information referencing at least the identified one or more computer systems of the network. | 10-31-2013 |
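The directory role that the cluster plays in the load-balancing entry above reduces to maintaining a map from file names to the network nodes caching each file, and answering lookups against it. An illustrative Python sketch under assumptions: the class shape and method names are hypothetical, and the actual system's replication and balancing logic is not shown.

```python
class CacheDirectory:
    """Cluster-maintained map from a file name to the nodes caching it."""

    def __init__(self):
        self._cachers = {}

    def record(self, filename: str, node: str) -> None:
        """Note that a node now caches the given file."""
        self._cachers.setdefault(filename, set()).add(node)

    def lookup(self, filename: str) -> set:
        """Return the nodes known to cache the file (empty set if none)."""
        return set(self._cachers.get(filename, set()))
```

A requesting computer system would consult the directory first and fetch from a peer only when the lookup is non-empty, falling back to the file server otherwise.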
20130290468 | Methods and Apparatus to Migrate Virtual Machines Between Distributive Computing Networks Across a Wide Area Network - Methods and apparatus to migrate virtual machines between distributive computing networks across a network are disclosed. A disclosed example method includes establishing a data link across a network between a first distributive computing network and a second distributive computing network, the first distributive computing network including a virtual machine operated by a first host communicatively coupled to a virtual private network via a first virtual local area network, communicatively coupling a second host included within the second distributive computing network to the virtual private network via a second virtual local area network, and migrating the virtual machine via the data link by transmitting a memory state of at least one application on the first host to the second host while the at least one application is operating. | 10-31-2013 |
20130290469 | ANTICIPATORY RESPONSE PRE-CACHING - Interaction between a client and a service in which the service responds to requests from the client. In addition to responding to specific client requests, the service also anticipates or speculates about what the client may request in the future. Rather than await the client request (which may or may not ultimately be made), the service provides the unrequested anticipatory data to the client in the same data stream as the response data that actually responds to the specific client requests. The client may then use the anticipatory data to fully or partially respond to future requests, if the client does make the request anticipated by the service. Thus, in some cases, latency may be reduced when responding to requests for which anticipatory data has already been provided. The service may give priority to the actually requested data, and give secondary priority to the anticipatory data. | 10-31-2013 |
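The anticipatory pre-caching entry above pairs each actual response with speculative extras in a single stream, requested data first. A toy Python sketch under assumptions: the stream is modeled as a list of tagged tuples, and `predict` stands in for a hypothetical server-side guesser not specified by the abstract.

```python
def serve(request_key, data_store, predict):
    """Respond with the requested item first (priority), then append
    anticipatory items the service speculates the client will want."""
    response = [("requested", request_key, data_store[request_key])]
    for key in predict(request_key):
        if key in data_store:
            response.append(("anticipatory", key, data_store[key]))
    return response


def client_receive(stream, local_cache):
    """Cache every item so later requests may be answered locally,
    and return the value that answers the actual request."""
    requested = None
    for kind, key, value in stream:
        local_cache[key] = value
        if kind == "requested":
            requested = value
    return requested
```

A later request for an anticipated item then hits the client's local cache, which is where the latency reduction comes from.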
20130297717 | CONTENT MANAGEMENT - A system and method for management and processing of resource requests is provided. A content delivery network service provider determines a class associated with a set of client computing devices and monitors resources requests for the determined class. The content delivery network service provider then identifies at least one cache component for providing additional content, such as advertisement content or other additional content provided in anticipation of future resource requests, to client computing devices as a function of the determined class. In other embodiments, instead of cache components, the content delivery network service provider identifies a second set of client computing devices as a function of the determined class for providing the additional content information. | 11-07-2013 |
20130297718 | SERVER DEVICE, CLIENT DEVICE, DATA SHARING SYSTEM AND METHOD FOR SHARING DATA BETWEEN CLIENT DEVICE AND SERVER DEVICE THEREOF - A server device, data sharing system, and data sharing method thereof are provided. In the data sharing method, the server device receives from a client device a request for communication, determines the reliability of the client device, and, when the client device is determined to be reliable, emulates a virtual USB device which includes a large-capacity device, transmits to the client device a USB control message used to access the large-capacity device of the virtual USB device, and, when the client device is authenticated through the USB control message, shares data with the client device. | 11-07-2013 |
20130304842 | Endpoint Caching for Data Storage Systems - A data storage system including a central storage system, at least one endpoint computer system in network communication with the central storage system by a network infrastructure, and a storage accelerator in communication with a CPU of the computer system, wherein the storage accelerator provides endpoint caching of data on the central storage system that is accessible to the at least one endpoint computer. Preferably, the storage accelerator is positioned at a location where a throughput of data from the CPU to the storage accelerator is greater than the throughput of data through a connection from the CPU to the central storage system. | 11-14-2013 |
20130304843 | SYSTEM AND METHOD FOR PROVIDING VIRTUAL WEB ACCESS - A client-based computer system adapted to communicate with a remote server through a network and to provide access to content or services provided by the server. The system includes a storage device and a cache. The cache is adapted to communicate with the server over the network, to intercept a request from the client to the server, and to store responses from the server on the storage device. The cache is further adapted to automatically determine when to send the request to the server over the network. The cache is still further adapted to provide a response, drawn from the responses stored on the storage device based upon the request, so as to appear as though the server provided the response. The system may also include a crawler. The crawler is adapted to operate in conjunction with the cache to cause requests to be sent to the server over the network. | 11-14-2013 |
20130311593 | INCORPORATING WEB APPLICATIONS INTO WEB PAGES AT THE NETWORK LEVEL - A proxy server automatically includes web applications in web pages at the network level. The proxy server receives, from a client device, a request for a network resource at a domain and is hosted at an origin server. The proxy server retrieves the requested network resource. The retrieved network resource does not include the web applications. The proxy server determines that the web applications are to be installed within the network resource. The proxy server automatically modifies the retrieved network resource to include the web applications. The proxy server transmits a response to the client device that includes the modified network resource. The network resource may remain unchanged at the origin server. | 11-21-2013 |
20130311594 | MOBILE DEVICE AND METHOD FOR PRESERVING SESSION STATE INFORMATION WITHOUT A NEED FOR TRANSMISSION OF MESSAGES IN A WIRELESS NETWORK - Systems and methods for management of a network connection without heartbeat messages are disclosed. One embodiment of a distributed proxy system performs a method for the communication of state between a client and a server in a distributed content delivery network using a state map. The state map sets a predicted communication correspondence frequency and thus eliminates the use or need of heart beat messages to manage session state and/or convey health status of system components. | 11-21-2013 |
20130318191 | TIME-BASED DATA CACHING - A system is configured to receive, by a first server, a request, from a user device, for a first record stored by a cache associated with the first server, determine, a first timestamp associated with the first record, determine that the first record is invalid based on the first timestamp, and determine, based on determining that the first record is invalid, whether the first record is out of date with respect to a corresponding second record stored by a second server by comparing a second timestamp of the first record with a timestamp of the second record. The system is further configured to update the first record with information from the second record to form an updated first record when the first record is out of date, and to send the updated first record to the user device associated with the request. | 11-28-2013 |
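The time-based caching entry above describes a two-step check: a TTL test on the cached record's first timestamp, then, only if invalid, a freshness comparison against the second server's copy. The outline below is a hedged Python sketch; the record fields (`cached_at`, `updated_at`) and the dict representation are assumptions, not the patented format.

```python
import time


def record_is_valid(record, ttl_seconds, now=None):
    """A cached record is valid while its cache timestamp is within the TTL."""
    now = time.time() if now is None else now
    return (now - record["cached_at"]) <= ttl_seconds


def resolve(cached, origin, ttl_seconds, now=None):
    """Return the cached record while valid; when invalid, refresh it from
    the second server's record only if the cache copy is out of date."""
    if record_is_valid(cached, ttl_seconds, now):
        return cached
    if cached["updated_at"] < origin["updated_at"]:
        # Out of date: take the origin's data and restamp the cache time.
        cached = dict(origin, cached_at=time.time() if now is None else now)
    return cached
```

Note that an invalid-but-current record is returned as-is, which matches the abstract's distinction between "invalid" (TTL expired) and "out of date" (older than the second server's record).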
20130318192 | COMPOSITE GRAPH CACHE MANAGEMENT - Methods, systems, and computer program products for synchronizing data between a mobile application and an enterprise data source are provided. A computer-implemented method may include receiving a request for data from an application executing on a mobile device, receiving a document including the requested data from a data source where the document represents a graph of data having a plurality of nodes, and providing a partial graph of data to the application where the partial graph is derived from the received document and at least includes the requested data. | 11-28-2013 |
20130318193 | METHOD AND APPARATUS FOR MANAGING CONTENT AND ASSOCIATED INFORMATION SOURCED FROM MULTIPLE PLATFORMS - An approach is provided for managing content and associated information sourced from multiple platforms. A dynamic information management platform determines a request to present one or more content items. The one or more content items include inventory for presenting associated information. The dynamic information management platform determines at least one platform from among a plurality of platforms based, at least in part, on the one or more content items, metadata associated with the one or more content items, or a combination thereof. The plurality of platforms is associated with at least one common service. The dynamic information management platform then determines the information associated with the one or more content items from the at least one platform in either an online or offline mode of operation. | 11-28-2013 |
20130318194 | Micro-Staging Device and Method for Micro-Staging - A micro-staging device has a wireless interface module for detecting a first data request that indicates a presence of a user and an application processor that establishes a network connection to a remote data center. The micro-staging device further allocates a portion of storage in a cache memory storage device for storing pre-fetched workflow data objects associated with the detected user. | 11-28-2013 |
20130318195 | ADAPTIVE ROUTING OF CONTENT REQUESTS USING MULTIPLE ANYCAST ADDRESSES - A system includes a plurality of cache servers and a domain name server. Each of the cache servers is configured to respond to a content request. The plurality of cache servers is divided into a plurality of subsets and configured to respond to an anycast address for each subset to which the cache server belongs. The domain name server is configured to receive a request from a requestor for a cache server address, identify an anycast address for a largest available subset, and provide the anycast address of the largest available subset to the requestor. | 11-28-2013 |
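The "largest available subset" selection in the anycast entry above is a one-line decision once the domain name server knows which cache servers are up in each subset. An illustrative sketch; the mapping shape (anycast address to set of available servers) is an assumption.

```python
def pick_anycast_address(subsets):
    """Given {anycast_address: available_cache_servers}, return the
    anycast address of the largest available subset."""
    return max(subsets, key=lambda address: len(subsets[address]))
```

The returned anycast address is what the DNS server would hand back to the requestor, letting routing deliver the request to the nearest member of that subset.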
20130325999 | INFORMATION-PROCESSING SYSTEM, INFORMATION-PROCESSING DEVICE, INFORMATION-PROCESSING METHOD, AND STORAGE MEDIUM - An exemplary information-processing system includes: a storage unit configured to store identification information of one or more other users; an execution unit configured to execute at least one of a plurality of programs including a first program for accessing an information sharing service and a second program that differs from the first program; a first registration unit configured to register, in the storage unit, identification information of another user, through execution of the first program by the execution unit; and a second registration unit configured to register, in the storage unit, identification information of another user, through execution of the second program by the execution unit. | 12-05-2013 |
20130326000 | NUMA-AWARE SCALING FOR NETWORK DEVICES - The present disclosure describes a method and apparatus for network traffic processing in a non-uniform memory access architecture system. The method includes allocating a Tx/Rx Queue pair for a node, the Tx/Rx Queue pair allocated in a local memory of the node. The method further includes routing network traffic to the allocated Tx/Rx Queue pair. The method may include designating a core in the node for network traffic processing. Of course, many alternatives, variations and modifications are possible without departing from this embodiment. | 12-05-2013 |
20130326001 | GENERIC PERSISTENCE IN A DIAMETER ROUTING AGENT - Various exemplary embodiments relate to a method and related network node including one or more of the following: receiving a first Diameter message at the DRA; evaluating a first rule, including accessing data from a generic binding context object including: locating a record associated with a key specified by the first rule, and retrieving the data from the record; and transmitting a message based on the evaluation of the first rule. Various embodiments additionally relate to receiving a second Diameter message at the DRA; and evaluating a second rule, including accessing the generic binding context object, including storing the data in the record associated with the key. | 12-05-2013 |
20130339468 | NON-VOLATILE MEMORY PHYSICAL NETWORKS - A method for communication between computing devices includes identifying the parameters of a data transfer between a source computing device and a target computing device and identifying communication paths between the source computing device and the target computing device, in which at least one of the communication paths is a physical network. A communication path is selected for the data transfer. When a data transfer over the physical network is selected as a communication path, a nonvolatile memory (NVM) unit is removed from the source computing device and placed in a cartridge, and the cartridge is programmed with transfer information. The NVM unit and cartridge are transported through the physical network to the target computing device according to the transfer information, and the NVM unit is electrically connected to the target computing device. | 12-19-2013 |
20130339469 | SELF-REPLENISHING CACHES - Various embodiments pertain to self-replenishing caches. In various embodiments, a cache on a client device automatically updates without intervention from a user and without data calls from the executable. In other words, the cache can be configured to automatically update without the executable retrieving the content from the content server and causing the content to be displayed to the user. For example, when the executable causes a different background image to be displayed each day, background images for days that a user did not interact with the executable can be cached and will be accessible to a user upon the user's next interaction with the executable. In various embodiments, the cache is configured to poll the content server on a periodic basis effective to retrieve a current version of the content. | 12-19-2013 |
20130339470 | Distributed Image Cache For Servicing Virtual Resource Requests in the Cloud - A method of provisioning in a cloud compute environment having a set of cloud hosts associated with one another. The method begins by forming a distributed, cooperative cache across the set of cloud hosts by declaring a portion of a data store associated with a cloud host as a cache, and storing template images and patches in the cache. Caching activity across the distributed, cooperative cache is coordinated by having the caches share information about their respective contents. A control routine at a cache receives requests for template images or patches, responds to the requests if the requested artifacts are available or, upon a cache miss, forwards the request to another one of the caches. Periodically, the composition of the distributed, cooperative cache is computed, and the template images and patches are populated into the caches using the computed cache composition. | 12-19-2013 |
20130339471 | SYSTEM AND METHOD FOR QUICK-LINKING USER INTERFACE JOBS ACROSS SERVICES BASED ON SYSTEM IMPLEMENTATION INFORMATION - Systems and methods are provided for a data management virtualization display. A set of services is stored that includes a set of user interfaces. Each service can communicate with the remaining services using a shared services cache. A request is received to perform a data management virtualization job that, without knowledge of a profile associated with the data management virtualization system, the set of subsystems, or both, would require a user of the data management virtualization system to manually navigate through a sequence of webpages across two or more services in the set of services. A quick link for the data management virtualization job is defined based on the profile associated with the data management virtualization system, the set of subsystems, or both, using the shared services cache, wherein the quick link eliminates one or more of the manual navigations of the data management virtualization job. | 12-19-2013 |
20130346532 | VIRTUAL SHARED STORAGE IN A CLUSTER - The present invention minimizes the cost of establishing a cluster that utilizes shared storage by creating a storage namespace within the cluster that makes each storage device, which is physically connected to any of the nodes in the cluster, appear to be physically connected to all nodes in the cluster. A virtual host bus adapter (VHBA) is executed on each node, and is used to create the storage namespace. Each VHBA determines which storage devices are physically connected to the node on which the VHBA executes, as well as each storage device that is physically connected to each of the other nodes. All storage devices determined in this manner are aggregated into the storage namespace which is then presented to the operating system on each node so as to provide the illusion that all storage devices in the storage namespace are physically connected to each node. | 12-26-2013 |
20130346533 | NEAR-REAL TIME DISTRIBUTED USAGE AGGREGATION SYSTEM - Gathering tenant usage data of server resources. A method includes a server in a cluster providing server resources for one or more tenants of the server. Data is stored in a local usage cache at the server. The data characterizes the resources provided to the one or more tenants of the server. At the server, data stored in the local usage cache is aggregated on a tenant basis, such that data is aggregated for given tenants. The aggregated data is sent to a distributed cache. At the server, aggregated data from other servers in the cluster is received from the distributed cache. The aggregated data from other servers in the cluster is globally aggregated and stored at an aggregated usage cache at the server in the globally aggregated form. | 12-26-2013 |
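The usage-aggregation entry above describes two stages: per-tenant aggregation in each server's local usage cache, followed by a global merge of the aggregates received from every server via the distributed cache. Both stages can be sketched as follows; the names and data shapes are illustrative, not taken from the patent.

```python
from collections import defaultdict


def aggregate_local(usage_events):
    """Aggregate raw usage samples on a per-tenant basis (local cache step)."""
    totals = defaultdict(int)
    for tenant, amount in usage_events:
        totals[tenant] += amount
    return dict(totals)


def aggregate_global(per_server_totals):
    """Merge per-tenant aggregates received from every server in the cluster."""
    merged = defaultdict(int)
    for totals in per_server_totals:
        for tenant, amount in totals.items():
            merged[tenant] += amount
    return dict(merged)
```

Each server would hold the output of `aggregate_global` in its aggregated usage cache, giving every node a near-real-time cluster-wide view per tenant.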
20130346534 | POINT OF PRESENCE MANAGEMENT IN REQUEST ROUTING - A system and method for the management of client computing device DNS queries and subsequent resource requests within a content delivery network service provider domain are provided. The management of the DNS queries can include the selection of computing devices corresponding to various Point of Presence locations for processing DNS queries. Additionally, the management of the content requests can include the selection of computing devices corresponding to resource cache components corresponding to various Point of Presence locations for providing requested content. The selection of the computing devices can incorporate logic related to geographic criteria, performance threshold criteria, testing criteria, and the like. | 12-26-2013 |
20130346535 | COMMON WEB ACCESSIBLE DATA STORE FOR CLIENT SIDE PAGE PROCESSING - Embodiments of the present invention provide a method, system and computer program product for shared data storage in page processing over a computer communications network. In an embodiment of the invention, a method of shared data storage has been provided for page processing over a computer communications network. The method can include registering a content browser executing in memory of a computer with a remote storage service and receiving content from a content server over the computer communications network. The method additionally can include invoking in the content browser an instance of a localStorage object to cache data associated with the content according to a unique key. Thereafter, in response to the invocation of the instance of the localStorage object, the data can be stored in the remote storage service in reference to the unique key. | 12-26-2013 |
20130346536 | WEB STORAGE OPTIMIZATION - Embodiments of the present invention provide a method, system and computer program product for Web storage optimization and cache management. In one embodiment, a method of client side cache management using Web storage can include first registering a client browser session in a content browser as a listener to events for Web storage for a particular domain. Subsequently, notification can be received from the content browser of an event of a different client browser session associated with the Web storage. For instance, the notification can result from the different client browser adding a new cache entry to the Web storage, or from the different client browser periodically at a specified time interval indicating a state of one or more cache entries in the Web storage. Finally, in response to the notification, a cache entry in the Web storage can be invalidated, such as through cache entry removal or compression. | 12-26-2013 |
20130346537 | STORAGE OPTIMIZATION TECHNOLOGY - A rule-based system for utilizing available storage in combination with arbitrary transformations, such as compression or encryption, within an email system is disclosed herein. The system may include an event-based storage of messages in specific tiers of storage based on a subscriber's class-of-service, attributes of the message, or attributes of the attachments. An automated or administrator directed application of storage rules over an existing mailbox or set of mailboxes may also be implemented. A plurality of storage locations may be included in the system, and each may be associated with at least one of a type, protocol, or transformation to be applied. | 12-26-2013 |
20130346538 | MANAGING CACHE MEMORIES - A method for managing cache memories includes providing a computerized system including a shared data storage system (CS) configured to interact with several local servers that serve applications using respective cache memories, and access data stored in the shared data storage system; providing cache data information from each of the local servers to the shared data storage system, the cache data information comprising cache hit data representative of cache hits of each of the local servers, and cache miss data representative of cache misses of each of the local servers; aggregating, at the shared data storage system, at least part of the cache hit and miss data received and providing the aggregated cache data information to one or more of the local servers; and at the local servers, updating respective one or more cache memories used to serve respective one or more applications based on the aggregated cache data information. | 12-26-2013 |
20130346539 | CLIENT SIDE CACHE MANAGEMENT - A system, method and computer-readable medium for client-side cache management are provided. A client request for content is returned that includes executable code for generating a request for preload information. Based on processing the executable code, a client computing device requests preload information from a content delivery service provider. The content delivery service provider provides an identification of content based on resource requests previously served by the content delivery service provider. The client computing device processes the preload information and generates and obtains identified resources for maintenance in a client computing device memory, such as cache. | 12-26-2013 |
20140006537 | HIGH SPEED RECORD AND PLAYBACK SYSTEM | 01-02-2014 |
20140006538 | Intelligent Client-Side Caching On Mobile Devices | 01-02-2014 |
20140006539 | CACHE CONTROL FOR WEB APPLICATION RESOURCES | 01-02-2014 |
20140006540 | Cloud Storage and Processing System for Mobile Devices, Cellular Phones, and Smart Devices | 01-02-2014 |
20140006541 | PERSISTENT MESSAGING | 01-02-2014 |
20140006542 | RECURSIVE ASCENT NETWORK LINK FAILURE NOTIFICATIONS | 01-02-2014 |
20140006543 | DISTRIBUTED FILESYSTEM ATOMIC FLUSH TRANSACTIONS | 01-02-2014 |
20140006544 | NETWORK BASED STORAGE AND ACCOUNTS | 01-02-2014 |
20140012936 | COMPUTER SYSTEM, CACHE CONTROL METHOD AND COMPUTER PROGRAM - The first application program and/or the second application program send(s) an access request to the second cache management module. The second cache management module receives the access request from the first application program and/or the second application program, and references the second cache management table to identify the storage location of the access-target data conforming to the access request. When the access-target data exists in the first cache area, the second cache management module sends a data transfer request to the first cache management module storing the access-target data, and where the access-target data does not exist in the first cache area, acquires the access-target data from the second storage device. When the access-target data is in the first cache area, the first cache management module acquires the access-target data conforming to the data transfer request from the relevant first cache area, and sends the access-target data to the second cache management module. | 01-09-2014 |
20140012937 | REMOTELY CACHEABLE VARIABLE WEB CONTENT - A method for caching targeted webpage content is disclosed. In one embodiment, such a method includes dividing a cacheable content pertaining to a website into a static portion and a dynamic frame for displaying visitor targeted content. The method determines a result for one or more targeting rules applied to a visitor's activity on a portion of the website and provides the result to the visitor's browser. The method further includes loading the dynamic frame of the cacheable content with visitor targeted content based on the provided result. A corresponding apparatus and computer program product are also disclosed. | 01-09-2014 |
20140012938 | PREVENTING RACE CONDITION FROM CAUSING STALE DATA ITEMS IN CACHE - A data cache server may process requests from a data cache client to put, get, and delete data items into or from the data cache server. Each data item may be based on data in a data store. In response to each request to put a data item into the data cache server, the data cache server may determine whether any of the data in the data store on which the data item is based has changed or may have changed; put the data item into the data cache memory if none of that data has or may have changed; and not put the data item into the data cache memory if any of that data has or may have changed. | 01-09-2014 |
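One common way to realize the check this abstract describes is to version the backing data and reject a put whose value was read against a stale version. The sketch below is an illustrative sketch of that idea, not the patented implementation; the class and method names are assumptions.

```python
class DataCacheServer:
    """Sketch: refuse to cache a value when the data-store data behind it
    has (or may have) changed since the client read it."""

    def __init__(self):
        self._cache = {}
        self._store_versions = {}  # current version of each key's backing data

    def notify_store_change(self, key):
        # Called whenever the data store updates the data behind `key`.
        self._store_versions[key] = self._store_versions.get(key, 0) + 1

    def put(self, key, value, version_read):
        # Only cache if the store has not changed since the client read it.
        if self._store_versions.get(key, 0) != version_read:
            return False  # data has or may have changed; do not cache
        self._cache[key] = value
        return True

    def get(self, key):
        return self._cache.get(key)
```

A put racing with a store update thus fails the version comparison and is dropped, so a stale value never overwrites a fresher one in the cache.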
20140012939 | METHOD FOR PROVIDING RESOURCES BY A TERMINAL, AND METHOD FOR ACQUIRING RESOURCES BY A SERVER - A method for caching a DM tree includes determining whether a cache validator of a first type for a first resource exists, wherein the cache validator of the first type is directly used for the first resource; determining whether a cache validator of a second type for the first resource exists, wherein the cache validator of the second type is used for a second resource including the first resource, when the cache validator of the first type for the first resource does not exist; and transmitting a request for the first resource to the device management client using an identifier for the first resource, an identifier for the second resource and the cache validator for the second resource, when the cache validator of the second type for the first resource exists. | 01-09-2014 |
20140019575 | Maintaining Client-Side Persistent Data using Caching - Non-cookie methods for distinguishing among web-server clients (browsers) use personalized information stored in the browser's cache. The information may be extracted by programs, such as JavaScript programs, executing at the client side; or by sending resource data to cause the client to report the personalized information to the server in conjunction with a resource request. | 01-16-2014 |
20140019576 | INTELLIGENT EDGE CACHING - Disclosed is a program for pre-fetching resources. A computer, communicatively coupled to a plurality of client computers and a server computer, identifies a resource, through an examination of one or more HTTP server logs, that is cached on at least one of the plurality of client computers and has been validated by the server computer. The computer determines to pre-fetch the resource based on one or more predefined rules, at least one of the predefined rules including a threshold number of responses validating the resource that must be received by the computer. The computer pre-fetches and caches the resource from the server computer. The computer receives a request for the resource from a client computer that does not have the resource cached. The computer validates the locally cached resource and sends the resource to the client computer from the local cache on the computer. | 01-16-2014 |
20140019577 | INTELLIGENT EDGE CACHING - Disclosed is a program for pre-fetching resources. A computer, communicatively coupled to a plurality of client computers and a server computer, identifies a resource, through an examination of one or more HTTP server logs, that is cached on at least one of the plurality of client computers and has been validated by the server computer. The computer determines to pre-fetch the resource based on one or more predefined rules, at least one of the predefined rules including a threshold number of responses validating the resource that must be received by the computer. The computer pre-fetches and caches the resource from the server computer. The computer receives a request for the resource from a client computer that does not have the resource cached. The computer validates the locally cached resource and sends the resource to the client computer from the local cache on the computer. | 01-16-2014 |
20140019578 | WIRELESS COMMUNICATION SYSTEM AND METHOD FOR TRANSMITTING CONTENT IN WIRELESS COMMUNICATION SYSTEM - The present invention relates to a wireless communication system and a method for managing a cache server in the wireless communication system. The invention includes a step for checking for a regional cache server to transmit content when a DNS (Domain Name System) message requesting content is received from a terminal, and a step for transmitting the content from the cache server according to a content request message received from the terminal. These steps can prevent the same data from being transmitted several times through a wireless communication network. Therefore, network usage associated with the service is reduced, and the network can be used more efficiently. | 01-16-2014 |
20140025769 | INTELLIGENT CACHING OF CONTENT ITEMS - Systems, methods, and computer-readable media for intelligent caching of content items are provided. A content item may be received by a caching device from a content provider based at least in part on a first request from a user. The caching device may determine a content viewing profile. The caching device may direct storage of the received content item for later retrieval. Additionally, the caching device may provide the stored content item to the user in response to a second request for the content. | 01-23-2014 |
20140025770 | SYSTEMS, METHODS AND DEVICES FOR INTEGRATING END-HOST AND NETWORK RESOURCES IN DISTRIBUTED MEMORY - Systems, methods and devices for distributed memory management comprising a network component configured for network communication with one or more memory resources that store data and one or more consumer devices that use data, the network component comprising a switching device in operative communication with a mapping resource, wherein the mapping resource is configured to associate mappings between data addresses associated with memory requests from a consumer device relating to a data object and information relating to a storage location in the one or more memory resources associated with the data from the data object, wherein each data address has contained therein identification information for identifying the data from the data object associated with that data address; and the switching device is configured to route memory requests based on the mappings. | 01-23-2014 |
20140025771 | DATA TRANSFERRING APPARATUS, DATA TRANSMITTING SYSTEM, DATA TRANSMITTING METHOD AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a data transferring apparatus is connected to a control device and transmits data stored in a memory device in units of blocks to a network. The apparatus includes a command issuing unit, a transmission data extracting unit, and a communication processing unit. The command issuing unit is configured to issue, to the memory device, a read command for reading a block for which a read instruction is given from the control device. The transmission data extracting unit is configured to extract transmission data from data read from the memory device according to the read command. The communication processing unit is configured to transmit the extracted transmission data to the network based on a predetermined protocol. | 01-23-2014 |
20140025772 | Connection Rate Limiting For Server Load Balancing And Transparent Cache Switching - Each service in a computer network may have a connection rate limit. The number of new connections per time period may be limited by using a series of rules. In a specific embodiment of the present invention, a counter is increased each time a server is selected to handle a connection request. For each service, connections coming in are tracked. Therefore, the source of connection-request packets need not be examined. Only the destination service is important. This saves significant time in the examination of the incoming requests. Each service may have its own set of rules to best handle the new traffic for its particular situation. For server load balancing, a reset may be sent to the source address of the new connection request. For transparent cache switching, the connection request may be forwarded to the Internet. | 01-23-2014 |
20140032698 | Utilize Extra Web Semantic for Video Caching - Semantic data corresponding to video data may be received. Next, the received semantic data corresponding to the video data may be analyzed. Caching decisions may then be made based upon the analysis of the received semantic data corresponding to the video data. | 01-30-2014 |
20140032699 | REMOTE USER INTERFACE IN A TERMINAL SERVER ENVIRONMENT - Methods, apparatus, systems and computer program product for updating a user session in a terminal server environment. Transfer of display data corresponding to an updated user interface can occur via a memory shared between an agent server and an agent client in a terminal server environment. Access to the shared memory can be synchronized via token passing or other operation to prevent simultaneous access to the shared memory. Token sharing and synchronized input/output can be performed using FIFOs, sockets, files, semaphores and the like, allowing communications between the agent server and agent client communications to adapt to different operating system architecture. | 01-30-2014 |
20140032700 | DATA PROCESSING METHOD AND MOBILE TERMINAL - A data processing method is executed by a first device, and includes suspending execution of a first process by the first device that belongs to a first device group that includes plural devices; saving based on a request for execution of a second process from a second device that belongs to a second device group that includes plural devices, process information of the first process to shared memory that is set in each of the devices of the first device group and shared by the devices of the first device group; and releasing the saving of the process information of the first process consequent to completion of the execution of the second process. | 01-30-2014 |
20140032701 | MEMORY NETWORK METHODS, APPARATUS, AND SYSTEMS - Apparatus and systems may include a first node group including a first network node coupled to a memory, the first network node including a first port, a second port, a processor port, and a hop port. The node group may include a second network node coupled to a memory, the second network node including a first port, a second port, a processor port, and a hop port, the hop port of the second network node coupled to the hop port of the first network node and configured to communicate between the first network node and the second network node. The node group may include a processor coupled to the processor port of the first network node and coupled to the processor port of the second network node, the processor configured to access the first memory through the first network node and the second memory through the second network node. Other apparatus, systems, and methods are disclosed. | 01-30-2014 |
20140032702 | CONTENT DISTRIBUTION SYSTEM, CONTROL APPARATUS, AND CONTENT DISTRIBUTION METHOD - A control apparatus calculates an access frequency for a content item stored in each of a plurality of cache servers temporarily holding the content item based on a number of accesses to the content item, determines an arrangement of content items to the plurality of cache servers using at least one of load status of the plurality of cache servers, topology information of the mobile network, location information of a terminal requesting a content item, and the access frequency, instructs the plurality of cache servers to hold a content item according to the determined arrangement, and, upon reception of a request for a content item from a terminal, instructs a cache server holding the requested content item among the plurality of cache servers to send the requested content item via a packet forwarding apparatus. | 01-30-2014 |
20140040412 | DELIVERING CONTENT TO ELECTRONIC DEVICES USING LOCAL CACHING SERVERS - The disclosed embodiments provide a system that delivers content to an electronic device. The system includes a content provider that obtains a public address of the electronic device from a first request for the content from the electronic device. Next, the content provider uses the public address to identify a local caching server on a local area network (LAN) of the electronic device. Finally, the content provider provides a local address of the local caching server to the electronic device, wherein the local address is used by the electronic device to obtain the content from the local caching server and the LAN without accessing a content delivery network (CDN) outside the LAN. | 02-06-2014 |
20140040413 | Storage Medium, Transmittal System and Control Method Thereof - A storage medium including a first transmittal module and a control module. The first transmittal module includes a plurality of first transmittal pads. The control module determines whether a level state of the first transmittal module is equal to a pre-determined state. When the level state is equal to the pre-determined state, the control module operates in a secure digital (SD) mode. When the level state is not equal to the pre-determined state, the control module operates in an embedded multimedia card (eMMC) mode. | 02-06-2014 |
20140040414 | METHODS AND SYSTEMS FOR PROVIDING EVENT RELATED INFORMATION - Systems and methods are disclosed for providing a platform where a user is able to obtain live information about an event such as live recordings, video recordings, photos, comments and the like, from a location of the event. A user may connect to the platform via the Internet and have access to content regarding the event and generated either by a platform manager or by other users. The user may also obtain recorded information, such as footage of the event, after the event has taken place, by accessing the platform and searching footage via search criteria or keywords. | 02-06-2014 |
20140040415 | SYSTEMS AND METHODS FOR CACHING HTTP POST REQUESTS AND RESPONSES - With an idempotent POST request, the URL (and headers) cannot be used as an HTTP cache key. To cache idempotent POST requests, the POST body is digested and appended the URL with the digest and used as the cache key. Subsequent requests with the same payload will end up hitting the cache rather than the origin server. A forward cache proxy at the client end and reverse cache proxy at the server end are deployed. The client sends the request to the forward proxy that looks up the cache. If there is a cache miss, the forward cache proxy digests the body and sends only the digest to the reverse proxy. The reverse cache proxy looks up request cache to find if there is a match for the request and send that request to the server. | 02-06-2014 |
20140040416 | CONTENT DELIVERY PLATFORM APPARATUSES, METHODS AND SYSTEMS - The CONTENT DELIVERY PLATFORM APPARATUSES, METHODS AND SYSTEMS (“CDP”) transform content seed selections and recommendations via CDP components such as discovery and gurus into events and discovery of other contents for users and revenue for right-holders. In one embodiment, the CDP may provide facilities for obtaining a universally resolvable list of content items on a local client and identifying a non-local item from the list that is absent on the local client. The CDP may generate a local cache request for the identified non-local item having an associated universally resolvable content identifier and transmit the generated local cache request to a universally resolvable content server. The CDP may then receive, in response to the transmitted request, a universally resolvable content item corresponding to the local cache request and may mark the requested item as temporary and locally available upon receiving the content item. | 02-06-2014 |
20140047059 | METHOD FOR IMPROVING MOBILE NETWORK PERFORMANCE VIA AD-HOC PEER-TO-PEER REQUEST PARTITIONING - Method, computer program product, and system for identifying, responsive to a request for a network resource, at least one peer device, wherein the request is made by a first device on a mobile network, the at least one peer device on a local network with the first device, the local network different than the mobile network; partitioning, based on at least one content element of the requested network resource, the request into a plurality of subrequests, each subrequest specifying to retrieve one or more content elements of the requested network resource; assigning each subrequest to one of the peer devices and the first device, wherein each peer device and the first device retrieves the content elements specified by the subrequest assigned to the respective device, wherein each peer device transmits the retrieved portion of the network resource to the first device over the local network. | 02-13-2014 |
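The partitioning step this abstract describes (splitting one resource request into per-device subrequests over the resource's content elements) can be sketched minimally. The round-robin assignment policy and the function name are assumptions for illustration; the filing leaves the assignment strategy open.

```python
def partition_request(content_elements, devices):
    """Sketch: split a requested resource's content elements into one
    subrequest per device (the first device plus its local-network peers),
    assigned round-robin."""
    subrequests = {device: [] for device in devices}
    for i, element in enumerate(content_elements):
        subrequests[devices[i % len(devices)]].append(element)
    return subrequests
```

Each device would then retrieve its assigned elements over the mobile network, and the peers would return theirs to the first device over the local network for reassembly.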
20140047060 | REMOTE PROCESSING AND MEMORY UTILIZATION - According to one embodiment of the present invention, a system for operating memory includes a first node coupled to a second node by a network, the system configured to perform a method including receiving the remote transaction message from the second node in a processing element in the first node via the network, wherein the remote transaction message bypasses a main processor in the first node as it is transmitted to the processing element. In addition, the method includes accessing, by the processing element, data from a location in a memory in the first node based on the remote transaction message, and performing, by the processing element, computations based on the data and the remote transaction message. | 02-13-2014 |
20140047061 | INTER POINT OF PRESENCE SPLIT ARCHITECTURE - A system and method for accelerating web page delivery is disclosed in one embodiment. Web content requests are made to an edge server of a first point of presence (POP) of a content delivery network (CDN). The web content has embedded resource links. The first POP can rewrite the embedded resource links to route requests for the embedded resource links to any POP in the CDN or even the origin server. In some embodiments, the first POP can decide if the first POP and/or another POP referenced in a rewritten embedded resource link should cache and/or accelerate the resource referenced in the embedded resource link. | 02-13-2014 |
20140052809 | Token Based Applications Platform Method, System and Apparatus - A method that enables the mapping of token identity and token presentation context to invoke one or more applications that are associated with the given token and context is disclosed. The method enables the construction of a flexible and efficient token-in-context services platform. | 02-20-2014 |
20140052810 | PROCESSING, STORING, AND DELIVERING DIGITAL CONTENT - Implementations of the present invention include a Public Cloud, one or more End-Caches and optionally one or more Edge-Caches in a computerized architecture that provides digital content, such as entertainment services and/or informational content, to a guest display (e.g., End-Cache connected to in-room TV, End-Cache connected to personal portable device) or control of one or more devices (e.g., in-room TV and/or in-room control). Implementations of the present invention also include a Content Distribution Architecture that uses the public Internet to securely transmit digital content and data to all desired locations (e.g., End-Caches). Implementations of the present invention further include a Channel Processor that takes one or more video signals and prepares them for redistribution to an end user. Implementations of the present invention leverage existing wiring at the property (whether coax, Ethernet, home-run, or loop-thru) to transport content/data to/from End-Caches. | 02-20-2014 |
20140052811 | Dynamic content assembly on edge-of network servers in a content delivery network - Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider leverages an “edge side include” (ESI) markup language that is used to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching objects that comprise dynamically-generated pages at the edge of the Internet, close to the end user. Instead of being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. Once a user requests a page, the edge server examines its cache for the included fragments and assembles the page on-the-fly. | 02-20-2014 |
20140052812 | CONTENT DISTRIBUTION SYSTEM, CONTROL APPARATUS, AND CONTENT DISTRIBUTION METHOD - A control apparatus computes an access frequency for a content item stored in a plurality of cache servers that temporarily hold a content item based on a number of accesses to the content item, determines disposition of content items in the plurality of cache servers, using at least one of a load status of the plurality of cache servers, topology information of a mobile network, in-zone information of a terminal requesting a content item, and the access frequency to instruct the plurality of cache servers to obtain a content item according to the determined disposition, and, upon receipt of a request for a content item from the terminal, instructs a cache server that holds the content item among the plurality of cache servers to transmit the content item through a packet forwarding apparatus. | 02-20-2014 |
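The placement decision in this abstract (and its sibling, 20140032702) combines access frequency with cache-server load. A minimal Python sketch of one plausible policy: rank content by access frequency and assign each item to the currently least-loaded server(s). The greedy policy, `replicas` parameter, and function name are assumptions; the filing also admits topology and terminal-location inputs, which are omitted here.

```python
def place_content(access_counts, window_seconds, server_loads, replicas=1):
    """Sketch: compute access frequency per content item and greedily
    assign the hottest items to the least-loaded cache servers."""
    freq = {cid: n / window_seconds for cid, n in access_counts.items()}
    placement = {}
    load = dict(server_loads)
    # Hottest content first, so popular items land on lightly loaded servers.
    for cid in sorted(freq, key=freq.get, reverse=True):
        targets = sorted(load, key=load.get)[:replicas]
        placement[cid] = targets
        for server in targets:
            load[server] += freq[cid]
    return placement
```

The control apparatus would then instruct each cache server to hold the items assigned to it, and route terminal requests to a holding server via the packet forwarding apparatus.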
20140059156 | PREDICTIVE CACHING FOR CONTENT - Disclosed are various embodiments for predictive caching of content to facilitate instantaneous use of the content. If a user is likely to commence use of a content item through a client, and if the client has available resources to facilitate instantaneous use, the client is configured to predictively cache the content item before the user commences use. In doing so, the client may obtain metadata for the content item and an initial portion of the content item from a server. The client may then initialize various resources to facilitate instantaneous use of the content item by the client based at least in part on the metadata and the initial portion. | 02-27-2014 |
20140059157 | APPARATUS AND METHOD FOR TRANSFERRING DATA VIA HETEROGENEOUS NETWORKS - A relay device communicates with first and second terminal devices via first and second communication networks, respectively. The relay device determines whether or not a line stability and a line speed of each of the first and second communication networks satisfy a predetermined condition. When a line stability and a line speed of each of the first and second communication networks satisfy the predetermined condition, the relay device temporarily stores, in a memory, data received from the first terminal device via the first communication network, and transfers the temporarily stored data to the second terminal device via the second communication network. | 02-27-2014 |
20140059158 | METHOD, DEVICE AND SYSTEM FOR PROCESSING CONTENT - The disclosure relates to a method, device and system for processing content. In the method: a server receives a content acquiring request transmitted by a terminal. The server determines a first storage node list according to a mapping relation acquired in advance between contents and storage nodes. The first storage node list includes a plurality of first storage nodes that store a first content corresponding to the content acquiring request. The server transmits a sorting request to a network storage management server and receives a sorting result transmitted by the network storage management server. The sorting result includes priorities of the plurality of first storage nodes. The server transmits a content acquiring response to the terminal, where the content acquiring response includes first access information and priority of at least one of the first storage nodes. | 02-27-2014 |
20140059159 | METHOD AND SYSTEM FOR DYNAMIC DISTRIBUTED DATA CACHING - A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to a peer member selected by the master member to become the new master member. | 02-27-2014 |
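The master hand-off this abstract describes (select a successor, remove yourself from the peer list, send a nominate-master message with the updated list) can be sketched in a few lines. Choosing the first remaining peer as successor, and the dict-based member registry, are illustrative assumptions; the filing does not fix a selection policy.

```python
def hand_off_master(master_name, peer_list, members):
    """Sketch: a departing master updates the peer list to exclude itself,
    selects a successor, and delivers the nominate-master message along
    with the updated peer list."""
    updated = [peer for peer in peer_list if peer != master_name]
    new_master = updated[0]  # assumed policy: first remaining peer
    members[new_master]["is_master"] = True     # nominate-master message
    members[new_master]["peer_list"] = updated  # updated peer list
    members[master_name]["is_master"] = False
    return new_master, updated
```

Because the outgoing master removes itself before nominating, the community's peer list never transiently contains a departed member.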
20140067984 | Integrated Storage and Switching for Memory Systems - An integrated networked storage and switching apparatus comprises one or more flash memory controllers, a system controller, and a network switch integrated within a common chassis. The integration of storage and switching enables the components to share a common power supply and temperature regulation system, achieving efficient use of available space and power, and eliminating the added complexity of external cables between the switch and storage devices. Additionally, the architecture enables substantial flexibility and optimization of network traffic policies for both network and storage-related traffic. | 03-06-2014 |
20140067985 | TECHNIQUES FOR MAPPING AND MANAGING RESOURCES - Techniques for mapping and managing resources are presented. Hardware capacity and information is collected over multiple processing environments for hardware resources. The information is mapped to logical business resources and resource pools. Capacity is rolled up and managed within logical groupings and the information gathering is managed via in-memory and on-file caching techniques. | 03-06-2014 |
20140067986 | NETWORK SERVICE SYSTEM AND METHOD WITH OFF-HEAP CACHING - A method for providing data over a network using an application server having off-heap caching includes receiving at an application server coupled to a network a request for requested data, using a key index stored on the application server to locate where the requested data is stored in off-heap memory of the application server, retrieving the requested data from the off-heap memory of the application server, and resolving the request. | 03-06-2014 |
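The off-heap pattern in this abstract, a key index pointing into a raw memory region, can be sketched as follows. The raw byte buffer stands in for off-heap memory, and all names are illustrative, not from the patent:

```python
# Key index maps each key to an (offset, length) slot inside a raw
# byte buffer, which stands in for off-heap memory.
class OffHeapCache:
    def __init__(self, size):
        self.buf = bytearray(size)   # stands in for off-heap memory
        self.index = {}              # key -> (offset, length)
        self.free = 0                # next free offset (bump allocator)

    def put(self, key, data: bytes):
        off = self.free
        self.buf[off:off + len(data)] = data
        self.index[key] = (off, len(data))
        self.free += len(data)

    def get(self, key):
        # consult the key index to locate the data, then read the bytes
        off, length = self.index[key]
        return bytes(self.buf[off:off + length])

cache = OffHeapCache(1024)
cache.put("user:1", b"alice")
cache.put("user:2", b"bob")
```

In a JVM setting the buffer would live outside the garbage-collected heap; the index lookup followed by a bounded byte-range read is the same either way.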
20140067987 | BYTE CACHING IN WIRELESS COMMUNICATION NETWORKS - Various embodiments provide byte caching in wireless communication networks. In one embodiment, a plurality of data packets are received through an internet protocol (IP) data flow established between a wireless communication device and at least one server. Each of the plurality of data packets is combined into a packet bundle. A determination is made as to whether a second byte caching system is available. The packet bundle is transformed using one or more byte caching operations based on a second byte caching system being available. The transformed packet bundle is sent to the second byte caching system using an IP communication mechanism. | 03-06-2014 |
20140067988 | On-Demand Caching in a WAN Separated Distributed File System or Clustered File System Cache - A mechanism is provided in a data processing system for on-demand caching in a wide area network (WAN) separated distributed file system or clustered file system. The mechanism monitors file access by a plurality of cache sites in the WAN separated distributed file system or clustered file system. The mechanism identifies access patterns by cache sites. The mechanism shares the access patterns with the plurality of cache sites. A given cache site within the plurality of cache sites combines the access patterns with local access information and identifies files to pre-fetch based on the combined information. | 03-06-2014 |
20140067989 | CACHING PROVENANCE INFORMATION - Techniques are disclosed for caching provenance information. For example, in an information system comprising a first computing device requesting provenance data from at least a second computing device, a method for improving the delivery of provenance data to the first computing device, comprises the following steps. At least one cache is maintained for storing provenance data which the first computing device can access with less overhead than accessing the second computing device. Aggregated provenance data is produced from input provenance data. A decision whether or not to cache input provenance data is made based on a likelihood of the input provenance data being used to produce aggregated provenance data. By way of example, the first computing device may comprise a client and the second computing device may comprise a server. | 03-06-2014 |
20140067990 | METHOD FOR ACCESSING A CONTENT ITEM IN A CLOUD STORAGE SYSTEM, AND A CORRESPONDING CLOUD BROKER, CLOUD CACHE AGENT AND CLIENT APPLICATION - In a cloud storage system, an editable version of each content item is stored in a cloud data store and one or more HTML versions of content items are stored in a cloud cache store. In order to access a content item, a client application sends a request to a cloud broker, specifying a user action that is either “view” or “edit”. In case the user action is “edit”, the cloud broker sends a request for retrieving the editable version of the content item from the cloud data store. In case the user action is “view”, the cloud broker sends a request for obtaining a URL to one of the HTML versions of the content item to a cloud cache agent. | 03-06-2014 |
20140074958 | METHODS AND SYSTEMS FOR DISPLAYING SCHEDULE INFORMATION - A method and system comprising: receiving travel request data; retrieving schedule data associated with the travel request data from the database, the schedule data being further associated with a scheduled travel time; retrieving availability data associated with the travel request data from the cache, the availability data being further associated with an available travel time; and sending the schedule data and the availability data via a computer network. | 03-13-2014 |
20140074959 | CLIENT SIDE MEDIA STATION GENERATION - To generate a media station, a client device can receive a candidate media item playlist and media playback rules corresponding to the media station. When a new media item is needed for the media station, the client device can apply the media playback rules to a next media item in the list of candidate media items. The playback rules can be used to determine whether the next media item is currently eligible for playback. Additionally, the client device can receive a candidate invitational content item playlist and invitational content playback rules corresponding to the media station. In response to detecting an invitational content triggering action, the client device can apply the invitational content item rules to the candidate invitational content item playlist to select at least one invitational content item to present in the media stream. | 03-13-2014 |
20140074960 | COMPACTING A NON-BIASED RESULTS MULTISET - A method, system, and computer program product for compacting a non-biased results multiset are provided in the illustrative embodiments. A set of references and a multiset of values are identified. The multiset includes a first and a second set of values, each set including a first value. A first reference in the set of references refers to the first set of values and a second reference in the set of references refers to the second set of values. The values in the first and second set of values are re-arranged to form permuted first and second sets of values. The multiset is compacted by overlaying the permuted first and second sets of values such that the permuted first set of values and the permuted second set of values share a single instance of the first value in a portion of the compacted multiset. | 03-13-2014 |
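A minimal sketch of the overlay idea, under the simplifying assumption that the two sets share exactly one value (the function name and reference encoding are illustrative, not from the patent):

```python
def compact(first, second, shared):
    # re-arrange each set so the shared value sits at the seam:
    # end of the first permuted set, start of the second
    p1 = [v for v in first if v != shared] + [shared]
    p2 = [shared] + [v for v in second if v != shared]
    # overlay: both permuted sets share a single instance of `shared`
    compacted = p1 + p2[1:]
    ref_first = 0                 # first reference: start of first set
    ref_second = len(p1) - 1      # second reference: the shared value
    return compacted, ref_first, ref_second

compacted, r1, r2 = compact([5, 1, 2], [5, 3], shared=5)
```

Instead of storing the value 5 twice, both references resolve into one compacted array in which the shared instance is stored once.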
20140074961 | Efficiently Delivering Time-Shifted Media Content via Content Delivery Networks (CDNs) - Example embodiments herein provide for efficient distribution of content in a content distribution network (CDN) by a CDN server. The content is efficiently distributed by associating live content and time-shifted content with a common resource identifier, which may (in some instances) avoid re-transporting content across the network. To facilitate this, an entry point CDN server is configured to map the common resource identifier to a permanent storage location (that is itself associated with a different resource identifier) after expiration of the live viewing period. | 03-13-2014 |
20140074962 | BROWSER DEVICE, BROWSER PROGRAM, BROWSER SYSTEM, IMAGE FORMING APPARATUS, AND NON-TRANSITORY STORAGE MEDIUM - A browser device for obtaining web data of a specified URL. The browser device includes: a registration unit configured to register one or more URLs; a dedicated cache memory configured to, when first web data is obtained from a registered URL, store the first web data without deleting existing web data already stored therein; a general-purpose cache memory configured to, when second web data is obtained from an unregistered URL, delete part or all of the existing web data already stored therein, in accordance with the capacity of the general-purpose cache memory, the amount of existing web data, and the amount of the second web data, and then store the second web data; and an obtaining unit configured to, when web data of the specified URL is stored in one of the dedicated cache memory and the general-purpose cache memory, obtain the web data therefrom. The URLs registered by the registration unit are within a range that allows for storage of their web data in the dedicated cache memory. | 03-13-2014 |
20140082120 | EFFICIENT CPU MAILBOX READ ACCESS TO GPU MEMORY - Techniques are disclosed for peer-to-peer data transfers where a source device receives a request to read data words from a target device. The source device creates a first and second read command for reading a first portion and a second portion of a plurality of data words from the target device, respectively. The source device transmits the first read command to the target device, and, before a first read operation associated with the first read command is complete, transmits the second read command to the target device. The first and second portions of the plurality of data words are stored in a first and second portion a buffer memory, respectively. Advantageously, an arbitrary number of multiple read operations may be in progress at a given time without using multiple peer-to-peer memory buffers. Performance for large data block transfers is improved without consuming peer-to-peer memory buffers needed by other peer GPUs. | 03-20-2014 |
20140082121 | MODELLING DEPENDENCIES IN DATA TRAFFIC - A method of modifying timings of data traffic in a test system by introducing dependencies that would arise in response to data requiring access to a resource. The resource receives the data traffic from at least one initiator and is connected via an interconnect to at least one recipient, and comprises a buffer for storing pending data related to an access to the resource that cannot currently complete. | 03-20-2014 |
20140082122 | USING SPECIAL-CASE HARDWARE UNITS FOR FACILITATING ACCESS CONTROL LISTS ON A NETWORKING ELEMENT - Access control lists (ACLs) include one or more rules that each define a condition and one or more actions to be performed if the condition is satisfied. In one embodiment, the conditions are stored on a ternary content-addressable memory (TCAM), which receives a portion of network traffic, such as a frame header, and compares different portions of the header to entries in the TCAM. If the frame header satisfies the condition, the TCAM reports the match to other elements in the ACL. For certain conditions, the TCAM may divide the condition into a plurality of sub-conditions which are each stored in a row of the TCAM. To efficiently use the limited space in TCAM, the networking element may include one or more comparator units which check for special-case conditions. The comparator units may be used in lieu of the TCAM to determine whether the condition is satisfied. | 03-20-2014 |
20140082123 | CONTENT CACHING AND DELIVERING SYSTEM WITH TRAFFIC OF REPETITIVELY REQUESTED CONTENT REDUCED - A content caching and delivering apparatus transmits and receives communication signals between a content delivering apparatus delivering content on a telecommunications network and communication terminals. A content storage stores content delivered by the content caching and delivering apparatus. A delivery controller receives a content delivery request for the content stored in the content storage, and maps information on the position of the requested content to information on the content for each terminal and manages the mapping. The delivery controller delivers the requested content to the communication terminal. | 03-20-2014 |
20140082124 | METHOD AND SYSTEM HAVING COLLABORATIVE NETWORK MEDIA APPLIANCES UTILIZING PRIORITIZED LOCAL STORAGE OF RECOMMENDED CONTENT - A collaborative system for more efficiently viewing streamed content in a time shifted manner utilizes a collaborative home set-top box (STB), a cloud component (the cloud STB), and a receiving device, such as an antenna or cable component, all cooperatively shared among a community of users. The cloud STB may further comprise a network accessible distributed part and/or a cross licensed portion of other home STBs. The home STB may be connected to the cloud STB and other home STBs over any local or wide area network topology infrastructure, such as the Internet. A group of home STBs may be cross licensed to each other within a community of users sharing the same viewing rights to collaboratively copied content retained either in the system or among a plurality of shared user systems. Various network infrastructure configurations and streaming techniques, including unicast, multicast, and upstream and downstream collaborative streaming, are proposed to optimize network bandwidth efficiency and increase viewing options. | 03-20-2014 |
20140082125 | METHOD AND SYSTEM HAVING COLLABORATIVE NETWORK MEDIA APPLIANCES UTILIZING PRIORITIZED LOCAL STORAGE OF RECOMMENDED CONTENT - A collaborative system for more efficiently viewing streamed content in a time shifted manner utilizes a collaborative home set-top box (STB), a cloud component (the cloud STB), and a receiving device, such as an antenna or cable component, all cooperatively shared among a community of users. The cloud STB may further comprise a network accessible distributed part and/or a cross licensed portion of other home STBs. The home STB may be connected to the cloud STB and other home STBs over any local or wide area network topology infrastructure, such as the Internet. A group of home STBs may be cross licensed to each other within a community of users sharing the same viewing rights to collaboratively copied content retained either in the system or among a plurality of shared user systems. Various network infrastructure configurations and streaming techniques, including unicast, multicast, and upstream and downstream collaborative streaming, are proposed to optimize network bandwidth efficiency and increase viewing options. | 03-20-2014 |
20140082126 | Sandboxing Content Optimization at the Network Edge - Some embodiments provide systems and methods for sandboxing content optimization to occur entirely within a network edge or PoP of a CDN. Some embodiments pass a first request for a first URL to a first back-end at the network edge that is configured to cache an optimized instance of the particular object. When the optimized instance of the particular object is not cached at the first back-end, a second request is issued for a second URL identifying a non-optimized instance of the particular object. The second request resolves internally within the network edge to a second back-end that is configured to cache the non-optimized object. The non-optimized object from the second back-end is optimized and passed to the first back-end. The first back-end caches the optimized instance of the non-optimized object and serves the optimized instance to a requesting end user. | 03-20-2014 |
20140082127 | SYSTEM FOR ACCESSING SHARED DATA USING MULTIPLE APPLICATION SERVERS - A system including multiple application servers for accessing shared data and a centralized control unit for centrally controlling a lock applied to the shared data by each of the application servers. Each application server includes a distributed control unit for controlling a lock applied to the shared data by the application server and a selection unit for selecting any one of distributed mode in which a lock is acquired from the distributed control unit or a centralized mode in which a lock is acquired from the centralized control unit. | 03-20-2014 |
20140089448 | SYSTEM AND METHOD FOR CACHING CONTENT AT AN END USER'S CUSTOMER PREMISES EQUIPMENT - A caching method and device for reducing non-local network traffic by caching content at equipment at the premises of one or more end users. The caching device may be connected to a non-local network of a data distribution network that may include the non-local network, a headend connected to the non-local network, a content delivery server connected to the headend and a content source connected to the headend. The premises equipment may include a caching device including a controller and storage medium. The caching device at an end user premises may receive content, which may be sent to a plurality of end user premises as part of a multicast, over a non-local network. The end user may access the received content at local network speeds without having to send an individual request for the content over the non-local network. | 03-27-2014 |
20140089449 | PREDICTIVE DATA MANAGEMENT IN A NETWORKED COMPUTING ENVIRONMENT - An approach for managing file storage between local and remote storage locations in a networked computing environment (e.g., a cloud computing environment) is provided. In a typical embodiment, files/data may be tagged with metadata that associates the files/data with an event that indicates a date/time and a geographical destination of an intended use of the files. The files may then be transferred between local and remote storage (e.g., at the destination) based upon a set of predefined rules for transferring the files/data. | 03-27-2014 |
20140089450 | Look-Ahead Handling of Page Faults in I/O Operations - A method for data transfer includes receiving in an input/output (I/O) operation a first segment of data to be written to a specified virtual address in a host memory. Upon receiving the first segment of the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. At least one second page of the host memory is identified, to which a second segment of the data is expected to be written. Responsively to detecting that the first page is swapped out and to identifying the at least one second page, at least the first and second pages are swapped into the host memory. After swapping at least the first and second pages into the host memory, the data are written to the first and second pages. | 03-27-2014 |
20140089451 | Application-assisted handling of page faults in I/O operations - A method for data transfer includes receiving in an operating system of a host computer an instruction initiated by a user application running on the host processor identifying a page of virtual memory of the host computer that is to be used in receiving data in a message that is to be transmitted over a network to the host computer but has not yet been received by the host computer. In response to the instruction, the page is loaded into the memory, and upon receiving the message, the data are written to the loaded page. | 03-27-2014 |
20140089452 | CONTENT STREAM DELIVERY USING VARIABLE CACHE REPLACEMENT GRANULARITY - A method comprises associating at least one cache replacement granularity value with a given one of a plurality of content streams comprising a number of segments, receiving a request for a given segment of the given content stream in a network element, identifying a given portion of the given content stream which contains the given segment, updating a value corresponding to the given portion of the given content stream, and determining whether to store the given portion of the given content stream in a memory of the network element based at least in part on the updated value corresponding to the given portion. The at least one cache replacement granularity value represents a given number of segments, the given content stream being separable into one or more portions based at least in part on the at least one cache replacement granularity value. | 03-27-2014 |
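The granularity mechanism in this abstract, grouping segments into portions and deciding storage per portion, can be sketched as follows. The class name, the simple counter, and the fixed threshold are illustrative assumptions; the patent leaves the storage decision open:

```python
# Segments are grouped into portions of `granularity` segments each;
# a per-portion counter is updated on every request, and a portion is
# stored once its count reaches a threshold (a simple stand-in policy).
class PortionCache:
    def __init__(self, granularity, threshold):
        self.granularity = granularity   # segments per portion
        self.threshold = threshold
        self.counts = {}                 # (stream, portion) -> request count
        self.stored = set()              # portions held in memory

    def request_segment(self, stream, segment):
        # identify the portion of the stream that contains this segment
        portion = (stream, segment // self.granularity)
        # update the value corresponding to that portion
        self.counts[portion] = self.counts.get(portion, 0) + 1
        # decide from the updated value whether to store the portion
        if self.counts[portion] >= self.threshold:
            self.stored.add(portion)
        return portion

cache = PortionCache(granularity=4, threshold=2)
p1 = cache.request_segment("stream-1", 1)
p2 = cache.request_segment("stream-1", 3)
```

With granularity 4, segments 1 and 3 fall into the same portion, so two requests for different segments still advance a single portion counter.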
20140089453 | CLEAR IN-MEMORY BUSINESS DATA CACHE ACROSS SERVERS WITHOUT RESTARTING SERVERS - Embodiments of the invention provide systems and methods for updating cache data on multiple servers without requiring a restart of those servers. More specifically, embodiments of the present invention provide an ability for an application to clear one or more cached tables when the table content has been modified. The cache can be refreshed across servers without impacting the active transactions of end users. For example, during a business process such as the general ledger period close, the system will no longer need a restart to update cached period information. | 03-27-2014 |
20140089454 | METHOD FOR MANAGING CONTENT CACHING BASED ON HOP COUNT AND NETWORK ENTITY THEREOF - Disclosed is hop-count based content caching. The present invention implements hop-count based content cache placement strategies that efficiently decrease network traffic: the routing node makes a primary judgment on whether to cache a content chunk based on an attribute of the received content chunk; makes a secondary judgment based on a caching probability of ‘1/hop count’; and stores the content chunk and the hop count information in the cache memory of the routing node when the secondary judgment determines that the content chunk should be cached. | 03-27-2014 |
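The two-stage judgment reduces to a few lines. The attribute key and function name are illustrative assumptions; the `1/hop count` probability is the rule stated in the abstract:

```python
import random

def should_cache(chunk_attrs, hop_count, rng=random.random):
    # primary judgment: inspect an attribute of the received chunk
    if not chunk_attrs.get("cacheable", False):
        return False
    # secondary judgment: cache with probability 1 / hop count, so a
    # chunk that has travelled farther is cached less often per node
    return rng() < 1.0 / hop_count
```

Injecting `rng` makes the probabilistic step testable; at hop count 1 the node always caches, while at hop count N it caches roughly one time in N.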
20140089455 | METHOD AND SYSTEM FOR MEMORY MANAGEMENT - One embodiment comprises a machine-implemented method. The method comprises providing a first memory slice having a plurality of blocks configured for storing information on behalf of a plurality of clients. The first memory slice is a single-port memory that is only accessible to the plurality of clients. The method further comprises configuring a second memory slice having a plurality of blocks for storing links and accessible to the plurality of clients and to a list manager that maintains a data structure for allocating memory blocks from the first memory slice and the second memory slice to the plurality of clients. The second memory slice is accessible to both the plurality of clients and the list manager. The method further comprises receiving a request from a client for access to memory storage at the first memory slice and the second memory slice. The method further comprises allocating a block of the first memory slice to the client and a block of the second memory slice to the client. The method further comprises storing a link for a next available memory block at the second memory slice. The list manager allocates the block of the first memory slice and stores the link at the second memory slice. | 03-27-2014 |
20140089456 | DE-POPULATING CLOUD DATA STORE - Embodiments relate to systems and methods for de-populating a cloud data store. In one method, an identification of a set of cloud-populated data to be transported from a set of host storage clouds to at least one target data store is received. The method identifies a data transport pathway from the set of host storage clouds to the at least one target data store, the data transport pathway including a dedicated reverse staging connection between the set of host storage clouds and the at least one target data store. The method initiates the transport of the set of cloud-populated data to the at least one target data store in view of a set of de-population commands. | 03-27-2014 |
20140095644 | PROCESSING OF WRITE REQUESTS IN APPLICATION SERVER CLUSTERS - An application server of a server cluster may store a payload of a write request in a local cache and thereafter serve read requests based on payloads in the local cache if the corresponding data is present when such read requests are received. The payloads are however later propagated to respective data stores at a later suitable time. Each application server in the server cluster retrieves data from the data stores if the required payload is unavailable in the respective local cache. According to another aspect, an application server signals to other application servers of the server cluster if a required payload is unavailable in the local cache. In response, the application server having the specified payload (in local cache) propagates the payload with a higher priority to the corresponding data store, such that the payload is available to the requesting application server. | 04-03-2014 |
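The write-back behavior in this abstract can be sketched as a local cache with deferred propagation. The class and method names are illustrative, and the signalling between application servers is reduced to an explicit `flush` call:

```python
# Payloads are cached locally and served from the cache; propagation
# to the backing data store happens later ("at a later suitable time",
# or with higher priority when another server needs the payload).
class WriteBackCache:
    def __init__(self, store):
        self.local = {}        # local cache of payloads
        self.dirty = set()     # payloads not yet propagated
        self.store = store     # backing data store (a dict here)

    def write(self, key, payload):
        self.local[key] = payload
        self.dirty.add(key)    # serve reads now, propagate later

    def read(self, key):
        if key in self.local:              # payload present locally
            return self.local[key]
        return self.store.get(key)         # fall back to the data store

    def flush(self, key=None):
        # propagate one payload (priority request) or all dirty payloads
        keys = [key] if key else list(self.dirty)
        for k in keys:
            self.store[k] = self.local[k]
            self.dirty.discard(k)

store = {}
cache = WriteBackCache(store)
cache.write("k1", "payload-1")
```

A read served before `flush` never touches the data store; after `flush`, other servers can retrieve the payload from the store.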
20140095645 | Method for Caching Data on Client Device to Optimize Server Data Persistence in Building of an Image-Based Project - A system for creating image and or text-based projects includes a server connected to a network, the server having access to at least one processor and a data repository, the server including a non-transitory physical medium, and software running from the non-transitory physical medium, the software providing a first function for establishing a client server connection between the server and at least one user-operated computing appliance connected to the network, a second function for initiating and maintaining an active data session between one or more users involved in project creation and or in project editing through a graphics user interface (GUI), a third function for establishing a cache memory on the at least one operated computing appliance, the cache dedicated for caching user and server-side data, a fourth function for caching user actions in the cache memory, and a fifth function for persisting the cached data to the server. | 04-03-2014 |
20140095646 | DATA CACHING AMONG INTERCONNECTED DEVICES - Technology is disclosed herein for optimizing data caches among multiple interconnected computing devices. According to at least one embodiment, a storage server transfers a first data set to a computing device. The storage server then identifies a neighbor computing device sharing a local area network (LAN) with the computing device. The neighbor computing device maintains a network connection with the storage server. The storage server transmits a second data set relevant to the first data set to the neighbor computing device. In response to a read request for the second data set from the computing device, the storage server sends to the computing device an instruction indicating that the neighbor computing device is storing a data cache for the computing device. | 04-03-2014 |
20140095647 | ROUTING METHOD - A method executed by a router that establishes a connection between a network and another network that includes an information processing device and an information storage device, the method includes: detecting an access status of the information processing device to the information storage device; and prohibiting transfer of the information from the information processing device to the other network depending on the access status detected. | 04-03-2014 |
20140095648 | Storage and Transmission of Log Data In a Networked System - A method for storing log data in a networked storage system includes receiving one or more log data streams and storing the log data streams in a local memory location. The method also includes accessing the log data streams from the local memory location by a communications adapter and transmitting the log data streams to a storage system over a communications network by the communications adapter. The communications adapter is configured for one way communication with the storage system. | 04-03-2014 |
20140095649 | PROXY-BASED CACHE CONTENT DISTRIBUTION AND AFFINITY - A distributed caching hierarchy that includes multiple edge routing servers, at least some of which receive content requests from client computing systems via a load balancer. When receiving a content request, an edge routing server identifies which of the edge caching servers the requested content would be in if the requested content were to be cached within the edge caching servers, and distributes the content request to the identified edge caching server in a deterministic and predictable manner to increase the cache-hit ratio. | 04-03-2014 |
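The deterministic mapping can be sketched as a hash of the request key over the list of edge caching servers; every routing server computes the same answer with no shared state. The function and server names are illustrative, and the patent does not prescribe this particular hash:

```python
import hashlib

def pick_cache_server(url, servers):
    # hash the requested URL so every edge routing server maps the
    # same URL to the same edge caching server (raising the hit ratio)
    digest = hashlib.sha256(url.encode()).digest()
    return servers[int.from_bytes(digest[:8], "big") % len(servers)]

servers = ["edge-cache-a", "edge-cache-b", "edge-cache-c"]
```

Simple modulo hashing reshuffles most keys when a server is added or removed; a production system would likely use consistent hashing for the same affinity property with less churn.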
20140095650 | ACCESSING A LARGE DATA OBJECT IN A DISPERSED STORAGE NETWORK - A method begins by a dispersed storage (DS) processing module generating a data object identifier for data to be stored in a dispersed storage network (DSN) and partitioning the data into a plurality of data partitions based on a set of retrieval preferences and data boundary information. For a data partition, the method continues with the DS processing module dispersed storage error encoding the data partition to produce a plurality of sets of encoded data slices and generating a plurality of sets of DSN addresses for the plurality of sets of encoded data slices, wherein a DSN address of the plurality of sets of DSN addresses includes a representation of the data object identifier, a representation of one or more retrieval preferences of the set of retrieval preferences, a representation of a corresponding portion of the data boundary information, and dispersed storage addressing information. | 04-03-2014 |
20140101278 | SPECULATIVE PREFETCHING OF REMOTE DATA - A profiler may identify potentially-independent remote data accesses in a program. A remote data access is independent if value returned from said remote data access is not computed from another value returned from another remote data access appearing logically earlier in the program. A program rewriter may generate a program-specific prefetcher that preserves the behavior of the program, based on profiling information including the potentially-independent remote data accesses identified by the profiler. An execution engine may execute the prefetcher and the program concurrently. The execution engine may automatically decide which of said potentially-independent remote data accesses should be executed in parallel speculatively. A shared memory shared by the program and the prefetcher stores returned data from a data source as a result of issuing the remote data accesses. | 04-10-2014 |
20140101279 | SYSTEM MANAGEMENT METHOD, AND COMPUTER SYSTEM - A computer system comprising a storage device including a copy pair of copied volumes, and a host computer, is provided. The management computer detects a change in the state obtained by monitoring the state of the copy pair, and according to the result of detection, changes the configuration of a cluster constituted by a virtual machine using data stored in the volume, and a host computer using the volume. | 04-10-2014 |
20140108585 | MULTIMEDIA CONTENT MANAGEMENT SYSTEM - A system allows a user to select multimedia content items from sources that include, but are not limited to, any of: Internet, network, or local. Selected multimedia content items may be stored in user specific caches residing in at least one cloud based storage device. Multimedia content items may be transcoded while or after being retrieved from a source and then stored in a user specific cache. Multimedia content items may be selected by a user from the user's specific cache and streamed to a user device. | 04-17-2014 |
20140108586 | METHOD, DEVICE AND SYSTEM FOR DELIVERING LIVE CONTENT - The present invention provides a method, a device, and a system for delivering live content. A pre-delivery request with respect to live content is sent to a CDN cache device, and the CDN cache device caches the live content according to the pre-delivery request before a user views the live content. This solves the prior-art problems of long delay and poor user experience when playing live content that is not cached (because a part of live content cannot be cached), ensuring the play quality of all live content and improving user experience. | 04-17-2014 |
20140115089 | METHOD AND APPARATUS FOR JOINING READ REQUESTS - Implementations of the present disclosure involve a system and/or method for joining read requests for the same data block sent to a storage appliance. The system and method are configured to receive the first read request for the data block at an I/O layer of the storage appliance. The I/O layer is configured to manage obtaining data blocks from one or more storage devices on the storage appliance. The system and method may then receive a second read request for the data block at the I/O layer of the storage appliance. The first and second read requests may then be joined at the I/O layer and only a single copy of the data block is returned to a cache in response to the first and second read requests. | 04-24-2014 |
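The joining behavior can be sketched with an in-flight table keyed by block id: a second request for a block already being fetched attaches itself to the pending read instead of issuing another one. Class and method names are illustrative, not from the disclosure:

```python
# Read requests for the same block are joined at the I/O layer;
# only the first request reaches the storage device, and one copy
# of the block satisfies every joined request on completion.
class IOLayer:
    def __init__(self, disk):
        self.disk = disk       # block id -> data (stands in for devices)
        self.pending = {}      # block id -> callbacks awaiting that block
        self.disk_reads = 0

    def read(self, block, callback):
        if block in self.pending:
            # same block already in flight: join the existing read
            self.pending[block].append(callback)
        else:
            self.pending[block] = [callback]
            self.disk_reads += 1        # only the first request hits disk

    def complete(self, block):
        # a single copy of the block answers all joined requests
        data = self.disk[block]
        for cb in self.pending.pop(block):
            cb(data)

io = IOLayer({7: b"block7"})
results = []
io.read(7, results.append)   # first request: issued to the device
io.read(7, results.append)   # second request: joined, no new device read
io.complete(7)
```

Both callers receive the block, yet the device was read only once, which is the traffic saving the joining mechanism targets.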
20140115090 | METHODS AND APPARATUS FOR CONTENT CACHING IN A VIDEO NETWORK - Methods and apparatus for selectively caching (and de-caching) video content in a network so as to reduce content transformation requirements and also cache storage requirements. In one embodiment, a content caching controller associated with a content server differentiates content requests based on content attributes such as the requested codec format (e.g., MPEG or Windows Media), resolution, bitrate, and/or encryption type or security environment. If the content requested by a user is not available with the requested attribute(s), the content server transfers the content to the user by first transforming it. The content server also speculatively caches the transformed content locally, so that a future request for the same content with the same attributes can be filled by transferring it without the intermediate transformation step. The controller allows the network operator to optimize use of available storage and transcoding resources. | 04-24-2014 |
20140122635 | COMPUTER SYSTEM AND DATA MANAGEMENT METHOD - A computer system comprises a computer, a storage system comprising multiple storage apparatuses, and an edge storage apparatus. The edge storage apparatus stores identification information, which makes it possible to identify a volume, and first access information for accessing a logical storage apparatus, which stores volume data of the volume after associating the identification information with the first access information, a storage apparatus (A1) executes processing for transferring the volume data from a migration source to a logical storage apparatus, which is a migration destination, (A2) stores second access information for accessing the migration-destination logical storage apparatus in the storage apparatus, and (A3) sends the second access information, and the edge storage apparatus (B1) receives the second access information, and (B2) associates the second access information with the identification information enabling the identification of the volume, and stores the associated information in the storage apparatus. | 05-01-2014 |
20140122636 | BALANCING STORAGE NODE UTILIZATION OF A DISPERSED STORAGE NETWORK - A method begins by a dispersed storage (DS) processing module determining memory space utilization state of logical storage nodes of a dispersed storage network (DSN). When a logical storage node is in an over-utilized memory space utilization state and another logical storage node is in an under-utilized memory space utilization state, the method continues with the DS processing module selecting the other logical storage node to produce a selected logical storage node and reassigning a portion of a DSN address range assigned to the selected logical storage node to a logical storage node that is in an average memory space utilization state to create an address free logical storage node. The method continues with the DS processing module reassigning address blocks assigned to the logical storage node that is in the over-utilized memory space utilization state to the address free logical storage node. | 05-01-2014 |
20140122637 | METHOD AND APPARATUS FOR PROVIDING CACHING SERVICE IN NETWORK INFRASTRUCTURE - A method and apparatus for providing caching service in network infrastructure. In an embodiment, there is provided a method for providing caching service in network infrastructure, comprising: in response to at least one application node accessing data in a storage node, caching a copy of the data in a cache server; in response to the at least one application node accessing the data in the storage node, obtaining an identifier indicating whether the data in the storage node is valid or not; and in response to the identifier indicating the data in the storage node is valid, returning the copy; wherein the at least one application node and the storage node are connected via the network infrastructure, and the cache server is coupled to a switch in the network infrastructure. In another embodiment of the present invention, there is provided an apparatus for providing caching service in network infrastructure. | 05-01-2014 |
20140122638 | Webpage Browsing Method And Device - A webpage browsing method in which a browser of a mobile terminal device accesses a server through the Internet in a first working state, downloads data of a current page and data of predefined N pages subsequent to the current page in order from the server, caches the downloaded data and displays the current page, wherein N is a natural number. The browser updates a link address of each page on the server into a cache address corresponding to data of the page, and disconnects from the Internet after the data of the current page and the data of the N pages have been cached. The browser switches to a second working state, selects one page of the cached current page and the N pages, reads the data of the selected page from a cache according to a cache address corresponding to data of the selected page, and displays the selected page. | 05-01-2014 |
20140129666 | PREEMPTIVE CACHING OF DATA - A first computing device receives a first request from a client computing device, wherein the first request includes a markup language request. The first computing device transmits the first request to a second computing device, wherein the second computing device services the first request. The first computing device receives the serviced first request, wherein the serviced first request includes a manifest tag. The first computing device caches the serviced first request. The first computing device transmits the serviced first request to the client computing device. | 05-08-2014 |
20140129667 | CONTENT DELIVERY SYSTEM, CONTROLLER AND CONTENT DELIVERY METHOD - From a cache server that temporarily retains at least part of a plural number of contents stored in a storage apparatus, a controller receives an access frequency to the contents and a load state of the cache server. The controller also receives, from a packet forwarding apparatus, information of a terminal being or not being in a service area. The controller decides that a content whose access frequency is higher than a predetermined threshold value will be allocated to two or more predetermined cache servers among the plural number of cache servers. The controller also decides that at least part of the other contents will be allocated to the plural number of the cache servers using at least one out of the access frequency, the load state, the information of a terminal being or not being in a service area and topology information of the mobile network. The controller instructs the cache servers to retain the contents depending on the decided allocation. | 05-08-2014 |
20140136643 | Dynamic Buffer Management for a Multimedia Content Delivery System - A method implemented in a computing device that connects over a network to server computers that host content streams. The method displays content items on the computing device, where each content item includes a link to one of the content streams, determines an amount of available bandwidth on a data connection from the computing device to the network, and associates a pre-fetch buffer and a streaming buffer with each content item. For each content item, the method obtains a measurement based on a condition relative to the linked content stream. The method then calculates, for each content item, a size for the pre-fetch buffer based on the amount of available bandwidth and the measurement, allocates memory for the pre-fetch buffer and the streaming buffer, and initiates a download of a first portion of the linked content stream to the pre-fetch buffer. | 05-15-2014 |
20140136644 | DATA STORAGE MANAGEMENT IN COMMUNICATIONS - A method for caching data is disclosed, in which a network apparatus ( | 05-15-2014 |
20140136645 | CONTENT DISTRIBUTION SYSTEM, CACHE SERVER, AND CONTENT DISTRIBUTION METHOD - A cache server, connected to a packet forwarding apparatus that forwards a packet to be sent and received between a user terminal and a distribution server that distributes content over the Internet, temporarily stores at least some of the content in a content temporary storage unit of the cache server, calculates a bit rate when sending content based on a TCP response signal or an ECN (Explicit Congestion Notification) signal received from the terminal, and reads a file or stream of content requested in a content request message received from the terminal, whose bit rate is not greater than the bit rate, from the content temporary storage unit or the distribution server, stores the read file or stream in a packet of a prescribed protocol, and sends the packet. | 05-15-2014 |
20140136646 | FACILITATING, AT LEAST IN PART, BY CIRCUITRY, ACCESSING OF AT LEAST ONE CONTROLLER COMMAND INTERFACE - An embodiment may include circuitry to facilitate, at least in part, a first network interface controller (NIC) in a client to be capable of accessing, via a second NIC in a server that is remote from the client and in a manner that is independent of an operating system environment in the server, at least one command interface of another controller of the server. The command interface may include at least one controller command queue. Such accessing may include writing at least one queue element to the at least one command queue to command the another controller to perform at least one operation associated with the another controller. The another controller may perform the at least one operation in response, at least in part, to the at least one queue element. Many alternatives, variations, and modifications are possible. | 05-15-2014 |
20140143367 | ROBUSTNESS IN A SCALABLE BLOCK STORAGE SYSTEM - A storage system that accomplishes both robustness and scalability. The storage system includes replicated region servers configured to handle computation involving blocks of data in a region. The storage system further includes storage nodes configured to store the blocks of data in the region, where each of the replicated region servers is associated with a particular storage node of the storage nodes. Each storage node is configured to validate that all of the replicated region servers are unanimous in updating the blocks of data in the region prior to updating the blocks of data in the region. In this manner, the storage system provides end-to-end correctness guarantees for read operations, strict ordering guarantees for write operations, and strong durability and availability guarantees despite a wide range of server failures (including memory corruptions, disk corruptions, etc.) and scales these guarantees to thousands of machines and tens of thousands of disks. | 05-22-2014 |
20140143368 | Distributed Symmetric Multiprocessing Computing Architecture - Example embodiments of the present invention include systems and methods for implementing a scalable symmetric multiprocessing (shared memory) computer architecture using a network of homogeneous multi-core servers. The level of processor and memory performance achieved is suitable for running applications that currently require cache coherent shared memory mainframes and supercomputers. The architecture combines new operating system extensions with a high-speed network that supports remote direct memory access to achieve an effective global distributed shared memory. A distributed thread model allows a process running in a head node to fork threads in other (worker) nodes that run in the same global address space. Thread synchronization is supported by a distributed mutex implementation. A transactional memory model allows a multi-threaded program to maintain global memory page consistency across the distributed architecture. A distributed file access implementation supports non-contentious file I/O for threads. These and other functions provide a symmetric multiprocessing programming model consistent with standards such as Portable Operating System Interface for Unix (POSIX). | 05-22-2014 |
20140143369 | METHOD AND SYSTEM FOR STORING AND READING DATA IN OR FROM A KEY VALUE STORAGE - A method and system for storing data in a key value storage having a plurality of n servers, wherein t | 05-22-2014 |
20140143370 | METHOD AND SYSTEM FOR INCREASING SPEED OF DOMAIN NAME SYSTEM RESOLUTION WITHIN A COMPUTING DEVICE - A system for resolving domain name system (DNS) queries contains a communication device for resolving DNS queries, wherein the communication device further contains a memory and a processor that is configured by the memory, a cache storage for use by the communication device, and a network of authoritative domain name servers. In the process of the communication device looking up a DNS request within the cache storage, if the communication device finds an expired DNS entry within the cache storage, the communication device continues the process of looking up the DNS request in the cache storage while, in parallel, sending out a concurrent DNS request to the authoritative domain name server to which the expired DNS entry belongs. | 05-22-2014 |
20140143371 | METHOD AND SYSTEM FOR CAPTURING AND MANAGING DATA RELATED TO HTTP TRANSACTIONS - A system and method for intercepting and storing information relating to communications with at least one device over a network are described. The system comprises: an interceptor configured to intercept at least some communications with the at least one device over a network; and a processing system. The processing system processes each intercepted communication to determine the type of content which is referenced by the intercepted communication. A storage action of a first type may be performed if the determined type of content satisfies a criterion or a storage action of a second type different to said first type may be performed if the determined type of content does not satisfy said criterion. | 05-22-2014 |
20140149528 | MPI COMMUNICATION OF GPU BUFFERS - A technique for enhancing the efficiency and speed of data transmission within and across multiple, separate computer systems includes the use of an MPI library/engine. The MPI library/engine is configured to facilitate the transfer of data directly from one location to another location within the same computer system and/or on separate computer systems via a network connection. Data stored in one GPU buffer may be transferred directly to another GPU buffer without having to move the data into and out of system memory or other intermediate send and receive buffers. | 05-29-2014 |
20140149529 | CLOUD-BASED NFC CONTENT SHARING - Systems, methods, devices, and computer programming products for NFC-enabled sharing of data files stored by networked computing resources, according to a variety of selectable criteria. | 05-29-2014 |
20140149530 | IN-BAND MANAGEMENT OF A NETWORK ATTACHED STORAGE ENVIRONMENT - An aspect includes a method for in-band management of a network attached storage environment. A client is connected via a standard network attached storage protocol to a network attached storage system using existing authorization and authentication procedures. Advanced management functions are exposed to the client via a special file system structure over the standard network attached storage protocol. The client uses existing standard network attached storage protocol functions on the special file system structure to retrieve and to invoke the advanced management functions. Result data are returned to the client using a feedback channel and the standard network attached storage protocol. | 05-29-2014 |
20140149531 | SYSTEM AND METHOD OF PROVIDING CONTENTS WITH TIC SERVER AND CDN - The present invention relates to a content providing system and method combining a transparent Internet cache (TIC) server and a content delivery network (CDN), and more particularly, to a content providing system and method that may overcome limitation on capacity of a cache storage found in a TIC service and may also overcome limitation on service transparency found in a CDN service. According to the present invention, it is possible to overcome limitation on capacity of a cache storage found in an existing TIC service and to overcome limitation on service transparency found in a CDN by combining a TIC server and the CDN. | 05-29-2014 |
20140149532 | METHOD OF PACKET TRANSMISSION FROM NODE AND CONTENT OWNER IN CONTENT-CENTRIC NETWORKING - A method of transmitting a content reply packet from a content owner in content-centric networking (CCN) includes determining a caching capability value threshold (CCVth) for determining a candidate node for caching a content based on a policy of the content owner, and transmitting a content reply packet including the content and the CCVth in response to a content request packet from a content requester. | 05-29-2014 |
20140149533 | DATA STORAGE BASED ON CONTENT POPULARITY - Methods, systems, and software for operating a data storage system of a content delivery node are provided herein. In one example, a method of operating a data storage system of a content delivery node is presented. The method includes receiving content data into a storage system, storing the content data in a first storage space, determining popular content data within the content data based on at least user requests for the content data, and storing the popular content data in a second storage space. | 05-29-2014 |
20140149534 | METHODS AND ARRANGEMENTS FOR CACHING STATIC INFORMATION FOR PACKET DATA APPLICATIONS IN WIRELESS COMMUNICATION SYSTEMS - The present invention relates to the caching of static information relating to a communication application executed in a user equipment in a wireless communication system. The method of the invention is applicable in the establishment of, or during, a communication session between the user equipment and a service application, via a proxy. The user equipment sends a start message, comprising a location indicator, to the proxy requesting to utilize a service application. The proxy accesses a caching node by the use of the location indicator and retrieves the static information. The static information has been cached in the caching node prior to the communication session. | 05-29-2014 |
20140156777 | DYNAMIC CACHING TECHNIQUE FOR ADAPTIVELY CONTROLLING DATA BLOCK COPIES IN A DISTRIBUTED DATA PROCESSING SYSTEM - A dynamic caching technique adaptively controls copies of data blocks stored within caches (“cached copies”) of a caching layer distributed among servers of a distributed data processing system. A cache coordinator of the distributed system implements the dynamic caching technique to increase the cached copies of the data blocks to improve processing performance of the servers. Alternatively, the technique may decrease the cached copies to reduce storage capacity of the servers. The technique may increase the cached copies when it detects local and/or remote cache bottleneck conditions at the servers, a data popularity condition at the servers, or a shared storage bottleneck condition at the storage system. Otherwise, the technique may decrease the cached copies at the servers. | 06-05-2014 |
20140156778 | MANAGING A DISTRIBUTED CACHE FOR VIRTUAL MACHINES - Clients may display desktop environments to provide users with access to virtual machines (VMs). Graphical objects that are displayed in the desktop environments are stored in caches in multiple clients. A host that hosts a VM may track or manage the graphical objects that are in the caches of the multiple clients. The host may instruct a first client to obtain a graphical object from a second client that is near the first client, instead of providing the graphical object to the first client directly. | 06-05-2014 |
20140156779 | DYNAMIC DETECTION AND REDUCTION OF UNALIGNED I/O OPERATIONS - Detection and reduction of unaligned input/output (“I/O”) requests is implemented by a storage server determining an alignment value for data stored by the server within a storage system on behalf of a first client, writing the alignment value to a portion of the volume that stores the data for the first client, but not to a portion of the volume that stores data for a second client, and changing a location of data within the portion of the volume that stores the data for the first client, but not a location of data in the portion of the volume that stores data for the second client, to an alignment corresponding to the alignment value. The alignment value is applied to I/O requests directed to the portion of the volume that stores the data blocks for the first client after the location of the data blocks has been changed. | 06-05-2014 |
20140164546 | Reducing Delay and Delay Variation in a Buffer in Network Communications - There are disclosed systems and methods for reducing the average delay and the average delay variation of network communication data in a buffer. The buffer comprises a plurality of memory entries, and associated with the buffer are a read pointer and a write pointer. The buffer has a depth defined as the number of memory entries in the buffer between the memory entry pointed to by the read pointer and the memory entry pointed to by the write pointer. In one embodiment, at least one of the read pointer and the write pointer is initially set to establish the depth of the buffer to be a first value. The variation of the depth of the buffer is then monitored for a predetermined period of time as network communication data flows through the buffer. The depth of the buffer is then reduced based upon this monitoring. | 06-12-2014 |
20140164547 | MANAGING CONTENT ON AN ISP CACHE - One embodiment of the present invention sets forth a method for updating content stored in a cache residing at an internet service provider (ISP) location that includes receiving popularity data associated with a first plurality of content assets, where the popularity data indicate the popularity of each content asset in the first plurality of content assets across a user base that spans multiple geographic regions, generating a manifest that includes a second plurality of content assets based on the popularity data and a geographic location associated with the cache, where each content asset included in the manifest is determined to be popular among users proximate to the geographic location or users with preferences similar to users proximate to the geographic location, and transmitting the manifest to the cache, where the cache is configured to update one or more content assets stored in the cache based on the manifest. | 06-12-2014 |
20140164548 | MANAGING DIRECT ATTACHED CACHE AND REMOTE SHARED CACHE - Managing direct attached cache and remote shared cache, including: receiving from an enclosure attached server, by an enclosure that includes enclosure cache, a request for data; determining, by the enclosure, whether the data has been requested by a predetermined number of enclosure attached servers; and responsive to determining that the data has been requested by a predetermined number of enclosure attached servers, marking, by the enclosure, the data as enclosure level cacheable. | 06-12-2014 |
20140164549 | MANAGING DIRECT ATTACHED CACHE AND REMOTE SHARED CACHE - Managing direct attached cache and remote shared cache, including: receiving from an enclosure attached server, by an enclosure that includes enclosure cache, a request for data; determining, by the enclosure, whether the data has been requested by a predetermined number of enclosure attached servers; and responsive to determining that the data has been requested by a predetermined number of enclosure attached servers, marking, by the enclosure, the data as enclosure level cacheable. | 06-12-2014 |
20140164550 | METHOD OF CONNECTING A HARDWARE MODULE TO A FIELDBUS - A method is described of connecting a hardware module to a fieldbus, wherein a data connection between the hardware module and the fieldbus is established by a network module which is connected to the fieldbus and which has an internal memory, said method comprising the steps that the hardware module is connected to the network module, that the communication software is read out of the memory of the network module by the hardware module, said software being provided for the communication of the hardware module with the fieldbus, that the communication software is stored in the hardware module, and that the hardware module is used to communicate over the fieldbus. | 06-12-2014 |
20140164551 | ENCODED DATA SLICE CACHING IN A DISTRIBUTED STORAGE NETWORK - A method begins by receiving a request to retrieve a data segment stored as encoded data slices in a distributed storage network (DSN). The method continues by determining whether at least the threshold number of encoded data slices is cached in temporary storage associated with a distributed storage processing module. When the at least the threshold number of encoded data slices are cached in the temporary storage, the method continues by retrieving the at least the threshold number of encoded data slices from the temporary storage. When the at least the threshold number of encoded data slices is not cached in the temporary storage, the method continues by retrieving one or more of the encoded data slices from the DSN to obtain the at least the threshold number of encoded data slices. | 06-12-2014 |
20140173017 | COMPUTER SYSTEM AND METHOD OF CONTROLLING COMPUTER SYSTEM - In order to reduce the amount of consumption of a back-end bandwidth in a storage apparatus, a computer system includes: a first storage device; and a second storage device that is coupled to the first controller through a first interface and is coupled to the second controller through a second interface. The first controller receives data from a host computer through a first communication channel; writes the received data into the first storage device; identifies part of the received data as first data, the part satisfying a preset particular condition; and writes a replica of the first data as second data into the second storage device. The second controller reads the second data from the second storage device in response to a Read request received from the host computer through a second communication channel; and transmits the second data to the host computer through the second communication channel. | 06-19-2014 |
20140173018 | Content Based Traffic Engineering in Software Defined Information Centric Networks - A method implemented by a network controller, the method comprising obtaining metadata of a content, wherein the content is requested by a client device, allocating one or more network resources to the content based on the metadata of the content, and sending a message identifying the allocated network resources to a switch to direct the content to be served to the client device, wherein the switch is controlled by the network controller and configured to forward the content to the client device using the allocated network resources. | 06-19-2014 |
20140173019 | METHODS AND DEVICES FOR DATA TRANSFER - The present application discloses methods and devices for data transfer and particularly data transfer between mobile terminals and a display device. The display device may connect to a uniquely identified server based on a device identifier corresponding to the display device. In addition, the display device may connect to the server through a default connection setup embedded in the device identifier. Mobile terminals may be searched and identified by terminal identifiers so that the display device may establish communication channels with the mobile terminals. After adding the mobile terminals to the contact lists of the display device, different display regions of the display device may be designated to the mobile terminals so that the regions may display the digital contents sent from the mobile terminals to the display device. In addition, the regions may be further selected to display the digital contents in more detail. | 06-19-2014 |
20140181232 | DISTRIBUTED QUEUE PAIR STATE ON A HOST CHANNEL ADAPTER - A method for managing a distributed cache of a host channel adapter (HCA) that includes receiving a work request including a QP number, determining that a QP state identified by the QP number is not in the distributed cache, retrieving the QP state from main memory, and identifying a first portion and a second portion of the QP state. The method further includes storing the first portion into a first entry of a first sub-cache block associated with the first module, where the first entry is identified by a QP index number, storing the second portion into a second entry of a second sub-cache block associated with the second module, where the second entry is identified by the QP index number; and returning the QP index number of the QP state to the first module and the second module. | 06-26-2014 |
20140181233 | SYSTEM, MESSAGING BROKER AND METHOD FOR MANAGING COMMUNICATION BETWEEN OPEN SERVICES GATEWAY INITIATIVE (OSGI) ENVIRONMENTS - Certain example embodiments relate to techniques for managing communication between a plurality of Open Services Gateway initiative (OSGi) environments. A system includes a messaging broker configured to receive a message from one of the OSGi environments, with the message including a call of a service provided by one of the other OSGi environments. The broker may be further configured to transfer the message to the other OSGi environment. The plural OSGi environments communicate only via the messaging broker. | 06-26-2014 |
20140181234 | DATA STORAGE METHOD, DATA STORAGE SYSTEM AND REQUESTING NODE USING THE SAME - The present disclosure provides a data storage method, a data storage system and a requesting node. The data storage method includes the following steps. A register identifier and a register time are written into a target data table. The target data table is read to look for a register time record, such that an access right of the storage node is determined. A requesting node having the access right computes result data, and writes a usage identifier and the result data in the target data table. The target data table is read to judge the validity of the result data. | 06-26-2014 |
20140181235 | SEPARATION OF DATA AND CONTROL IN A SWITCHING DEVICE - A method and apparatus for switching a data packet between a source and destination in a network. The data packet includes a header portion and a data portion. The header portion includes routing information for the data packet. The method includes defining a data path in the router comprising a path through the router along which the data portion of the data packet travels and defining a control path comprising a path through the router along which routing information from the header portion travels. The method includes separating the data path and control path in the router such that the routing information can be separated from the data portion allowing for the separate processing of each in the router. The data portion can be stored in a global memory while routing decisions are made on the routing information in the control path. | 06-26-2014 |
20140189034 | Adaptive, Personal Localized Cache Control Server - A server may be configured to receive an indication that a first user device stores a particular content item; receive, from a second user device, a request for content; and determine that the requested content is available from the first user device. The determining may include determining that the particular content item stored by the first user device corresponds to the request for content, and determining that a local peer connection is available between the first user device and the second user device. The server may further output, to the first user device, an instruction to output the requested content to the second user device via the local peer connection, and/or the server may output, to the second user device, information which may allow the second user device to request the content from the first user device via a local peer connection. | 07-03-2014 |
20140189035 | Virtual Desktop Infrastructure (VDI) Login Acceleration - The time required to login to a remote or virtual desktop can be reduced by caching image data in a persistent memory location in-between remote desktop sessions. For instance, image data related to an image displayed on a client device during a first virtual desktop session may be cached after terminating the first virtual desktop session. The cached data can then be used to display the same image, or a correlated image, on the client device during a subsequent remote desktop session, thereby avoiding the need to re-transport the image data over a network. In a similar manner, cached image data can be shared between multiple users sharing a common local area network (LAN) in order to improve collective virtual desktop performance. | 07-03-2014 |
20140189036 | OPPORTUNISTIC DELIVERY OF CONTENT TO USER DEVICES WITH RATE ADJUSTMENT BASED ON MONITORED CONDITIONS - At least one processing device of a communication network is configured to implement a content delivery system. The content delivery system in one embodiment is configured to identify a set of user devices to receive content in a scheduling interval, to initiate delivery of the content to the set of user devices at respective delivery rates for a first portion of the scheduling interval, to monitor conditions associated with delivery of the content to the set of user devices, and to adjust a delivery rate of at least one of the user devices in the set for a second portion of the scheduling interval based at least in part on the monitored conditions. The monitored conditions may comprise, for example, buffer occupancy and channel quality for each of the user devices. The identifying, initiating, monitoring and adjusting are repeated for each of a plurality of additional scheduling intervals. | 07-03-2014 |
20140189037 | SYSTEMS AND METHODS FOR PREDICTIVE CACHING OF DIGITAL CONTENT - A system for predictively caching digital content in which the system is configured to: (1) receive, from a user of a client device, a request to access at least one particular digital file stored on a remote server; (2) select at least one other digital file to cache locally on the client device based on at least one file-accessing tendency of the user; (3) download the at least one other digital file from the remote server to the client device; and (4) save the downloaded digital file to memory associated with the client device for later access by the user. A file-accessing tendency of the user may include the manner in which the user typically scrolls or otherwise cycles through images or other files. The system may determine the user's file-accessing tendencies based on, for example, the user's location, native language, past content-accessing practices, and/or specified user preferences. | 07-03-2014 |
20140189038 | INTERMEDIATE SERVER, COMMUNICATION APPARATUS AND COMPUTER PROGRAM - There is provided an intermediate server for uploading data from a communication apparatus to a data storage server. While a current target file group including a current document file and some of plural image files to be uploaded is stored in the data storage server, the intermediate server receives an upload command for instructing an upload of a first image file to the data storage server, the current document file including text data for respectively specifying the image files stored in the data storage server. When the upload command is received, the intermediate server uploads the first image file, thereby changing the current target file group into a changed target file group which includes: a changed document file; and the first image file, the changed document file being acquired by adding first text data for specifying the first image file to the current document file. | 07-03-2014 |
20140189039 | System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols Within a Cluster of Data Processing Nodes - A data processing node includes a management environment, an application environment, and a shared memory segment (SMS). The management environment includes at least one management services daemon (MSD) running on one or more dedicated management processors thereof. One or more application protocols are executed by the at least one MSD on at least one of the dedicated management processors. The management environment has a management interface daemon (MID) running on one or more application central processing unit (CPU) processors thereof. The SMS is accessible by the at least one MSD and the MID for enabling communication of information of the one or more application protocols to be provided between the at least one MSD and the MID. The MID provides at least one of management service to processes running within the application environment and local resource access to one or more processes running on another data processing node. | 07-03-2014 |
20140189040 | Stream-based data deduplication with cache synchronization - Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers. | 07-03-2014 |
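Wait — correction: anchoring on the stream-based deduplication entry above. The on-demand dictionary "backfill" it describes can be illustrated with a short sketch. All names here (`encode`, `DecodingPeer`) are hypothetical, and the fixed-size chunking, SHA-256 fingerprinting, and fetch callback are simplified stand-ins for whatever the claims actually cover:

```python
import hashlib

def encode(data, chunk_size=8):
    """Sender side: split data into chunks and emit one fingerprint per chunk.

    Also returns a chunk store standing in for the origin/CDN that a
    decoding peer can fetch from when its own dictionary misses.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    store = {hashlib.sha256(c).hexdigest(): c for c in chunks}
    return [hashlib.sha256(c).hexdigest() for c in chunks], store

class DecodingPeer:
    """Receiver whose chunk dictionary is backfilled on demand."""

    def __init__(self, fetch_from_origin):
        self.cache = {}                  # local dictionary, may be out of sync
        self.fetch = fetch_from_origin   # CDN-style retrieval of missing chunks
        self.misses = 0

    def decode(self, fingerprints):
        out = []
        for fp in fingerprints:
            if fp not in self.cache:             # dictionary out of sync
                self.cache[fp] = self.fetch(fp)  # backfill on the fly
                self.misses += 1
            out.append(self.cache[fp])
        return b"".join(out)
```

A second transfer of the same stream decodes entirely from the local dictionary with no further origin fetches, which is the bandwidth saving the abstract describes, and no symmetric library ever has to be negotiated up front.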
20140189041 | ROBUST LIVE MIGRATION USING SHARED FILESYSTEM - A method for transferring guest physical memory from a source host to a destination host during live migration of a virtual machine (VM) involves (a) transmitting pages of the guest physical memory from the source host to the destination host over a network connection, (b) transferring state information from the source host to the destination host, (c) while performance benefits regarding continued access to the guest physical memory on the source host persist, using the transferred state information to run the VM on the destination host in place of running the VM on the source host, and (d) while the VM is running on the destination host, writing pages of the guest physical memory from the source host to a shared datastore such that the destination host can retrieve the written guest physical pages from the shared datastore. | 07-03-2014 |
20140189042 | METHOD FOR CAPTURING VIDEO RELATED CONTENT - A method is provided for publishing content relating to a video being displayed on a first device, using a control device in communication with the first device via a wired or wireless connection. On the control device side, the method comprises the steps of sending a capture command to the first device for generating at least one picture or video clip from video data cached by the first device; receiving the at least one picture or video clip from the first device; presenting the at least one picture or video clip for the user to choose the content to publish; and sending the content for publication to a destination. | 07-03-2014 |
20140195632 | IMMUTABLE SHARABLE ZERO-COPY DATA AND STREAMING - The environment and use of an immutable buffer. A computing entity acquires data or generates data and populates the data into the buffer, after which the buffer is classified as immutable. The classification protects the data populated within the immutable buffer from changing during the lifetime of the immutable buffer, and also protects the immutable buffer from having its physical address changed during the lifetime of the immutable buffer. As different computing entities consume data from the immutable buffer, they do so through views provided by a view providing entity. The immutable buffer architecture may also be used for streaming data in which each component of the streaming data uses an immutable buffer. Accordingly, different computing entities may view the immutable data differently without having to actually copy the data. | 07-10-2014 |
20140195633 | SYSTEM AND METHOD FOR REMOVABLE DATA STORAGE ELEMENTS PROVIDED AS CLOUD BASED STORAGE SYSTEM - Provided is a system and method for providing removable data storage elements as a cloud based storage system. More specifically, the method achieves this for at least one embodiment by receiving at least one generally random stream of data objects, each data object having at least one identifiable element. The method directs the selection of at least a first identifiable element. The method then orders the stream of data objects against the first identifiable element and disposes the data objects upon at least one of the removable data storage elements in accordance with the ordered stream of data objects. A system for performing the method is also disclosed. | 07-10-2014 |
20140195634 | System and Method for Multiservice Input/Output - An apparatus for multiservice input/output switching includes a plurality of logical storage endpoints coupled to a plurality of remote servers via native input/output bus, a plurality of downstream ports coupled to a plurality of persistent storage drives, a storage transaction switch, and at least one processor configured to communicate with the plurality of remote servers and the plurality of persistent storage drives. The storage transaction switch translates received storage transaction using configured mappings from the server view to the physical view of persistent storage drives. Optionally, a network switch is integrated in the apparatus. Additionally, corresponding methods and computer readable medium embodiments are disclosed. | 07-10-2014 |
20140195635 | METHOD AND SYSTEM FOR REALIZING REST INTERFACE OF CLOUD CACHE IN NGINX - The present invention provides a method for realizing a REST (Representational State Transfer) interface of a cloud cache in Nginx. The method includes: receiving a Hyper Text Transport Protocol (HTTP) message transmitted from a client, and parsing out key information and an operation type corresponding to the HTTP message according to a preset configuration file; converting the key information and the operation type into a parameter required by a cloud cache operation and a cloud cache operation type, and transmitting the parameter to a cloud cache apparatus, so that the cloud cache apparatus performs the cloud cache operation according to the parameter and the cloud cache operation type; receiving a cloud cache operation result returned by the cloud cache apparatus, processing the cloud cache operation result, and returning the processed result to the client. The present invention also provides a corresponding system. The present invention enables a larger cache capacity and better saves CPU resources, thereby enabling a more powerful cache function of Nginx. | 07-10-2014 |
20140201307 | CACHING OF LOOK-UP RULES BASED ON FLOW HEURISTICS TO ENABLE HIGH SPEED LOOK-UP - According to one embodiment, a system includes a plurality of ports adapted for connecting to external devices and a switching processor. The switching processor includes a packet processor which includes a look-up interface, fetch and refresh logic (LIFRL) module and a packet processor logic (PPL) module adapted to operate in parallel, an internal look-up table cache including a plurality of look-up entries, each relating to a traffic flow which has been or is anticipated to be received by the switching processor, and a traffic manager module including a buffer memory which is connected to the plurality of ports. The LIFRL module is adapted for accessing the internal look-up table cache, the PPL module is adapted for communicating with the traffic manager module and the buffer memory, and the LIFRL module is adapted for communicating with one or more external look-up tables. | 07-17-2014 |
20140201308 | METHOD FOR OPTIMIZING WAN TRAFFIC - A local stream store of a local proxy caches one or more streams of data transmitted over the WAN to a remote proxy, where each stream is stored in a continuous manner and identified by a unique stream identifier (ID). In response to a flow of data received from a client, the local proxy examines the flow of data to determine whether at least a portion of the flow has been previously transmitted to the remote proxy via one of the streams currently stored in the local stream store. If the portion of the flow has been previously transmitted to the remote proxy, the local proxy transmits a first message to the remote proxy without sending actual content of the portion of the flow to indicate that the portion of the flow has been transmitted in one of the streams previously transmitted to the remote proxy. | 07-17-2014 |
20140201309 | Network Overlay System and Method Using Offload Processors - A method for providing network overlay services capable of processing network packets having associated packet metadata is disclosed. The method can include writing packets to a specific memory location accessible by at least one offload processor, with packets transported using a memory bus having a defined memory transport protocol, modifying packet metadata of the packets written to the specific memory location with the at least one offload processor, without requiring modification of the packets by a host processor, and sending the modified packets to the memory bus. | 07-17-2014 |
20140201310 | Network Overlay System and Method Using Offload Processors - A memory bus connected module for providing network overlay services is disclosed. The module can include a memory bus connection, multiple offload processors coupled to the memory bus connection, each offload processor configured to convert incoming packets having a first network protocol to outgoing packets having a second network protocol, and control logic connected to the multiple offload processors for determining order of packet conversion by respective task execution of the multiple offload processors. | 07-17-2014 |
20140201311 | CACHE-INDUCED OPPORTUNISTIC MIMO COOPERATION FOR WIRELESS NETWORKS - Cooperative caching systems incorporating Plug-and-Play base stations are described herein. Plug-and-Play base stations with large caching capacities are employed in a wireless network to perform cooperative transmission with macro base stations. Each Plug-and-Play base station can either have wireless backhaul or a low-cost wired backhaul connection to the macro base stations. Cooperative caching systems can direct traffic between the Plug-and-Play base stations and the macro base stations. | 07-17-2014 |
20140201312 | VIRTUALIZED DATA STORAGE IN A NETWORK COMPUTING ENVIRONMENT - Methods and systems for load balancing read/write requests of a virtualized storage system. In one embodiment, a storage system includes a plurality of physical storage devices and a storage module operable within a communication network to present the plurality of physical storage devices as a virtual storage device to a plurality of network computing elements that are coupled to the communication network. The virtual storage device comprises a plurality of virtual storage volumes, wherein each virtual storage volume is communicatively coupled to the physical storage devices via the storage module. The storage module comprises maps that are used to route read/write requests from the network computing elements to the virtual storage volumes. Each map links read/write requests from at least one network computing element to a respective virtual storage volume within the virtual storage device. | 07-17-2014 |
20140207897 | DATA TRANSFER APPARATUS AND DATA TRANSFER METHOD - A data transfer apparatus includes a first memory, a second memory, a search unit, and a data transmitting/receiving unit. The first memory holds information that associates a search key with an address. The second memory holds information that associates the address with verification information which is generated by a predetermined generation method based on at least a portion of the search key. The search unit generates the search key based on the received data, obtains, from the first memory, the address that is associated with the generated search key, obtains, from the second memory, the verification information that is associated with the obtained address, and verifies the verification information that is generated by the predetermined generation method based on at least a portion of the generated search key with the verification information obtained from the second memory. The data transmitting/receiving unit executes processing based on a result of the verification. | 07-24-2014 |
20140207898 | WRITE OPERATION DISPERSED STORAGE NETWORK FRAME - A method begins by generating a set of write request frames regarding a write request operation for a set of encoded data. Each of the write request frames includes a payload section and a protocol header. The payload section includes a transaction number field and a data payload section, which includes a name field, a revision number field, a length field, and a payload field. The protocol header includes a payload length field and an operation code field to indicate the write request operation. The method continues by outputting the set of write request frames to storage units of a dispersed storage network. | 07-24-2014 |
20140207899 | LIST DIGEST OPERATION DISPERSED STORAGE NETWORK FRAME - A method begins by generating a plurality of list digest request frames. Each list digest request frame includes a payload section and a protocol header. The payload section includes a start slice name field, an end slice name field, and a response count field. The protocol header includes a payload length field and an operation code field to indicate the list digest request operation. The method continues by outputting the list digest request frames to storage units of a dispersed storage network. | 07-24-2014 |
20140215000 | SYSTEM AND METHOD FOR DYNAMIC CACHING - A system and method of managing cache units includes providing, by a first cache unit, caching services to a first plurality of clients, collecting information associated with a usage of the first cache unit by the first plurality of clients, determining a similarity in cache usage between every pair of clients selected from the first plurality of clients based on information associated with the collected information, selecting a second plurality of clients from the first plurality of clients based on information associated with the determined similarity in cache usage, replicating the first cache unit to create a second and a third cache unit, providing, by the second cache unit, caching services to the second plurality of clients, and providing, by the third cache unit, caching services to one or more third clients selected from the first plurality of clients, each of the third clients not being in the second plurality of clients. | 07-31-2014 |
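The "similarity in cache usage between every pair of clients" step in the dynamic-caching abstract above is not pinned to a specific metric; one plausible sketch uses Jaccard similarity over the sets of cache keys each client touches, then splits clients into a group that shares one replica and a remainder served by the other. Both function names and the threshold are hypothetical:

```python
def jaccard(a, b):
    """Similarity of two clients' cache usage: overlap of the key sets touched."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def split_clients(usage, threshold=0.5):
    """Partition clients into those similar to a seed client (who would share
    one replicated cache unit) and everyone else (served by the other replica).

    usage: mapping of client name -> set of cache keys that client accessed.
    """
    clients = sorted(usage)
    seed = usage[clients[0]]  # simplistic seed choice, for illustration only
    similar = {c for c in clients if jaccard(seed, usage[c]) >= threshold}
    return similar, set(clients) - similar
```

A production system would cluster on all pairwise similarities rather than a single seed, but the sketch shows why similar clients benefit from sharing a replica: their working sets overlap, so one copy of the hot keys serves both.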
20140215001 | REDUCING BANDWIDTH USAGE OF A MOBILE CLIENT - Systems and methods of reducing bandwidth usage of a mobile client are disclosed. An example method may include caching a first version of network content in a mobile client. The method may also include comparing the cached content with a second version of the network content. The method may also include generating a recipe to construct the second version of the network content from the cached network content based on a result of the comparing. The method may also include sending the recipe to the mobile client. | 07-31-2014 |
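A "recipe" of the kind the bandwidth-reduction abstract above describes can be sketched as a copy/insert edit script computed server-side and replayed client-side. This is illustrative only: `make_recipe` and `apply_recipe` are hypothetical names, and Python's `difflib` stands in for whatever differencing the claims cover:

```python
from difflib import SequenceMatcher

def make_recipe(cached, latest):
    """Server side: describe the latest version in terms of the cached one."""
    recipe = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, cached, latest).get_opcodes():
        if op == "equal":
            recipe.append(("copy", i1, i2))           # reuse cached bytes
        else:
            recipe.append(("insert", latest[j1:j2]))  # ship only the new bytes
    return recipe

def apply_recipe(cached, recipe):
    """Client side: rebuild the latest version from the cache plus the recipe."""
    out = []
    for step in recipe:
        if step[0] == "copy":
            out.append(cached[step[1]:step[2]])
        else:
            out.append(step[1])
    return "".join(out)
```

When consecutive versions of a page share most of their markup, the bytes actually transmitted (the `insert` payloads) are far smaller than the full page, which is the point of sending a recipe instead of the content.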
20140215002 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR STORING COMMUNICATION SESSION DATA AT A NETWORK INTERFACE MODULE - The subject matter described herein includes methods, systems, and computer program products for storing communication session information at a network interface module. One method described herein includes receiving a plurality of RTCP packets associated with a communication session at a network interface module. RTCP information is extracted from at least one of the packets. The extracted RTCP information is stored in memory local to the network interface module. | 07-31-2014 |
20140215003 | DATA PROCESSING METHOD, DISTRIBUTED PROCESSING SYSTEM, AND PROGRAM - A storage device stores results of first data processing previously performed. A splitting unit splits, with reference to the storage device, data into a first segment for which the results stored in the storage device are usable and a plurality of second segments for which the results stored in the storage device are not usable. A control unit assigns the plurality of second segments to a plurality of nodes, and uses the plurality of nodes in parallel to perform the first data processing on the plurality of second segments. A control unit exercises control so as to perform second data processing on a previous result corresponding to the first segment, which is stored in the storage device, and results obtained from the plurality of second segments using the plurality of nodes. | 07-31-2014 |
20140222946 | SELECTIVE WARM UP AND WIND DOWN STRATEGIES IN A CONTENT DELIVERY FRAMEWORK - Services in a content delivery framework include selective warm-up and wind-down strategies. The warm up strategies include (i) obtaining and preloading a global configuration object; (ii) obtaining and preloading at least some customer data; and (iii) pre-fetching content. The wind-down strategies include stopping acceptance of requests; flushing a cache; and finishing current processing of said particular service. | 08-07-2014 |
20140222947 | METHOD AND APPARATUS FOR BROWSING WEBPAGES, AND STORAGE MEDIUM - In an example, a method for browsing webpages includes: when a browser of a mobile terminal is closed, saving webpage contents in a memory of the mobile terminal into non-transitory storage of the mobile terminal; and when the browser is started or run again, reading the webpage contents that were saved in the non-transitory storage of the mobile terminal when the browser was last closed, and loading and displaying the webpage contents for a user. | 08-07-2014 |
20140222948 | SENDER-SIDE CONTENT TRANSMISSION METHOD AND INFORMATION TRANSMISSION SYSTEM - The user side is provided with a scanner and a transmitter, and the T-center side is provided with a T-code decoder and a dispatch information generator. The user side transmits T-code-inserted image data, while the T-center generates order information (sender name, recipient name, sending method, address (recipient, sender), T-code-inserted image data, URL of the supermarket, etc.) from the user based on the decoding result of the image data, and the order information (simply, dispatch information) is transmitted to the supermarket via a communication network. | 08-07-2014 |
20140237065 | System, Method, and Computer Program Product for Server Side Processing in a Mobile Device Environment - Described herein are systems, methods, computer program products, and combinations and sub-combinations thereof, for enabling web content (as well as other objects) to be loaded on mobile devices (as well as other types of devices), and for users of mobile devices to operate with such web content on their mobile devices in an interactive manner while in an off-line mode. | 08-21-2014 |
20140237066 | SYSTEMS AND METHODS THERETO FOR ACCELERATION OF WEB PAGES ACCESS USING NEXT PAGE OPTIMIZATION, CACHING AND PRE-FETCHING TECHNIQUES - A method and system for acceleration of access to a web page using next page optimization, caching and pre-fetching techniques. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements of the web page access; generating a modified web page of the received web page using at least one of a plurality of pre-fetching techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from execution of the at least one of a plurality of pre-fetching techniques; and storing the modified web page for use responsive to future user requests. | 08-21-2014 |
20140237067 | CONNECTION CACHE METHOD AND SYSTEM - A method, apparatus and computer program product for maintaining a connection cache at an intermediate server, the connection cache relating to resource requests from a plurality of devices to a plurality of servers remote therefrom. The method comprises monitoring resource requests addressed to a plurality of said remote servers during a first time period; generating statistics data on the basis of the monitored resource requests; establishing a plurality of connections from the intermediate server to a subset of the plurality of remote servers, said subset being determined on the basis of the generated statistics data; and storing data indicative of the plurality of established connections in a connection cache. Caching of connections in this manner ensures efficient use of proxy server resources by only caching connections to "popular" remote servers. | 08-21-2014 |
20140237068 | METHOD, SYSTEM AND SERVER OF REMOVING A DISTRIBUTED CACHING OBJECT - The present disclosure discloses a method, a system and a server of removing a distributed caching object. In one embodiment, the method receives a removal request, where the removal request includes an identifier of an object. The method may further apply consistent Hashing to the identifier of the object to obtain a Hash result value of the identifier, locates a corresponding cache server based on the Hash result value and renders the corresponding cache server to be a present cache server. In some embodiments, the method determines whether the present cache server is in an active status and has an active period greater than an expiration period associated with the object. Additionally, in response to determining that the present cache server is in an active status and has an active period greater than the expiration period associated with the object, the method removes the object from the present cache server. By comparing an active period of a located cache server with an expiration period associated with an object, the exemplary embodiments precisely locate a cache server that includes the object to be removed and perform a removal operation, thus saving the other cache servers from wasting resources to perform removal operations and hence improving the overall performance of the distributed cache system. | 08-21-2014 |
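The "consistent Hashing" locate step in the distributed-cache-removal abstract above is a well-known construction and can be sketched directly. The class and server names are hypothetical, and the virtual-node count is an illustrative default, not anything from the claims:

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Locates the cache server responsible for an object identifier."""

    def __init__(self, servers, vnodes=64):
        pairs = []
        for server in servers:
            for i in range(vnodes):  # virtual nodes smooth out the key spread
                pairs.append((self._hash(f"{server}#{i}"), server))
        pairs.sort()
        self._points = [p for p, _ in pairs]
        self._servers = [s for _, s in pairs]

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def locate(self, object_id):
        """The first ring point clockwise of the object's hash owns the object."""
        idx = bisect_right(self._points, self._hash(object_id)) % len(self._points)
        return self._servers[idx]
```

Because placement is deterministic, a removal request for a given identifier always resolves to the same cache server, so the removal can be sent to that one server instead of being broadcast, which is exactly the resource saving the abstract claims.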
20140244778 | ACCELERATED NETWORK DELIVERY OF CHANNELIZED CONTENT - An accelerated delivery system for network content comprises local content storage and an associated local network appliance deployed proximate to at least one, and in some embodiments many, consumer devices. The local network appliance communicates with the consumer devices, and also communicates over the internet with original content servers and, importantly, a central processing cloud, to maintain a store of content that consumers are predicted to want to download. | 08-28-2014 |
20140244779 | Efficient Longest Prefix Matching Techniques for Network Devices - A network address associated with a packet is obtained at a search engine of a network device. The search engine includes a plurality of Bloom filters that represent prefixes of respective lengths in the routing table. Respective Bloom filters are applied to respective prefixes of the network address to determine a set of one or more prefixes for which a match potentially exists in the routing table. A number of accesses to the memory are performed using prefixes in the set of prefixes, beginning with a longest prefix and continuing in decreasing order of prefix lengths until a matching entry is found in the routing table, and routing information for the packet is retrieved. If the number of performed memory accesses exceeds a threshold, the routing table is adapted to reduce a number of memory accesses to be performed for subsequent packets associated with the network address. | 08-28-2014 |
20140244780 | Secure Archive - Storage apparatus ( | 08-28-2014 |
20140258439 | SHARED CLIENT CACHING - According to some embodiments, a method and apparatus are provided to determine if a requested resource is cached at a first client or at a server based on a received list. In a case that the requested resource is determined to be cached at the first client, a request is sent to the first client for the cached resource. Else, a request is sent to the server for the cached resource. The cached resource is received. | 09-11-2014 |
20140258440 | CONTENT DELIVERY NETWORK CACHE GROUPING - A content delivery network (CDN) that delivers content objects for others is disclosed. End user computers are directed to an edge server for delivery of a requested content object by a universal resource indicator (URI). When an edge server does not have a copy of the content object from the URI, information is successively passed to ancestor servers within a hierarchy until the content object is found. There can be different hierarchies designated for different URIs or times at which requests are received. Once the content object is located in the hierarchical chain, the content object is passed back down the chain to the edge server for delivery. | 09-11-2014 |
20140280667 | SCALABLE DATA TRANSFER IN AND OUT OF ANALYTICS CLUSTERS - Embodiments of the invention relate to analytics clusters and to efficiently supporting read and write requests in the cluster. In one aspect, one or more compute nodes within a region of the cluster are designated to support the request, and based upon the designation, the request is directly communicated between a requesting agent external to the cluster and the supporting compute node(s). The direct communication mitigates the functionality of the head node(s) supporting the compute node(s). | 09-18-2014 |
20140280668 | METHODS AND SYSTEMS FOR PROVIDING RESOURCES FOR CLOUD STORAGE - Methods and apparatus for providing resources for cloud storage may include accessing physical storage capacity on a device, connected to a network cloud, including a virtual primary storage disk and at least one virtual secondary storage disk having access to the physical storage capacity. In addition, the methods and apparatus may include dynamically updating the available storage capacity of the virtual secondary storage disk for network cloud storage based upon usage of the physical storage capacity by the virtual primary storage disk and the virtual secondary storage disk. | 09-18-2014 |
20140280669 | Memory Sharing Over A Network - Memory is shared among physically distinct, networked computing devices. Each computing device comprises a Remote Memory Interface (RMI) accepting commands from locally executing processes and translating such commands into forms transmittable to a remote computing device. The RMI also accepts remote communications directed to it and translates those into commands directed to local memory. The amount of storage capacity shared is informed by a centralized controller, either a single controller, a hierarchical collection of controllers, or a peer-to-peer negotiation. Requests that are directed to remote high-speed non-volatile storage media are detected or flagged and the process generating the request is suspended such that it can be efficiently revived. The storage capacity provided by remote memory is mapped into the process space of processes executing locally. | 09-18-2014 |
20140280670 | MANAGEMENT MODULE FOR STORAGE DEVICE - The present invention discloses a management module for a storage device. The management module comprises a primary server and a secondary server. Each server comprises a network port configured to interface the server and a telecommunication network and a virtual bridge configured to selectively enable or disable data transfer to and from the network port. The virtual bridges of the primary and secondary servers are linked for enabling data transfer between said virtual bridges, the virtual bridge of the primary server is configured to disable data transfer while the virtual bridge of the secondary server is configured to enable data transfer and the virtual bridge of the primary server is further configured to maintain an IP address of the management module. | 09-18-2014 |
20140280671 | Communication Protocol - The invention relates to a specification for an internet enabled device or application, the specification comprising one or more functional interfaces defining attributes or operating characteristics of said device or application, wherein said specification defines the overall capabilities of said device or application. The invention also relates to a functional interface which defines attributes or operating characteristics of said device or application, and to a central storage repository for use in a network, wherein said central storage repository stores a specification for each device and/or application and/or at least one server, the specification comprising one or more functional interfaces, and said central repository being easily accessible. The invention further relates to a method of enabling communication between devices and/or applications and/or a server within a network, the network comprising at least one client device and/or client application and at least one server. | 09-18-2014 |
20140280672 | Systems and Methods for Managing Communication Between Devices in an Electrical Power System - Systems and methods for managing communication between devices in an electric power generation and delivery system are disclosed. In certain embodiments, a method for managing communication between devices may include receiving a message including an identifier via a communications interface. In certain embodiments, the identifier may identify a particular publishing device. A determination may be made whether the message is a most recently received message associated with the identifier. If the message is the most recently received message, the message may be stored in a message buffer associated with the identifier, and transmitted from a device using a suitable queuing methodology. | 09-18-2014 |
20140280673 | SYSTEMS AND METHODS FOR COMMUNICATING DATA STATE CHANGE INFORMATION BETWEEN DEVICES IN AN ELECTRICAL POWER SYSTEM - Systems and methods are presented for managing communication between devices in an electric power generation and delivery system. In certain embodiments, a method for managing communication messages performed by a network device included in an electric power generation and delivery system may include receiving a message including an identifier and data state information via a communications interface. A determination may be made that the message represents a data state change associated with the identifier. The message may be stored in a message buffer associated with the identifier. Finally, the stored message may be transmitted from the message buffer to an intelligent electronic device. | 09-18-2014 |
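The buffering scheme in the two entries above (keep only the most recently received message per publishing-device identifier, then transmit it onward) can be sketched as a minimal Python class. This is an illustrative sketch only, not the patented implementation; the identifier strings and sequence-number field are assumptions.

```python
class LatestMessageBuffer:
    """Per-identifier message buffer: keeps only the most recently
    received message for each publishing-device identifier."""

    def __init__(self):
        self.buffers = {}  # identifier -> (sequence_number, message)

    def receive(self, identifier, seq, message):
        # Store only if this is the most recently received message
        # (a newer sequence number) for this identifier.
        slot = self.buffers.get(identifier)
        if slot is None or seq > slot[0]:
            self.buffers[identifier] = (seq, message)

    def transmit(self, identifier):
        # Drain and return the buffered message for an identifier,
        # e.g. to forward it to an intelligent electronic device.
        slot = self.buffers.pop(identifier, None)
        return None if slot is None else slot[1]
```

A late-arriving message with an older sequence number is simply dropped, which is the "most recently received" test the abstracts describe.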
20140280674 | LOW-LATENCY PACKET RECEIVE METHOD FOR NETWORKING DEVICES - When interfacing with a host, a networking device can handle a first data like Bulk Data Receive. The networking device can receive the first data and read a first queue entry from a receive queue in the host memory. In response to the read first queue entry, the networking device can write the first data to an unpinned memory in the host memory. The networking device can also handle a second data with a Receive Packet in Ring (RPIR) queue. The networking device can receive the second data and write the second data to a pinned memory in the host memory. The RPIR queue can be separate from or overlaid on the receive queue. High throughput and low-latency operation can be achieved. The use of a RPIR queue can facilitate the efficiency of resource utilization in the reception of data messages. | 09-18-2014 |
20140280675 | GENERATION OF PATHS THROUGH GRAPH-BASED DATA REPRESENTATION - Embodiments of the invention generally provide a method, a computing system, and a computer-readable medium configured to generate requests for payload data through a graph-based data representation. The computer-implemented method includes generating a first request for translation that specifies a first path configured to identify first payload data associated with a graph object. The computer-implemented method further includes transmitting the first request to a path evaluator for translation. The computer-implemented method also includes receiving a first translated path based on the first path and including an initial translated portion and a final untranslated portion, from the path evaluator. The computer-implemented method further includes receiving the first payload data from the path evaluator. | 09-18-2014 |
20140280676 | SYSTEM AND METHOD FOR INTERACTIVE SPATIO-TEMPORAL STREAMING DATA - System and method for providing a probabilistic order of tiles relative to a current section of a video that a user is viewing. A cache implementation uses this ordering to decide what tiles to evict from the cache, i.e. which tiles will probably not be accessed within a particular timeframe, but not when to evict (this is up to the cache implementation). A cache implementation can also use the prioritized list of the present embodiment to pre-fetch tiles. | 09-18-2014 |
20140280677 | TWO-FILE PRELOADING FOR BROWSER-BASED WEB ACCELERATION - A system and a method for accelerating delivery of a webpage by using a preloader file during a delay in fetching the web file are disclosed. When an end user makes a request through a client computer for a webpage, a Content Delivery Network (CDN) server sends the client a preloader file. The preloader file contains requests for resources that are likely to be part of the web file. The client downloads the resources, and the resources are saved in a browser cache. The preloader file also directs the client to request the webpage again. While the client is downloading the resources, the CDN server requests the web file from an origin server. The origin server composes the webpage and delivers the webpage to the CDN server. When the client makes a second request for the web file, the CDN server delivers the web file to the client. When the client renders the web file to display the webpage, the client can retrieve the resources from the browser cache. | 09-18-2014 |
20140280678 | COLLECTING AND DELIVERING DATA TO A BIG DATA MACHINE IN A PROCESS CONTROL SYSTEM - A device supporting big data in a process plant includes an interface to a communications network, a cache configured to store data observed by the device, and a multi-processing element processor to cause the data to be cached and transmitted (e.g., streamed) for historization at a unitary, logical centralized data storage area. The data storage area stores multiple types of process control or plant data using a common format. The device time-stamps the cached data, and, in some cases, all data that is generated or created by or received at the device may be cached and/or streamed. The device may be a field device, a controller, an input/output device, a network management device, a user interface device, or a historian device, and the device may be a node of a network supporting big data in the process plant. Multiple devices in the network may support layered or leveled caching of data. | 09-18-2014 |
20140280679 | SYSTEM AND METHOD FOR VIDEO CACHING IN WIRELESS NETWORKS - A method for delivering video data from a server in a content delivery network (CDN). Video preferences of active users of a cell are determined. Video data is cached at one or more base station nodes disposed in a radio access network (RAN), wherein the video data is cached in one or more micro-caches according to a caching policy that is based on the determined video preferences. A request is received for video data. If the cached video data includes the requested video data, the cached video data is served from the RAN cache. If the cached video data does not include the requested video data, the requested video is fetched from the CDN according to a scheduling approach that considers Quality of Experience (QoE). | 09-18-2014 |
20140280680 | DATA TRANSMISSION FOR TRANSACTION PROCESSING IN A NETWORKED ENVIRONMENT - Techniques are disclosed to transmit arbitrarily large data units for transaction processing in a networked environment. A request is received to store a data unit of a size exceeding an allocated memory address space of a transaction gateway component of the networked environment. A predefined store function, provided by a repository interface component, is invoked to store the data unit to a data repository component of the networked environment and without segmenting the data unit. A repository handle of the stored data unit is identified. A predefined load function, provided by the repository interface component, is invoked to load a portion of the stored data unit, based on the identified repository handle, where the portion is smaller than the stored data unit. | 09-18-2014 |
20140280681 | DISTRIBUTED STORAGE NETWORK FOR MODIFICATION OF A DATA OBJECT - In a dispersed storage network, data objects are dispersed storage error encoded into pluralities of sets of encoded data slices that are stored in a set of storage units. To recover a data object, a read threshold number of encoded data slices from each set of encoded data slices of a corresponding set of the plurality of sets of encoded data slices are required. Upon determining that an update is available for the set of storage units, a dispersed storage managing unit takes a first subset of storage units off line to perform the update. During the update, a remaining number of storage units of the set of storage units remain on line such that at least the read threshold number of encoded data slices are available for each set of the pluralities of sets of encoded data slices. | 09-18-2014 |
20140280682 | HYBRID CENTRALIZED AND AUTONOMOUS DISPERSED STORAGE SYSTEM STORAGE METHOD - A dispersed data storage method for execution by a dispersed storage (DS) unit. In various embodiments, the method begins when the DS unit receives a plurality of encoded data slices and associated metadata. The metadata is interpreted to determine storage instructions regarding the encoded data slices. When the storage instructions indicate, for example, a daisy chain storage dispersal approach, the DS unit locally stores first encoded data slices (e.g., the first encoded data slices of a set of encoded data slices) and forwards other encoded data slices to at least one other DS unit. In other exemplary embodiments, sequential and/or one-to-many dispersal approaches may be utilized. Further, the DS unit may employ a variety of criteria to solicit other DS units for storage of encoded data slices. | 10-23-2014 |
20140280683 | USING GROUPS OF USER ACCOUNTS TO DELIVER CONTENT TO ELECTRONIC DEVICES USING LOCAL CACHING SERVERS - The described embodiments electronically deliver content (e.g., digitally-encoded files) to an electronic device using groups of accounts. In the described embodiments, a content provider obtains a public address of the electronic device and at least one account identifier for the electronic device from a request for the content received from the electronic device. Next, the content provider uses the public address to identify a local caching server (LCS) on a local area network (LAN) to which the electronic device is connected and uses the account identifier to determine that an account associated with the LCS is associated with a group of accounts with which an account for the electronic device is also associated. The content provider then provides a local address of the LCS to the electronic device, which uses the local address to obtain the content from the LCS via the LAN without accessing a content delivery network outside the LAN. | 09-18-2014 |
20140280684 | INDEPENDENT ACTIONSCRIPT ANALYTICS TOOLS AND TECHNIQUES - Tools and techniques are provided to support presentation analytics, such as Flash or Flex analytics, independently of embedded JavaScript web analytics code used in web pages. A presentation analytics engine, which may be implemented in ActionScript, includes code for capturing information about user interaction with a multimedia presentation, code for dynamically generating a string or other data structure reflecting such captured information, and code for sending the data structure to an analytics server without using a getURL( ) call or embedded JavaScript. Functionality is also provided for tracking objects without object-specific code, for dynamically sending such tracking information, and for supporting a visual presentation analytics overlay report illustrating such information. The Flash presentation analytics may use the same visitor ID as standard JavaScript analytics, without synchronizing the two analytics codes. | 09-18-2014 |
20140280685 | PEER-TO-PEER TRANSCENDENT MEMORY - Various arrangements for utilizing memory of a remote computer system are presented. Two computer systems may allocate a portion of RAM accessible to a memory-access API. A first set of data from the first portion of the first memory of a first computer system may be determined to be moved to memory of another computer system. The first set of data from the first portion of the first memory may be transmitted for storage in the second portion of the second memory of a second computer system. Using the second memory-access API, the set of data may be stored in the second portion of the second memory. Using the first memory-access API, the set of data from the first portion of the first memory may be deleted. | 09-18-2014 |
20140280686 | Method, Apparatus and System for Enabling the Recall of Content of Interest for Subsequent Review - A method, apparatus and system for enabling the recall of content for subsequent review include communicating to a content provider an indication of interest in content displayed in proximity to a mobile communications device, wherein communication information of the mobile communications device is determined using information in the communicated indication of interest, in response to the communicated indication of interest, receiving at least one of the content of interest, content data of the content of interest and location information of the content of interest and storing the at least one of the content of interest, content data of the content of interest and location information of the content of interest in the mobile communications device. | 09-18-2014 |
20140289355 | AUTONOMOUS DISTRIBUTED CACHE ALLOCATION CONTROL SYSTEM - A node includes a processor that is configured to derive, based on a delivery tree for a content, a logical sub tree structure including a first layer node and second layer nodes lower than the first layer node; calculate first electric power information used for caching the content in the first layer node in the sub tree structure; compare the first electric power information to second electric power information calculated by the second layer nodes in the sub tree structure and used for caching the content in the second layer nodes, then calculate a threshold for a content request rate for each of the second layer nodes; provide control to set the calculated threshold to the second layer nodes; and determine possibility of a cache allocation of the content by comparing a measured content request rate with the threshold. | 09-25-2014 |
20140289356 | TERMINAL CONTROL SYSTEM, METHOD FOR CONTROLLING TERMINAL, AND ELECTRONIC DEVICE - There is provided a terminal control system including: a first terminal; a second terminal connected to the first terminal by short-distance wireless communication; and a server on a network connected to the second terminal via a communication link, in which the server includes: a storage unit which stores predetermined information to be detected by the first terminal and a sequence of processing commands to be executed by the second terminal in a manner such that the information and the processing commands are related to each other; and a terminal control unit sends the sequence of processing commands to the second terminal with reference to the storage unit so as to allow the second terminal to execute the processing commands in response to receiving the predetermined information from the first terminal via the second terminal. | 09-25-2014 |
20140289357 | DATA PROCESSING SYSTEM AND METHOD OF CONTROLLING ACCESS TO A SHARED MEMORY UNIT - A data processing system comprising at least a memory unit, a first client connected to the memory unit, and a second client connected to the memory unit is proposed. The first client may comprise a first memory access unit and an information unit. The first memory access unit may read data from or write data to the memory unit at a first data rate. The information unit may update internal data correlating with a minimum required value of the first data rate. The second client may comprise a second memory access unit and a data rate limiting unit. The second memory access unit may read data from or write data to the memory unit at a second data rate. The data rate limiting unit may limit the second data rate in dependence on the internal data. The first memory access unit may, for example, read data packets sequentially from the memory unit, and the information unit may update the internal data at least per data packet. A method of controlling access to a shared memory unit is also proposed. | 09-25-2014 |
20140297778 | EXECUTION CONTROL METHOD, STORAGE MEDIUM, AND EXECUTION CONTROL APPARATUS - An execution control method performed by a processor includes storing a first plurality of commands executed in the first computer and a first execution order in a memory; executing the first plurality of commands according to the first execution order when executed on the third computer; storing a second plurality of commands executed in the second computer and a second execution order in the memory; executing the second plurality of commands according to the second execution order when executed on the fourth computer; storing information generated by executing a command among the first plurality of commands and the second plurality of commands in the memory as configuration information each time the command is executed; and selecting a command among a first earliest command among unexecuted commands of the first plurality of commands and a second earliest command among unexecuted commands of the second plurality of commands, and executing the command. | 10-02-2014 |
20140297779 | METHOD AND APPARATUS FOR SENDING INFORMATION USING SHARING CACHE BETWEEN PORTABLE TERMINALS - A method for operating a receiving portable terminal in a mobile communication system includes receiving a first packet from a sending portable terminal, determining a fingerprint overlapping a fingerprint corresponding to at least one chunk of the first packet in a fingerprint set cache, determining a fingerprint set including the most redundant fingerprints, in the fingerprint set cache, determining at least one fingerprint to send, in the determined fingerprint set, sending the at least one determined fingerprint to the sending portable terminal, and receiving a second packet from the sending portable terminal. An apparatus includes a controller configured to determine at least one redundant fingerprint overlapping a fingerprint corresponding to at least one chunk of the first packet in a fingerprint set cache, determine a fingerprint set including the most redundant fingerprint in the fingerprint set cache, and determine at least one fingerprint to send in the determined fingerprint set. | 10-02-2014 |
20140304354 | SYSTEMS AND METHODS FOR RELIABLE REPLICATION OF AN APPLICATION-STATE, DISTRIBUTED REPLICATION TABLE - The present application is directed towards using a distributed hash table to track the use of resources and/or maintain the persistency of resources across the plurality of nodes in the multi-node system. More specifically, the systems and methods can maintain the persistency of resources across the plurality of nodes by the use of a global table. A global table may be maintained on each node. Each node's global table enables efficient storage and retrieval of distributed hash table entries. Each global table may contain a linked list of the cached distributed hash table entries that are currently stored on a node. | 10-09-2014 |
20140304355 | SYSTEMS AND METHODS FOR APPLICATION-STATE, DISTRIBUTED REPLICATION TABLE CACHE POISONING - The present application is directed towards invalidating (also referred to as poisoning) ASDR table entries that are determined to be inaccurate because of changes to a multi-node system. For example, when a node leaves or enters a multi-node system, the ownership of the entries in the ASDR table can change thus invalidating cached and replica entries. More specifically, the system and methods disclosed herein include searching an ASDR table for cached entries responsive to the system determining the multi-node system has changed. After finding a cached entry, the system may determine if the entry should be poisoned. The decision to poison the entry may be responsive to the creation time of the entry, the time when the change to the multi-node system occurred, and in the case of a replica, the owner of the replica's position in a replication chain relative to source of the replica. | 10-09-2014 |
20140304356 | Wireless Aggregator - Aggregator for communicating with one or more accessory devices having at least one receiver configured to communicate with one or more devices and collect data therefrom, a processor in communication with the at least one receiver, a first memory in communication with the processor and configured to store the collected data, and a second memory in communication with the processor and configured to provide read-write capabilities to the aggregator. | 10-09-2014 |
20140304357 | SCALABLE OBJECT STORAGE USING MULTICAST TRANSPORT - Embodiments disclosed herein provide a scalable multicast transport. The multicast transport protocol provides effectively reliable multicast delivery while avoiding the overhead associated with point-to-point protocols. Additional embodiments disclosed herein relate to a scalable object storage system that uses a multicast transport. The object storage system assigns responsibility for providing storage services for a chunk to a negotiating group of storage servers in the cluster using a shared and distributed hash allocation table. The object storage system dynamically determines a rendezvous group of storage servers in the cluster to store the chunk using the multicast transport. Other embodiments, aspects and features are also disclosed. | 10-09-2014 |
20140304358 | SYSTEMS AND METHODS FOR THE EFFICIENT EXCHANGE OF HIERARCHICAL DATA - Systems and methods are disclosed for facilitating the transfer of hierarchical data to a computer memory. A disclosed method may include receiving an electronic document containing hierarchical data, memory layout information, and memory address information, wherein the memory address information comprises a base address. The data may be restructured to conform with the memory layout of the computer memory when it is determined, based on the memory layout information, that a memory layout of the hierarchical data does not match the memory layout of the computer memory. Memory address information may be translated when it is determined that the base address is not available in the computer memory. The restructured hierarchical data may be loaded into the computer memory based on the translated memory address information. | 10-09-2014 |
20140304359 | SYSTEM AND METHOD FOR SPECIFYING BATCH EXECUTION ORDERING OF REQUESTS IN A STORAGE SYSTEM CLUSTER - A method for operating a computer data storage system is described. A plurality of requests are received from a client, each request of the plurality of requests having assigned a unique sequence number, each request being an input/output request to a data storage device. The plurality of requests is divided into a plurality of subsets of requests. A unique batch number is assigned to each subset of requests so that each subset of requests is assigned a unique batch number. A first subset of requests having a first batch number is executed in arbitrary order with respect to the sequence number of each request. A second subset of requests is executed in response to a second batch number after execution of all of the first subset of requests has completed. | 10-09-2014 |
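The batch-ordering rule in the entry above (requests within a batch execute in arbitrary order relative to their sequence numbers, but batch B begins only after batch B−1 completes) can be sketched in a few lines of Python. This is an illustrative sketch, not the patented method; the `(sequence_number, batch_number, payload)` tuple shape is an assumption.

```python
from collections import defaultdict

def run_in_batches(requests, execute):
    """requests: iterable of (sequence_number, batch_number, payload).
    Requests sharing a batch number may run in arbitrary order with
    respect to their sequence numbers; batch B starts only after every
    request in batch B-1 has completed."""
    batches = defaultdict(list)
    for seq, batch, payload in requests:
        batches[batch].append((seq, payload))
    for batch in sorted(batches):            # ordering enforced between batches
        for seq, payload in batches[batch]:  # within a batch: any order is valid
            execute(seq, payload)
```

In a real cluster the inner loop would dispatch the requests of one batch concurrently and barrier before the next batch; the sequential loop here just makes the ordering constraint explicit.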
20140310371 | CACHE AND DELIVERY BASED APPLICATION DATA SCHEDULING - A device receives configuration information that instructs the device about when to send content to a user device. The device also receives content from an application server at a first time, and stores the content. The device determines, based on the configuration information, that the content is to be sent to the user device, and sends the content to the user device based on the determination. The content is sent to the user device at a second time that is later than the first time. | 10-16-2014 |
20140310372 | METHOD, TERMINAL, CACHE SERVER AND SYSTEM FOR UPDATING WEBPAGE DATA - A method, terminal, cache server and system for updating webpage data are disclosed. In one aspect, the method includes obtaining an update identifier corresponding to latest released webpage update data, sending a first update request for obtaining the webpage update data to a cache server, wherein the first update request includes the update identifier. The method also includes receiving the webpage update data from the cache server based on the first update request and updating the current webpage data based on the webpage update data. | 10-16-2014 |
20140310373 | SYSTEM AND METHOD FOR POPULATING A CACHE USING BEHAVIORAL ADAPTIVE POLICIES - A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies which populate the storage cache using behavioral adaptive policies that are based on analysis of clients-filers transaction patterns and network utilization, thereby improving access time to the data stored on the disk-based NAS filer (group) for predetermined applications. | 10-16-2014 |
20140310374 | CONTENT TRANSMITTING SYSTEM, METHOD FOR OPTIMIZING NETWORK TRAFFIC IN THE SYSTEM, CENTRAL CONTROL DEVICE AND LOCAL CACHING DEVICE - A content transmission system includes: a central control device to receive a content packet to be provided to a client device from a content server, store chunks divided from the received content packet together with corresponding chunk identifiers, check duplication of the divided chunks, and transmit the chunk identifier and flow information of a duplicate chunk to a local caching device instead of transmitting the content packet corresponding to the duplicate chunk; and the local caching device to: receive the chunk identifier and the flow information of the duplicate chunk from the central control device, and transmit the content packet corresponding to the received chunk identifier and previously stored to the client device. | 10-16-2014 |
20140317222 | Data Storage Method, Device and Distributed Network Storage System - A method, device and system for use in storage technology are disclosed, comprising: splitting a file of size M into k blocks, each of size M/k; distributing the k blocks across k different storage nodes in the distributed network storage system; using the k blocks, constructing n−k independent blocks via a linear coding method satisfying the property that any k of the n encoded blocks can be used to reconstruct the original data in the file, which means the linear coding method is a kind of Maximum-Distance Separable (MDS) code; and distributing the n−k encoded blocks to the remaining n−k different storage nodes in the distributed network storage system. | 10-23-2014 |
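The MDS property described in the entry above can be illustrated in its simplest case, n = k + 1, where a single XOR parity block suffices and any k of the k + 1 blocks recover the file. This is a sketch for that special case only (general n − k > 1 would require a code such as Reed-Solomon); the function names and the convention that index k holds the parity block are assumptions.

```python
def split(data, k):
    """Split a file of size M into k blocks of size ceil(M/k), zero-padded."""
    size = -(-len(data) // k)
    data = data.ljust(size * k, b"\x00")
    return [data[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(blocks):
    """XOR all blocks together; with n - k == 1 this single parity
    block makes the code Maximum-Distance Separable."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def reconstruct(present, k):
    """Recover the k data blocks from any k of the n = k + 1 blocks.
    `present` maps block index -> block bytes; index k is the parity block."""
    missing = [i for i in range(k) if i not in present]
    if not missing:
        return [present[i] for i in range(k)]
    (lost,) = missing  # with a single parity block, one loss is tolerable
    restored = xor_parity([b for i, b in present.items() if i != lost])
    return [present.get(i, restored) for i in range(k)]
```

XOR-ing the surviving k blocks (data and parity alike) reproduces the lost block, because each byte position XORs to zero across all n blocks.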
20140317223 | SYSTEM AND METHOD FOR PROVIDING VIRTUAL DESKTOP SERVICE USING CACHE SERVER - There are provided a system and method for providing a virtual desktop service using a cache server. A system for providing a virtual desktop service according to the invention includes a host server configured to provide a virtual desktop service to a client terminal using a virtual machine, a distributed file system configured to store data for the virtual machine, and a cache server that is provided for each host server group having at least one host server, and performs a read process or a write process of data using physically separate caches when the read process or write process of the data is requested from the virtual machine in the host server. | 10-23-2014 |
20140317224 | DISTRIBUTED STORAGE NETWORK FOR STORING A DATA OBJECT BASED ON STORAGE REQUIREMENTS - A distributed storage network (DSN) includes a user device and a plurality of DSN memories, wherein each of the DSN memories includes a plurality of storage units. The user device includes at least one network interface to the plurality of DSN memories and at least one processing module that is operable to determine one of the plurality of DSN memories for storing a data object based on a comparison of one or more storage requirements of the data object and one or more DSN attributes of the plurality of DSN memories. | 10-23-2014 |
20140325014 | CHANNEL SUBSYSTEM SERVER TIME PROTOCOL COMMANDS - A protocol for communicating with the timing facility used in a data processing network to provide synchronization is provided via the execution of a machine instruction that accepts a plurality of commands. The interaction is provided through the use of message request blocks and their associated message response blocks. In this way timing parameters may be determined, modified and communicated. This makes it much easier for multiple servers or nodes in a data processing network to exist as a coordinated timing network and to thus more cooperatively operate on the larger yet identical data files. | 10-30-2014 |
20140330921 | STORING RELATED DATA IN A DISPERSED STORAGE NETWORK - A method begins by each of a group of write requesting modules of a dispersed storage network (DSN) generating one or more sets of write requests regarding one of a group of portions of related data, sending a group of the one or more sets of write requests to DSN memory, and sending binding information to a binding module. The method continues with the binding module processing remaining phases of the group of the one or more sets of write requests for writing the related data into the DSN memory as a single set of write requests and notifying the write requesting modules of status of the writing the related data into the DSN memory at completion of the processing of the remaining phases such that the related data is made accessible as a single piece of data when the processing of the remaining phases is successful. | 11-06-2014 |
20140330922 | ASYMMETRIC DATA MIRRORING - Methods, systems, and products mirror data between local memory and remote storage. A write command is sent from a server to a remote storage device, and a timer is established. A current time of the timer is compared to a maximum time period. If the maximum time period expires without receipt of an acknowledgment to the write command, then a write error is assumed to exist to the remote storage device. | 11-06-2014 |
20140330923 | MULTI-WRITER REVISION SYNCHRONIZATION IN A DISPERSED STORAGE NETWORK - A method begins by a processing module of a computing device receiving a most current revision value for a data element, where a revision value for the data element is generated based on a current time of a storing device. The method continues with the processing module generating a new revision value for a currently revised version of the data element based on a current time of the computing device and comparing the current time of the new revision value with the current time of the most current revision value. When the current time of the new revision value precedes the current time of the most current revision value, the method continues with the processing module adjusting the new revision value to produce an adjusted revision value and facilitating storage of the currently revised version of the data element having the adjusted revision value. | 11-06-2014 |
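The revision-adjustment rule in the entry above (bump the new revision forward when the writer's clock precedes the most current revision's timestamp) can be sketched as a small helper. Names are illustrative; the equal-timestamp case is treated as "precedes" here, which the abstract does not specify.

```python
def adjust_revision(most_current_time, local_time):
    """Generate a revision value for a new write. If the writer's local
    clock is at or behind the most current revision's timestamp, adjust
    the new revision so it still sorts after the current one."""
    if local_time <= most_current_time:       # local clock precedes current revision
        return most_current_time + 1          # adjusted revision value
    return local_time                         # local clock is ahead: use it directly
```

This keeps multi-writer revisions totally ordered even when writers' clocks drift.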
20140330924 | Efficient Cache Validation and Content Retrieval in a Content Delivery Network - Some embodiments provide systems and methods for validating cached content based on changes in the content instead of an expiration interval. One method involves caching content and a first checksum in response to a first request for that content. The caching produces a cached instance of the content representative of a form of the content at the time of caching. The first checksum identifies the cached instance. In response to receiving a second request for the content, the method submits a request for a second checksum representing a current instance of the content and a request for the current instance. Upon receiving the second checksum, the method serves the cached instance of the content when the first checksum matches the second checksum and serves the current instance of the content upon completion of the transfer of the current instance when the first checksum does not match the second checksum. | 11-06-2014 |
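The change-based validation scheme in the entry above (serve the cached instance only while its checksum matches the origin's current checksum, rather than relying on an expiration interval) can be sketched like this. The class and its `fetch_current` parameter are hypothetical names for illustration.

```python
import hashlib

class ChecksumCache:
    """Validate cached content by checksum comparison instead of expiry."""

    def __init__(self):
        self._store = {}   # url -> (cached_bytes, checksum)

    def cache(self, url, content):
        checksum = hashlib.sha256(content).hexdigest()
        self._store[url] = (content, checksum)

    def get(self, url, fetch_current):
        """Serve the cached instance if its checksum still matches the
        current instance's checksum; otherwise serve and re-cache the
        current instance."""
        current = fetch_current(url)                       # current instance from origin
        current_sum = hashlib.sha256(current).hexdigest()  # second checksum
        if url in self._store:
            cached, cached_sum = self._store[url]
            if cached_sum == current_sum:
                return cached                              # checksums match: cache is valid
        self.cache(url, current)                           # content changed: replace cache
        return current
```

In the actual system the checksum request and content request are issued together, so the unchanged case pays only the checksum round trip.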
20140337458 | MANAGING A LOCAL CACHE FOR AN ONLINE CONTENT-MANAGEMENT SYSTEM - The disclosed embodiments relate to techniques for managing a local cache on a computing device that stores content items for an online content-management system. These techniques generally operate by gathering information that is available on the computing device (such as information about user actions, information about which applications are executing, and information about the location of the computing device) and using this information to identify relevant content items that are likely to be accessed in the near future. This enables the system to perform cache-management operations at the local cache to facilitate rapidly accessing the relevant content items through the local cache. | 11-13-2014 |
20140330921 | CACHING ARCHITECTURE FOR PACKET-FORM IN-MEMORY OBJECT CACHING - One embodiment provides a caching system comprising a hash table, a network interface for receiving a sequence of network-level packets for caching, and a caching application module for storing the sequence of network-level packets in the hash table. The sequence of network-level packets is stored in its original form without de-fragmentation. | 11-13-2014 |

20140337460 | SYSTEMS, DEVICES, AND METHODS FOR PROTECTING ACCESS PRIVACY OF CACHED CONTENT - Embodiments relate to systems, devices, and computer-implemented methods for preventing determination of previous access of sensitive content by receiving, from a user, a request for content at a device in an information centric network, where a cached version of the content is locally stored at the device; initiating a time delay based on a determination that the user has not previously requested the content; and transmitting the cached version of the content to the user after the time delay. | 11-13-2014 |
20140337461 | COLLECTOR MECHANISMS IN A CONTENT DELIVERY NETWORK - A computer-implemented method operable in a content delivery service (CDN), includes, by a collector system: receiving multiple event streams of event data, said multiple event streams comprising event data from a plurality of CD services in said CDN, each event of said event streams comprising: (i) a timestamp for said event, (ii) information relating to said event; and producing state data relating to information represented in said event data of said multiple event streams while being able to asynchronously respond to queries relating to said state data. | 11-13-2014 |
20140337462 | Delivering Identity Related Data - Method and apparatus for delivering, from a storage node to an end device, a subset of data that is stored in the storage node and relates to an identity that uniquely identifies an identity module comprised in the end device. The storage node receives from the end device a request for the subset of the data, the request comprising the identity. The storage node identifies the data related to the identity and filters the identified data to obtain the requested subset of the data. The storage node further sends the subset of the data to the end device. | 11-13-2014 |
20140344391 | Content Delivery Framework having Storage Services - A framework supporting content delivery and comprising a plurality of devices, each device configured to run at least one content delivery (CD) service of a plurality of CD services, wherein the plurality of CD services comprise: collector services, reducer services, storage services, and control services; and wherein at least some of the plurality of devices run storage services, and wherein the storage services running on the at least some of the plurality of devices comprise at least one storage services network. At least one storage service is configured to provide persistent storage that is locally and/or globally addressable. | 11-20-2014 |
20140344392 | CONTENT DELIVERY SYSTEM, CACHE SERVER, AND CONTENT DELIVERY METHOD - A cache server includes an accumulation unit that accumulates a content(s) stored in a delivery server apparatus. The cache server, by using information included in a request message from a mobile terminal, determines which one of the cache servers arranged on a mobile network accumulates a content requested by the mobile terminal. The cache server reads a file corresponding to the requested content from the determined cache server and outputs the file to the accumulation unit. In addition, the cache server estimates a bandwidth of the mobile network based on a signal from the mobile terminal, reads the file corresponding to the requested content from the accumulation unit, and extracts a stream from the file. The cache server generates a stream by deleting at least part of the frames from the stream so that a bit rate of the generated stream does not exceed the estimated bandwidth. The cache server stores the generated stream in a packet and transmits the packet to the mobile terminal. | 11-20-2014 |
20140344393 | QUEUE PROCESSOR FOR DOCUMENT SERVERS - A configurable queue processor for document servers is described. The configurable queue processor strives to allocate server resources in an optimal manner such that document servers can process documents efficiently. In various embodiments, the facility includes a configurable queue processor for allocating document flows for handling documents, document transport module for transporting documents between network devices, such as printers, fax boards, and content servers and across local and wide-area networks; functionality for routing optimization with other communications networks, such as messaging services, telephony, and IP networks; and flexible document transport capabilities to workflow applications and multifunction devices (such as all-in-one print/scan/copy/fax/telephone/answering machine devices) and multifunction devices enhanced with video and video capture, messaging, email, network router and gateway capabilities. | 11-20-2014 |
20140344394 | Revision Deletion Markers - A method begins by receiving a delete data object request within a dispersed storage network (DSN). The method continues by determining a set of dispersed storage (DS) units within the DSN that store a set of encoded data slices associated with the data object. The method continues by determining a revision number of the set of encoded data slices. The method continues by sending a delete marker and write command to the set of DS units for deletion of the data object. The method continues by receiving at least one write acknowledgement from at least some DS units of the set of DS units. The method continues when a write threshold is met, by sending a commit command to the set of DS units, receiving commit acknowledgments from the DS units, and sending a finalize command to the set of DS units to delete the data object. | 11-20-2014 |
20140351362 | COMPUTER SYSTEM, DATA TRANSFER METHOD, AND DATA TRANSFER PROGRAM - Execution servers including a first execution server store data output by executing jobs to storage devices including semiconductor memory elements. A management server holds input/output information including identifiers identifying pieces of data, identifiers identifying execution servers which output the pieces of data, and identifiers identifying execution servers to execute jobs which receive the pieces of data. The first execution server sends a destination request including identifiers identifying the first execution server and a first piece of data to the management server. The management server determines an execution server to execute a job which receives the first piece of data to be a destination execution server of the first piece of data based on the input/output information and the received destination request, and outputs a transfer instruction for transferring the first piece of data from the storage device holding the first piece of data to the determined destination execution server. | 11-27-2014 |
20140351363 | WRITING DATA IN A DISTRIBUTED DATA STORAGE SYSTEM - Methods, systems, and apparatuses, including computer programs encoded on computer-readable media, for receiving a write request that includes data and a client address at which to store the data. The data is segmented into the one or more storage units. A storage unit identifier for each of the one or more storage units is computed that uniquely identifies content of a storage unit. A mapping between each storage unit identifier to a block server is determined. For each of the one or more storage units, the storage unit and the corresponding storage unit identifier is sent to a block server. The block server stores the storage unit and information on where the storage unit is stored on the block server for the storage unit identifier. Multiple client addresses associated with a storage unit with the same storage unit identifier are mapped to a single storage unit. | 11-27-2014 |
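The content-addressed write path in the entry above (segment the data, derive an identifier from each unit's content, map identifiers to block servers, and collapse duplicate units onto a single stored copy) can be sketched as follows. Function and server names are illustrative assumptions, not the patented protocol.

```python
import hashlib

def write_data(data, segment_size, block_servers):
    """Segment data into storage units, compute a content-derived
    identifier per unit, and map each identifier to a block server.
    Identical units produce the same identifier, so duplicate content
    is recorded only once."""
    placements = {}
    for offset in range(0, len(data), segment_size):
        unit = data[offset:offset + segment_size]            # one storage unit
        unit_id = hashlib.sha256(unit).hexdigest()           # uniquely identifies content
        server = int(unit_id, 16) % len(block_servers)       # deterministic mapping
        placements[unit_id] = (block_servers[server], unit)  # dedup by identifier
    return placements
```

Because the identifier depends only on the unit's content, multiple client addresses referencing the same bytes naturally map to a single stored unit.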
20140351364 | SYSTEM, METHOD, AND APPARATUS FOR USING A VIRTUAL BUCKET TO TRANSFER ELECTRONIC DATA - A system that enables a mobile communication device to transfer data to or from a computer system using communication data read from an NFC tag. The first device transfers the data which is temporarily held until the second device removes the data. Once the data is removed, the location where the data was temporarily held is emptied. | 11-27-2014 |
20140351365 | MANAGEMENT SERVER AND DATA MIGRATION METHOD - A computer system comprising a first primary storage apparatus and a first secondary storage apparatus and a second primary storage apparatus and a second secondary storage apparatus, wherein a first virtual volume of the second primary storage apparatus is externally connected to a first primary volume of the first primary storage apparatus, a total cache-through mode is configured as a cache mode in a case where a read command is supplied by the first host apparatus, unique information for the first primary volume is configured for the first virtual volume, a path to the first primary volume is switched from the first host apparatus to a path via the first virtual volume, and a second primary volume in the second primary storage apparatus is configured to form a copy pair with a second secondary volume in the second secondary storage apparatus. | 11-27-2014 |
20140359044 | REMOTE MEMORY ACCESS FUNCTIONALITY IN A CLUSTER OF DATA PROCESSING NODES - A server apparatus comprises a plurality of server-on-a-chip (SoC) nodes interconnected to each other through a node interconnect fabric. Each one of the SoC nodes has respective memory resources integral therewith. Each one of the SoC nodes has information computing resources accessible by one or more data processing systems. Each one of the SoC nodes is configured with memory access functionality enabling allocation of at least a portion of its memory resources to one or more other ones of the SoC nodes, and enabling allocation to it of at least a portion of the memory resources of one or more other ones of the SoC nodes, based on a workload thereof. | 12-04-2014 |
20140359045 | Method and Apparatus for Cached Content Delivery to Roaming Devices - In one aspect, the method and apparatus disclosed herein enable a high-quality content viewing experience for users viewing user-specific content via their mobile devices operating within a mobile communication network, based on intelligently caching the content in the network using one or more distributed caches. For example, in one or more embodiments, a cache management server dynamically manages the distribution of user-specific content to one or more of the distributed caches, based on known or expected user locations, so that the content resides in the distributed cache or caches closest to the user locations. | 12-04-2014 |
20140359046 | PRELOADING OF SHARED OBJECTS - The present disclosure describes methods comprising generating a shared object in a shared memory of an application server, wherein the shared memory is a non-persistent memory, providing an instance key of the shared object for storage in a persistent memory upon occurrence of a first trigger event, and upon occurrence of a second trigger event, re-building the shared object in the shared memory of the application server, the re-building comprising: obtaining the stored instance key of the shared object, using the instance key to identify data stored on a database server associated with the shared object, and building the shared object in the shared memory of the application server using the identified data and systems adapted to implement these methods. | 12-04-2014 |
20140359047 | SECURE DATA TRANSFER PLATFORM FOR HYBRID COMPUTING ENVIRONMENT - A data transfer profile defines the transfer of data among different domains. The data transfer profile is processed to generate data transfer rules. Subsets of the rules are distributed to the different domains. A rule can specify a folder in a particular domain in which files stored in the folder will be transferred to another domain. | 12-04-2014 |
20140359048 | Caching in a Telecommunication Network - A network node ( | 12-04-2014 |
20140359049 | System And Method For Increasing Data Availability On A Mobile Device Based On Operating Mode - A system for a mobile device to provide access to a data collection, such as a user's data collection for example, without requiring either persistent storage of the complete data collection locally on the mobile device, or network access requests for each user data request from the mobile device. In an embodiment, the system employs a data probability function to predict the probability of the mobile device accessing specific types of user data based on the operating mode of the mobile device. The system executes as a background process to provide and store locally on the mobile device, the data most probable to be accessed at the mobile device. The data most likely to be accessed via the mobile device is available locally, thereby minimizing latency issues that occur when data requests cannot be fulfilled using data stored locally in the mobile device and network requests are performed. | 12-04-2014 |
20140365597 | Processing Element Data Sharing - A memory sharing method and system in a distributed computing environment. The method includes placing a first operator and a second operator within a processing element. The first operator is associated with a first host and the second operator is associated with a second, differing host of a distributed computing system. Requests for usage of global data with respect to multiple processes are received from the first operator and the second operator. The global data is stored within a specified segment of a shared memory module that includes shared memory space being shared by the first operator and the second operator. The multiple processes are executed and results are generated by the first operator and the second operator with respect to the global data. | 12-11-2014 |
20140365598 | Method and System for Data Archiving - A data server, method and computer readable storage medium for receiving a current request relating to a data archive, determining a number of queued requests relating to the data archive present in a request queue, determining a waiting time for the current request based on the number of queued requests and adding the current request to the request queue after the waiting time has elapsed. | 12-11-2014 |
20140365599 | COMMUNICATION METHOD OF NODE OVERHEARING CONTENT IN CONTENT CENTRIC NETWORK AND NODE - A communication method of a node in a content centric network, includes overhearing a content transmitted from another node, caching the overheard content, and providing the cached content in response to receiving a packet requesting the cached content. | 12-11-2014 |
20140365600 | METHOD, SYSTEM AND SERVER OF REMOVING A DISTRIBUTED CACHING OBJECT - The present disclosure discloses a method, a system and a server of removing a distributed caching object. In one embodiment, the method receives a removal request, where the removal request includes an identifier of an object. The method may further apply consistent Hashing to the identifier of the object to obtain a Hash result value of the identifier, locates a corresponding cache server based on the Hash result value and renders the corresponding cache server to be a present cache server. In some embodiments, the method determines whether the present cache server is in an active status and has an active period greater than an expiration period associated with the object. Additionally, in response to determining that the present cache server is in an active status and has an active period greater than the expiration period associated with the object, the method removes the object from the present cache server. By comparing an active period of a located cache server with an expiration period associated with an object, the exemplary embodiments precisely locate a cache server that includes the object to be removed and perform a removal operation, thus saving the other cache servers from wasting resources to perform removal operations and hence improving the overall performance of the distributed cache system. | 12-11-2014 |
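The removal path in the entry above (consistent-hash the object identifier to locate the single responsible cache server, then remove only if that server has been active longer than the object's expiration period) can be sketched like this. The ring class and hash choice are illustrative assumptions.

```python
import bisect
import hashlib

def _hash(key):
    # Stable integer hash of a string key for ring placement.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Locate the cache server responsible for an object identifier
    via consistent hashing over a sorted ring of server hashes."""

    def __init__(self, servers):
        self._ring = sorted((_hash(s), s) for s in servers)

    def locate(self, object_id):
        keys = [h for h, _ in self._ring]
        idx = bisect.bisect(keys, _hash(object_id)) % len(self._ring)
        return self._ring[idx][1]

def should_remove(server_active_seconds, object_expiration_seconds):
    """Issue the removal only when the located server has been active
    longer than the object's expiration period, i.e. the object could
    actually still reside there."""
    return server_active_seconds > object_expiration_seconds
```

Skipping the removal on recently restarted servers is what saves the other cache servers from wasted removal work.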
20140372549 | LOAD BALANCING INPUT/OUTPUT OPERATIONS BETWEEN TWO COMPUTERS - Methods, apparatus and computer program products implement embodiments of the present invention that include identifying, by a first computer, multiple network paths to a second computer, and splitting an input/output (I/O) request for a logical volume stored on the second computer into sub-requests. A probe request defining an association between the I/O request and the sub-requests is conveyed to the second computer, and each of the sub-requests is assigned to a respective one of the multiple network paths. Each of the sub-requests are conveyed to the second computer via the assigned respective one of the multiple network paths, and the sub-requests are received by the second computer via the multiple network paths. The second computer performs the sub-requests in response to the association, and a result of each of the sub-requests is conveyed to the first computer via the assigned respective one of the multiple network paths. | 12-18-2014 |
20140372550 | METADATA-DRIVEN DYNAMIC LOAD BALANCING IN MULTI-TENANT SYSTEMS - The disclosure generally describes computer-implemented methods, computer program products, and systems for providing metadata-driven dynamic load balancing in multi-tenant systems. A computer-implemented method includes: identifying a request related to a model-based application executing in a multi-tenant system associated with a plurality of application servers and identifying at least one object in the model-based application associated with the request. At least one application server is identified as associated with a locally-cached version of a runtime version of the identified object, and a determination of a particular one of the identified application servers to send the identified request for processing is based on a combination of the availability of a locally-cached version of the runtime version at the particular application server and the server's processing load. The request is then sent to the determined application server for processing. | 12-18-2014 |
20140372551 | PROVIDING STORAGE AND SECURITY SERVICES WITH A SMART PERSONAL GATEWAY DEVICE - Embodiments provide storage, security, and other services to smart personal devices (SPDs) in a personal area network (PAN) via a smart personal gateway device (SPGD). The SPGD caches and shares data among SPDs having support for heterogeneous communication modalities. The SPGD acts as an offline cache or other common storage location for the SPDs in the PAN. | 12-18-2014 |
20140379835 | PREDICTIVE PRE-CACHING OF CONTENT - Certain embodiments herein are directed to predictive pre-caching of content for user devices. A service provider system may receive predictive pre-cache information associated with a user from a user device. The service provider system may obtain content based at least in part on the predictive pre-cache information associated with the user. The service provider system may determine a non-congested time to transmit the obtained content. The service provider system may transmit the content to the user device at the non-congested time. | 12-25-2014 |
20140379836 | OFFLOADING NODE CPU IN DISTRIBUTED REDUNDANT STORAGE SYSTEMS - A network interface includes a host interface for communicating with a node, and circuitry which is configured to communicate with one or more other nodes over a communication network so as to carry out, jointly with one or more other nodes, a redundant storage operation that includes a redundancy calculation, including performing the redundancy calculation on behalf of the node. | 12-25-2014 |
20140379837 | System and Methods of Pre-Fetching Content in one or more Repositories - A method of pre-fetching content that includes determining a possible future use of a content and determining if the content is available in a cache repository. If the possible future use is determined and the content is determined to be unavailable in the cache repository, the method retrieves the content from a remote repository and provides the retrieved content to a content consumer. | 12-25-2014 |
20140379838 | System and Methods of Managing Content in one or more Networked Repositories During a Network Downtime Condition - A method of managing content in a network by a cache repository that includes receiving content from a content source; storing the content in the cache repository; sending the content to a remote repository for storage; and determining if a connection to the remote repository can be established in the network. If the connection to the remote repository can be established, the method includes retrieving the content from the remote repository; and if the connection to the remote repository cannot be established, the method retrieves the content from a backup repository. | 12-25-2014 |
20140379839 | METHOD AND AN APPARATUS FOR PERFORMING OFFLINE ACCESS TO WEB PAGES - The embodiments disclose a method and apparatus for offline access of web pages. The method includes: acquiring on a user terminal a local cache template of a first web page, wherein the local cache template has pre-stored one or more respective paths, each points to a respective designated Uniform Resource Locator (URL) location linked to the first web page, wherein each respective designated URL enables offline access to a corresponding second web page in the first web page; locally caching each of the corresponding second web page, wherein each of the corresponding second web page which corresponds to a respective path pointing to the respective designated URL pre-stored in the local cache template of the first web page, such that the corresponding second web page is to be locally loaded into a browser of the user terminal when the browser accesses the respective designated URL in the first web page. | 12-25-2014 |
20140379840 | PREDICTIVE PREFETCHING OF WEB CONTENT - This disclosure describes systems and methods for predictive prefetching. A server can be modified in accordance with the teachings hereof to predictively prefetch a second object for a client (referred to herein as the dependent object), given a request from the client for a first object (referred to herein as the parent object). When enough information about a parent object request is available, the predictive prefetching techniques disclosed herein can be used to calculate the likelihood that one or more dependent objects might be requested. This enables a server to prefetch them from local or remote storage device, from an origin server, or other source. | 12-25-2014 |
20140379841 | Web page content loading control method and device - The invention discloses a web page content loading control method and device. The method comprises: receiving a web page access request; according to the web page access request, reading corresponding pre-stored web page content locally and loading the same; according to the web page access request, obtaining web page content from a server and caching the obtained content locally; and after obtaining the web page content completely or partially, reading the cached content and updating currently loaded web page content. | 12-25-2014 |
20140379842 | SYSTEM AND METHOD FOR MANAGING PAGE VARIATIONS IN A PAGE DELIVERY CACHE - Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end. | 12-25-2014 |
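The two-level lookup in the entry above (metadata at a primary cache address names which request attributes vary the page; those attribute values plus the URL key a secondary address holding the actual variation) can be sketched as follows. Class and method names are hypothetical.

```python
class PageDeliveryCache:
    """Primary address: URL -> list of attribute names to consult.
    Secondary address: (URL, attribute values) -> cached page."""

    def __init__(self):
        self._primary = {}     # url -> varying attribute names
        self._secondary = {}   # (url, attr-value tuple) -> page

    def store(self, url, varying_attrs, request_attrs, page):
        self._primary[url] = varying_attrs
        key = tuple(request_attrs.get(a) for a in varying_attrs)
        self._secondary[(url, key)] = page

    def lookup(self, url, request_attrs):
        """Return the cached variation, or None to signal that the
        request should be forwarded to the back-end page generator."""
        if url not in self._primary:
            return None
        varying = self._primary[url]                        # consult primary metadata
        key = tuple(request_attrs.get(a) for a in varying)  # extract attribute values
        return self._secondary.get((url, key))              # search secondary address
```

In the patented system the attributes may come from HTTP headers, cookies, the query string, or session variables; here they are passed in as a plain dict.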
20140379843 | Providing Electronic Content to Residents of Controlled-Environment Facilities - Systems and methods for providing electronic content and applications to residents of controlled-environment facilities are disclosed. The portable computing device may be configured to determine that an external memory has been coupled to it. The external memory may include content requested by the resident and a key configured to allow the device to access the content to the exclusion of other devices associated with other residents. The portable computing device may retrieve the key from the external memory and allow the resident to view or play the content if the key matches a lock programmed within the device. If the resident attempts to insert a non-authorized external memory into the device, its contents may be erased and/or an alert may be generated. The content of the external memory may be transferred to the portable computing device and then the external memory may be locked so that it is unusable. | 12-25-2014 |
20140379844 | SYSTEM, METHOD AND STORAGE MEDIUM FOR MANAGING ITEMS WITHIN FILE DIRECTORY STRUCTURE - A file-mapping method and system can better manage the number of items (i.e., files, subdirectories, or a combination of them) within any single directory within a storage medium. The method and system can be used to limit the number of items within the directory, direct content and content components to different directories, and provide an internally recognizable name for the filename. When searching the storage medium, time is not wasted searching what appears to be a seemingly endless list of filenames or subdirectory names within any single directory. A client computer can have requests for content fulfilled quicker, and the network site can reduce the load on hardware or software components. While the method and system can be used for nearly any storage media, the method and system are well suited for cache memories used with web servers. | 12-25-2014 |
20140379845 | DISTRIBUTED DATA STORAGE - The present invention relates to a distributed data storage system comprising a plurality of storage nodes. Using unicast and multicast transmission, a server application may write data in the storage system. When writing data, at least two storage nodes are selected based in part on a randomized function, which ensures that data is sufficiently spread to provide efficient and reliable replication of data in case a storage node malfunctions. | 12-25-2014 |
20150012608 | WEB CONTENT PREFETCH CONTROL DEVICE, WEB CONTENT PREFETCH CONTROL PROGRAM, AND WEB CONTENT PREFETCH CONTROL METHOD - A web content prefetch control device includes a client connecting unit which receives a Web content acquisition request from a client, and a prefetch request from a prefetch processing unit which performs prefetch of Web content; a cache managing unit which stores Web content acquired as responses to the Web content acquisition request and to the prefetch request, and transmits the responses; a response replication unit which receives the responses from the cache managing unit, and replicates the response to the Web content acquisition request, out of the responses; and a prefetch connecting unit which transmits, to the prefetch processing unit, the replicated response to the Web content acquisition request. Communication between the client connecting unit and the client, communication between the client connecting unit and the prefetch processing unit, and communication between the prefetch connecting unit and the prefetch processing unit are each performed with a same communication protocol. | 01-08-2015 |
20150019673 | DISTRIBUTED CACHING IN A COMMUNICATION NETWORK - Example systems and methods of caching data in a communication network are presented. In one example, a data resource at an originating server of the communication network is partitioned into multiple data partitions. At the originating server, a unique address is assigned to each of the data partitions. Each of the data partitions is distributed to at least one of a plurality of proxy servers based on the unique addresses. Each of the proxy servers is configured to receive a read request for one of the data partitions stored at the proxy server, and to transmit the one of the data partitions to a source of the read request in response to the read request. | 01-15-2015 |
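The partition-and-distribute step in the entry above (partition the resource at the originating server, assign each partition a unique address, and hand partitions to proxy servers that then answer reads for them) can be sketched as follows. Round-robin placement and the address format are illustrative assumptions.

```python
def distribute_partitions(data, partition_size, proxy_servers):
    """Partition a data resource, assign each partition a unique address,
    and distribute the partitions across proxy servers round-robin."""
    assignments = {}
    partitions = [data[i:i + partition_size]
                  for i in range(0, len(data), partition_size)]
    for index, partition in enumerate(partitions):
        address = f"partition-{index}"                  # unique address per partition
        proxy = proxy_servers[index % len(proxy_servers)]
        assignments[address] = (proxy, partition)       # this proxy serves its reads
    return assignments
```

A client holding a partition's unique address can then direct its read request straight at the responsible proxy, bypassing the originating server.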
20150019674 | METHOD, APPARATUS, AND COMPUTER READABLE MEDIUM FOR FLEXIBLE CACHING OF RESOURCE ORIENTED WEB SERVICES - A cache management apparatus, method, and computer readable medium which manages caching of resources. The method includes analyzing a structure of a resource in a system which exposes resources to clients, generating a dependency graph of objects linked to a resource based on the analyzed structure of the resource, and managing caching of resources based on the generated dependency graphs. A generated dependency graph includes hierarchical dependency information with respect to the objects of the resource. | 01-15-2015 |
20150019675 | CACHING GEOGRAPHIC DATA ACCORDING TO SERVER-SPECIFIED POLICY - Caching or discarding geographic data received at a client computing device may be based on a caching policy for the geographic data. A caching policy may define conditions to process the geographic data at the client device based on several factors. For example, a current position of the client device or a position of a portion of a map displayed within a viewport of the device may cause the device to cache or discard the received geographic data. The device may determine a relationship between the viewport and the received geographic data, compare the determined relationship to the caching policy and cache or discard at least a portion of the received geographic data based on the comparison. | 01-15-2015 |
20150019676 | METHODS AND DEVICES FOR EXCHANGING DATA - The present invention relates to the exchange of data between a server and a receiving device. The exchange method comprises receiving, at the receiving device, a push message comprising pushed data from the server; storing received pushed data in a cache memory of the receiving device, the stored data being identified as being of push type; transmitting, from the receiving device to the server, a request for data comprising information about pushed data stored in the cache memory of the receiving device; and receiving, from the server, at the receiving device, a response to said request comprising requested data. | 01-15-2015 |
20150019677 | Systems and Methods for Browser-Based Games - Systems and methods are provided for browser-based games. For example, a data-loading request is received for acquiring resource data and user data related to operations of a next phase of a browser-based game; a present networking state is detected; in response to the present networking state corresponding to an online state, first resource data and first user data are requested from the network-side server; the browser-based game is loaded based on at least information associated with the first resource data and the first user data received from the network-side server; and the first resource data and the first user data are stored in a cache. | 01-15-2015 |
20150019678 | Methods and Systems for Caching Content at Multiple Levels - A cache includes an object cache layer and a byte cache layer, each configured to store information to storage devices included in the cache appliance. An application proxy layer may also be included. In addition, the object cache layer may be configured to identify content that should not be cached by the byte cache layer, which itself may be configured to compress contents of the object cache layer. In some cases the contents of the byte cache layer may be stored as objects within the object cache. | 01-15-2015 |
20150019679 | INCORPORATING WEB APPLICATIONS INTO WEB PAGES AT THE NETWORK LEVEL - A proxy server automatically includes web applications in web pages at the network level. The proxy server receives, from a client device, a request for a network resource that is at a domain and is hosted at an origin server. The proxy server retrieves the requested network resource. The retrieved network resource does not include the web applications. The proxy server determines that the web applications are to be installed within the network resource. The proxy server automatically modifies the retrieved network resource to include the web applications. The proxy server transmits a response to the client device that includes the modified network resource. The network resource may remain unchanged at the origin server. | 01-15-2015 |
20150026288 | METHOD FOR ACCELERATING WEB SERVER BY PREDICTING HYPERTEXT TRANSFER PROTOCOL (HTTP) REQUESTS AND WEB SERVER ENABLING THE METHOD - Provided is a method of improving performance of a web server by predicting a Hypertext Transfer Protocol (HTTP) request and the web server enabling the method, including transmitting, to an HTTP requester, at least one web content among web contents including static web contents and dynamic web contents in response to an HTTP request, selecting, from the web contents, a required web content to be additionally transmitted to the HTTP requester and a potential web content to be additionally transmitted to the HTTP requester, determining, among the potential web content, a web content to be preloaded, and storing, in a document cache, the required web content and the web content to be preloaded. | 01-22-2015 |
20150026289 | CONTENT SOURCE DISCOVERY - Systems and methods for discovering content sources and/or delivering content to applications resident on mobile devices are described. In some embodiments, the systems and methods transmit information identifying one or more applications resident on a mobile device to a server, receive, from the server, information associated with content items available for retrieval from a content server and associated with the identified one or more applications, and cause the mobile device to retrieve at least one of the content items available for retrieval from the content server. | 01-22-2015 |
20150032838 | SYSTEMS AND METHODS FOR CACHING AUGMENTED REALITY TARGET DATA AT USER DEVICES - Systems and methods are disclosed for transmitting, to user devices, data for potential targets predicted to be identified in an augmented reality application. One method includes receiving a request for target data related to at least one physical object within an image of a real-world environment captured at the device; identifying a current target representing the physical object within a virtual environment corresponding to the real-world environment; determining at least one potential future target to be identified at the device based on identified coincident target requests; and sending to the device target data for each of the current and potential future targets based on the determination, wherein the device presents the target data for the current target within the virtual environment displayed at the device and stores the target data for the potential future target in a local memory of the device. | 01-29-2015 |
20150039713 | CONTENT CACHING - A gateway within a network intercepts a request by a client within the network for content associated with a server outside the network, the client having a direct connection with the server outside the network. The method further includes determining whether a copy of the requested content is available in a cache within the network. The method further includes, if the copy of the requested content is determined to be available in the cache within the network, transmitting a redirect response to the client to cause the client to retrieve the copy of the requested content from the cache within the network. The method further includes, if the copy of the requested content is determined not to be available in the cache within the network, permitting the intercepted content request by the client to be transmitted to the server outside the network to cause the requested content to be retrieved via the direct connection between the server outside the network and the client within the network. | 02-05-2015 |
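The hit/miss decision described in abstract 20150039713 amounts to a simple branch at the gateway. The sketch below uses hypothetical names (a dict-backed cache mapping URLs to object identifiers, and an assumed `cache_base` URL) purely to illustrate the two outcomes.

```python
# Hedged sketch of the gateway decision in abstract 20150039713: redirect
# the client to an in-network cache copy on a hit, otherwise let the
# intercepted request pass through to the server outside the network.

def handle_intercepted_request(url, cache, cache_base="http://cache.local/"):
    """Return a redirect to the cached copy, or pass the request through."""
    if url in cache:
        # Cache hit: the client is redirected and fetches from the cache.
        return {"action": "redirect", "location": cache_base + cache[url]}
    # Cache miss: the original request proceeds over the direct connection.
    return {"action": "pass_through", "url": url}
```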
20150039714 | MULTIMEDIA CACHE WITH DYNAMIC SEGMENTING - A method and system are proposed for storing streamed multimedia data in a multimedia cache. The method includes: receiving portions of a multimedia data stream item from a multimedia source; pre-storing a plurality of the multimedia data stream portions in a buffer in the order in which the portions are received; determining the temporal position of the pre-stored multimedia data portions in the multimedia data stream in order to identify consecutive sequences in the media data stream; rearranging the pre-stored multimedia data stream portions to form at least one temporally contiguous data stream portion and storing each contiguous data stream portion as a single segment in a cache file. The pre-storage and subsequent rearrangement or reorganisation of the received media data stream portions means that all data stream portions that are received out of order but are sequential within the media data item can be placed in the correct temporal sequence and also combined into a single data portion, thus facilitating both the subsequent storage of the media data in a cache and the retrieval of this data. | 02-05-2015 |
20150039715 | PUBLISHER-ASSISTED, BROKER-BASED CACHING IN A PUBLISH-SUBSCRIPTION ENVIRONMENT - Embodiments of the present invention provide an approach for a publisher-assisted, broker-based cache that can be utilized to reduce a volume of data (e.g., network traffic) delivered between a publisher and broker in a publication/subscription (pub/sub) environment. Specifically, in a typical embodiment, when a message is being generated on a publisher system, the publisher system will determine if the message includes a set of data that has a potential to be repeated in subsequent messages. Once such a set of data has been identified, the set of data will be associated/marked/tagged (e.g., in the message) with a unique identifier/cache key corresponding thereto (i.e., to yield a modified message). The modified message will be sent to a broker system, which will detect/locate the unique identifier, cache the corresponding data, and send the message along to any applicable subscriber systems. When a subsequent message that is supposed to contain the cached set of data is generated, the publisher system will instead substitute the unique identifier for the set of data to yield an abbreviated message and send the abbreviated message to the broker system. Upon receipt, the broker system will detect/locate the unique identifier, retrieve the corresponding set of data from the cache, replace the unique identifier with the set of data to yield a completed message, and then send the completed message to the applicable subscriber systems. | 02-05-2015 |
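The publisher-assisted substitution in abstract 20150039715 can be illustrated with a minimal sketch; the class and method names are hypothetical, and real messages would of course carry more than a key and a payload.

```python
# Illustrative sketch of abstract 20150039715: the publisher tags a
# repeatable payload with a cache key the first time, then sends only the
# key; the broker caches the payload and re-expands abbreviated messages.

class Publisher:
    def __init__(self):
        self.sent_keys = set()  # keys whose payloads the broker has seen

    def publish(self, key, payload):
        if key in self.sent_keys:
            return {"cache_key": key}                    # abbreviated message
        self.sent_keys.add(key)
        return {"cache_key": key, "data": payload}       # full, tagged message

class Broker:
    def __init__(self):
        self.cache = {}  # cache key -> payload

    def deliver(self, message):
        """Return the completed payload to forward to subscribers."""
        if "data" in message:
            self.cache[message["cache_key"]] = message["data"]
            return message["data"]
        return self.cache[message["cache_key"]]          # re-expand from cache
```

After the first full message, every repeat of the same payload crosses the publisher-broker link as a key only.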
20150046557 | SYSTEM, METHOD AND APPARATUS FOR USING A VIRTUAL BUCKET TO TRANSFER ELECTRONIC DATA - A system that enables a mobile communication device to transfer data to or from a computer system using communication data read from an NFC tag. The first device transfers the data, which is temporarily held until the second device removes it. Once the data is removed, the location where the data was temporarily held is emptied. | 02-12-2015 |
20150052215 | WIRELESS SHARING OF DEVICE RESOURCES ALLOWING DEVICE STORAGE NEEDS TO BE WIRELESSLY OFFLOADED TO OTHER DEVICES - Systems and methods for wireless sharing of device resources, allowing device storage needs to be wirelessly offloaded to other devices. In a method, which may be implemented on a system, storage is shared among devices by offloading storage needs of a first device to a second device among two devices coupled in a wireless network. In offloading the storage needs, data for use at the first device may be transmitted over the wireless network to be stored at the second device. | 02-19-2015 |
20150058435 | Fast Mobile Web Applications Using Cloud Caching - Methods and systems may provide for identifying a web application having a primary resource that references a secondary resource, wherein the primary resource contains a version identifier of the primary resource and a version identifier of the secondary resource. Additionally, a cached version of the primary resource and a cached version of the secondary resource may be created on a mobile device, and the version identifier of the primary resource may be used to determine whether the secondary resource is stale. In one example, it may be determined that staleness checking has been disabled in the secondary resource. Moreover, if the primary resource does not contain the version identifiers, cloud caching may be used. | 02-26-2015 |
20150058436 | STORAGE DEVICE AND DATA PROCESSING METHOD - A storage device according to an embodiment includes a plurality of memory nodes and a first connection unit. The memory nodes each include nonvolatile memory and are connected to one another in two or more different directions. The first connection unit adds a first lifetime to a command which is externally supplied, and transmits the command including the first lifetime to a first memory node. A second memory node having received the command among the plural memory nodes, if the second memory node is not a destination of the command, decrements the first lifetime added to the command. The second memory node discards the command after the subtraction when the first lifetime after the subtraction is less than a threshold. The second memory node transfers the command after the subtraction to the adjacent memory node when the first lifetime after the subtraction is larger than the threshold. | 02-26-2015 |
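The lifetime rule in abstract 20150058436 is essentially a hop-count (TTL) check at each node. The sketch below is a minimal model under assumed details: commands are dicts, the threshold is 0, and a node returns a verdict rather than physically forwarding.

```python
# Hypothetical sketch of the command-lifetime rule in abstract 20150058436:
# a node that is not the destination decrements the lifetime, discards the
# command if the result falls below a threshold, and forwards it otherwise.

THRESHOLD = 0  # assumed threshold value

def handle_command(node_id, command):
    """Return ('deliver'|'discard'|'forward', command) per the lifetime rule."""
    if command["dest"] == node_id:
        return ("deliver", command)          # this node is the destination
    command = dict(command, lifetime=command["lifetime"] - 1)
    if command["lifetime"] < THRESHOLD:
        return ("discard", command)          # lifetime exhausted in transit
    return ("forward", command)              # pass on to an adjacent node
```

The decrement bounds how far a misrouted command can wander through the mesh of memory nodes.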
20150058437 | METHOD FOR PROVIDING AUDIO CONTENTS TO A MOBILE CLIENT DEVICE - A computer implemented method for providing audio contents as a plurality of tracks to be played on a listener's mobile client device with Internet radio capabilities, the client device intended to be connected to the Internet. The method comprises: obtaining, as a listener's input into the listener's client device, a playlist definition; selecting, from a plurality of tracks, tracks meeting the playlist definition to form a playlist, wherein the playlist is formed by playlist entries that include track identifications referring to selected ones of the plurality of tracks. The plurality of tracks comprises (i) tracks present in a remote master media inventory, tracks present in an Internet-based cloud memory environment, and/or tracks present in a local media content inventory of the listener's client device; a hybrid engine software library generates personalized track playlists for a listener based on details of the listener's input, a current configuration of the system, the plurality of tracks, and current network conditions. | 02-26-2015 |
20150058438 | SYSTEM AND METHOD PROVIDING HIERARCHICAL CACHE FOR BIG DATA APPLICATIONS - The embodiments herein develop a system for providing hierarchical cache for big data processing. The system comprises a caching layer, a plurality of actors in communication with the caching layer, a machine hosting the plurality of actors, a plurality of replication channels in communication with the plurality of actors, and a predefined ring structure. The caching layer is a chain of memory and storage capacity elements, configured to store a data from the input stream. The plurality of actors is configured to replicate the input data stream and forward the replicated data to the caching layer. The replication channels are configured to forward the replicated data from a particular actor to another actor. The predefined ring structure maps the input data to the replica actors. | 02-26-2015 |
20150058439 | APPARATUS AND METHOD FOR CACHING OF COMPRESSED CONTENT IN A CONTENT DELIVERY NETWORK - A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides “on-the-fly” compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags. | 02-26-2015 |
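The "on-the-fly" compression path in abstract 20150058439 can be sketched with the standard gzip algorithm the abstract names; the dict-shaped cache entries and the `serve` function are illustrative assumptions.

```python
# Illustrative sketch of abstract 20150058439: serve cached content in
# compressed form, gzipping on the fly when only an uncompressed copy
# is held in the cache.

import gzip

def serve(cache, key):
    """Return a gzip-compressed body for the cached content under key."""
    entry = cache[key]
    if entry["compressed"]:
        return entry["body"]                 # already stored compressed
    return gzip.compress(entry["body"])      # on-the-fly compression
```

A real edge server would also honor the client's Accept-Encoding header before choosing this path.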
20150067088 | SCHEDULING AND EXECUTION OF DAG-STRUCTURED COMPUTATION ON RDMA-CONNECTED CLUSTERS - A server and/or a client stores a metadata hash map that includes one or more entries associated with keys for data records stored in a cache on a server, wherein the data records comprise a directed acyclic graph (DAG), and the directed acyclic graph is comprised of a collection of one or more nodes connected by one or more edges, each of the nodes representing one or more tasks ordered into a sequence, and each of the edges representing one or more constraints on the nodes connected by the edges. Each of the entries stores metadata for a corresponding data record, wherein the metadata comprises a server-side remote pointer that references the corresponding data record stored in the cache. A selected data record is accessed using a provided key by: (1) identifying potentially matching entries in the metadata hash map using the provided key; (2) accessing data records stored in the cache using the server-side remote pointers from the potentially matching entries; and (3) determining whether the accessed data records match the selected data record using the provided key. | 03-05-2015 |
20150067089 | METADATA DRIVEN DECLARATIVE CLIENT-SIDE SESSION MANAGEMENT AND DIFFERENTIAL SERVER SIDE DATA SUBMISSION - A method and system for managing declarative client-side session. The method includes caching data loaded by user interface (UI) associated with a client device operated by a user, and identifying one or more units of data and meta-information describing data to be accessed through the UI. Further, the method includes associating the one or more units of data with the identified meta-information, and constructing a declarative session in accordance to the association. Further, the method includes recording changes performed by the user on the one or more units of data, and submitting the recorded changes to a server in accordance to the declarative session. | 03-05-2015 |
20150067090 | STORING LOW RETENTION PRIORITY DATA IN A DISPERSED STORAGE NETWORK - A method begins by a processing module of a dispersed storage network (DSN) sending a set of low retention priority write requests to storage units of the DSN, where each low retention priority write request includes a low retention priority query. For each storage unit of the storage units that receives a low retention priority write request of the set of low retention priority write requests, the method continues with the processing module determining a low retention priority response regarding availability for storing low retention priority data based on current storage of low priority data objects and available memory for storing the low retention priority data. The method continues with the processing module sending the low retention priority response. When a threshold number of favorable low retention priority responses have been received, the method continues at the processing module facilitating storage of a low retention priority data object. | 03-05-2015 |
20150067091 | INTERCONNECT DELIVERY PROCESS - A method for enforcing data integrity in an RDMA data storage system includes flushing data write requests to a data storage device before sending an acknowledgment that the data write requests have been executed. An RDMA data storage system includes a node configured to flush data write requests to a data storage device before sending an acknowledgment that a data write request has been executed. | 03-05-2015 |
20150067092 | CONTENT DELIVERY NETWORK WITH DEEP CACHING INFRASTRUCTURE - Embodiments herein include methods and systems for use in delivering resources to a client device over a local network. An exemplary system comprises a plurality of caching devices operable to cache resources on behalf of a plurality of content providers, and a local caching device communicatively situated between an access network and the client device, wherein the access network is communicably situated between the plurality of caching devices and the local caching device. The local caching device is operable to retrieve a requested resource from at least one of the plurality of caching devices, deliver the requested resource to the client device over the local network, and store the requested resource for future requests by other client devices. | 03-05-2015 |
20150074218 | CLOUD ENTERPRISE APPLICATION SYSTEM - A cloud enterprise application system is constructed directly in the cloud without being installed on individual users' computer devices. When the edition of the application is modified, only the edition in the cloud structure needs to be changed. Each enterprise has a dedicated database (sub-directory). All data generated by the various application units are automatically stored in the dedicated sub-directory instead of in the users' operating environments. For one enterprise, all the application units performed by the enterprise are directed to the enterprise's sub-directory and directly access its data. Thus the compatibility of data between different applications is high. The data files are not distributed across many different computers, so problems arising from data management are greatly reduced. Furthermore, the system is aimed at enterprises rather than individual users. | 03-12-2015 |
20150074219 | HIGH AVAILABILITY NETWORKING USING TRANSACTIONAL MEMORY - Techniques for facilitating high availability in a device (e.g., a network device) comprising redundant processing entities (e.g., one or more processors, one or more cores, etc.) and a transactional memory system. The transactional memory system comprises a memory that is shareable between the redundant processing entities and ensures consistency of information stored in the memory at the atomicity of a transaction. A first processing entity may operate in a first mode (e.g., active mode) while a second processing entity operates in a second mode (e.g., standby mode). Operational state information used by the active processing entity for performing a set of functions in the first mode may be stored in the shared memory. Upon a switchover, the second processing entity may start to operate in the first mode and commence performing the set of functions using the operational state information stored by the transactional memory system. | 03-12-2015 |
20150074220 | SOCIAL NETWORKING UTILIZING A DISPERSED STORAGE NETWORK - Social networking data is received at a dispersed storage processing unit, the social networking data associated with at least one of a plurality of user devices. Dispersed storage metadata associated with the social networking data is generated. A full record and at least one partial record are generated based on the social networking data and further based on the dispersed storage metadata. The full record is stored in a dispersed storage network. The partial record is pushed to at least one other of the plurality of user devices via the data network. | 03-12-2015 |
20150081831 | JOINING A DISTRIBUTED DATABASE - A method may include a device joining a distributed database in a distributed physical access control system. The method may include storing first data in a first memory area of a memory. The first memory area may be designated to store data for a consensus-based distributed database (DB). The first data is to be added to the consensus-based distributed DB that is distributed among other devices in a network. The method may include copying the first data to a second memory area of the memory of the device and adding the device to the network, receiving data from the other devices in the network and adding the received data to the consensus-based distributed DB by storing the received data in the first memory area, and adding the first data to the consensus-based distributed DB by copying the first data from the second memory area to the first memory area. | 03-19-2015 |
20150081832 | MANAGING SEED DATA - Embodiments of the invention provide systems and methods for managing seed data in a computing system (e.g., middleware computing system). A disclosed server computer may include a processor and a memory coupled with and readable by the processor and storing therein a set of instructions which, when executed by the processor, cause the processor to perform a method. The method may include obtaining first input data via a graphical interface. The first input data indicates a first memory storage location of seed data. The seed data comprises data to initialize an application for operation. The method further includes accessing the seed data from the first memory storage location based on the first input data. The method includes storing data based on the seed data to a second memory storage location. | 03-19-2015 |
20150081833 | Dynamically Generating Flows with Wildcard Fields - Some embodiments of the invention provide a switching element that receives a packet and processes the packet by dynamically generating a flow entry with a set of wildcard fields. The switching element then caches the flow entry and processes any subsequent packets that have header values that match the flow entry's non-wildcard match fields. In generating the flow, the switching element initially wildcards some or all of the match fields and generates a new flow entry by un-wildcarding each match field that was consulted or examined to generate the flow entry. | 03-19-2015 |
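The un-wildcarding idea in abstract 20150081833 reduces to building a flow entry that constrains only the header fields the lookup actually consulted. The sketch below models packets and flows as dicts; the function names are hypothetical.

```python
# Hypothetical sketch of abstract 20150081833: start with every match field
# wildcarded, then un-wildcard only the fields consulted during the lookup,
# so one cached flow entry covers many future packets.

def make_flow(packet, consulted_fields):
    """Build a flow entry matching only on the consulted header fields."""
    return {f: packet[f] for f in consulted_fields}

def matches(flow, packet):
    """A packet matches if it agrees on every non-wildcard field."""
    return all(packet.get(f) == v for f, v in flow.items())
```

If only the destination field was consulted, packets from any source to that destination hit the same cached flow.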
20150081834 | INFORMATION PROCESSING SYSTEM AND METHOD - An information processing system includes a reception part and a process control part. The reception part receives a processing request including process identification information identifying a process and user identification information from an apparatus. The process control part, when the received user identification information is stored in correlation with the received process identification information, executes the process according to the process identification information based on the result of applying change information stored in correlation with the received user identification information to setting information stored in correlation with apparatus identification information identifying the apparatus and with the received process identification information. | 03-19-2015 |
20150081835 | METHOD AND APPARATUS FOR SPEEDING UP WEB PAGE ACCESS - Embodiments of the present invention disclose a method and apparatus for speeding up Web page access, pertaining to the network field. The method includes: acquiring a URL address initiated by a user; judging whether the URL address is stored in a preset cache database, where the cache database stores a plurality of mapping relationships between URL addresses and cache data; and when it is judged that the URL address is stored in the preset cache database, acquiring cache data corresponding to the URL address from the cache database, processing the cache data, and rendering the Web page. According to the embodiments of the present invention, logic for implementing the cache database is added at the browser end. In this way, Web page access is sped up regardless of whether a Web server or a proxy server complies with the HTTP protocol. | 03-19-2015 |
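The browser-end lookup in abstract 20150081835 is a cache check keyed by URL before falling back to the network. The sketch below uses assumed interfaces (a `fetch` callable standing in for the HTTP client, a dict standing in for the cache database).

```python
# Minimal sketch of the browser-side cache in abstract 20150081835:
# consult a URL-keyed cache database first, and only fetch over the
# network on a miss, storing the result for next time.

class UrlCache:
    def __init__(self, fetch):
        self.fetch = fetch          # fallback fetcher, e.g. an HTTP client
        self.db = {}                # cache database: URL -> cached page data

    def get(self, url):
        """Return (data, source) where source is 'cache' or 'network'."""
        if url in self.db:
            return self.db[url], "cache"
        data = self.fetch(url)
        self.db[url] = data
        return data, "network"
```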
20150089013 | INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - An information processing apparatus includes: an image memory that stores therein an image that is to be displayed on a terminal device; a drawing unit that draws a processing result from software into the image memory; a detecting unit that detects an update area containing an update between frames in the image; a splitting unit that splits the image in the update area; a creating unit that creates wait insertion data by inserting a wait between each of the pieces of the split data; a changing unit that changes the wait; an acquiring unit that acquires the available bandwidth and a display update speed that indicates display intervals of the wait insertion data for each wait; and a selecting unit that selects a wait for which the available bandwidth is increased and the display update speed is improved. | 03-26-2015 |
20150089014 | SYSTEMS AND METHODS FOR CACHE-BASED CONTENT DELIVERY - Methods, systems, and computer-readable media for cache-based management of non-linear content delivery are generally described. In some embodiments, content to be presented using consumer devices may be cached at a cache device of the consumer device. A cache policy server may transmit cache messages instructing the consumer devices which content to cache, which cache device to store the content on, where to locate the content, and how much of the content to cache. When it is time to play the content at the consumer device, the content may be played back from the cache. | 03-26-2015 |
20150100659 | CACHING MECHANISM FOR DETERMINING VALIDITY OF CACHED DATA USING WEB APPLICATION BUSINESS LOGIC - Systems and methods are provided for a caching mechanism that determines validity of cached data using web application business logic. An example system includes a web container that receives a first request to return one or more generated data located in a data cache, and a web application including one or more data validity arbiters and business logic determining the validity of the one or more generated data. The system may further include a business logic caching mechanism that requests the validity of the one or more generated data from the one or more data validity arbiters, wherein the one or more data validity arbiters utilize the business logic to determine the validity of the one or more generated data. | 04-09-2015 |
20150100660 | SYSTEMS AND METHODS FOR CACHING CONTENT WITH NOTIFICATION-BASED INVALIDATION - Described herein are systems, devices, and methods for content delivery on the Internet. In certain non-limiting embodiments, a caching model is provided that can support caching for indefinite time periods, potentially with infinite or relatively long time-to-live values, yet provide prompt updates when the underlying origin content changes. In one approach, an origin server can annotate its responses to content requests with tokens, e.g., placing them in an appended HTTP header or otherwise. The tokens can drive the process of caching, and can be used as handles for later invalidating the responses within caching proxy servers delivering the content. Tokens may be used to represent a variety of kinds of dependencies expressed in the response, including without limitation data, data ranges, or logic that was a basis for the construction of the response. | 04-09-2015 |
20150100661 | SYSTEMS AND METHODS FOR MAPPING AND ROUTING BASED ON CLUSTERING - Unique identifiers (IDs) associated with a plurality of nodes may be provided. Nodes clustered within a community may be assigned numerically proximate unique IDs. A number of partitions associated with a plurality of machines may be determined. The unique IDs may be segmented into divisions based on the number of partitions. The unique IDs may be mapped to the plurality of machines based on the divisions. | 04-09-2015 |
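The segmentation step in abstract 20150100661 can be sketched as integer division of the ID space: because community members receive numerically proximate IDs, they tend to fall into the same division and thus onto the same machine. The function name and the assumption that the ID space divides evenly are illustrative.

```python
# Hypothetical sketch of abstract 20150100661: segment the unique-ID space
# into equal divisions, one per partition, and map each ID to the machine
# owning its division, keeping clustered (numerically close) IDs together.

def map_ids_to_machines(ids, num_partitions, id_space):
    """Map each unique ID to a machine index by segmenting the ID space."""
    division = id_space // num_partitions  # assumes id_space divides evenly
    return {i: i // division for i in ids}
```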
20150100662 | UTILIZING MULTIPLE DATA STRUCTURES FOR SLICE STORAGE - A method includes a dispersed storage (DS) processing module receiving a slice access request that includes a slice name. The method continues by obtaining one or more revision numbers for the slice access request. The method continues for each combination of revision number and the slice name, by performing a deterministic function on the combination to produce a slice location table index value. The method continues by accessing a slice location table utilizing the slice location table index value to obtain a slice location. The method continues by accessing a slice utilizing the slice location. The method continues by generating a slice access response based on the accessing of the slice and sending the slice access response to a requesting entity. | 04-09-2015 |
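The deterministic function in abstract 20150100662 is unspecified in the filing; a cryptographic hash of the (slice name, revision number) combination, reduced modulo the table size, is one plausible instantiation and is what the sketch below assumes.

```python
# Sketch (assumed details) of abstract 20150100662: a deterministic function
# of the slice name and revision number yields an index into the slice
# location table, so the same combination always resolves to the same slot.

import hashlib

def slice_location_index(slice_name, revision, table_size):
    """Deterministically map (slice name, revision) to a table index."""
    combo = f"{slice_name}:{revision}".encode()
    digest = hashlib.sha256(combo).digest()
    return int.from_bytes(digest[:8], "big") % table_size
```

Determinism is the point: any module repeating the computation for the same combination lands on the same slice location entry.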
20150100663 | COMPUTER SYSTEM, CACHE MANAGEMENT METHOD, AND COMPUTER - A computer system comprising: a server on which an application operates; and a storage system that stores data used by the application, the server including an operating system for controlling the server, the operating system including a cache driver for controlling a cache, the cache driver storing first access management information for managing the number of accesses to a partial storage area of a volume provided by the storage system, and the cache driver being configured to: manage the number of accesses to the partial storage area of the volume by using the first access management information; replace the storage area to which the number of accesses is to be managed based on a predetermined replacement algorithm; and control arrangement of data in the server cache based on the first access management information. | 04-09-2015 |
20150100664 | SYSTEMS AND METHODS FOR CACHING CONTENT WITH NOTIFICATION-BASED INVALIDATION WITH EXTENSION TO CLIENTS - Described herein are systems, devices, and methods for content delivery on the Internet. In certain non-limiting embodiments, a caching model is provided that can support caching for indefinite time periods, potentially with infinite or relatively long time-to-live values, yet provide prompt updates when the underlying origin content changes. In one approach, an origin server can annotate its responses to content requests with tokens, e.g., placing them in an appended HTTP header or otherwise. The tokens can drive the process of caching, and can be used as handles for later invalidating the responses within caching proxy servers delivering the content. This caching and invalidation model can be extended out to clients, such that clients may be notified of invalid data and obtain timely updates. | 04-09-2015 |
20150106469 | ELECTRONIC DEVICE WITH DATA CACHE FUNCTION AND RELATED METHOD - An electronic device with a data cache function includes a memory and a processor. The processor caches video data to the memory when a webpage is opened to play a network video, divides the cached video data into a plurality of sub-videos, transfers the plurality of sub-videos from the memory to an external storage device for storage, and joins the plurality of sub-videos from the external storage device with the cached video data in the memory to form a joined video. | 04-16-2015 |
20150106470 | A CACHING DEVICE AND METHOD THEREOF FOR INTEGRATION WITH A CLOUD STORAGE SYSTEM - A network attached storage device and method for performing network attached storage operations with cloud storage services are provided. The device includes at least one network controller for communicating with a plurality of clients over a local area network (LAN) and with the cloud storage service (CSS) over a wide area network (WAN); a cache memory for locally caching data of the CSS in the device; and a virtual cloud drive (VCD) for enabling the plurality of clients to perform file-based operations on data stored in the CSS using at least one file sharing protocol. | 04-16-2015 |
20150106471 | Data Processing Method, Router, and NDN System - A data processing method, a router, and an NDN system are disclosed. The method may include obtaining a priority attribute of the data when data is received, setting a life cycle attribute for the data according to a correspondence between the priority attribute and the life cycle attribute, and storing, in a local cache, the data having the life cycle attribute. | 04-16-2015 |
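The priority-to-life-cycle correspondence can be sketched as a lookup table that converts a priority attribute into a time-to-live when the data is stored in the local cache; the priority levels and TTL values below are illustrative assumptions:

```python
import time

# Sketch of priority-driven cache lifetimes: data arrives with a priority
# attribute, and its life-cycle (expiry) is set from a fixed correspondence.

LIFETIME_BY_PRIORITY = {"high": 600.0, "medium": 60.0, "low": 5.0}

class LocalCache:
    def __init__(self):
        self._store = {}  # name -> (data, expiry time)

    def put(self, name, data, priority, now=None):
        now = time.time() if now is None else now
        ttl = LIFETIME_BY_PRIORITY[priority]
        self._store[name] = (data, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None or entry[1] <= now:
            return None  # missing or past its life cycle
        return entry[0]

cache = LocalCache()
cache.put("/video/seg1", b"data", priority="low", now=0.0)
hit = cache.get("/video/seg1", now=1.0)    # still within its life cycle
miss = cache.get("/video/seg1", now=10.0)  # expired
```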
20150113089 | METHOD AND APPARATUS FOR FLEXIBLE CACHING OF DELIVERED MEDIA - Various methods are described for selecting an access method for flexible caching in DASH. One example method may comprise causing a request for at least one of a primary representation for a segment and one or multiple alternative representations for the segment to be transmitted to a caching proxy. The method of this example embodiment may further comprise causing the caching proxy to respond with at least one of the primary representation or the alternative representation based on the caching status at a caching proxy. In some example embodiments, the caching proxy is configured to determine whether the request enables an alternative representation to be included in a response. Furthermore, the method of this example embodiment may comprise receiving at least one of the primary representation or the alternative representation for the segment from the caching proxy. Similar and related example methods, example apparatuses, and example computer program products are also provided. | 04-23-2015 |
20150120856 | METHOD AND SYSTEM FOR PROCESSING NETWORK TRAFFIC FLOW DATA - Network traffic flow records received from a network probe are recorded in multiple sets of buckets of different granularity, optimized for near-instant analysis and display as well as for longer-term report generation. The flow data is pre-processed and stored redundantly in parallel in multiple bucketized database tables of different time window sizes. Denormalized tables keyed on different combinations of traffic flow attributes are precomputed and stored redundantly in parallel tables to facilitate a near real-time display of summarized network traffic data and a capability to rapidly generate reports for different monitoring periods. | 04-30-2015 |
20150120857 | INFORMATION PROCESSING SYSTEM AND METHOD OF PROCESSING INFORMATION - An information processing system including at least one computer includes a receiving unit that receives data input by an apparatus connected through a network and specifying information of specifying a storage destination of the data or other data generated based on the other data from a plurality of candidate storage destinations, the data and the specifying information being received from the apparatus; an intermediation unit that provides an interface common to the plurality of candidate storage destinations and sends the data or the other to the storage destination designated in a request received through the common interface; and a requesting unit that requests the intermediation unit through the common interface to send the data received by the receiving unit or the other data generated based on the data to the storage destination specified based on specifying information. | 04-30-2015 |
20150120858 | SYSTEM FOR PREFETCHING DIGITAL TAGS - Systems and methods described herein can take advantage of the caching abilities of the browser and the idle time of the user to prefetch tag libraries of one or more tags for execution in a subsequent content page. For example, these systems and methods can provide the ability to prefetch and not execute a tag library on a content page before it is required so the tag library is cached in the browser. When the browser hits the page that uses the tag library, the tag library can be quickly retrieved from memory and executed. | 04-30-2015 |
20150120859 | COMPUTER SYSTEM, AND ARRANGEMENT OF DATA CONTROL METHOD - A computer system includes a service server, a storage server, and a management server, wherein the service server includes an operating system, wherein the operating system includes a cache driver, wherein the storage server manages a plurality of tiered storage areas each having an access performance different from one another, and wherein the management server includes an alert setting information generation part for generating alert setting information used by the service server to transmit alert information notifying of a trigger to change an arrangement of data in accordance with a state of the service, and a control information generation part for generating cache control information including a first command for controlling an arrangement of cache data on a storage cache and tier control information including a second command for controlling an arrangement of the data on the plurality of tiered storage areas. | 04-30-2015 |
20150120860 | SYSTEM AND METHOD FOR ATOMIC FILE TRANSFER OPERATIONS OVER CONNECTIONLESS NETWORK PROTOCOLS - A system for atomic file transfer operations over connectionless network protocols includes a processor and a memory coupled to the processor. The memory contains program instructions executable by the processor to implement an operating system including a system call interface for sending one or more data files to another system over a network via a connectionless network protocol. In response to an invocation of the system call by an application, the operating system is configured to send the one or more data files to the other system over the network without the application copying contents of the data files into application address space. | 04-30-2015 |
20150120861 | METHOD AND DEVICE FOR OBTAINING CONTENTS OF PAGE, APPLICATION APPARATUS AND MOBILE TERMINAL - A method and a device for obtaining contents of a page, an application apparatus and a mobile terminal are disclosed. The method includes: sending a request for obtaining the page to a page server by a first website loading module; intercepting the request for obtaining the page by an intercepting module; sending the request for obtaining the page to a proxy module by a second website loading module; obtaining the contents of the page by the proxy module based on the request for obtaining the page; returning the contents of the page to the second website loading module by the proxy module; and returning the contents of the page by the second website loading module. With the technical solution, the loading speed may be increased, the loading time may be reduced and the loading efficiency may be improved. | 04-30-2015 |
20150127764 | STORAGE DEVICE CONTROL - A method includes receiving a write request on at least one storage device; detecting a predetermined block of data within the write request; setting a first short code within a translation table if the predetermined block of data is detected; and writing the write request into the at least one storage device if the predetermined block of data is not detected. | 05-07-2015 |
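The write path can be sketched as follows, assuming (as an illustration) that the predetermined block is the all-zero block and that the short code is a single marker recorded in the translation table in place of a physical write:

```python
# Sketch of detect-and-skip writes: a predetermined block (here, all zeros —
# an assumption) sets a short code in the translation table instead of being
# written; any other block is written through to the device.

BLOCK_SIZE = 4096
ZERO_CODE = "Z"  # illustrative short code

translation_table = {}  # logical block address -> short code
device = {}             # logical block address -> data (stands in for the disk)

def handle_write(lba, data):
    if data == b"\x00" * BLOCK_SIZE:
        translation_table[lba] = ZERO_CODE  # detected: no physical write
    else:
        translation_table.pop(lba, None)
        device[lba] = data                  # not detected: normal write

handle_write(7, b"\x00" * BLOCK_SIZE)
handle_write(8, b"\x01" * BLOCK_SIZE)
```

Reads of block 7 would then be synthesized from the short code rather than fetched from media.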
20150127765 | CONTENT NODE SELECTION USING NETWORK PERFORMANCE PROFILES - A communication system exchanges communications between end user devices, content delivery nodes (CDN) of a content delivery system, and a control system that selects CDNs of the content delivery system. The control system receives a domain name lookup request issued by an end user device for retrieving content cached by one or more CDNs of the content delivery system. The control system associates the end user device with a network performance profile to select a CDN of the content delivery system. The control system transfers a network address associated with the selected CDN for receipt by the end user device responsive to the domain name lookup request. | 05-07-2015 |
20150127766 | METHOD AND NODE ENTITY FOR ENHANCING CONTENT DELIVERY NETWORK - The present invention provides a method and a caching node entity for ensuring at least a predetermined number of a content object to be kept stored in a network, comprising a plurality of cache nodes for storing copies of content objects. The present invention makes use of ranking states values, deletable or non-deletable, which when assigned to copies of content objects are indicating whether a copy is either deletable or non-deletable. At least one copy of each content object is assigned the value non-deletable. The value for a copy of a content object changing from deletable to non-deletable in one cache node of the network, said copy being a candidate for the value non-deletable, if a certain condition is fulfilled. | 05-07-2015 |
20150134766 | DATA TRAFFIC SWITCHING AMONG COMPUTING DEVICES IN STORAGE AREA NETWORK (SAN) ENVIRONMENTS - Data traffic switching among computing devices in a SAN environment is disclosed herein. According to an aspect, a method may be implemented at an NPV device that is associated with multiple computing devices positioned behind the NPV device in a SAN. The method may include receiving zoning information associated with the computing devices. The method may also include determining, based on the zoning information, a map for switching data traffic among the computing devices. Further, the method may include switching the data traffic among the computing devices based on the determined map. | 05-14-2015 |
20150134767 | ACCELERATED DELIVERY OF MEDIA CONTENT VIA PEER CACHING - An example method includes monitoring client devices to identify a subset of client devices actively connected to an internet gateway server, and maintaining a record of media data chunks cached at each client device of the subset of client devices. The method includes receiving a request from a first client device for a media data item stored at a media server device, and determining that a first target portion of the media data item is cached at a second client device actively connected to the internet gateway server. The method includes instructing the first client device to establish a peer-to-peer connection with the second client device, to request, and to receive the first target portion of the media data item from the second client device. The method includes retrieving and sending the remainder of the media data item to the first client device. | 05-14-2015 |
20150134768 | SYSTEM AND METHOD FOR CONDITIONAL ANALYSIS OF NETWORK TRAFFIC - Embodiments that are described herein provide improved methods and systems for analyzing network traffic. The disclosed embodiments enable an analytics system to perform complex processing to only new, first occurrences of received content, while refraining from processing duplicate instances of that content. In a typical embodiment, the analytics results regarding the first occurring content are reported and cached in association with the content. For any duplicate instance of the content, the analytics results are retrieved from the cache without re-processing of the duplicate content. When using the disclosed techniques, the system still processes all first occurring content but not duplicate instances of content that was previously received and processed. In the embodiments described herein, input data comprises communication packets exchanged in a communication network. | 05-14-2015 |
20150134769 | DATA SHUNTING METHOD, DATA TRANSMISSION DEVICE, AND SHUNTING NODE DEVICE - Embodiments of the present invention relate to a data shunting method, a data transmission device, and a shunting node device. The data shunting method includes: acquiring the number of to-be-transmitted shunted data packets cached in the shunting node device; and, when the number of the to-be-transmitted shunted data packets is less than a first threshold value, transmitting shunted data to the shunting node device, and otherwise not transmitting the shunted data to the shunting node device. The method enables the data transmission device to provide the shunting node device with an appropriate shunted data rate. | 05-14-2015 |
20150142908 | COMPUTER DEVICE AND MEMORY MANAGEMENT METHOD THEREOF - A memory management method includes sharing a memory space of a memory component via a network through a host operating system, mounting the shared memory space via the network through a virtual machine, monitoring a memory utilization of a virtual memory of the virtual machine, and allocating a storage block to the virtual machine when the memory utilization of the virtual memory of the virtual machine is greater than an upper bound. As such, the capacity of the virtual memory of the virtual machine is increased. | 05-21-2015 |
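The allocation rule can be sketched in a few lines; the upper bound of 90% and the 256 MB block size are illustrative assumptions, not values from the abstract:

```python
# Sketch of threshold-driven growth: when a VM's virtual-memory utilization
# exceeds the upper bound, one more storage block from the network-shared
# space is allocated to it.

UPPER_BOUND = 0.9  # illustrative
BLOCK_MB = 256     # illustrative

def maybe_allocate(vm):
    """vm: dict with 'used_mb' and 'total_mb'. Returns True if a block was added."""
    utilization = vm["used_mb"] / vm["total_mb"]
    if utilization > UPPER_BOUND:
        vm["total_mb"] += BLOCK_MB  # mount one more shared block
        return True
    return False

vm = {"used_mb": 950, "total_mb": 1024}  # ~93% utilized
grew = maybe_allocate(vm)
```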
20150142909 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE SYSTEM - A method for managing a storage system, an associated apparatus, and an associated computer program product are provided, wherein the storage system includes a plurality of network storage devices, and the method includes the steps of: utilizing a broker module to receive a command from a client device; utilizing the broker module to publish the command to a primary node and a secondary node in the storage system, to control the primary node and the secondary node to perform a same operation corresponding to the command; and utilizing the broker module to receive acknowledgement from the primary node and acknowledgement from the secondary node, wherein the primary node represents at least one network storage device utilized as a primary responder within the plurality of network storage devices, and the secondary node represents at least one network storage device utilized as a secondary responder within the plurality of network storage devices. | 05-21-2015 |
20150142910 | FRACTIONAL PRE-DELIVERY OF CONTENT TO USER DEVICES - Systems and methods for delivering fractions of content to user devices before the content is selected or requested (e.g., a pre-delivery of content) are described. In some embodiments, the systems and methods receive an indication that content is available for pre-delivery from a content server to a user device over a network, determine a fraction (e.g., size) of the content available for pre-delivery that satisfies one or more predicted content playback conditions, and causes the determined fraction of the content available for pre-delivery to be delivered to the user device. | 05-21-2015 |
20150142911 | NETWORK INTERFACE CARD HAVING OVERLAY GATEWAY FUNCTIONALITY - In one embodiment, a server includes a virtualization platform providing one or more virtual machines (VMs), the virtualization platform including: logic configured to provide support for the one or more VMs, and logic configured to provide a virtual switch, the virtual switch being configured to provide switching functionality across a network to network traffic received from and/or sent to the one or more VMs, a network interface card (NIC) including a plurality of network ports including multiple Peripheral Component Interconnect express (PCIe) ports, a multi-lane PCIe interface configured to communicate with the server, an Ethernet controller configured to communicate with the network, logic configured to provide overlay network gateway functionality to network traffic received from and/or sent to the network, and logic configured to provide overlay network functionality to network traffic received from and/or sent to the one or more VMs, and a NIC driver configured to interface/support the NIC. | 05-21-2015 |
20150149578 | STORAGE DEVICE AND METHOD OF DISTRIBUTED PROCESSING OF MULTIMEDIA DATA - Provided are a storage device and a method of distributed processing of multimedia data. In the method, the storage device stores multimedia data, initiates an interface configured to share data between a host and the storage device, receives a multimedia data request from the host, processes the multimedia data based on the received multimedia data request, and transmits the processed multimedia data to the host through the interface. | 05-28-2015 |
20150149579 | INTELLIGENT CLIENT CACHE MASHUP FOR THE TRAVELER - Information is collected regarding an event in a computer system that includes a group of client application caches that each temporarily store information associated with one of a group of client applications. A set of rules is stored at one or more of the group of client application caches. Each rule triggers the event in another one of the group of client application caches in response to receipt of a message from a client application associated with the respective client application cache. Another message directed to another specified client application cache is generated for each rule that matches a first received message at a first client application cache. The generated message directs the other specified client application cache to collect and cache specified information from a server associated with the other specified client application cache. | 05-28-2015 |
20150149580 | System And Method For Selectively Caching Hot Content In a Content Distribution Network - A method includes altering a request interval threshold when a cache-hit ratio falls below a target; receiving a request for content; providing the content when the content is in the cache; when the content is not in the cache and the elapsed time since a previous request for the content is less than the request interval threshold, retrieving and storing the content, and providing the content to the client; when the elapsed time is greater than the request interval threshold, and when another elapsed time since another previous request for the content is less than another request interval threshold, retrieving and storing the content, and providing the content to the client; and when the other elapsed time is greater than the other request interval threshold, rerouting the request to the content server without caching the content. | 05-28-2015 |
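The core admission rule can be sketched as follows: content is admitted to the cache only when requests arrive closer together than the request interval threshold (i.e., it is "hot"). The threshold adaptation driven by the cache-hit ratio is reduced here to a single adjustable value, and all names are illustrative:

```python
# Sketch of interval-based selective caching for a CDN cache node.

request_interval_threshold = 10.0  # seconds, illustrative
last_request = {}                  # content id -> time of previous request
cache = {}

def handle_request(content_id, now, fetch):
    if content_id in cache:
        return cache[content_id], "hit"
    prev = last_request.get(content_id)
    last_request[content_id] = now
    data = fetch(content_id)
    if prev is not None and now - prev < request_interval_threshold:
        cache[content_id] = data    # requests close together: admit to cache
        return data, "cached"
    return data, "passthrough"      # cold content: serve without caching

fetch = lambda cid: f"<{cid}>"
r1 = handle_request("a", 0.0, fetch)  # first sighting: passthrough
r2 = handle_request("a", 5.0, fetch)  # within threshold: admitted
r3 = handle_request("a", 6.0, fetch)  # now a cache hit
```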
20150296016 | RESOURCE FENCING FOR VLAN MULTI-TENANT SYSTEMS - A storage system has a plurality of nodes which are grouped into a plurality of cluster systems each having multiple nodes, each cluster system being logically partitioned into a plurality of namespaces, each namespace including a collection of data objects, each cluster system having multiple tenants, each tenant being a grouping of namespaces, each cluster system having a plurality of capabilities, at least some of the capabilities being bound to the tenants. A node in the cluster system comprises: a memory, and a controller operable to bind each capability to one of a plurality of IP networks so that each capability is bound to only one of the IP networks and has a destination IP address of the IP network to which the capability is bound. It is permissible for one or more capabilities to be bound to the same IP network. Each IP network has one corresponding network interface. | 10-15-2015 |
20150296018 | SYSTEM FOR THE MANAGEMENT OF OUT-OF-ORDER TRAFFIC IN AN INTERCONNECT NETWORK AND CORRESPONDING METHOD AND INTEGRATED CIRCUIT - A system to manage out-of-order traffic in an interconnect network has initiators that provide requests through the interconnect network to memory resource targets and provide responses back through the interconnect network. The system includes components upstream of the interconnect network to perform response re-ordering, which include memory to store responses from the interconnect network and a memory map controller to store the responses in a set of logical circular buffers. Each logical circular buffer corresponds to an initiator. The memory map controller computes an offset address for each buffer and stores an offset address of a given request received on a request path. The controller computes, based on the given request's offset address, an absolute write memory address at which the response corresponding to the given request is written in the memory. The memory map controller also performs an order-controlled parallel read of the logical circular buffers and routes the data read from the memory to the corresponding initiator. | 10-15-2015 |
20150296040 | Caching Predefined Data for Mobile Dashboard - Embodiments provide a data caching mechanism based on a user's request (query) to a remote database, and the corresponding response (query result) received therefrom. As part of a database query, the user can define cache parameter(s). When a viable communications link becomes available to access the remote database, volumes of relevant data are returned as a query result and cached locally according to those predefined cache parameter(s). Embodiments are particularly suited to allow a mobile device to interact with data of a remote database in an efficient and reliable manner. The mobile device's small form factor may preclude local storage of large volumes of remotely stored data. However, this can be compensated for by selectively storing data in the cache memory according to user-specified parameters, allowing the mobile device to continue to have access to relevant information in the event that communication with the remote database is degraded or lost. | 10-15-2015 |
20150301981 | PRE-BUFFERING OF CONTENT DATA ITEMS TO BE RENDERED AT A MOBILE TERMINAL - The present invention relates to methods and devices for pre-buffering one or more content data items to be rendered at a mobile terminal. In a first aspect of the present invention, a mobile terminal ( | 10-22-2015 |
20150304271 | Address resolution protocol buffer and buffering method thereof - An Address Resolution Protocol (ARP) cache is provided, including: a network interface module configured to send an Internet Protocol (IP) data packet to the searching module for IP address searching, and to send an acquired Media Access Control (MAC) address to the searching module after the IP address searching fails; a searching module configured to search the ARP cache module for the IP address according to the IP data packet sent from the network interface module, and to store the IP address and the MAC address sent from the network interface module in the ARP cache module after the IP address searching fails; and the ARP cache module, configured to provide the IP address to the searching module for the IP address searching, and to store the IP address and the MAC address after the IP address searching fails. | 10-22-2015 |
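The lookup-then-learn flow can be sketched as a table consulted on every outgoing packet: a failed search triggers resolution elsewhere (e.g., an ARP request on the wire), after which the learned IP-to-MAC mapping is stored for later lookups. The class and method names are illustrative:

```python
# Sketch of an ARP cache's search/store cycle.

class ArpCache:
    def __init__(self):
        self._table = {}  # ip -> mac

    def lookup(self, ip):
        return self._table.get(ip)  # None means the search failed

    def store(self, ip, mac):
        self._table[ip] = mac       # record the mapping learned after a miss

arp = ArpCache()
miss = arp.lookup("10.0.0.5")                 # search fails: no entry yet
arp.store("10.0.0.5", "aa:bb:cc:dd:ee:ff")    # learned after the miss
hit = arp.lookup("10.0.0.5")                  # subsequent search succeeds
```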
20150304443 | METHODS, CIRCUITS, DEVICES, SYSTEMS AND ASSOCIATED COMPUTER EXECUTABLE CODE FOR CACHING CONTENT - Disclosed are methods, circuits, devices, systems and associated computer executable code for caching content. According to embodiments, a client device may be connected to the internet or other distributed data network through a gateway network. As initial portions of client requested content enters the gateway network, the requested content may be characterized and compared to content previously cached on a cache integral or otherwise functionally associated with the gateway network. In the event a match is found, a routing logic, mechanism, circuitry or module may replace the content source server with the cache as the source of content being routed to the client device. In the event the comparison does not produce a match, as content enters the network a caching routine running on processing circuitry associated with the gateway network may passively cache the requested content while routing the content to the client device. | 10-22-2015 |
20150304444 | Distributed Caching - There is provided a method of providing cached content in a telecommunications network. The method comprises requesting content from a caching system (S | 10-22-2015 |
20150304445 | SYSTEM AND METHODS THEREOF FOR DELIVERY OF POPULAR CONTENT USING A MULTIMEDIA BROADCAST MULTICAST SERVICE - Multimedia content, live as well as on-demand, is typically delivered over a network responsive to a request by a user device from a content source and is provided point-to-point. Certain multimedia cache systems are designed to identify popular content and provide such content from locations that are in proximity to the user device, thereby reducing load on the overall network. The system and methods identify user devices capable of receiving content using a multimedia broadcast multicast service (MBMS) or evolved MBMS (eMBMS), and deliver popular content by redirecting the content delivery from content caches or the content source to MBMS/eMBMS, thereby reducing the overall load of the network. | 10-22-2015 |
20150312165 | TEMPORAL BASED COLLABORATIVE MUTUAL EXCLUSION CONTROL OF A SHARED RESOURCE - The present invention relates to a temporal-based method of mutual exclusion control of a shared resource. The invention will usually be implemented by a plurality of host computers sharing a resource, where each host computer will read a reservation memory that is associated with the shared resource. Typically, a first host computer will perform an initial read of the reservation memory and, when the reservation memory indicates that the shared resource is available, the first host computer will write to the reservation memory. After a time delay, the host computer will read the reservation memory again to determine whether it has won access to the resource. The first host computer may determine that it has won access to the shared resource by checking that the data in the reservation memory includes an identifier corresponding to the first host computer. | 10-29-2015 |
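The read-write-delay-read protocol can be sketched as follows; a shared dict stands in for the reservation memory, and the delay is a no-op placeholder (in practice it would be the time delay the abstract describes):

```python
# Sketch of temporal mutual exclusion: write your identifier only if the
# reservation reads as free, wait, then read again; you have won only if
# your own identifier is still there.

FREE = None

def try_acquire(reservation, host_id, delay=lambda: None):
    if reservation["owner"] is not FREE:    # initial read: resource taken
        return False
    reservation["owner"] = host_id          # write our identifier
    delay()                                 # time delay (no-op in this sketch)
    return reservation["owner"] == host_id  # second read: did we win?

reservation = {"owner": FREE}
won = try_acquire(reservation, "host-1")
lost = try_acquire(reservation, "host-2")   # sees host-1's reservation
```

If two hosts write concurrently during the delay window, the second read lets all but the surviving writer detect the collision and back off.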
20150312341 | SAVING VIDEO CLIPS ON A STORAGE OF LIMITED SIZE BASED ON PRIORITY - Methods and systems are described for storing video content collected by a home automation system. According to at least one embodiment, an apparatus for accessing video content collected by a home automation system includes a processor, a memory in electronic communication with the processor, and instructions stored in the memory which are executable by a processor to receive video content associated with an event, determine a priority level of the video content based on at least one predetermined criteria, and store the video content for a period of time based on the priority level. | 10-29-2015 |
20150312366 | UNIFIED CACHING OF STORAGE BLOCKS AND MEMORY PAGES IN A COMPUTE-NODE CLUSTER - A method includes, in a plurality of compute nodes that communicate with one another over a communication network, running one or more Virtual Machines (VMs) that access storage blocks stored on non-volatile storage devices coupled to at least some of the compute nodes. One or more of the storage blocks accessed by a given VM, which runs on a first compute node, are cached in a volatile memory of a second compute node that is different from the first compute node. The cached storage blocks are served to the given VM. | 10-29-2015 |
20150312367 | EFFICIENT CACHING IN CONTENT DELIVERY NETWORKS BASED ON POPULARITY PREDICTIONS - A method for caching objects at one or more cache servers of a content delivery network (CDN) includes: determining, by a processor, attributes of objects of a set of objects; calculating, by the processor, an efficiency metric for each object of the set of objects based on the attributes of each object, wherein the attributes of each object include an expected future popularity associated with the object; selecting, by the processor, a subset of objects from the set of objects for caching based on calculated efficiency metrics; and caching the subset of objects at the one or more cache servers. | 10-29-2015 |
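The selection step can be sketched as a greedy fill under a capacity budget; scoring each object by expected future popularity per byte is one plausible efficiency metric (an assumption here, as the abstract does not fix the formula):

```python
# Sketch of popularity-driven cache selection for a CDN cache server.

def select_for_cache(objects, capacity_bytes):
    """objects: list of dicts with 'name', 'size', 'expected_popularity'."""
    scored = sorted(objects,
                    key=lambda o: o["expected_popularity"] / o["size"],
                    reverse=True)
    chosen, used = [], 0
    for obj in scored:                       # greedily take best-scoring fits
        if used + obj["size"] <= capacity_bytes:
            chosen.append(obj["name"])
            used += obj["size"]
    return chosen

objects = [
    {"name": "a", "size": 100, "expected_popularity": 50},  # 0.50 per byte
    {"name": "b", "size": 400, "expected_popularity": 60},  # 0.15 per byte
    {"name": "c", "size": 200, "expected_popularity": 90},  # 0.45 per byte
]
cached = select_for_cache(objects, capacity_bytes=350)
```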
20150312369 | CHECKPOINTS FOR MEDIA BUFFERING - Embodiments of techniques, apparatuses and systems associated with media playback are disclosed. In some embodiments, a computing device may receive information about a media file, the information representative of one or more of a plurality of elapsed time points corresponding to buffer checkpoints in playback of the media file. The computing device may determine an amount of the media file that has been cached in a buffer, and may determine, prior to commencing playback of a portion of the media file between two of the elapsed time points, whether the cached amount includes the portion. In response to determining that the cached amount does not include the portion, the computing device may increase the cached amount to include the portion before commencing playback of the portion. Other embodiments may be described and/or claimed. | 10-29-2015 |
20150319245 | COMPUTER SYSTEM - A computer system includes a server that issues an access request with a virtual volume among the plurality of virtual volumes allocated in a plurality of storage apparatuses as an access target and measures a latency in each access path connecting each storage apparatus and the server, and a plurality of control units which are disposed in each storage apparatus and control the I/O of data. Among the plurality of control units, a main control unit that controls the migration of a migration source virtual volume selects, as a migration destination storage apparatus, one storage apparatus connected to an access path from which was obtained a second measured value, which is a measured value that is smaller than a first measured value indicating a latency in an access path connecting a migration source storage apparatus, which includes the migration source virtual volume, and the server, and indicates a latency in an access path connecting a storage apparatus, other than the migration source storage apparatus, and the server, and allocates a migration destination virtual volume in the migration destination storage apparatus. | 11-05-2015 |
20150326685 | DISTRIBUTED CACHE FOR STATE TRANSFER OPERATIONS - A network arrangement that employs a cache having copies distributed among a plurality of different locations. The cache stores state information for a session with any of the server devices so that it is accessible to at least one other server device. Using this arrangement, when a client device switches from a connection with a first server device to a connection with a second server device, the second server device can retrieve state information from the cache corresponding to the session between the client device and the first server device. The second server device can then use the retrieved state information to accept a session with the client device. | 11-12-2015 |
20150326687 | ADAPTIVE DOWNLOADING OR STREAMING TO CONSERVE MOBILE DEVICE OR NETWORK RESOURCES - Embodiments of the present disclosure include techniques for optimization on downloading/streaming activities of media and/or other files (e.g., on a local client, or a local proxy on a mobile device). An example of such downloading/streaming is a user accessing media content including video and/or audio content using a mobile device such as a smart phone, a tablet, or a “phablet,” etc. | 11-12-2015 |
20150334043 | DIFFERENTIAL CACHE FOR REPRESENTATIONAL STATE TRANSFER (REST) API - System and method of differential cache control. Different parts of a representation are controlled by different cache expiration times. A differential control scheme may adopt a hierarchical control structure in which a subordinate level control policy can override its superordinate level control policies. Different parts of the representation can be updated to a cache separately. Differential cache control can be implemented by programming a cache control directive in HTTP/1.1. Respective cache expiration time and their control scopes can be specified in a response header and/or response document provided by a server. | 11-19-2015 |
20150334203 | OPTIMIZING DISTRIBUTED DATA ANALYTICS FOR SHARED STORAGE - Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server. | 11-19-2015 |
20150339259 | ACCESS CONTROL FOR WIRELESS MEMORY - The specification and drawings present a new apparatus and method for access control for wireless memory. A memory controller communicating with a plurality of interfaces (at least one interface comprising a radio component for wirelessly communicating with a plurality of UE) can dynamically manage access control to the memory by the UE and/or other users through any of the interfaces, based on preconfigured rules that take into consideration the identity of the at least one interface and/or the determined directionality of the UE relative to the radio component. | 11-26-2015 |
20150341461 | Prefetching of Video Resources for a Network Page - Disclosed are various embodiments for prefetching of objects referenced on a network page. An encoded network page referring to at least one item is retrieved. The same item is included on a second network page. In response to an indication of user interest in the item on the first network page, at least an initial portion of a video resource associated with the indicated item and included on the second network page is retrieved. In response to a user selection of the same item, the retrieved initial portion of the video resource is rendered on the second network page. | 11-26-2015 |
20150350323 | INTELLIGENT DISK SPACE MANAGER - Disclosed herein is a technique for managing disk space in user devices. A disk space manager is configured to manage the disk space by requesting clients running on the user device to free up disk space. The clients receive the requests and respond to the requests by deleting their own data to free up the requested disk space. | 12-03-2015 |
20150350365 | PROBABILITY BASED CACHING AND EVICTION - Some embodiments set forth probability based caching, whereby a probability value determines in part whether content identified by an incoming request should be cached or not. Some embodiments further set forth probability based eviction, whereby a probability value determines in part whether cached content should be evicted from the cache. Selection of the content for possible eviction can be based on recency and/or frequency of the content being requested. The probability values can be configured manually or automatically. Automatic configuration involves using a function to compute the probability values. In such scenarios, the probability values can be computed as a function of any of fairness, cost, content size, and content type as some examples. | 12-03-2015 |
20150350367 | DISTRIBUTED CACHE SYSTEM FOR OPTICAL NETWORKS - Caching techniques are described. An example network device positioned between an optical line terminal (OLT) and a service provider device includes a hot cache, a wide cache controller, and a control unit. The control unit is configured to receive, from a first service delivery platform, a request for digital content, and determine whether the requested digital content is stored on the hot cache. The control unit is further configured to, when the requested digital content is not stored on the hot cache, determine, using the wide cache controller, whether the requested digital content is stored on a wide cache of a second service delivery platform, receive, from the second service delivery platform, the requested digital content, and responsive to the request received from the first delivery platform, send the received digital content to the first delivery platform. | 12-03-2015 |
20150350368 | NETWORK-OPTIMIZED CONTENT DELIVERY FOR HIGH DEMAND NON-LIVE CONTENTS - A method, apparatus and computer-readable storage medium distribute a non-live content stream in a network. An initial meta-file is transmitted in response to a request for the content, which identifies a division of the content stream into blocks, and available sources for delivery of the blocks. The initial meta-file can identify a first multicast and a second multicast server, assigning a first and second portion of the blocks for delivery using the first and second multicast source server, respectively. The first and second portions are transmitted using the first and second multicast source servers, respectively. The first and second portions correspond to distinct non-overlapping portions of the non-live content stream. The initial meta-file can also identify a unicast source server, assigning a third portion of the blocks for delivery using the unicast source server, the third portion being transmitted by the unicast source server. | 12-03-2015 |
20150358407 | Remote Storage System and Method Using Network Attached Storage (NAS) Device - A remote storage system and method using an NAS device is provided, which enable a terminal device to upload or download storage information on the NAS device through a network. The remote storage system using the NAS device comprises: an NAS device configured to store first information uploaded by a terminal device or store second information to be downloaded by the terminal device, the first information and the second information being called storage information; a first component connected between the terminal device and the NAS device, and configured to forward the storage information; and a second component connected to the first component and configured to store user information corresponding to the NAS device and to allocate, to the NAS device according to the user information, an account and a domain name address that are used for forwarding the storage information. | 12-10-2015 |
20150358418 | METHOD FOR OPERATING A CACHE ARRANGED ALONG A TRANSMISSION PATH BETWEEN CLIENT TERMINALS AND AT LEAST ONE SERVER, AND CORRESPONDING CACHE - First type cache adapted to be arranged between a client terminal and at least one server, which:
| 12-10-2015 |
20150365495 | DATA SOURCE MOVEMENT PROCESSING METHOD, PACKET FORWARDING METHOD, AND APPARATUS THEREOF - The present invention discloses a data source movement processing method, a packet forwarding method, and an apparatus thereof, which are applied to an information centric network (ICN). The data source movement processing method includes: when a target container from a first container enters a second container, registering a route of the target container in the second container; communicating with a resolution system, so that the resolution system updates an access container of the target container from the first container to the second container. In the data source movement processing method according to the embodiment of the present invention, a data source movement is supported without introducing frequent and cumbersome route updates and without changing a content name of content, thereby maintaining persistence of the content name, enhancing feasibility of an ICN architecture, and improving user experience. | 12-17-2015 |
20150373108 | DYNAMIC PROXIMITY BASED NETWORKED STORAGE - A computer implemented method of storing data in at least one mobile node according to mobile node location may include identifying a first qualified mobile node and determining a first geographic position of the first qualified mobile node. The method may include determining a user geographic position of a user device, determining whether the first geographic position is within a first proximity relative to the user device, and causing storage of a first data portion in the first qualified mobile node in response to determining that the first geographic position is within the first proximity. | 12-24-2015 |
20150373110 | DATA COMMUNICATIONS SYSTEM FOR AN AIRCRAFT - A data communications system for an aircraft comprising a plurality of line-replaceable units and a data network configured according to an ARINC standard defining ports, and interconnecting the plurality of line-replaceable units, wherein the line-replaceable units communicate via the ports of the data network. | 12-24-2015 |
20150373111 | CONFIGURATION INFORMATION ACQUISITION METHOD AND MANAGEMENT COMPUTER - A management computer configured to manage one or more storage apparatuses managing a plurality of resources transmits requests to the one or more storage apparatuses to, based on one or more basic information pieces for identifying the one or more storage apparatuses, acquire as a priority configuration information on, out of the plurality of resources, resources logically closer to a host computer of the one or more storage apparatuses. The management computer receives the configuration information corresponding to the requests from the one or more storage apparatuses, and incorporates the received configuration information into configuration management information for managing the configuration information on the resources managed by the one or more storage apparatuses. | 12-24-2015 |
20150373114 | STORAGE ABSTRACTION LAYER AND A SYSTEM AND A METHOD THEREOF - Embodiments of the present invention are directed to a storage abstraction layer that is a concatenation of a plurality of storage devices that is accessible by a computing device. The plurality of storage devices includes at least one attached storage of the computing device, at least one cloud storage, or a combination thereof. The storage abstraction layer is presented as an application programming interface (API) to applications running on the computing device to allow each application to store and retrieve data as if it were using a single storage, regardless of where each of the plurality of storage devices is located and the type of each of the plurality of storage devices. Access to individual objects or files on this layer is done transparently such that underlying implementation details are hidden from the calling application. | 12-24-2015 |
20150373139 | METHOD, SYSTEM AND DEVICES FOR CONTENT CACHING AND DELIVERING IN IP NETWORKS - A content request is sent ( | 12-24-2015 |
20150373150 | SERVER, CLIENT, SYSTEM AND METHOD FOR PRELOADING BROWSED PAGE IN BROWSER - The present invention relates to the technical field of network data communication, and discloses a server, a client, a system and a method for preloading a browsed page in a browser. The server includes: a link extraction module configured to extract, from a currently browsed page on a client, links included in the page; a page downloading module configured to download pages corresponding to the links from websites; a page compression module configured to compress the downloaded pages to generate page compression packages corresponding to the links and store the page compression packages into a storage module; the storage module configured to store the page compression packages corresponding to the links; a communication transceiving module configured to receive a request transmitted by the client for loading a page of a link in the page currently browsed and transmit the page compression package corresponding to the link to be loaded to the client. The present invention can solve the technical problems of resulting in a lot of useless download traffic in the client, wasting the network traffic and increasing the operation load of the client. | 12-24-2015 |
20150379157 | PRE-CACHING WEB CONTENT FOR A MOBILE DEVICE - A web service for pre-caching web content on a mobile device includes receiving a request from the mobile device for first web content, fetching the first web content, determining second web content to pre-fetch based upon the first web content, fetching the second web content, and causing the second web content to be stored in a content cache on the mobile device responsive to the request for the first web content. Pre-caching web content in this manner provides web content to the mobile device that the user of the mobile device is likely to access. Pre-caching of additional web content prior to receiving an explicit request improves web browsing performance of the mobile device. | 12-31-2015 |
20150381678 | MANAGING CONTENT ON AN ISP CACHE - One embodiment of the present invention sets forth a method for updating content stored in a cache residing at an internet service provider (ISP) location that includes receiving popularity data associated with a first plurality of content assets, where the popularity data indicate the popularity of each content asset in the first plurality of content assets across a user base that spans multiple geographic regions, generating a manifest that includes a second plurality of content assets based on the popularity data and a geographic location associated with the cache, where each content asset included in the manifest is determined to be popular among users proximate to the geographic location or users with preferences similar to users proximate to the geographic location, and transmitting the manifest to the cache, where the cache is configured to update one or more content assets stored in the cache based on the manifest. | 12-31-2015 |
20150381725 | SERVICE PLAN TIERING, PROTECTION, AND REHYDRATION STRATEGIES - A storage system stores objects and copies of objects on the storage system and other storage systems external to the storage system. The storage system stores the copies in storage pools of volumes, which are organized into one or more tiers. The configuration settings of each tier and each pool within the tier may be configured according to a user's preferences. In one example, the number of copies of data content and the number of copies of metadata associated with the data content that an individual pool stores may be specified. When objects are migrated between tiers, the objects are stored among the storage pools of the tiers. If the number of data content copies or metadata copies is increased, the data or metadata is copied from a determined copy source, and if the number of copies decreases, the data is removed from the volumes in the pools. | 12-31-2015 |
20150381726 | MAINTENANCE OF A FABRIC PRIORITY DURING SYNCHRONOUS COPY OPERATIONS - A primary storage controller receives a write command from a host, wherein Fibre Channel frames corresponding to the write command have a priority indicated by the host. The primary storage controller performs a synchronous copy operation to copy data written by the write command from the primary storage controller to a secondary storage controller, wherein Fibre Channel frames corresponding to the synchronous copy operations have an identical priority to the priority indicated by the host. | 12-31-2015 |
20150381732 | TECHNIQUES FOR MANAGING CONTENT ITEMS ASSOCIATED WITH PERSONAS OF A MULTIPLE-PERSONA MOBILE TECHNOLOGY PLATFORM - A method and system for managing content items associated with personas of a multiple-persona mobile technology platform (MTP) are provided. The method includes receiving a request to perform an action on a content item associated with a first persona of a plurality of personas defined in the MTP, wherein the request is generated by the first persona; identifying at least a second persona of the plurality of personas defined in the MTP is linked to the content item; performing the requested action, when the at least second persona is not linked to the content item; and managing a link between the first persona and the content item, when the at least second persona is linked to the content item. | 12-31-2015 |
20150381756 | Centralized Content Enablement Service for Managed Caching in wireless network - Systems, methods, and instrumentalities are provided to implement content caching. An entity running an external application (EA) may establish a connection between the EA and a centralized cloud controller (CCC) to access a service on the CCC. The EA may receive credentials for access to the service. The connection between the EA and the CCC may be established over a first interface. The EA may send to the service on the CES a query for an available small cell network (SCN) storage. The EA may receive from the service on the CES a reply comprising a link to an allocated SCN storage. The EA may ingest one or more contents into the allocated SCN storage using the link. A wireless transmit/receive unit (WTRU) may receive the cached content from an edge server in a small cell network. | 12-31-2015 |
20150381757 | PROXY-BASED CACHE CONTENT DISTRIBUTION AND AFFINITY - A distributed caching hierarchy that includes multiple edge routing servers, at least some of which receive content requests from client computing systems via a load balancer. When receiving a content request, an edge routing server identifies which of the edge caching servers the requested content would be in if the requested content were to be cached within the edge caching servers, and distributes the content request to the identified edge caching server in a deterministic and predictable manner to increase the likelihood of a cache hit. | 12-31-2015 |
20160006808 | ELECTRONIC SYSTEM WITH MEMORY NETWORK MECHANISM AND METHOD OF OPERATION THEREOF - An electronic system includes: a network; a memory device, coupled to the network; a host processor, coupled to the network and the memory device, providing a transaction protocol including cut through. | 01-07-2016 |
20160006828 | Embedded network proxy system, terminal device and proxy method - Disclosed is an embedded network proxy system, including a receiving module, a processing module and a storage module. With information of a preferred website configured, a wireless network connection is established in advance using the Third Generation (3G) protocol, the Fourth Generation (4G) protocol or even Wireless Fidelity (WiFi), and related contents of the preferred website are downloaded to a local storage module. Thus, even if the network signal is poor or the user is in an offline state, the user can still browse website information by visiting the locally stored related contents of the preferred website. Also disclosed are an operating method for an embedded network proxy system, and a terminal device including the embedded proxy system. | 01-07-2016 |
20160006830 | METHOD FOR OPERATING A CACHE ARRANGED ALONG A TRANSMISSION PATH BETWEEN CLIENT TERMINALS AND AT LEAST ONE SERVER, AND CORRESPONDING CACHE - A cache arranged between client terminals and at least one server, said cache being configured to receive, from client terminals, requests for at least a first representation of a segment of a multimedia content available in a plurality of representations, comprising:
| 01-07-2016 |
20160006831 | REMOTE ACCESS OF MEDIA ITEMS - Methods and systems that facilitate the downloading of media items to a first network device from a second network device are disclosed. A plurality of media items are identified. Media item metadata associated with the plurality of media items is obtained from the second network device and stored on the first network device. Media item content data associated with a first subset of the plurality of media items is obtained from the second network device and stored on the first network device. In this manner, only media item metadata associated with a second subset of the plurality of media items is stored on the first network device. | 01-07-2016 |
20160012008 | COMMUNICATION SYSTEM, CONTROL APPARATUS, COMMUNICATION METHOD, AND PROGRAM | 01-14-2016 |
20160013998 | Collecting and Uploading Data from Marine Electronics Device | 01-14-2016 |
20160014027 | CACHING DATA IN AN INFORMATION CENTRIC NETWORKING ARCHITECTURE | 01-14-2016 |
20160014075 | System and Method for Managing Page Variations in a Page Delivery Cache | 01-14-2016 |
20160014201 | INFORMATION PROCESSING SYSTEM, NETWORK STORAGE DEVICE, AND NON-TRANSITORY RECORDING MEDIUM | 01-14-2016 |
20160014202 | GLOBAL MANAGEMENT OF TIERED STORAGE RESOURCES | 01-14-2016 |
20160021185 | SCALABLE APPROACH TO MANAGE STORAGE VOLUMES ACROSS HETEROGENOUS CLOUD SYSTEMS - There are provided a system and a computer program product for managing heterogeneous cloud data storage systems. A computing system defines rules that govern a plurality of heterogeneous cloud data storage systems. The computing system receives complete data from a user's computer. The computing system splits the complete data. The computing system stores the split data according to the defined rules into the plurality of heterogeneous cloud data storage systems. | 01-21-2016 |
20160021186 | SCALABLE APPROACH TO MANAGE STORAGE VOLUMES ACROSS HETEROGENOUS CLOUD SYSTEMS - There is provided a method for managing heterogeneous cloud data storage systems across heterogeneous cloud computing systems. The method comprises: defining rules that govern storing of data in one or more of a plurality of heterogeneous cloud data storage systems; receiving complete data from a user's computer; splitting the complete data; and storing the split data according to the defined rules into the plurality of heterogeneous cloud data storage systems. | 01-21-2016 |
20160021187 | VIRTUAL SHARED STORAGE DEVICE - In a cluster computing environment, multiple computing devices may be configured to share the same storage devices to perform different portions of one or more computing tasks. The storage devices may be communicatively coupled to the computing devices via a network so that each of the multiple computing devices may retrieve data from or write data to the shared storage devices. | 01-21-2016 |
20160021206 | METHOD AND ROUTER FOR SENDING AND PROCESSING DATA - The application provides a method and a router for sending and processing data. In the method, a first router receives a first Internet Protocol IP data packet sent by a network server, where the first IP data packet carries a first data adding instruction; and the first router searches, according to the first data adding instruction, a first local cache for to-be-added data that is indicated by the first data adding instruction, adds, to the first IP data packet, the to-be-added data that is indicated by the first data adding instruction, to form a second IP data packet, and sends the second IP data packet. Because a router can add data to an IP data packet according to a data adding instruction, flexibility of data combinations is improved, and because the IP data packet can support multiple times of data adding, the flexibility of data combinations is further improved. | 01-21-2016 |
20160021209 | ODATA OFFLINE CACHE FOR MOBILE DEVICE - A server system may include a request handler and a storage. The request handler may receive at least one request from a program on a user side. The storage may include a first cache and a second cache, storing data in a format directly compatible with the program. The first cache stores only data matching the server. If the at least one request corresponds to a change to the data from the program, then the second cache stores the at least one request and the request handler sends the at least one request to the server for updating the change. | 01-21-2016 |
20160028605 | SYSTEMS AND METHODS INVOLVING MOBILE LINEAR ASSET EFFICIENCY, EXPLORATION, MONITORING AND/OR DISPLAY ASPECTS - Certain systems and methods herein are directed to features of accessing and/or improving building system efficiency and supporting linear asset networks, including aspects involving IoT (the Internet of things). For example, some embodiments may include ways to measure occupant comfort, ways to conserve energy in heating and cooling linear asset networks, ways to measure the efficiency of linear assets for energy and water delivery and consumption, ways to improve machine efficiency by increasing maintenance effectiveness, and many others. The safe fusion of sensor data from human devices, machines, linear assets and space provides a new correlated collection of data for analysis and optimization of building control systems. Innovations herein may pertain, inter alia, to water, gases, liquids, and buildings including commercial, home, industrial and transportation-oriented spaces such as ships, trains, airplanes, and mobile homes. | 01-28-2016 |
20160028787 | SYSTEM AND METHOD FOR USING A STREAMING PROTOCOL - An initialization vector (IV) is employed to decrypt a block of a stream that has been encrypted with Cipher Block Chaining (CBC) encryption, without requiring decryption of previous blocks within the stream. For example, a listener who accesses a distribution point to retrieve encrypted content authenticates himself to an application server that regulates access to encrypted content on the distribution point, and responsively receives a key. The listener then requests access to a reference point within the encrypted content stream somewhere after its beginning (e.g., using preview clips). The distribution point relates the reference point to a corresponding block of the encrypted stream, and identifies an IV previously used for encryption of that block. The distribution point provides the associated encrypted block of content and the IV to the listener to enable mid-stream rendering of the encrypted content, without requiring the listener to decrypt previous blocks within the encrypted stream. | 01-28-2016 |
20160028818 | RELIABLE TRANSFER OF DATA FROM AN IMAGE CAPTURING DEVICE TO A REMOTE DATA STORAGE - A method for transferring data from a data capturing device (DCD) comprises: establishing a first communication link between a first user device and the DCD. The method also includes: the DCD capturing data intended to be communicated to the first user device; and notifying the first user device of an availability of captured data for transfer to the first user device. The method further includes: receiving a response from the first user device; and when the response indicates that the captured data should be sent directly to the first device, forwarding the captured data to the first user device. However, when the response includes remote storage connectivity data, which indicates that the captured data should be sent to the remote storage, the method includes establishing a data communication session with the remote storage using the remote storage connectivity data and transferring the captured data directly to the remote storage. | 01-28-2016 |
20160028830 | RURAL AREA NETWORK DEVICE - Some embodiments of this disclosure operate a network device in conjunction with a social networking system. The operations can include establishing a network island by providing network connectivity in a local region via the network device; connecting the network device to an intermittent network channel that is not continuously active; when the intermittent network channel is active, receiving a content item via the intermittent network channel, wherein the content item is not destined for a specific device in the network island; and caching the content item in a cache storage of the network device such that the content item is available to be accessed by any computing device within the network island. | 01-28-2016 |
20160028847 | ESTABLISHING CACHES THAT PROVIDE DYNAMIC, AUTHORITATIVE DNS RESPONSES - Embodiments are directed to establishing caches that provide authoritative domain name system (DNS) answers to DNS requests. In one scenario, a computer system establishes a cache that stores authoritative DNS answers to DNS queries. The cache corresponds to a specified DNS zone that includes authoritative DNS answers for a subset of DNS queries. The cache is configured to store the authoritative DNS answers for at least a specified period of time during which the authoritative DNS answers are updatable. The cache then receives an update indicating that at least one cached DNS answer is out-of-date and the computer system purges the out-of-date DNS answer from the cache, ensuring that the cache continually provides authoritative DNS answers for DNS queries assigned to the specified DNS zone. | 01-28-2016 |
20160028848 | AGGREGATED DATA IN A MOBILE DEVICE FOR SESSION OBJECT - A method, a device and a system for providing access to a mobile device for a session object to aggregated data associated with a session are provided. The method includes populating data records of a data repository of a data management system from an external data system; generating first information in the data records stored in the data repository; caching the first information on a caching server; creating an application link to be displayed in a mobile device, wherein the application link enables the access to the cached first information; providing an access authorization to the mobile device; retrieving the cached first information from the caching server; displaying the cached first information in a user interface of the mobile device; generating second information dynamically; displaying the second information in the user interface; providing an evaluation for the job candidate; and deactivating the application link after the session takes place. | 01-28-2016 |
20160036903 | ASYNCHRONOUS PROCESSING OF MESSAGES FROM MULTIPLE SERVERS - Systems and methods for asynchronous processing of messages that are received from multiple servers. An example method may comprise: receiving, by a first processing thread, in a non-blocking mode, a plurality of sub-application layer protocol packets from a plurality of servers; processing one or more sub-application layer protocol packets received from a first server of the plurality of servers, to produce a first application layer message; writing the first application layer message to a message queue; processing one or more sub-application layer protocol packets received from a second server of the plurality of servers, to produce a second application layer message; writing the second application layer message to the message queue; and reading, by two or more processing threads of a processing thread pool, two or more application layer messages including the first application layer message and the second application layer message from the message queue, to produce two or more memory data structures based on the read application layer messages. | 02-04-2016 |
20160036934 | WEB REDIRECTION FOR CACHING - This specification generally relates to using redirect messages to implement caching. One example method includes receiving from a client a first request for a network resource, the first request including an original location of the network resource; determining that a response to the first request is to be cached; sending a redirect response to the client including a cache location for the network resource; receiving a second request for the network resource from the client, the second request including the cache location; in response to receiving the second request for the network resource from the client: determining that the network resource has not been previously cached; retrieving the network resource from the original location; caching the retrieved network resource in a location associated with the cache location for the network resource; and sending the retrieved network resource to the client. | 02-04-2016 |
20160036936 | Web Redirection for Caching - This specification generally relates to using redirect messages to implement caching. One example method includes receiving from a client a first request for a network resource, the first request including an original location of the network resource; determining that a response to the first request is to be cached; sending a redirect response to the client including a cache location for the network resource; receiving a second request for the network resource from the client, the second request including the cache location; in response to receiving the second request for the network resource from the client: determining that the network resource has not been previously cached; retrieving the network resource from the original location; caching the retrieved network resource in a location associated with the cache location for the network resource; and sending the retrieved network resource to the client. | 02-04-2016 |
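The redirect-based caching flow in the two entries above can be sketched as a small handler. This is an illustrative reading of the abstract, not the claimed design; the class, the URL-mangling scheme, and the status strings are all invented for the example.

```python
class RedirectCachingProxy:
    """Sketch: first request -> redirect to a cache location;
    request at the cache location -> fill cache on miss, then serve."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch   # callable: original URL -> content
        self.cache = {}                    # cache location -> cached content
        self.origins = {}                  # cache location -> original location

    def handle(self, url):
        if url in self.origins:                       # second request, at cache location
            if url not in self.cache:                 # resource not previously cached
                self.cache[url] = self.origin_fetch(self.origins[url])
            return ("200 OK", self.cache[url])
        # First request: answer with a redirect to the cache location.
        cache_loc = "/cache/" + url.strip("/").replace("/", "_")
        self.origins[cache_loc] = url
        return ("302 Found", cache_loc)
```

The point of the indirection is that the cache-fill decision is made once, at redirect time, while the second request can be served by any node that recognizes the cache location.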
20160036937 | MEMORY SYSTEM ALLOWING HOST TO EASILY TRANSMIT AND RECEIVE DATA - According to one embodiment, a memory system includes a non-volatile semiconductor memory device, a control unit, a memory as a work area, a wireless communication module, and an extension register. The control unit controls the non-volatile semiconductor memory device. The extension register is provided in the memory and has a data length by which a wireless communication function of the wireless communication module can be defined. The control unit causes the non-volatile semiconductor memory device to store, as a file, an HTTP request supplied from a host, causes the extension register, based on a first command supplied from the host, to register an HTTP transmission command transmitted together with the first command, and causes the wireless communication module to transmit the HTTP request stored in the non-volatile semiconductor memory device based on the transmission command registered in the extension register. | 02-04-2016 |
20160036938 | CLUSTERED CACHE APPLIANCE SYSTEM AND METHODOLOGY - A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using clustered cache appliances with packet inspection intelligence. A cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group). | 02-04-2016 |
20160044077 | POLICY USE IN A DATA MOVER EMPLOYING DIFFERENT CHANNEL PROTOCOLS - Techniques and mechanisms described herein facilitate the transmission of a data stream to a networked storage system. According to various embodiments, a request to transfer the data stream to the networked storage system may be received at a data mover located at a client device. The data mover may be configured to transfer data to the networked storage system via two different communications protocol interfaces. The data stream may be transmitted to the networked storage system over a network via a first one of the interfaces when a first characteristic associated with the data stream meets a designated criterion. | 02-11-2016 |
20160044126 | PROBABILISTIC LAZY-FORWARDING TECHNIQUE WITHOUT VALIDATION IN A CONTENT CENTRIC NETWORK - A network node can use reputation values to determine when to forego validating a cached Content Object's authenticity. During operation, the network node can receive an Interest over a Content Centric Network (CCN). If the Content Store includes a matching Content Object that satisfies the Interest, the node obtains the cached Content Object. The node then determines whether the Interest includes a validation token that is to be used to validate the Content Object's authenticity. If so, the node determines a reputation value for the Content Object, such that the reputation value indicates a likelihood that validation of the Content Object's authenticity will be successful. If the network node determines that the reputation value exceeds a predetermined threshold, the node returns the Content Object without validating the Content Object's authenticity. | 02-11-2016 |
20160044127 | IDENTIFYING AND CACHING CONTENT FOR OFFLINE USE - In one embodiment, a method includes identifying candidate content associated with a user of a computing device, selecting, from the candidate content, cache content to be stored in cache storage of the computing device for access by the user when the computing device does not have network connectivity, and storing the cache content in the cache storage of the computing device. The cache content may be based on information associated with a user node that represents the user in a social graph. The cache content may include entities liked by the user, friends of the user, and/or entities of interest to the user. The cache content includes web pages accessed by the user and/or web pages referenced by content created by the user. The cache content may include information related to past, current, and/or predicted actions of the user, such as social network posts, travel itineraries, and geographic locations. | 02-11-2016 |
20160044130 | DIGITAL KEY DISTRIBUTION MECHANISM - The present invention relates to a method for distributing digital keys. The method includes the steps of a first database storing a plurality of keys relating to a plurality of products; for each product, transferring keys from the first database to a corresponding cache in a second database; in response to a request for a key for a product, retrieving and distributing a key from the corresponding cache; and refreshing the corresponding cache by transferring further keys from the first database to the corresponding cache. A system for distributing digital keys is also disclosed. | 02-11-2016 |
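The cache-and-refresh cycle in the entry above reduces to a per-product buffer refilled from the backing store when it runs low. The sketch below is an assumed reading of the abstract; the class name, the threshold parameters, and the key format are illustrative.

```python
from collections import deque

class KeyDistributor:
    """Sketch: per-product key cache (second database) refilled
    from the backing key store (first database) below a threshold."""

    def __init__(self, key_store, cache_size=5, refill_at=2):
        self.key_store = key_store         # product -> deque of unissued keys
        self.cache = {}                    # product -> deque of cached keys
        self.cache_size = cache_size
        self.refill_at = refill_at

    def _refresh(self, product):
        cache = self.cache.setdefault(product, deque())
        store = self.key_store[product]
        while len(cache) < self.cache_size and store:
            cache.append(store.popleft())  # transfer further keys to the cache

    def get_key(self, product):
        cache = self.cache.setdefault(product, deque())
        if not cache:
            self._refresh(product)
        key = cache.popleft()              # retrieve and distribute one key
        if len(cache) <= self.refill_at:
            self._refresh(product)         # refresh before the cache drains
        return key
```

Refreshing before the cache is empty keeps the slow first-database transfer off the request path for most distributions.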
20160044143 | DATA STRUCTURE AND ASSOCIATED MANAGEMENT ROUTINES FOR TCP CONTROL BLOCK (TCB) TABLE IN NETWORK STACKS - A method for transport layer lookup involves receiving a first incoming transport layer packet, and searching a pointer cache for a first matching transport layer data structure including state information corresponding to the first incoming packet. The pointer cache includes pointer cache lines, each of which stores at least one pointer to a subset of global transport layer data structures. The method further involves returning the state information corresponding to the first incoming packet using the first matching transport layer data structure when a pointer cache hit occurs, receiving a second incoming transport layer packet, searching the pointer cache for a second matching transport layer data structure including state information corresponding to the second incoming packet, and searching the plurality of global transport layer data structures in main memory to obtain the matching second transport layer data structure, when a pointer cache miss occurs. | 02-11-2016 |
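The two-level lookup in the entry above — a small pointer cache in front of the global TCB table — can be sketched as follows. This is a simplified illustration (a direct-mapped cache keyed by a hashed line, with Python dicts standing in for both levels); none of the names come from the patent.

```python
class TCBLookup:
    """Sketch of a two-level transport layer lookup: a pointer cache
    consulted first, with fallback to the global table in main memory."""

    def __init__(self, cache_lines=4):
        self.global_table = {}             # 4-tuple -> TCB state dict
        self.pointer_cache = {}            # cache line -> 4-tuple "pointer"
        self.cache_lines = cache_lines

    def _line(self, four_tuple):
        return hash(four_tuple) % self.cache_lines

    def insert(self, four_tuple, tcb):
        self.global_table[four_tuple] = tcb

    def lookup(self, four_tuple):
        line = self._line(four_tuple)
        ptr = self.pointer_cache.get(line)
        if ptr == four_tuple:                         # pointer cache hit
            return self.global_table[ptr], "hit"
        tcb = self.global_table.get(four_tuple)       # miss: search global table
        if tcb is not None:
            self.pointer_cache[line] = four_tuple     # install pointer for next time
        return tcb, "miss"
```

Caching pointers rather than the TCBs themselves keeps the fast structure small while the authoritative state stays in one place.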
20160050257 | INTERFACING WITH REMOTE CONTENT MANAGEMENT SYSTEMS - A content management system interface at a local computer device is configured to receive user file commands from a file manager and translate the user file commands into content management commands for sending to the remote content management system via a network interface. The content management system interface can further be configured to receive remote file information from the remote content management system via the network interface and translate the remote file information into user file information for the file manager. | 02-18-2016 |
20160050292 | LOCAL WEB RESOURCE ACCESS - A router can be configured to serve web resources in response to expected web resource requests without requesting such resources from a corresponding remote web server. In an example, this functionality can be achieved by the router, wherein the router is configured to: receive a web resource access request from a terminal device, the web resource access request including a target uniform resource locator (URL); perform a search in a data structure containing a plurality of local URLs according to the target URL; identify a local URL of the plurality of URLs corresponding to the target URL according to the performed search; acquire a local version of a target web resource corresponding to the target URL from locally stored web resources according to the identified local URL; and communicate the local version of the target web resource to the terminal device. | 02-18-2016 |
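The lookup path in the entry above — target URL, to local URL, to locally stored resource — is a pair of table searches. The sketch below assumes a preloaded mapping; the class and method names are invented for the example.

```python
class LocalResourceRouter:
    """Sketch: a router that serves local copies of expected web
    resources instead of forwarding the request to the remote server."""

    def __init__(self):
        self.local_urls = {}       # target URL -> local URL
        self.local_store = {}      # local URL -> locally stored resource

    def preload(self, target_url, local_url, content):
        self.local_urls[target_url] = local_url
        self.local_store[local_url] = content

    def handle_request(self, target_url):
        local_url = self.local_urls.get(target_url)   # search the URL structure
        if local_url is None:
            return None            # no local version: would forward upstream
        return self.local_store[local_url]            # serve the local version
```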
20160055118 | PACKET BUFFER WITH DYNAMIC BYPASS - A write queue, for queuing a packet in a traffic manager coupled to a memory device, is selected from among a preemptable write queue configured to queue packets that are candidates for being retrieved from the traffic manager before the packets are written to the memory device and a non-preemptable write queue configured to queue packets that are not candidates for being retrieved from the traffic manager before the packets are written to the memory device. The packet is written to the selected write queue. A read request is generated for retrieving the packet from the memory device, and it is determined whether the packet is queued in the preemptable write queue. If the packet is queued in the preemptable write queue, the packet is extracted from the preemptable write queue for retrieving the packet from the traffic manager before the packet is written to the memory device. | 02-25-2016 |
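The dynamic-bypass behavior in the entry above can be sketched with two queues and a simulated memory device: a packet still sitting in the preemptable queue is handed back directly, skipping the device write entirely. All structure here is illustrative, not the claimed hardware design.

```python
from collections import OrderedDict

class PacketBuffer:
    """Sketch: packets in the preemptable write queue may be read back
    before ever being written to the (slow) external memory device."""

    def __init__(self):
        self.preemptable = OrderedDict()      # pkt id -> packet, may be read early
        self.non_preemptable = OrderedDict()  # pkt id -> packet, must be written
        self.memory_device = {}               # simulated external packet memory

    def enqueue(self, pkt_id, packet, preemptable):
        q = self.preemptable if preemptable else self.non_preemptable
        q[pkt_id] = packet

    def drain_to_memory(self):
        # Background write path: flush both queues to the memory device.
        for q in (self.preemptable, self.non_preemptable):
            while q:
                pkt_id, packet = q.popitem(last=False)
                self.memory_device[pkt_id] = packet

    def read(self, pkt_id):
        if pkt_id in self.preemptable:        # bypass: extract before the write
            return self.preemptable.pop(pkt_id)
        return self.memory_device[pkt_id]     # otherwise read from the device
```

The win is latency and device bandwidth: a short-lived packet that is read back quickly never costs a round trip to external memory.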
20160057224 | DISTRIBUTED STORAGE OVER SHARED MULTI-QUEUED STORAGE DEVICE - A method for data storage includes, in a system that includes one or more storage controllers, multiple servers and multiple multi-queue storage devices, assigning in each storage device server-specific queues for queuing data-path storage commands exchanged with the respective servers. At least some of the data-path storage commands are exchanged directly between the servers and the storage devices, not via the storage controllers, to be queued and executed in accordance with the corresponding server-specific queues. | 02-25-2016 |
20160057228 | APPLICATION EXECUTION PROGRAM, APPLICATION EXECUTION METHOD, AND INFORMATION PROCESSING TERMINAL DEVICE THAT EXECUTES APPLICATION - A non-transitory computer-readable storage medium storing therein an application execution program for causing a computer to execute a process including: associating an external address outside of a terminal device with an application stored in a memory in the terminal device; booting an internal web server to which the external address is assigned, the internal web server being formed in the terminal device; causing a browser to access the internal web server at the external address and acquire the application stored in the memory; and causing the browser to execute the application and access data in a data storage region in the terminal device associated with the external address. | 02-25-2016 |
20160057244 | Business Web Applications Lifecycle Management with Multi-tasking Ability - Technical solutions for managing business application life cycle with multi-tasking ability are provided. In some implementations, a method includes: at an enterprise data processing application: (A) activating a first application page, which includes: loading a first data set from a first data source, and causing the first data set to be displayed on the first application page; (B) switching from the first application page to a second application page, by: deactivating, without closing, the first application page, including: causing the first data set to be stored in a temporary storage; and activating a second application page; and (C) switching from the second application page back to the first application page, by: deactivating, without closing, the second application page; and re-activating the first application page, including: loading the first data set from the temporary storage, and causing the first data set to be displayed on the first application page. | 02-25-2016 |
20160057245 | GLOBALLY DISTRIBUTED VIRTUAL CACHE FOR WORLDWIDE REAL-TIME DATA ACCESS - A globally distributed virtual cache is configured to provide storage resources for users around the globe. A user of the virtual cache uses a computing device to access data that is stored in storage centers included within the virtual cache. Those storage centers may be surface-based, atmosphere-based, or space-based. When the user accesses the same data repeatedly, the virtual cache migrates that data to a storage center that is closer to the user, thereby reducing latencies associated with accessing that data. When the user attempts to communicate with another user also coupled to the virtual cache, the virtual cache buffers data that is exchanged between those users to facilitate real-time or near real-time communication between those users. | 02-25-2016 |
20160063091 | APPARATUS, SYSTEM AND METHOD FOR THE EFFICIENT STORAGE AND RETRIEVAL OF 3-DIMENSIONALLY ORGANIZED DATA IN CLOUD-BASED COMPUTING ARCHITECTURES - A cloud based storage system and methods for uploading and accessing 3-D data partitioned across distributed storage nodes of the system. The data cube is processed to identify discrete partitions thereof, which partitions may be organized according to the x (e.g., inline), y (e.g., crossline) and/or z (e.g., time) aspects of the cube. The partitions are stored in unique storage nodes associated with unique keys. Sub-keys may also be used as indexes to specific data values or collections of values (e.g., traces) within a partition. Upon receiving a request, the proper partitions and values within the partitions are accessed, and the response may be passed to a renderer that converts the values into an image displayable at a client device. The request may also facilitate data or image access at a local cache, a remote cache, or the storage partitions using location, data, retrieval, and/or rendering parameters. | 03-03-2016 |
20160065675 | METHOD AND SYSTEM FOR ROUTING DATA FLOWS IN A CLOUD STORAGE SYSTEM - A system, method, and computing device for allowing storage services with a cloud storage system are provided. The method includes dynamically selecting a best route between a cloud storage system (CSS) and a computing device, wherein the CSS is geographically remote from the computing device; and establishing, based on the selected best route, a data flow between the CSS and the computing device, wherein the data flow is established to allow at least a storage service related to the CSS. | 03-03-2016 |
20160072865 | ACTIVE OFFLINE STORAGE MANAGEMENT FOR STREAMING MEDIA APPLICATION USED BY MULTIPLE CLIENT DEVICES - A system, method and computer program product for storing streaming media content includes: receiving streaming content, at a first mobile computing device, from a content service provider over a communications network; and determining, by a secondary mobile computing device specific details of a use of the content currently being received and buffered at the first device. The second device obtains, using the determined specific details, the content expected to be consumed by the first device to a local memory storage device at the secondary device, and stores the expected content for subsequent consumption. The system and method provides for an awareness of data usage of an account instance on the secondary device; storing a set of data locally on a secondary device based on usage of a primary device; and enabling the downloading of a set of data to the secondary device via a local connection to the primary device. | 03-10-2016 |
20160072885 | ARRAY-BASED COMPUTATIONS ON A STORAGE DEVICE - An instruction from an application server to perform a computation is received at a network-attached storage (NAS) device. The computation uses arrays of data that are stored by the NAS device as inputs. The instruction includes remote procedure calls that identify operations that are included in the computation, including a first remote procedure call that will cause the NAS device to perform a read operation on a first file containing an array of data to be used as an input for the computation, and a second remote procedure call that will cause the NAS device to perform an array operation using the array of data. The operations are executed on the NAS device to produce a result that is stored in a second file in a location in a file system managed by the NAS device and accessible to the application server. | 03-10-2016 |
20160072886 | SENDING INTERIM NOTIFICATIONS TO A CLIENT OF A DISTRIBUTED FILESYSTEM - The disclosed embodiments disclose techniques for sending interim notifications to a client of a distributed filesystem. Two or more cloud controllers collectively manage distributed filesystem data that is stored in one or more cloud storage systems; the cloud controllers ensure data consistency for the stored data, and each cloud controller caches portions of the distributed filesystem. During operation, a cloud controller receives a client request to access a file. The cloud controller determines that it will need to contact at least one of another peer cloud controller or a cloud storage system to service the request, and sends an interim notification to the client to notify the client that the request is pending. | 03-10-2016 |
20160072900 | METHOD AND SYSTEM FOR THE GENERATION OF CONTEXT AWARE SERVICES BASED ON CROWD SOURCING - A context aware service generation method and system that stores a list of events, stores one or more dates and times each corresponding to an event, receives user data indicating a state of the user, determines whether a current date and time corresponds with a date and time stored in the memory, determines an event in response to determining that the current date and time corresponds with the date and time, determines a user context based on the user data, and generates a subset of services based on the user context and the event. The method uses crowd sourcing technique to generate services to the users. | 03-10-2016 |
20160072910 | Caching of Machine Images - Technology is described for reducing computing instance launch times. A computing instance that is expected to be launched in a computing service environment during a defined time period may be identified. A machine image associated with the computing instance may be determined to be cached in the computing service environment using a launch time prediction model to reduce a launch time for launching the computing instance as compared to not caching the machine image. At least one physical host in the computing service environment that is available to cache the machine image may be selected to lower the launch time of the computing instance as predicted by the launch time prediction model. The machine image may be stored in the physical host in order to minimize the launch time for launching the computing instance in the computing service environment, using the processor. | 03-10-2016 |
20160077764 | DISTRIBUTED RAID OVER SHARED MULTI-QUEUED STORAGE DEVICES - A method for data storage includes, in a system that includes multiple servers and multiple storage devices, holding in a server a definition of a stripe that includes multiple memory locations on the storage devices, to be used by the servers for storing multiple data elements and at least a redundancy element calculated over the data elements. One or more of the data elements in the stripe are modified by the server, by executing in the storage devices an atomic command, which updates the redundancy element to reflect the modified data elements only if a current redundancy element stored in the storage devices reflects the multiple data elements prior to modification of the data elements, and storing the modified data elements in the storage devices only in response to successful completion of the atomic command. | 03-17-2016 |
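The conditional redundancy update in the entry above is, in spirit, a compare-and-swap on the parity element. The sketch below models it with XOR parity over integers; the class, the single-process setting, and the retry contract are assumptions for the example, not the patented protocol.

```python
from functools import reduce

class StripeStore:
    """Sketch: update the redundancy element only if the stored parity
    still reflects the data as the writer last saw it."""

    def __init__(self, data_elements):
        self.data = list(data_elements)
        self.parity = reduce(lambda a, b: a ^ b, self.data)

    def atomic_update(self, index, new_value, expected_parity):
        # Reject the write if a concurrent writer already changed the stripe.
        if self.parity != expected_parity:
            return False                       # caller re-reads and retries
        old = self.data[index]
        self.parity ^= old ^ new_value         # fold the modification into parity
        self.data[index] = new_value
        return True
```

Making the parity check and the write a single atomic command is what lets servers modify shared stripes directly on the storage devices without a central lock.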
20160080255 | METHOD AND SYSTEM FOR SETTING UP ROUTING IN A CLUSTERED STORAGE SYSTEM - Methods and systems for setting up routing in a clustered storage system are provided. The method includes generating a global routing data structure having a plurality of default routes for a clustered storage system having a plurality of nodes; creating a logical interface for a virtual storage system presented to a client system for using storage space at the clustered storage system managed by one of the plurality of nodes; examining the global routing data structure by the plurality of nodes for adding a route for the logical interface when a gateway address of the route is on a same subnet as the logical interface; and storing the route in a routing data structure for the node that manages the logical interface for the virtual storage system. | 03-17-2016 |
20160080517 | MECHANISM AND METHOD FOR COMMUNICATING BETWEEN A CLIENT AND A SERVER BY ACCESSING MESSAGE DATA IN A SHARED MEMORY - A mechanism and method for accessing message data in a shared memory by at least one client include allocating data in the shared memory, which is configured as a plurality of buffers, and accessing the data by a client or a server without locking or restricting access to the data. | 03-17-2016 |
20160085718 | NVM EXPRESS CONTROLLER FOR REMOTE ACCESS OF MEMORY AND I/O OVER ETHERNET-TYPE NETWORKS - A method and system for enabling Non-Volatile Memory express (NVMe) for accessing remote solid state drives (SSDs) (or other types of remote non-volatile memory) over the Ethernet or other networks. An extended NVMe controller is provided for enabling CPU to access remote non-volatile memory using NVMe protocol. The extended NVMe controller is implemented on one server for communication with other servers or non-volatile memory via Ethernet switch. The NVMe protocol is used over the Ethernet or similar networks by modifying it to provide a special NVM-over-Ethernet frame. | 03-24-2016 |
20160088112 | CONTEXTUAL ROUTING DEVICE CACHING - A routing device capable of performing application layer data caching is described. Application data caching at a routing device can alleviate the bottleneck that an application data host may experience during high demands for application data. Requests for the application data can also be fulfilled faster by eliminating the network delays for communicating with the application data host. The techniques described can also be used to perform analysis of the underlying application data in the network traffic transiting through a routing device. | 03-24-2016 |
20160088113 | CHOREOGRAPHED CACHING - A routing device capable of performing application layer data caching is described. Application data caching at a routing device can alleviate the bottleneck that an application data host may experience during high demands for application data. Requests for the application data can also be fulfilled faster by eliminating the network delays for communicating with the application data host. The techniques described can also be used to perform analysis of the underlying application data in the network traffic transiting through a routing device. | 03-24-2016 |
20160088116 | METHODS AND SYSTEMS FOR CACHING DATA COMMUNICATIONS OVER COMPUTER NETWORKS - A computer-implemented method and system for caching multi-session data communications in a computer network. | 03-24-2016 |
20160088117 | CONTENT REPLACEMENT AND REFRESH POLICY IMPLEMENTATION FOR A CONTENT DISTRIBUTION NETWORK - A method for replacing, refreshing, and managing content in a communication network is provided. The method defines an object policy mechanism that applies media replacement policy rules to defined classes of stored content objects. The object policy mechanism may classify stored content objects into object groups or policy targets. The object policy mechanism may also define metric thresholds and event triggers as policy conditions. The object policy mechanism may further apply replacement policy algorithms or defined policy actions against a class of stored content objects. The media replacement policy rules are enforced at edge content storage repositories in the communication network. A computing device for carrying out the method, and a method for creating, reading, updating, and deleting policy elements and managing policy engine operations, are also provided. | 03-24-2016 |
20160094419 | SYSTEMS AND METHODS FOR MONITORING GLOBALLY DISTRIBUTED REMOTE STORAGE DEVICES - Methods and systems are described for remotely monitoring a plurality of distributed remote storage devices. An example computer implemented method includes locally collecting monitoring data for one of the plurality of distributed remote storage devices, and periodically sending at least one of an aggregate of the locally recorded monitoring data and a summary of the locally recorded monitoring data to a remote location. The remote location includes at least one of another one of the plurality of distributed remote storage devices, at least one central server, and a set of the plurality of distributed remote storage devices. | 03-31-2016 |
20160094675 | METHOD AND SYSTEM FOR REMOTE MEETINGS - In one embodiment, a client device determines that a client device display screen is displaying a video image as enlarged, compares received regions of a received video image with regions of the displayed video image, determines that the compared regions of the received video image are different from the regions of the displayed video image, and stores received video frames comprising the received video image in a cache memory. Related systems, apparatus, and methods are also described. | 03-31-2016 |
20160100009 | CLOUD PROCESS FOR RAPID DATA INVESTIGATION AND DATA INTEGRITY ANALYSIS - A system and method for rapid data investigation and data integrity analysis is disclosed. A data set is received by a server computer from one or more client computers connected with the server computer via a communications network, and the data set is stored in a distributed storage memory. One or more analytical processes are executed on the data set from the distributed storage memory to generate statistics based on each of the analytical processes, and the statistics are stored in a random access memory, the random access memory being accessible by one or more compute nodes, which generate a graphical representation of at least some statistics stored in the random access memory. The graphical representation of at least some statistics is then formatted for transmission to and display by the one or more client computers. | 04-07-2016 |
20160110310 | CACHE MANAGEMENT - A computer-implemented method, computer program product and computing system for receiving, on a second computing device, a read request from a user for web content local to the second computing device. An invalidation token is received for the web content local to the second computing device. The invalidation token includes a last modified timestamp for the web content local to the second computing device. The invalidation token is processed to determine if the web content local to the second computing device is substantially similar to web content local to a first computing device. If the web content local to the second computing device is substantially similar to the web content local to a first computing device, the web content local to the second computing device is provided to the user. If the web content local to the second computing device is not substantially similar to the web content local to the first computing device, the web content local to the first computing device is obtained and provided to the user. | 04-21-2016 |
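The invalidation-token check in the entry above compares last-modified timestamps to decide whether the local copy may be served. A minimal sketch, with dicts standing in for the token and the content records (all field names are assumptions):

```python
def serve_content(local_copy, token, fetch_from_first_device):
    """Sketch: serve the second device's local copy when the token's
    last-modified timestamp matches; otherwise fetch the first device's copy."""
    if local_copy is not None and \
            local_copy["last_modified"] == token["last_modified"]:
        return local_copy["body"]          # local content is still current
    fresh = fetch_from_first_device()      # obtain the first device's content
    return fresh["body"]
```

Shipping only a timestamp in the token keeps validation cheap: the full content crosses the network only when the copies have actually diverged.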
20160112511 | PRE-FETCH CACHE FOR VISUALIZATION MODIFICATION - Various technologies pertaining to modifying visualizations are described herein. A client computing device requests a visualization from a server computing device, and the server computing device constructs the visualization responsive to receipt of the request. The server computing device further identifies anticipated transformations for the visualization, and transmits the visualization and the transformations to the client computing device. The client computing device displays the visualization, and responsive to receipt of a request to modify the visualization, executes a transformation provided by the server computing device to update the visualization. | 04-21-2016 |
20160112534 | HIERARCHICAL CACHING FOR ONLINE MEDIA - A method include receiving, at a first cache device, a request to send a first asset to a second device; determining whether the first asset is stored at the first cache device; when the determining whether the first asset is stored at the first cache device indicates that first asset is not stored at the first cache device, obtaining, at the first cache device, the first asset, performing a comparison operation based on an average inter-arrival time of the first asset with respect to the first cache device and a characteristic time of the first cache device, the characteristic time of the first cache device being an average period of time assets cached at the first cache device are cached before being evicted from the first cache device, and determining whether or not to cache the obtained first asset at the first cache device based on the comparison; and sending the obtained first asset to the second device. | 04-21-2016 |
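The admission test in the entry above compares an asset's average inter-arrival time with the cache's characteristic time (the average residency before eviction): an asset re-requested faster than it would be evicted is worth caching. A minimal sketch of that comparison, with illustrative names:

```python
def should_cache(arrival_times, characteristic_time):
    """Sketch: cache an asset only when its average inter-arrival time
    is below the cache device's characteristic (eviction) time."""
    if len(arrival_times) < 2:
        return False                       # not enough history to estimate
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    avg_inter_arrival = sum(gaps) / len(gaps)
    return avg_inter_arrival < characteristic_time
```

The comparison avoids polluting a small cache with one-hit assets: if the next request is statistically expected after eviction, caching the asset buys nothing.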
20160119425 | DISTRIBUTED PROCESSING SYSTEM - The server device includes a cache, and identification information of data used in a previously executed transaction. The server device compares identification information of data to be used in a transaction received from a client, with the identification information held by it. When the comparison result shows a mismatch, the server device executes the transaction after updating the cache by using data acquired from a persistent storage device, while when the comparison result shows a match, the server device executes the transaction without updating the cache. Then, the server device determines whether optimistic exclusion succeeded or failed, and in the case of failure, re-executes the transaction after updating the data in the cache by using the data acquired from the persistent storage device. | 04-28-2016 |
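The flow in the entry above — conditional cache refresh, optimistic execution, and re-execution on exclusion failure — can be sketched with versioned records. This is a single-process illustration under assumed names; the real system involves a persistent storage device and remote clients.

```python
class Server:
    """Sketch: refresh the cache only when the transaction touches
    different data than last time; retry on optimistic-exclusion failure."""

    def __init__(self, persistent):
        self.persistent = persistent        # key -> (value, version)
        self.cache = {}                     # key -> (value, version)
        self.last_used = None               # data id used by previous transaction

    def run(self, key, transaction):
        if self.last_used != key or key not in self.cache:
            self.cache[key] = self.persistent[key]    # mismatch: update cache
        self.last_used = key
        while True:
            value, version = self.cache[key]
            new_value = transaction(value)            # execute on cached data
            if self.persistent[key][1] == version:    # optimistic check succeeds
                self.persistent[key] = (new_value, version + 1)
                self.cache[key] = self.persistent[key]
                return new_value
            # Exclusion failed: refresh from persistent storage, re-execute.
            self.cache[key] = self.persistent[key]
```

Skipping the refresh when consecutive transactions use the same data is the optimization the abstract describes; the version check catches the cases where that gamble was wrong.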
20160119426 | NETWORKED DATA PROCESSING APPARATUS - A networked data processing apparatus includes a first communication interface adapted for transmitting and receiving commands and/or status messages related to a plurality of remotely located network devices connected via the interface, and further includes a first data storage for non-volatile storage of raw data received from the remote network devices. A processing unit of the apparatus is adapted for processing raw data retrieved from the first data storage. | 04-28-2016 |
20160119443 | SYSTEM AND METHOD FOR MANAGING APPLICATION PERFORMANCE - A system and method for managing application performance includes a storage controller including a memory containing machine readable medium comprising machine executable code having stored thereon instructions for performing a method of managing application performance and a processor coupled to the memory. The processor is configured to execute the machine executable code to receive storage requests from a plurality of first applications via a network interface, manage QoS settings for the storage controller and the first applications, and in response to receiving an accelerate command associated with a second application from the first applications, increase a first share of a storage resource allocated to the second application, decrease unlocked second shares of the storage resource of the first applications, and lock the first share. The storage resource is a request queue or a first cache. In some embodiments, the second application is a throughput application or a latency application. | 04-28-2016 |
20160127189 | SYSTEM AND METHOD FOR DISTRIBUTING HEURISTICS TO NETWORK INTERMEDIARY DEVICES - A policy distribution server provides, on a subscription basis, policy updates to effect desired behaviors of network intermediary devices. The policy updates may specify caching policies, and may in some instances, include instructions for data collection by the network intermediary devices. Data collected in accordance with such instructions may be used to inform future policy updates distributed to the network intermediary devices. | 05-05-2016 |
20160127467 | TECHNIQUES FOR STORING AND DISTRIBUTING METADATA AMONG NODES IN A STORAGE CLUSTER SYSTEM - Various embodiments are generally directed to techniques for reducing the time required for a node to take over for a failed node or to boot. An apparatus includes an access component to retrieve first metadata from a storage device coupled to a first D-module of a first node during boot, the first metadata generated from a first mutable metadata portion and an immutable metadata portion and specifying a first address of a second D-module of a second node; a replication component to contact the second D-module at the first address; and a generation component to, in response to failure of the contact, request a second mutable metadata portion from an N-module of the first node and generate second metadata from the second mutable metadata portion and the immutable metadata portion, the second mutable metadata portion specifying a second address of the second D-module. | 05-05-2016 |
20160127493 | CACHING METHODS AND SYSTEMS USING A NETWORK INTERFACE CARD - A computing device having a host memory and a host processor for executing instructions out of the host memory, and a network interface card interfacing with the computing device, are provided. When there is a cache hit for a read request, the network interface card processes the read request by obtaining the stored data from one or both of the host memory and a storage device that the network interface card accesses without involving the host processor; when there is a cache miss, the read request is processed by the host processor. | 05-05-2016 |
20160127498 | METHOD AND APPARATUS FOR PROVIDING INFORMATION TO AN APPLICATION IN A MOBILE DEVICE - A method, computer readable medium, and an apparatus for providing information to an application of a mobile endpoint device are disclosed. For example, the method receives a request for the information from the application of the mobile endpoint device, and provides the information to the application from a cache when the application is deemed to be a non-critical application. | 05-05-2016 |
20160150049 | Efficiently Caching Data at a Client Device - A merchant system computes various probabilities that visitors to a Web site will request individual Web pages of the Web site. The computed probabilities are then utilized to cache, at a client device, the Web pages having the highest probabilities of being requested. The probability data may include aggregate probability data that defines the probability that any visitor to the Web site will request a Web page, customer segment probability data that defines the probability that customers in a particular customer segment will visit the Web pages, and/or customer-specific probability data that defines the probability that a specific customer of the Web site will visit the Web pages. Only Web pages having a computed probability greater than a caching threshold may be cached at the client device. Additionally, the Web pages may also be cached at the client device based upon a visitor's actual interaction with the Web site. | 05-26-2016 |
20160156732 | Web Page Pre-loading Optimization | 06-02-2016 |
20160156733 | CONTENT PLACEMENT IN HIERARCHICAL NETWORKS OF CACHES | 06-02-2016 |
20160156734 | CONTROL SYSTEM AND METHOD FOR CACHE COHERENCY | 06-02-2016 |
20160164995 | Content Engine for Mobile Communications Systems - An exemplary content engine includes a content gateway configured to analyze and route content requests to a content server. The content server can be a cache server or a mobile content server. The cache server can be configured to receive and store cacheable web content from a controller that is configured to receive the cacheable web content from at least one cacheable web content provider, such as a web server, and route the content to the cache server. The mobile content server can be configured to receive digital media content from the controller and store it. The controller can be further configured to receive the digital media content from at least one external content server and route the content to the mobile content server. The content gateway can be further configured to receive non-cacheable web content from at least one non-cacheable web content provider. | 06-09-2016 |
20160164997 | System And Method For Routing Content Based On Real-Time Feedback - A method includes receiving at a cache server a content request from a client system, determining that the cache server is overloaded in response to receiving the content request, and in response to determining that the cache server is overloaded, returning to the client system a domain redirection response including a load status of the cache server. | 06-09-2016 |
20160164998 | METHOD AND SYSTEM FOR ADAPTIVE PREFETCHING - A cache server prefetches one or more web pages from an origin server prior to those web pages being requested by a user. The cache server determines which web pages to prefetch based on a graph associated with a prefetch module associated with the cache server. The graph represents all or a portion of the web pages at the origin server using one or more nodes and one or more links connecting the nodes. Each link has an associated transaction weight and user weight. The transaction weight represents the importance of the link and associated web page to the origin server and may be used to control the prefetching of web pages by the cache server. The user weight may be used to change a priority associated with a request for a web page. The user weight and transaction weight may change based on criteria associated with the origin server. | 06-09-2016 |
20160173381 | NAS OFF-LOADING OF NETWORK TRAFFIC FOR SHARED FILES | 06-16-2016 |
20160173405 | LISP STRETCHED SUBNET MODE FOR DATA CENTER MIGRATIONS | 06-16-2016 |
20160173598 | CLIENTLESS SOFTWARE DEFINED GRID | 06-16-2016 |
20160173602 | CLIENTLESS SOFTWARE DEFINED GRID | 06-16-2016 |
20160173634 | CACHING IN A CONTENT DELIVERY FRAMEWORK | 06-16-2016 |
20160173636 | NETWORKING BASED REDIRECT FOR CDN SCALE-DOWN | 06-16-2016 |
20160173639 | APPLICATION-DRIVEN CDN PRE-CACHING | 06-16-2016 |
20160182337 | Maximizing Storage Controller Bandwidth Utilization In Heterogeneous Storage Area Networks | 06-23-2016 |
20160182672 | Dynamic Content Caching System | 06-23-2016 |
20160185222 | ON BOARD VEHICLE MEDIA CONTROLLER - The present disclosure describes a microprocessor executable network controller operable to cache media intended for a vehicle occupant in response to a vehicle state change, a vehicle function, a change in vehicle location, an actual or expected change in a signal parameter associated with a selected channel, and/or a request by the vehicle occupant and/or a signal source. | 06-30-2016 |
20160191606 | IOS DEVICE BASED WEBPAGE BLOCKING METHOD AND DEVICE - An iOS device-based webpage blocking method, applied to an iOS device comprising application programs and system components, the method comprising: an application program subclasses the system default uniform resource locator (URL) caching object to obtain control of network requests; the URL character string parsed from a request message is matched against a link character string; if the matching is successful, pseudo response data are generated and displayed, thus blocking webpage advertisements or malicious webpages, reducing the occupation of system and network resources, improving system operation speed and the speed and smoothness of user network access, and lowering device power consumption. The present invention solves the prior-art problem of resources being occupied by webpage advertisements or malicious webpages. | 06-30-2016 |
20160191646 | A DISTRIBUTED HEALTH-CHECK METHOD FOR WEB CACHING IN A TELECOMMUNICATION NETWORK - A distributed health-check method for web caching in a telecommunication network, wherein a plurality of web caching nodes are coordinated to monitor a set of origin servers where web content is generated. The method includes associating buckets, as logical containers for holding the requested web content, to each user of the telecommunication network requesting the web content; generating a list of users of the telecommunication network requesting the web content; and performing, by the plurality of web caching nodes, a number of health-checks to the set of origin servers to download the requested web content. A filtering of the set of origin servers is performed to group them into different areas of interest, and the health-checks are performed by a limited number of caching nodes receiving the web content requests, the limited number of caching nodes being selected as belonging to a specific area of interest of the set of origin servers they monitor. | 06-30-2016 |
20160191647 | Method, device and computer storage medium for implementing interface cache dynamic allocation - A method for implementing interface cache dynamic allocation is disclosed in the present invention. The method includes: setting, in advance or when a system is running, the corresponding relationship between a free cache block and an interface required to be accessed in the application, and then sending data packets inputted from the interface to the cache block; and, when the system is running, if an interface required to be accessed needs to be added, revoked or modified, adjusting in real time the corresponding relationship between the changed interface and the corresponding cache block. A device and a computer storage medium for implementing the method are also disclosed in the present invention. | 06-30-2016 |
20160191648 | LOCATION AND RELOCATION OF DATA WITHIN A CACHE - In one embodiment, a computer system includes a cache having one or more memories and a metadata service. The metadata service is operable to receive requests for data stored in the cache from a first client and from a second client. The metadata service is further operable to determine whether the performance of the cache would be improved by relocating the data stored in the cache. The metadata service is further operable to relocate the data stored in the cache when such relocation would improve the performance of the cache. | 06-30-2016 |
20160191650 | METHOD AND SYSTEM FOR DYNAMIC CONTENT PRE-CACHING - Method, system, and programs for dynamic content pre-caching. In one example, a first piece of information representing usage data of a user device is obtained. A second piece of information representing a connection condition of the user device is also obtained. A third piece of information representing a pre-caching instruction is then received from a remote device. The third piece of information is generated based on the first and second pieces of information. One or more pieces of content are then fetched from a remote content source based on the third piece of information. | 06-30-2016 |
20160191652 | DATA STORAGE METHOD AND APPARATUS - Disclosed are a method and device for storing data. In the method, a request message is initiated to a network side device, and network data to be cached are acquired; one or more cache entity objects are selected for the network data from a cache entity object set, and acquired first-type network data are stored directly into the one or more cache entity objects, or serialized second-type network data are stored into the one or more cache entity objects. According to the technical solutions provided by the disclosure, dependence on the network can be further reduced, and network traffic and the battery power of mobile terminals can be saved. | 06-30-2016 |
20160191673 | APPLICATION SERVICE DELIVERY THROUGH AN APPLICATION SERVICE AVATAR - Some embodiments include a method of operating an avatar server. The method can include implementing an application service avatar in an avatar server that has at least an intermittent network access to an application service server for providing an application service to client applications. The avatar server can establish a service group by maintaining profiles of one or more end-user devices connected to the avatar server to access the application service. The avatar server can provide a localized application service by emulating at least a subset of functionalities provided by the application service to the end-user devices, for example, by locally processing, at least partially, a service request from at least one of the end-user devices at the avatar server. The avatar server can asynchronously communicate with the application service server to complete the service request. | 06-30-2016 |
20160197780 | METHOD AND APPARATUS FOR TRANSMITTING CONFIGURATION INFORMATION | 07-07-2016 |
20160197986 | HOST-SIDE CACHE MIGRATION | 07-07-2016 |
20160198014 | TECHNIQUES FOR PREDICTIVE NETWORK RESOURCE CACHING | 07-07-2016 |
20160198016 | TECHNIQUES FOR NETWORK RESOURCE CACHING USING PARTIAL UPDATES | 07-07-2016 |
20160205166 | HTML streaming | 07-14-2016 |
20160205189 | PROACTIVE MONITORING AND DIAGNOSTICS IN STORAGE AREA NETWORKS | 07-14-2016 |
20160205191 | ASYMMETRIC DATA MIRRORING | 07-14-2016 |
20160205209 | CONTENT PRE-RENDER AND PRE-FETCH TECHNIQUES | 07-14-2016 |
20160255132 | DISTRIBUTING CONTENT ITEMS TO USERS | 09-01-2016 |
20160255150 | STORING DATA IN A DISPERSED STORAGE NETWORK | 09-01-2016 |
20160378666 | CLIENT VOTING-INCLUSIVE IN-MEMORY DATA GRID (IMDG) CACHE MANAGEMENT - A client application cache access profile is created that documents accesses over time to data cached within an in-memory data grid (IMDG) cache by each of a set of client applications that utilize the IMDG. A new data request is received from one of the set of client applications that includes a client-application data caching vote that specifies whether the requesting client application wants the newly-requested data cached. In response to an IMDG cache data miss related to the new data request, a determination is made as to whether to cache the newly-requested data based upon analysis of the client application cache access profile of the client application from which the new data request was received, IMDG system performance cache costs of caching the newly-requested data, and the client-application data caching vote. The newly-requested data is cached within the IMDG cache in response to determining to cache the newly-requested data. | 12-29-2016 |
20160380848 | Packet Copy Management For Service Chain Processing Within Virtual Processing Systems - Systems and methods are disclosed to provide packet copy management for service chain processing within virtual processing systems. A packet manager virtual machine (VM) controls access to shared memory that stores packet data for packets being processed by service chain VMs operating within a virtual processing environment. For certain embodiments, the packet manager VM is configured to appear as a destination NIC (network interface controller), and virtual NICs (vNICs) within the service chain VMs are configured to process packet data using pointers to access the packet data within the shared memory. Once packet data is processed by one service chain VM, the next service chain VM within the service chain is able to access the processed packet data within the shared memory through the packet manager VM. Once all service chain processing has completed, the resulting packet data is available from the shared memory for further use or processing. | 12-29-2016 |
20170235973 | DETERMINATION OF DATA OBJECT EXPOSURE IN CLOUD COMPUTING ENVIRONMENTS | 08-17-2017 |
20170237681 | CLOUD COMPUTE SCHEDULING USING A HEURISTIC CONTENTION MODEL | 08-17-2017 |
20180024965 | IMAGE PROCESSING APPARATUS | 01-25-2018 |
20180026935 | HYBRID ACCESS DNS OPTIMIZATION FOR MULTI-SOURCE DOWNLOAD | 01-25-2018 |
20180027089 | SYSTEMS AND METHODS FOR CACHING CONTENT WITH NOTIFICATION-BASED INVALIDATION | 01-25-2018 |
20190149586 | MANAGING CONTENT ON AN ISP CACHE | 05-16-2019 |