Entries |
Document | Title | Date |
20080201444 | FILE SHARING SYSTEM AND FILE SHARING METHOD - A system includes a client device to which mount information representing a server device associated with the client is set; a first server corresponding to the mount information set to the client device; a second server communicably connected to the first server; a first disk device controlled by the first server; and a second disk device controlled by the second server. When the client device sends a request to register a data file to the first server, the first server stores the data file in the first disk device and sends a command to make a tag file including information on a location of the data file to the second server. When receiving the command to make the tag file from the first server, the second server makes the tag file and stores it in the second disk device. | 08-21-2008 |
20080294745 | Method and System for Community Data Caching - A cache module ( | 11-27-2008 |
20080301255 | Dynamically Matching Data Service Capabilities to Data Service Level Objectives - A method, system, and computer program product are provided for matching a storage dependent device to a storage subsystem. Storage requirements are identified for a storage dependent device that is coupled to a network. Additionally, a set of storage subsystems that are coupled to the network are identified. A determination is made as to whether at least one storage subsystem in the set of storage subsystems meets the storage requirements of the storage dependent device. An identified storage subsystem is formed by identifying the at least one storage subsystem that meets the storage requirements of the storage dependent device. Responsive to forming the identified storage subsystem, at least one storage subsystem is coupled to the storage dependent device, wherein the storage dependent device utilizes storage capabilities of the identified storage subsystem. | 12-04-2008 |
20080301256 | SYSTEM INCLUDING A FINE-GRAINED MEMORY AND A LESS-FINE-GRAINED MEMORY - A data processing system includes one or more nodes, each node including a memory sub-system. The sub-system includes a fine-grained memory and a less-fine-grained (e.g., page-based) memory. The fine-grained memory optionally serves as a cache and/or as a write buffer for the page-based memory. Software executing on the system uses a node address space which enables access to the page-based memories of all nodes. Each node optionally provides ACID memory properties for at least a portion of the space. In at least a portion of the space, memory elements are mapped to locations in the page-based memory. In various embodiments, some of the elements are compressed, the compressed elements are packed into pages, the pages are written into available locations in the page-based memory, and a map maintains an association between those elements and the locations. | 12-04-2008 |
20080313301 | NETWORK-BASED STORAGE SYSTEM CAPABLE OF ALLOCATING STORAGE PARTITIONS TO HOSTS - A network-based storage system comprises one or more block-level storage servers that connect to, and provide disk storage for, one or more host computers. In one embodiment, the system is capable of subdividing the storage space of an array of disk drives into multiple storage partitions, and allocating the partitions to host computers on a network. A storage partition allocated to a particular host computer may appear as local disk drive storage to user-level processes running on the host computer. | 12-18-2008 |
20090083393 | Data synchronous system for synchronizing updated data in a redundant system - A data synchronous system synchronizes, between servers each having a shared memory, data which are stored on the respective shared memories. The system includes a data writer which writes data into the shared memory in one of the servers and then generates write state information on the write state of data written in the shared memory; and a data communicator which reads out the written data and positional information about a position on the shared memory of the written data on the basis of the write state information, and transfers the read data and positional information from the one server to another or some other servers. | 03-26-2009 |
20090307329 | ADAPTIVE FILE PLACEMENT IN A DISTRIBUTED FILE SYSTEM - In a distributed system that includes multiple machines, a scheduler attempts to schedule a task on a machine that is not currently overloaded with work. If a task is scheduled on a machine that does not yet have copies of the portions of the data set on which the task needs to operate, then that machine obtains copies of those portions from other machines that already have them. Whenever a “source” machine ships a copy of a portion to another “destination” machine in the distributed system, the destination machine persistently stores that copy on the destination machine's persistent storage mechanism. The copy also remains on the source machine. Thus, portions of the data set are automatically replicated whenever those portions are shipped between machines of the distributed system. Each machine in the distributed system has access to “global” information that indicates which machines have which portions of the data set. | 12-10-2009 |
20100011085 | COMPUTER SYSTEM, CONFIGURATION MANAGEMENT METHOD, AND MANAGEMENT COMPUTER - To manage the configuration of a data archiving system without increasing the load on the data archiving system, while keeping the performance of computers and the load on storage subsystems balanced, there is provided a computer system including: a plurality of data archiving servers; a storage subsystem which provides storage extents to the plurality of data archiving servers; and a management computer. The management computer manages data archiving server performance management information, which holds information about the performance of the plurality of data archiving servers, and storage utilization information, which holds information about the load on the storage extents. The management computer changes the association between the data archiving servers and the storage extents based on the data archiving server performance management information and the storage utilization information. | 01-14-2010 |
20100036931 | PROVIDING A RELIABLE BACKING STORE FOR BLOCK DATA STORAGE - Techniques are described for managing access of executing programs to non-local block data storage. In some situations, a block data storage service uses multiple server storage systems to reliably store copies of network-accessible block data storage volumes that may be used by programs executing on other physical computing systems, and at least some stored data for some volumes may also be stored on remote archival storage systems. A group of multiple server block data storage systems that store block data volumes may in some situations be co-located at a data center, and programs that use volumes stored there may execute on other computing systems at that data center, while the archival storage systems may be located outside the data center. The data stored on the archival storage systems may be used in various ways, including to reduce the amount of data stored in at least some volume copies. | 02-11-2010 |
20100131611 | Fault-tolerance mechanism optimized for peer-to-peer network - A peer-to-peer network including a set of nodes distributed among a set of processing devices and arranged in a circular form in such a way that each node has a unique successor node. Each node has a memory to store data associated with keys and, on reception of a request containing a key, provides data associated with the key. Each data item stored in the memory of a first node is duplicated in the memory of a second node, different from said first node. The second node is chosen from among the nodes deployed on the set of processing devices different from the processing device on which the first node is deployed. | 05-27-2010 |
20100185745 | Method and System for Community Data Caching - A cache module ( | 07-22-2010 |
20100268789 | NETWORK CACHING FOR MULTIPLE CONTEMPORANEOUS REQUESTS - A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server. | 10-21-2010 |
20100325235 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING SYSTEM CONTROL METHOD, CAPABLE OF PROVIDING, REGARDLESS OF EXECUTION/NON-EXECUTION OF AN APPLICATION, DATA USABLE BY THE APPLICATION TO OTHER INFORMATION PROCESSING APPARATUS - A CPU executes a communication partner search process for searching for a communication partner (another game machine). The CPU confirms the received data content. If identification information included in the received data matches, application identification information saved in a wireless communication module is compared with application identification information included in the received data. When the pieces of application identification information match, a notice that another game machine having exchange data corresponding to the matched application identification information has been found is given to a main body. Then, exchange data is given to and received from the other game machine. | 12-23-2010 |
20110145358 | SHARED JAVA JAR FILES - Techniques are disclosed for sharing programmatic modules among isolated virtual machines. A master JVM process loads data from a programmatic module, storing certain elements of that data into its private memory region, and storing other elements of that data into a “read-only” area of a shareable memory region. The master JVM process copies loaded data from its private memory region into a “read/write” area of the shareable memory region. Instead of re-loading the data from the programmatic module, other JVM processes map to the read-only area and also copy the loaded data from the read/write area into their own private memory regions. The private memory areas of all of the JVM processes begin at the same virtual memory address, so references between read-only data and copied data are preserved correctly. As a result, multiple JVM processes start up faster, and memory is conserved by avoiding the redundant storage of shareable data. | 06-16-2011 |
20110179134 | Managing Hardware Resources by Sending Messages Amongst Servers in a Data Center - Systems and methods to manage workloads and hardware resources in a data center or cloud. In one embodiment, a method includes a data center having a plurality of servers in a network. The data center provides a virtual machine for each of a plurality of users, each virtual machine to use a portion of hardware resources of the data center. The hardware resources include storage and processing resources distributed onto each of the plurality of servers. The method further includes sending messages amongst the servers, some of the messages being sent from a server including status information regarding a hardware resource utilization status of that server. The method further includes detecting a request from the virtual machine to handle a workload requiring increased use of the hardware resources, and provisioning the servers to temporarily allocate additional resources to the virtual machine, wherein the provisioning is based on status information provided by one or more of the messages. | 07-21-2011 |
20110302266 | METHOD AND SYSTEM FOR COMMUNITY DATA CACHING - A cache module ( | 12-08-2011 |
20120042032 | Adaptive Private Network Asynchronous Distributed Shared Memory Services - A highly predictable-quality shared distributed memory process is achieved using less-than-predictable public and private internet protocol networks as the means for communications within the processing interconnect. An adaptive private network (APN) service provides the ability for the distributed memory process to communicate data via an APN conduit service; to use high-throughput paths by allocating bandwidth to higher-quality paths while avoiding lower-quality paths; to deliver reliability via fast retransmissions on single-packet-loss detection; to deliver reliability and timely communication through redundancy, via duplicate transmissions on a best path and on a path most independent from the best path; to lower latency via high-resolution clock-synchronized path monitoring and high-latency path avoidance; to monitor packet loss and avoid loss-prone paths; and to avoid congestion by use of high-resolution clock-synchronized congestion monitoring and avoidance. | 02-16-2012 |
20120066337 | TIERED STORAGE INTERFACE - The cloud storage services are extended with a cloud storage service access protocol that enables users to specify a desired storage tier for each data stream. In response to receiving storage tier specifiers via the protocol, the cloud storage service performs storage operations to identify target storage devices having attributes matching those associated with the requested storage tier. The cloud storage service stores a data stream from the storage client in the identified target storage device associated with the desired storage tier. Storage tiers can be defined based on criteria including capacity costs; access latency; availability; activation state; bandwidth and/or transfer rates; and data replication. The cloud storage service protocol allows data streams to be transferred between storage tiers, storage devices to be activated or deactivated, and data streams to be prefetched and cached. The cloud storage services may charge storage clients based on storage tier use and associated operations. | 03-15-2012 |
20120072527 | CONTENT DELIVERY NETWORK CACHE GROUPING - One or more content delivery networks (CDNs) that deliver content objects for others are disclosed. Content is propagated to edge servers through hosting and/or caching. End user computers are directed to an edge server for delivery of a requested content object by a universal resource indicator (URI). When a particular edge server does not have a copy of the content object from the URI, information is passed along a hierarchy (to a parent server, grandparent server, and, eventually, an origin server) until the content object is found. The origin server may be hosted in the CDN or at a content provider across the Internet. Once the content object is located in the hierarchical chain, the content object is passed back down the chain to the edge server for delivery. Optionally, the various servers in the chain may cache or host the content object as it is relayed. | 03-22-2012 |
20120079057 | ACCELERATION AND OPTIMIZATION OF WEB PAGES ACCESS BY CHANGING THE ORDER OF RESOURCE LOADING - A method for acceleration of access to a web page. The method comprises receiving a web page responsive to a request by a user; analyzing the received web page for possible acceleration improvements; generating a modified web page of the received web page using at least one of a plurality of acceleration techniques; providing the modified web page to the user, wherein the user experiences an accelerated access to the modified web page resulting from the execution of the at least one of a plurality of acceleration techniques; and storing the modified web page for use responsive to future user requests. | 03-29-2012 |
20120084386 | SYSTEM AND METHOD FOR SHARING NETWORK STORAGE AND COMPUTING RESOURCE - Under a community environment, a system and a method for sharing network storage and computing resources are disclosed. In particular, the method employs a group member's available-to-share computer resources for sharing with others. The member may designate a specific storage space to be shared, or offer the unused computing resources of its processor to others. According to one of the embodiments, the method for storing includes a step of synchronizing the group information and a resource allocation table with records of storage nodes after logging on to a community server. Next, a distributed storing process is performed to distribute an object across several nodes within a community group. The method for computing includes a step of generating a computing request from a user-end system. A distributed computation is accomplished since some computing nodes within a community group are allocated for the request according to a computing resource allocation table. | 04-05-2012 |
20120084387 | METHOD AND PROGRAM FOR SUPPORTING SETTING OF ACCESS MANAGEMENT INFORMATION - In order to limit which host computers are permitted to control a storage area, provided is a storage management computer coupled to one or more host computers for providing services and to one or more storage systems. The storage management computer judges whether a host computer is permitted to control the storage area based on data indicating the configuration information of the storage system and service management information for managing the services provided by the host computers, and, when it is judged that the host computer is permitted to control the storage area, the storage management computer sets access control data to permit the host computer to control the storage area. | 04-05-2012 |
20120110113 | Cooperative Caching Method and Contents Providing Method Using Request Apportioning Device - The present invention relates to a cooperative caching method and a contents providing method using a request apportioning device. While collecting and controlling allocation history information on respective cache servers in a cache cluster, server load information, and threshold load management information including an object service threshold load and a cooperative threshold load, the request apportioning device uses allocation history information and server load information to check a load level of a cache server (first cache server) having first contents from among the cache servers in the first cache cluster, and uses threshold load management information to determine whether there is a cache server that is less than the object service threshold load from among the first cache servers, and when the first cache server that is less than the object service threshold load is not found, it determines whether there is a cache server (second cache server) that is less than the cooperative threshold load from among the first cache servers, and when the second cache server is found, it selects a cache server (third cache server) from among the second cache servers. When the first cache server that is less than the object service threshold load is not found, the request apportioning device uses allocation history information and server load information to select a cache server (fourth cache server) that is less than the object service threshold load in the first cache cluster, allow cooperative caching for the contents A between the third cache server and the fourth cache server, and provide the contents A. | 05-03-2012 |
20120185555 | METHOD FOR GENERATING UNIVERSAL OBJECTS IDENTIFIERS IN DISTRIBUTED MULTI-PURPOSE STORAGE SYSTEMS - A computer-implemented method and system for generating secure universal object identifiers on a multipurpose storage system are disclosed. According to one embodiment, a system comprises a client system in communication with a network. An application server is in communication with the network. A storage cluster is in communication with the network. The storage cluster has a plurality of storage nodes. The client system stores a data object via the application server. The application server generates an object identifier assigned to the data object. The application server stores the data object on a storage node of the plurality of storage nodes. The data object is moved to another application server without moving contents of the data object in the storage cluster. | 07-19-2012 |
20120254342 | Method for Providing Access to Data Items from a Distributed Storage System - A method for providing access to data items from a distributed storage system is provided. Each data item is replicated across a plurality of storage nodes. Data items are read from the distributed storage system by selecting between a first reading mode, comprising attempting to read the data item from a set of the storage nodes to check for data item consistency across at least a quorum of the set of nodes, and a second reading mode, comprising reading the data item from at least one of the storage nodes. The reading mode is selected according to at least one detected characteristic of system status of the distributed storage system. The second reading mode is selected when the detected characteristic indicates a higher likelihood of data item consistency, and the first reading mode is selected when the detected characteristic indicates a lower likelihood of data item consistency. | 10-04-2012 |
20120254343 | CONTENT DELIVERY NETWORK CACHE GROUPING - Content delivery networks (CDNs) that deliver content objects for others are disclosed. End user computers are directed to an edge server for delivery of a requested content object by a universal resource indicator (URI). When an edge server does not have a copy of the content object from the URI, information is successively passed to ancestor servers within a hierarchy until the content object is found. There can be different hierarchies designated for different URIs or times at which requests are received. Once the content object is located in the hierarchical chain, the content object is passed back down the chain to the edge server for delivery. | 10-04-2012 |
20120290677 | Dynamic Cache Selection Method and System - Node, computer software and method for selecting a resource that is available at multiple caches connected in a communication network. The method includes receiving from a user a request for the resource; identifying one or more caches of the multiple caches that store the resource; determining a total cost associated with a path between the user and each cache of the one or more caches storing the resource, the total cost including a static cost C | 11-15-2012 |
20120311068 | DISTRIBUTING MULTI-MEDIA CONTENT TO A PLURALITY OF POTENTIAL ACCESSING DEVICES - A method begins by a dispersed storage (DS) processing module encoding a data segment of multi-media content using a dispersed storage error coding function to produce a set of encoded data slices and partitioning the set of encoded data slices into a first sub-set of encoded data slices and a second sub-set of encoded data slices, wherein the first sub-set of encoded data slices include less than a decode threshold number of encoded data slices. The method continues with the DS processing module distributing the first sub-set of encoded data slices to a plurality of potential accessing devices and when accessing information from a device of the plurality of potential accessing devices is received, sending at least one of the encoded data slices of the second sub-set of encoded data slices to the device such that the device has the decode threshold number of encoded data slices. | 12-06-2012 |
20120331088 | SYSTEMS AND METHODS FOR SECURE DISTRIBUTED STORAGE - Systems and methods are provided for directing a client computing device to data portions stored on a plurality of storage locations. A registration/authentication server receives a request from a client computing device to retrieve portions of data stored at multiple storage locations. The registration/authentication server provides pointers to available storage locations to the client computing device based on criteria, whereupon the client computing device may retrieve the data portions and reconstitute a desired data set. | 12-27-2012 |
20130018978 | CONTENT DELIVERY NETWORK WITH DEEP CACHING INFRASTRUCTURE - Embodiments herein include methods and systems for use in delivering resources to a client device over a local network. An exemplary system comprises a plurality of caching devices operable to cache resources on behalf of a plurality of content providers, and a local caching device communicatively situated between an access network and the client device, wherein the access network is communicably situated between the plurality of caching devices and the local caching device. The local caching device is operable to retrieve a requested resource from at least one of the plurality of caching devices, deliver the requested resource to the client device over the local network, and store the requested resource for future requests by other client devices. | 01-17-2013 |
20130046846 | Methods and systems for remote data storage utilizing content addresses - In one general aspect, various embodiments are directed to a method of writing a data block to a memory, comprising receiving an electronic write request from an application. A content address of a first data block may be generated considering the value for the first data block. A mapping of the first data block to the content address may be written to a logical end of the local block map. The mapping may also be written to a remote block map. If the content address is not present at a local data storage, the value of the first data block may be written to the local data storage at a first location and metadata associating the content address with the first location may be written to the local data storage. | 02-21-2013 |
20130073669 | PEER-TO-PEER DATA MIGRATION - Examples are disclosed for peer-to-peer data migration between nodes coupled via one or more peer-to-peer communication links. | 03-21-2013 |
20130086200 | Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion - An approach is provided in which a migration agent receives a message to migrate a virtual machine from a first system to a second system. The first system extracts hardware state data stored in a native format from a memory area located on first system's network adapter. The hardware state data is utilized by the first system's network adapter to process data packets generated by the virtual machine. Next, the virtual machine is migrated to the second system, which includes copying the extracted hardware state data from the first system to the second system. In turn, the second system configures a corresponding second network adapter by writing the copied hardware state data to a memory located on the second network adapter. | 04-04-2013 |
20130110966 | COMPUTER SYSTEM AND MANAGEMENT SYSTEM THEREFOR | 05-02-2013 |
20130110967 | INFORMATION SYSTEM AND METHOD FOR MANAGING DATA IN INFORMATION SYSTEM | 05-02-2013 |
20130124668 | DYNAMIC STREAMING DATA DISPATCHER - A method includes receiving, by a computing device, a plurality of data streams from a plurality of sources; distributing the data streams to a plurality of sinks on multiple hosts; receiving load information indicating a load on at least one of the plurality of sinks and adjusting the distribution of the data streams accordingly; and instructing the plurality of sinks to write the data streams to a distributed data store. | 05-16-2013 |
20130138764 | METHOD AND SYSTEM FOR VIRTUAL MACHINE DATA MIGRATION - A machine implemented method and system for a network executing a plurality of virtual machines accessing storage devices via a plurality of switches is provided. A management application determines a plurality of paths between a computing system executing the virtual machines and a storage device. Each path includes at least one switch that is configured to identify traffic related to a virtual machine. One of the paths is selected based on a path rank. The selected path is then used for transmitting data for migrating the virtual machine from a first storage device location to a second storage device location. A switch in the virtual network receives virtual machine data and is configured to differentiate between virtual machine data and other network traffic. The switch prioritizes transmission of virtual machine data compared to standard network traffic or non-virtual machine data. | 05-30-2013 |
20130151651 | PREDICTIVE CACHING OF GAME CONTENT DATA - Technologies are generally described for reducing lag time via predictive caching in cloud-based gaming. In one example, a cloud-based gaming system may identify game paths that can be taken during real-time game play and may break down the game paths into subsets of path segments a player can select. The system may determine a probability of the player taking a subset of the path segments based on real-time actions by the player and a game history of the current player and past players. The system may assign probabilities of being selected to the subsets of path segments and may render the subsets of path segments based on their respective probabilities. The system may transmit the rendered game content data for the subsets of path segments to a game client for caching on the local cache so that the game content data may be available when needed during real-time game play. | 06-13-2013 |
20130166671 | NODE CONTROLLER AND METHOD OF CONTROLLING NODE CONTROLLER - A node controller includes: a reception processor configured to receive a packet and to generate a read request or write data and a write request for requesting to write the write data, according to a destination and a type of the packet; a collected data processor configured to collect the received packet, to generate collected data according to the collected packet, and to generate a collected data write request for requesting to write the collected data; a switch configured to output the write data and the write request received from the reception processor or output the collected data and the collected data write request received from the collected data processor; and a memory controller configured to write the write data to a memory and to write the collected data to the memory in accordance with the collected data write request received from the switch. | 06-27-2013 |
20130204961 | CONTENT DISTRIBUTION NETWORK SUPPORTING POPULARITY-BASED CACHING - A content delivery network may provide content items to requesting devices using a popularity-based distribution hierarchy. A central analysis system may determine popularity data for a content item stored in a first caching device. At a later time, the central analysis system may determine that a change in the popularity data is beyond a threshold value. The central analysis system may then transmit an instruction to move the content item from the first caching device to a second caching device in a different tier of caching devices than the first caching device. The central analysis system may update a content index to indicate that the content item has been moved to the second caching device. A user device may then be redirected to request the content item directly from the second caching device. | 08-08-2013 |
20130212210 | RULE ENGINE MANAGER IN MEMORY DATA TRANSFERS - A rule engine manager in-memory data transfer system includes a rule engine manager cluster, a first memory cache coupled to the rule engine manager cluster, a data server cluster coupled to the rule engine manager cluster and a second memory cache coupled to the data server cluster. | 08-15-2013 |
20130238743 | Adaptive Private Network Asynchronous Distributed Shared Memory Services - A highly predictable quality shared distributed memory process is achieved using less-than-predictable public and private internet protocol networks as the means for communications within the processing interconnect. An adaptive private network (APN) service provides the ability for the distributed memory process to communicate data via an APN conduit service; to use high throughput paths by allocating bandwidth to higher quality paths and avoiding lower quality paths; to deliver reliability via fast retransmissions on single packet loss detection; to deliver reliability and timely communication through redundant transmissions, duplicating transmissions on a best path and on a path most independent from the best path; to lower latency via high resolution clock synchronized path monitoring and high latency path avoidance; to monitor packet loss and avoid loss prone paths; and to avoid congestion by use of high resolution clock synchronized congestion monitoring and avoidance. | 09-12-2013 |
20130246555 | CONTENT DELIVERY NETWORK CACHE GROUPING - One or more content delivery networks (CDNs) that deliver content objects for others are disclosed. Content is propagated to edge servers through hosting and/or caching. End user computers are directed to an edge server for delivery of a requested content object by a universal resource indicator (URI). When a particular edge server does not have a copy of the content object from the URI, information is passed to another server, the ancestor or parent server, to find the content object. There can be different parent servers designated for different URIs. The parent server looks for the content object and, if it is not found, will go to another server, the grandparent server, and so on up a hierarchy within the group. Eventually, the topmost server in the hierarchy goes to the origin server to find the content object. The origin server may be hosted in the CDN or at a content provider across the Internet. Once the content object is located in the hierarchical chain, it is passed back down the chain to the edge server for delivery. Optionally, the various servers in the chain may cache or host the content object as it is relayed. | 09-19-2013 |
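The parent-chain lookup in the entry above amounts to a recursive miss handler: on a cache miss, ask the parent, and optionally cache the object while relaying it back down. A minimal sketch, assuming each caching server simply knows its single parent (class and method names are illustrative, not from the patent):

```python
class CacheServer:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # None for the origin server
        self.store = {}       # uri -> content object

    def fetch(self, uri):
        """Return the object for uri, walking up the parent chain on a
        miss and caching the object on the way back down."""
        if uri in self.store:
            return self.store[uri]
        if self.parent is None:
            raise KeyError(uri)  # the origin must hold the object
        obj = self.parent.fetch(uri)
        self.store[uri] = obj    # optional caching while relaying
        return obj

# Example hierarchy: edge -> parent -> origin
origin = CacheServer("origin")
origin.store["/video.mp4"] = b"payload"
parent = CacheServer("parent", parent=origin)
edge = CacheServer("edge", parent=parent)
```

After `edge.fetch("/video.mp4")`, both the edge and the parent hold a copy, so subsequent requests are served without climbing the hierarchy.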
20130246556 | SYSTEM AND METHOD FOR SUPPORTING INTRA-NODE COMMUNICATION BASED ON A SHARED MEMORY QUEUE - A system and method can support intra-node communication based on a shared memory queue. The shared memory queue can be associated with a shared memory, to which one or more communication peers are attached. The shared memory queue operates to allocate one or more message buffers in the shared memory that contains a first message from a sender to a receiver, and can send the first message to the receiver by linking the one or more message buffers with another message queue. Optionally, a second message buffer may be created, and the message can be sent to the receiver by copying the message to the second message buffer and linking it with another message queue. Additionally, the shared memory queue operates to receive a second message from another sender by delinking one or more message buffers associated with said second message. | 09-19-2013 |
20130262616 | SYSTEM AND METHOD OF SHARING CONTENT BY USING PLURALITY OF STORAGES - A system and method of sharing content by using a plurality of storages is provided. A mobile communication terminal includes a storage information collecting unit collecting a plurality of pieces of storage information about the plurality of storages connected to the mobile communication terminal, a User Interface (UI) generating unit dividing the plurality of storages according to attributes that are previously configured, based on the plurality of pieces of storage information, and generating a storage share setting screen with respect to the plurality of storages, a display unit displaying the storage share setting screen, and a storage setting unit activating sharing of content stored in the plurality of storages, for each of the plurality of storages. | 10-03-2013 |
20130290470 | VIRTUAL STORAGE APPLIANCE GATEWAY - Methods and apparatuses for operating a storage system are provided. In one example, a storage system includes a storage server and a virtual storage appliance (VSA) implemented in a virtual machine. The storage server provides access to a first shared namespace of data. The VSA is operatively connected to the storage server system over a network connection and provides access to a second shared namespace of data over the network connection. The second shared namespace is defined by a policy and includes a subset of the first shared namespace. The VSA also replicates data of a third shared namespace of data at the VSA making the third shared namespace available at the VSA when the network connection is unavailable. The third namespace is defined by the policy and includes a subset of the second shared namespace. | 10-31-2013 |
20130290471 | MANAGING TRANSFER OF DATA FROM A SOURCE TO A DESTINATION MACHINE CLUSTER - In a method for managing transfer of data from a source machine cluster to a destination machine cluster, information relevant to the transfer of data from the source machine cluster to the destination machine cluster is accessed. In addition, a data transfer operation that substantially optimizes the transfer of the data based upon the accessed information is determined. Furthermore, the determined data transfer operation is implemented to transfer the data from the source machine cluster to the destination machine cluster. | 10-31-2013 |
20130290472 | MODIFICATION OF SMALL COMPUTER SYSTEM INTERFACE COMMANDS TO EXCHANGE DATA WITH A NETWORKED STORAGE DEVICE USING AT ATTACHMENT OVER ETHERNET - A process executed by a computing device uses commands having a first format to exchange data through a network with a storage device configured to execute commands having a second format. A storage device controller identifies a command type associated with a command received from the process and identifies one or more physical memory addresses associated with the command. The storage device controller identifies a command having a second format associated with the received command and generates a network request including the command having the second format, the one or more physical memory addresses, a device identifier associated with the storage device and a tag. The network request is transmitted through a network to the storage device which executes the command having the second format. For example, an AoE request including an ATA command is generated from a received SCSI command. | 10-31-2013 |
20130297719 | PORT POOLING - In one embodiment, methods and systems for port pooling are described. An interface may communicate with at least one physical server. The at least one physical server may host a plurality of virtual servers and be connectable via a plurality of gateway ports to a storage area network (SAN). A virtual server manager is configured to arrange the plurality of gateway ports in a plurality of port pools; define a virtual server group including a plurality of virtual servers; associate each virtual server with one or more port pools, the one or more port pools defining the gateway ports available for access by that virtual server; and provide configuration instructions to allow each virtual server to communicate with the SAN through its available gateway ports. | 11-07-2013 |
20130311595 | REAL-TIME CONTEXTUAL OVERLAYS FOR LIVE STREAMS - A system and method for contextualizing and live-updating overlay data for live media streams is disclosed herein. Overlays can be generated in real-time and in response to live events. The overlays can be transmitted to a recipient of a live media stream independently of the live media stream. Overlay data can thus be modified and added to overlays in near-real-time as events occur during a live broadcast without having to modify the live media stream. The overlays can also be contextualized to provide relevant information and context for the live media stream recipient. Such context can include providing a history of the broadcast, and other pertinent information such as incorporating location-based information, demographic information, and other information associated with potential viewers. | 11-21-2013 |
20130332558 | USING LOGICAL BLOCK ADDRESSES WITH GENERATION NUMBERS AS DATA FINGERPRINTS FOR NETWORK DEDUPLICATION - The technique introduced here involves using a block address and a corresponding generation number as a “fingerprint” to uniquely identify a sequence of data within a given storage domain. Each block address has an associated generation number which indicates the number of times that data at that block address has been modified. This technique can be employed, for example, to determine whether a given storage server already has the data, and to avoid sending the data to that storage server over a network if it already has the data. It can also be employed to maintain cache coherency among multiple storage nodes. | 12-12-2013 |
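The fingerprint scheme in the entry above can be reduced to a map from block address to generation number on each side: a block's data only needs to cross the network when the receiver's generation for that address differs. A minimal sketch under that reading (function and variable names are assumptions):

```python
def blocks_to_send(sender_map, receiver_map):
    """sender_map / receiver_map: block_address -> generation number.
    The (address, generation) pair acts as the fingerprint; return the
    addresses whose fingerprints differ, i.e. the only blocks whose
    data must actually be transmitted."""
    return sorted(
        addr for addr, gen in sender_map.items()
        if receiver_map.get(addr) != gen
    )
```

For example, if the receiver already holds address 1 at generation 0 but is one generation behind on address 2 and lacks address 3 entirely, only blocks 2 and 3 are sent.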
20130339472 | METHODS AND SYSTEMS FOR NOTIFYING A SERVER WITH CACHE INFORMATION AND FOR SERVING RESOURCES BASED ON IT - The present invention relates to the notification of a server device with the availability of resources in cache memories of a client device and to the serving of digital resources in such a client-server communication system. The notifying method comprises: obtaining a first list of resources available in the cache memories of the client device; filtering the first list according to filtering criteria relating to a resource parameter, to obtain a filtered list of fewer resources available in the client device or splitting the first list according to splitting criteria relating to a resource parameter, to obtain a plurality of sub-lists of resources available in the client device; and notifying the server device with data structures representing the filtered list or sub-lists of resources. | 12-19-2013 |
20130346540 | Storing and Moving Data in a Distributed Storage System - A system, computer-readable storage medium storing at least one program, and a computer-implemented method for identifying a storage group in a distributed storage system into which data is to be stored is presented. A data structure including information relating to storage groups in a distributed storage system is maintained, where a respective entry in the data structure for a respective storage group includes placement metrics for the respective storage group. A request to identify a storage group into which data is to be stored is received from a computer system. The data structure is used to determine an identifier for a storage group whose placement metrics satisfy a selection criterion. The identifier for the storage group whose placement metrics satisfy the selection criterion is returned to the computer system. | 12-26-2013 |
20140006545 | Systems and Methods for Providing Replicated Data from Memories to Processing Clients | 01-02-2014 |
20140012940 | Systems, Methods and Apparatus for a Virtual Machine Cache - A virtual machine cache provides for maintaining a working set of the cache during a transfer between virtual machine hosts. In response to the transfer, a previous host retains cache data of the virtual machine, which is provided to the new host of the virtual machine. The cache data may be transferred via a network transfer. | 01-09-2014 |
20140040417 | STORING A STREAM OF DATA IN A DISPERSED STORAGE NETWORK - A processing module of a computing device alternatingly sends a stream of data to a first or second processing device. When receiving the stream of data, the first processing device performs a first portion of a dispersed storage error encoding function on the received stream of data to produce a plurality of sets of a threshold number of slices and writes the plurality of sets of the threshold number of slices into first memory of a dispersed storage network (DSN). When not receiving the stream of data, the first processing device reads the plurality of sets of the threshold number of slices from the first memory, performs a second portion of the dispersed storage error encoding function using the plurality of sets of the threshold number of slices to produce a plurality of sets of redundancy slices, and writes the plurality of sets of redundancy slices into second DSN memory. | 02-06-2014 |
20140052813 | Method and system for identifying storage device - The disclosure provides a method for identifying a storage device, which includes: obtaining, by a master control server, disk information of a storage device through a storage server; determining, by the master control server, that there is a storage device matching a device identifier according to the disk information, and entering a monitoring state; otherwise, creating a device identifier for the storage device and entering the monitoring state. The disclosure also provides a system for identifying a storage device. Through the method and the system, the storage devices are uniformly identified so as to facilitate unified management of the storage devices. | 02-20-2014 |
20140067991 | DISTRIBUTED STORAGE - Systems and methods are described for providing a distributed storage system. A distributed storage system includes a control server coupled to a network, the control server maintaining a policy, a host directory, and a file directory, and a plurality of hosts coupled to the network, each of the plurality of hosts containing a storage device and an agent configured to communicate with the control server, wherein each of the plurality of hosts is configured to contribute a portion of the storage device thereof to collectively form a distributed virtual disk configured to store files, wherein the portion of the storage device on each of the plurality of hosts is configured based on the policy, wherein the host directory contains information about the plurality of the hosts on the distributed storage system, and wherein the file directory contains information about the files stored on the distributed storage system. | 03-06-2014 |
20140067992 | COMPUTER PRODUCT, COMMUNICATION NODE, AND TRANSMISSION CONTROL METHOD - A computer-readable recording medium stores a program causing a first node to execute a process including identifying among nodes in a system, a second node that has data identical to data in the first node; comparing a first effect level representing a degree to which performance of the system is affected by communication between the first node and a transmission destination node of the data, and a second effect level representing a degree to which the performance is affected by communication between the second node and the transmission destination node, by referring to a storage device that stores effect levels respectively representing a degree to which the performance of the system is affected by communication between the transmission destination node and each node among the nodes; and transmitting based on a comparison result, the data to the transmission destination node by controlling a communicating unit that communicates with the nodes. | 03-06-2014 |
20140115091 | MACHINE-IMPLEMENTED FILE SHARING METHOD FOR NETWORK STORAGE SYSTEM - A machine-implemented file sharing method for a network storage system is provided. The network storage system at least includes a first storage device, a second storage device and a network cloud. The first storage device and second storage device are in communication with the network cloud. The machine-implemented file sharing method includes the following steps. Firstly, a state of a target file of the second storage device to be retrieved by a user of the first storage device is marked as a freeze state. If it is determined that the user of the second storage device is to modify the target file, a file access expediting operation is performed on the target file and a file access notice signal is issued to the user of the first storage device to expedite the retrieval of the target file. | 04-24-2014 |
20140122639 | CONCLUSIVE WRITE OPERATION DISPERSED STORAGE NETWORK FRAME - A method begins by a processing module generating a payload of a dispersed storage network frame regarding a conclusive write request operation by generating one or more slice name fields of a payload to include one or more slice names corresponding to one or more write commit responses of a write request operation, wherein the conclusive write request operation is a conclusive phase of the write request operation. The method continues with the processing module generating one or more slice revision numbering fields of the payload, wherein each slice revision numbering field includes a slice revision number corresponding to an associated slice name of the one or more slice names. The method continues with the processing module generating a protocol header of the DSN frame by generating a payload length field of the protocol header to include a payload length and generating remaining fields of the protocol header. | 05-01-2014 |
20140136647 | ROUTER AND OPERATING METHOD THEREOF - An exemplary embodiment provides a router including: a number calculating unit configured to count an accumulated request number corresponding to a plurality of previously requested contents and a request number of a request signal for an arbitrary content which is currently input; a probability calculating unit configured to calculate an arbitrary probability value for the arbitrary content based on the accumulated request number and the request number; and a policy determining unit configured to determine whether to store the arbitrary content which is provided to an arbitrary terminal which transmits the request signal from an arbitrary content server, based on the arbitrary probability value and a set reference probability value. | 05-15-2014 |
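One plausible reading of the probability calculation in the entry above is that the arbitrary probability value is the fraction of all observed requests that ask for the given content, and the router caches the content when that fraction reaches the set reference probability. A hedged sketch under that assumption:

```python
def should_cache(request_number, accumulated_request_number,
                 reference_probability):
    """Decide whether the router stores the content: compute the
    content's request frequency relative to all previously counted
    requests and compare it against the reference probability."""
    if accumulated_request_number == 0:
        return False  # no history yet; nothing to base the decision on
    probability = request_number / accumulated_request_number
    return probability >= reference_probability
```

With a reference probability of 0.4, content accounting for 5 of 10 observed requests would be cached, while content accounting for 1 of 10 would not.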
20140149535 | METHOD FOR TRANSMITTING DATA AND MOBILE STORAGE APPARATUS USING THE SAME - The disclosure relates to a method for transmitting data that is applicable to a mobile storage apparatus. The mobile storage apparatus allows multiple electronic devices to wirelessly access the files stored in the apparatus. The apparatus determines a scheme to segment the files to be sent according to the files' types, sizes, and the order of the connected devices. The files are segmented into multiple sections before transmission. A power management unit may turn off a communication unit within the apparatus when the transmission procedure enters an idle state. When the jobs in the electronic devices have been completed, the communication unit is turned on again for transmitting the next segment, until the files are completely transmitted. The invention achieves efficient transmission in a power-saving mode. | 05-29-2014 |
20140149536 | CONSISTENT DISTRIBUTED STORAGE COMMUNICATION PROTOCOL SEMANTICS IN A CLUSTERED STORAGE SYSTEM - Consistent distributed storage communication protocol semantics, such as SCSI target semantics, in a SAN-attached clustered storage system are disclosed. The system includes a mechanism for presenting a single distributed logical unit, comprising one or more logical sub-units, as a single logical unit of storage to a host system by associating each of the logical sub-units that make up the single distributed logical unit with a single host visible identifier that corresponds to the single distributed logical unit. The system further includes a mechanism to maintain consistent context information for each of the logical sub-units such that the logical sub-units are not visible to a host system as separate entities from the single distributed logical unit. | 05-29-2014 |
20140156780 | Parallel, Side-Effect Based DNS Pre-Caching - Embodiments of the present invention include methods and systems for domain name system (DNS) pre-caching. A method for DNS pre-caching is provided. The method includes receiving uniform resource locator (URL) hostnames for DNS pre-fetch resolution prior to a user hostname request for any of the URL hostnames. The method also includes making a DNS lookup call for at least one of the URL hostnames that are not cached by a DNS cache prior to the user hostname request. The method further includes discarding at least one IP address provided by a DNS resolver for the URL hostnames, wherein a resolution result for at least one of the URL hostnames is cached in the DNS cache in preparation for the user hostname request. A system for DNS pre-caching is provided. The system includes a renderer, an asynchronous DNS pre-fetcher and a hostname table. | 06-05-2014 |
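The pre-fetch flow in the entry above resolves hostnames before the user asks for them, keeps only the fact that a resolution result is cached, and discards the returned IP addresses (the useful work happens as a side effect in the underlying DNS cache). A minimal sketch, where the `resolve` callable and the dict-based cache are stand-in assumptions:

```python
def prefetch_hostnames(hostnames, dns_cache, resolve):
    """Resolve each hostname not already cached; store only a marker
    in the cache -- the IP addresses themselves are discarded."""
    for host in hostnames:
        if host in dns_cache:
            continue            # already warm, skip the lookup
        resolve(host)           # side effect: the DNS cache is warmed
        dns_cache[host] = True  # remember that a result is cached

# Example: a stub resolver that merely records which lookups happened.
cache = {}
looked_up = []
prefetch_hostnames(["a.example", "b.example", "a.example"],
                   cache, looked_up.append)
```

The duplicate `"a.example"` triggers only one lookup, mirroring the patent's goal of skipping hostnames already held by the DNS cache.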
20140164552 | METHOD OF CACHING CONTENTS BY NODE AND METHOD OF TRANSMITTING CONTENTS BY CONTENTS PROVIDER IN A CONTENT CENTRIC NETWORK - A method of caching a content in a node in a content-centric network, includes receiving, from a content requester, a content request packet requesting a first chunk of the content, and setting a mark bit indicating whether the node is to cache the first chunk when the first chunk is received. The method further includes receiving, from a content provider, a data packet including the first chunk in response to transmitting the content request packet to the content provider, and caching the first chunk. | 06-12-2014 |
20140181236 | SYSTEMS AND APPARATUSES FOR AGGREGATING NODES TO FORM AN AGGREGATED VIRTUAL STORAGE FOR A VIRTUALIZED DESKTOP ENVIRONMENT - Embodiments of the invention relate generally to software, data storage, and virtualized computing and processing resources. More specifically, systems and apparatuses are described for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment. In one embodiment, a virtual storage system includes servers including processors and memories, and an aggregated virtual storage including the memories, each of the memories being associated with a corresponding server. Also included is a storage aggregator processor coupled to a memory including executable instructions to generate a data structure for storage in each memory in an associated server in the servers, each of the data structures being configured to store a reference to duplicative data stored in a first number of servers in the servers. The duplicative data provides redundancy when a second number of servers, or fewer, in the servers are inaccessible. | 06-26-2014 |
20140181237 | SERVER AND METHOD FOR STORING DATA - In a method for storing data, the data received from a client is stored into a first storage node. A summary list of the data is created and stored into a storage unit, and summary information of the data is recorded in the summary list. A feedback message indicating whether the data has been successfully stored into the first storage node is transmitted to the client. The data that has not been successfully stored into each corresponding storage node is read from the first storage node and copied to a next storage node. The summary information of the data in the summary list is amended. | 06-26-2014 |
20140189043 | NETWORK DATA STORAGE SYSTEM, APPARATUS AND METHOD THEREOF - A network data storage system is provided. The system includes a data transmitting end, a network data storage apparatus and a notifier. The data transmitting end transmits an external data. The network data storage apparatus is connected to the data transmitting end through an Internet. The notifier is connected to the network data storage apparatus. When the external data is received by the network data storage apparatus, the network data storage apparatus transmits a notification signal to the notifier, and the notifier generates a notification when receiving the notification signal. | 07-03-2014 |
20140215004 | VIRTUAL STORAGE SYSTEM AND METHOD OF SHARING ELECTRONIC DOCUMENTS WITHIN THE VIRTUAL STORAGE SYSTEM - A virtual storage system and a method of sharing electronic documents within a virtual storage system that includes at least one processor that processes a plurality of electronic documents received from an external system, receives from the user computing device, a request for sharing an electronic document of the plurality of electronic documents, and input information including download information and expiration information corresponding to the electronic document, as input by a user, and creates at least one share link corresponding to the electronic document based on the input information, for sharing the electronic document with a recipient. The virtual storage system further includes a plurality of redundant physical storage devices in data communication with the at least one processor each storing the electronic documents and the at least one share link created. | 07-31-2014 |
20140237069 | ASSIGNING PRE-EXISTING PROCESSES TO SELECT SETS OF NON-UNIFORM MEMORY ACCESS (NUMA) ALIGNED RESOURCES - A system and a method are disclosed for assigning pre-existing processes to select sets of non-uniform memory access (NUMA) aligned resources. In one example, the method includes receiving, by a processing device, a report indicating a measure of resources available on each respective Non-Uniform Memory Access (NUMA) node of a plurality of NUMA nodes in a system, and a measure of resources consumed by a first process being executed on a first NUMA node of the plurality of NUMA nodes in the system, determining that the first process being executed requires an additional resource, determining whether the first NUMA node has capacity for the additional resource, when the first NUMA node does not have the capacity for the additional resource, identifying a second NUMA node for the first process in view of the report, and binding, by the processing device, the first process to the second NUMA node. | 08-21-2014 |
20140280687 | Disaggregated Server Architecture for Data Centers - A system comprising a unified interconnect network, a plurality of process memory modules, and a plurality of processor modules configured to share access to the memory modules via the unified interconnect network. Also disclosed is a method comprising communicating data between a plurality of processor modules and a plurality of shared resource pools via a unified interconnect network, wherein the communications comprise a protocol that is common to all resource pools, and wherein each resource pool comprises a plurality of resource modules each configured to perform a common function. Also disclosed is an apparatus comprising a network interface controller (NIC) module configured to receive data from a plurality of processor modules via a unified interconnect network, and provide core network connectivity to the processor modules. | 09-18-2014 |
20140280688 | Methods And Systems For Dynamic Data Management - Methods and systems for managing data are disclosed. One method can comprise storing first data locally relative to a user device and storing second data remotely relative to the user device. The first data and the second data can relate to the same content. The method can also comprise generating a manifest comprising location information relating to the first data and the second data and receiving a request for transmission of one or more of the first data and the second data based upon the manifest. | 09-18-2014 |
20140280689 | Content Centric Networking - A caching system is provided. The computing infrastructure runs off of a centralized storage, and data stored on the centralized store can also be retrieved from nearby machines that are part of the local infrastructure and have recently accessed the centralized store. Address-to-digest mappings are used to find an index of the desired data block. That digest is then used to determine where the data block is being cached. In some embodiments, the digest is hashed and the hash of the digest is used to determine where the data block is being cached. The data block is accessed from the cache using its digest; therefore, different addresses may result in the retrieval of the same data block. For example, in a virtual machine environment, two different nodes may retrieve the same data block using different addresses. | 09-18-2014 |
20140289358 | Distributed Storage System - In one embodiment, a first computing device receives a write request and data from a second computing device; iteratively attempts to write the data until a copy of the data is successfully written to each and every storage node belonging to a storage volume; and transmits a volume identifier of the storage volume and a data identifier assigned to the data to the second computing device. In one embodiment, a first computing device receives a read request and a volume identifier and a data identifier from a second computing device; accesses a cache to select the storage volume identified by the volume identifier; iteratively attempts to read data identified by the data identifier until a copy of the data is successfully read from a storage node belonging to the selected storage volume; and transmits the copy of the data to the second computing device. | 09-25-2014 |
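The write path in the entry above iterates until every storage node in the volume holds a copy. A sketch of that retry loop, assuming each node exposes a `write()` that may fail transiently; the `max_rounds` bound and the `FlakyNode` test double are added assumptions, not part of the abstract:

```python
class FlakyNode:
    """Test double for a storage node whose write() fails a few
    times before succeeding (simulating transient errors)."""
    def __init__(self, fail_times=0):
        self.fail_times = fail_times
        self.stored = {}

    def write(self, data_id, data):
        if self.fail_times > 0:
            self.fail_times -= 1
            return False
        self.stored[data_id] = data
        return True

def replicate_write(nodes, data_id, data, max_rounds=10):
    """Iteratively attempt the write until every node in the storage
    volume holds a copy; True on success, False if retries run out."""
    pending = set(nodes)
    for _ in range(max_rounds):
        for node in list(pending):
            if node.write(data_id, data):
                pending.discard(node)
        if not pending:
            return True
    return False
```

On success the first computing device would then return the volume identifier and assigned data identifier to the requester, as the abstract describes.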
20140304360 | DISTRIBUTING MULTI-MEDIA CONTENT TO A PLURALITY OF POTENTIAL ACCESSING DEVICES - A method begins by receiving a first sub-set of encoded data slices of a set of encoded data slices. The first sub-set of encoded data slices includes less than a decode threshold number of encoded data slices. The method continues by sending accessing information regarding access to the multi-media content subsequent to receiving the first sub-set of encoded data slices. The method continues by receiving, as a favorable response to the accessing information, at least one of the encoded data slices of the second sub-set of encoded data slices such that at least the decode threshold number of encoded data slices have been received from the set of encoded data slices. The method continues by decoding the at least the decode threshold number of encoded data slices to recover the data segment. | 10-09-2014 |
20140325015 | COMPUTER SYSTEM AND ITS RENEWAL METHOD - A computer system including a management computer for managing the entire system, an integral apparatus, and a high-level connecting device for connecting the management computer and the integral apparatus is designed so that the management computer retains integral apparatus internal configuration information, configuration information about an integral apparatus to be introduced, that indicates the configuration of the integral apparatus that may possibly be introduced to the system, and lifetime information indicating lifetime of the integral apparatus; obtains connectivity guarantee information indicating whether connectivity between the computer and the storage apparatus is guaranteed or not; selects an integral apparatus to be removed from the system by referring to the lifetime information; selects an integral apparatus to be introduced to the system by referring to the integral apparatus internal configuration information, the configuration information about the integral apparatus to be introduced, and the connectivity guarantee information. | 10-30-2014 |
20140359050 | MODULAR ARCHITECTURE FOR EXTREME-SCALE DISTRIBUTED PROCESSING APPLICATIONS - Embodiments of the present invention relate to a new data center architecture that provides for efficient processing in distributed analytics applications. In one embodiment, a distributed processing node is provided. The node comprises a plurality of subnodes. Each subnode includes at least one processor core operatively connected to a memory. A first interconnect operatively connects each of the plurality of subnodes within the node. A second interconnect operably connects each of the plurality of subnodes to a storage. A process runs on a first of the plurality of subnodes, the process being operative to retrieve data from the memory of the first subnode. The process interrogates the memory of the first subnode for requested data. If the requested data is not found in the memory of the first subnode, the process interrogates the memory of at least one other subnode of the plurality of subnodes via the first interconnect. If the requested data is found in the memory of the other subnode, the process copies the requested data to the memory of the first subnode. If the requested data is not found in the memory of the first subnode or the memories of at least one subnode of the plurality of subnodes, the process interrogates the storage via the second interconnect. | 12-04-2014 |
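The three-level lookup in the entry above — local subnode memory, then peer subnode memories over the first interconnect, then storage over the second — can be sketched as a simple fall-through search that copies remote hits into local memory. Dicts stand in for memories and storage; all names are illustrative assumptions.

```python
def find_data(key, local_mem, peer_mems, storage):
    """Return (value, where_found), interrogating local memory first,
    then peer subnode memories, then shared storage; peer and storage
    hits are copied into local memory, mirroring the copy step above."""
    if key in local_mem:
        return local_mem[key], "local"
    for mem in peer_mems:
        if key in mem:
            local_mem[key] = mem[key]  # copy to first subnode's memory
            return local_mem[key], "peer"
    local_mem[key] = storage[key]      # falls through to shared storage
    return local_mem[key], "storage"
```

A second lookup of the same key then hits locally, which is the payoff of the copy step.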
20140372552 | MOBILE APPLICATION TRAFFIC OPTIMIZATION - A system with distributed proxy for reducing traffic to satisfy data requests made in a wireless network is provided. The system includes a mobile device having a local proxy for intercepting a data request made via the mobile device and a proxy server coupled to the mobile device and a content server to which the data request is directed. The proxy server is able to communicate with the local proxy and the local proxy forwards the data request to the proxy server for transmission to the content server for a response to the data request. The proxy server sends the data request to the content server independent of activities on the local proxy and notifies the local proxy when different content on the content server is detected for the data request. | 12-18-2014 |
20150012609 | DATA REPLICATION NETWORK TRAFFIC COMPRESSION - An apparatus and method for improving effective system throughput for replication of data over a network in a storage computing environment by using software components to perform data compression are disclosed. Software compression support is determined between applications in a data storage computing environment. If supported, compression parameters are negotiated for a communication session between storage systems over a network. Effective system throughput is improved since the size of a compressed lost data packet is less than the size of an uncompressed data packet when a lost packet needs to be retransmitted in a transmission window. | 01-08-2015 |
20150019680 | Systems and Methods for Consistent Hashing Using Multiple Hash Rings - Systems and methods for consistent hashing using multiple hash rings. An example method may comprise: assigning two or more tokens to each node of a plurality of nodes, the two or more tokens belonging to two or more distinct cyclic sequences of tokens, wherein each node is assigned a token within each cyclic sequence; receiving a request comprising an attribute of an object; determining, based on the attribute, a sequence identifier and an object position, the sequence identifier identifying a sequence of the two or more cyclic sequences of tokens, the object position identifying a position of the object within the sequence; and identifying, based on the sequence identifier and the object position, a node for servicing the request. | 01-15-2015 |
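The multi-ring lookup described in the entry above can be sketched as follows. The class name, the MD5-based token positions, and the way the attribute is split into a ring identifier and an object position are illustrative assumptions, not taken from the filing.

```python
import hashlib
from bisect import bisect

def _pos(value: str) -> int:
    """Map a string to a position on a 2**32-slot ring (illustrative hash choice)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

class MultiRing:
    """Consistent hashing over several independent rings.

    Each node is assigned one token per ring; an object's attribute
    selects both a ring and a position on that ring.
    """

    def __init__(self, nodes, num_rings=2):
        self.num_rings = num_rings
        # rings[i] is a sorted list of (token_position, node) pairs.
        self.rings = []
        for i in range(num_rings):
            ring = sorted((_pos(f"{node}#ring{i}"), node) for node in nodes)
            self.rings.append(ring)

    def node_for(self, attribute: str):
        # Derive a sequence identifier and an object position from the attribute.
        ring_id = _pos("ring:" + attribute) % self.num_rings
        obj_pos = _pos(attribute)
        ring = self.rings[ring_id]
        # Walk clockwise to the first token at or after the object position.
        idx = bisect(ring, (obj_pos,)) % len(ring)
        return ring[idx][1]
```

Usage: `MultiRing(["a", "b", "c"]).node_for("object-42")` deterministically returns one of the three nodes; because each node holds a token on every ring, removing a node only reassigns the objects that mapped to its tokens.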
20150026290 | METHOD FOR MANAGING CLOUD HARD DISKS - The present invention provides a method for managing cloud hard disks. A plurality of hard disk spaces are first registered to a client unit. Thereby, when the client unit accesses at least a personal datum, the client unit can select a hard disk space according to the space required by the accessed datum without checking the plurality of hard disk spaces one by one. Accordingly, it becomes more convenient for users to use hard disk spaces. | 01-22-2015 |
20150032839 | SYSTEMS AND METHODS FOR MANAGING STORAGE NETWORK DEVICES - Systems and methods for managing storage entities in a storage network are provided. Embodiments may provide a group of management devices to manage a plurality of storage entities in the storage network. In some instances, a storage entity hierarchy for the plurality of storage entities may be identified. At least one of a load or a health associated with a management device of the group of management devices may, in embodiments, be determined. In some embodiments, the plurality of storage entities may be managed in accordance with the identified storage entity hierarchy and based, at least in part, on the determined at least one of a load or a health. | 01-29-2015 |
20150039716 | Management of a Networked Storage System Through a Storage Area Network - Management of a networked storage system through a storage area network (SAN). The storage system includes a storage host, a server, and a management host. The storage host includes a plurality of storage devices. The server is configured to access the storage devices of the storage host via the SAN. The server is also configured to transmit attribute information via the SAN, where the attribute information describes at least one attribute of the server. The management host is configured to receive the attribute information and to determine a desired configuration change to the storage system based on the attribute information. The desired configuration change affects access by the server to the storage devices of the storage host via the SAN. | 02-05-2015 |
20150039717 | CACHE MIGRATION MANAGEMENT IN A VIRTUALIZED DISTRIBUTED COMPUTING SYSTEM - In accordance with one aspect of the present description, in response to detecting an operation by a host relating to migration of input/output operations from one host to another, a cache server of a storage controller transmits to a target cache client of the target host a cache map of the source cache of the source host, wherein the cache map identifies locations of a portion of the storage cached in the source cache. In response, the cache client of the target host may populate the target cache of the target host with data from the locations of the portion of the storage, as identified by the cache map transmitted by the cache server, which may reduce cache warming time. Other features or advantages may be realized in addition to or instead of those described herein, depending upon the particular application. | 02-05-2015 |
20150074221 | DNS Server Arrangement And Method - The present invention relates to a Domain Name System (DNS) server and a method for resolving DNS queries from a number of clients. The DNS server comprises multiple virtual DNS server instances servicing different clients. The DNS server further comprises a shared cache for caching records which indicate answers to resolved DNS queries. The shared cache is shared between a set of virtual DNS server instances. The virtual DNS server instances that share the shared cache are able to cache DNS query results in the shared cache as well as resolve a DNS query by retrieving a cached record corresponding to the DNS query from the shared cache. Thus it is possible for a virtual DNS server instance to make use of DNS query results obtained by other virtual DNS server instances. | 03-12-2015 |
20150074222 | METHOD AND APPARATUS FOR LOAD BALANCING AND DYNAMIC SCALING FOR LOW DELAY TWO-TIER DISTRIBUTED CACHE STORAGE SYSTEM - A method and apparatus is disclosed herein for load balancing and dynamic scaling for a storage system. In one embodiment, an apparatus comprises a load balancer to direct read requests for objects, received from one or more clients, to at least one of one or more cache nodes based on a global ranking of objects, where each cache node serves the object to a requesting client from its local storage in response to a cache hit or downloads the object from the persistent storage and serves the object to the requesting client in response to a cache miss, and a cache scaler communicably coupled to the load balancer to periodically adjust a number of cache nodes that are active in a cache tier based on performance statistics measured by one or more cache nodes in the cache tier. | 03-12-2015 |
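The two components in the entry above — a load balancer that maps globally ranked objects onto the active cache tier, and a cache scaler that periodically resizes the tier from measured statistics — might be sketched as below. The modulo placement rule, the hit-rate thresholds, and the function names are illustrative assumptions, not taken from the filing.

```python
def route(object_rank: int, num_active: int, num_total: int) -> int:
    """Map a globally ranked object to an active cache node index.

    Spreading ranked objects round-robin over the active tier is an
    illustrative placement rule, not the filing's actual policy.
    """
    return object_rank % max(1, min(num_active, num_total))

def scale(num_active: int, hit_rate: float,
          low: float = 0.6, high: float = 0.9, max_nodes: int = 16) -> int:
    """Periodically adjust the number of active cache nodes.

    Thresholds are illustrative: grow the tier when too many requests
    miss and fall through to persistent storage, shrink it when hits
    are plentiful enough that fewer nodes would suffice.
    """
    if hit_rate < low and num_active < max_nodes:
        return num_active + 1   # too many misses: add a cache node
    if hit_rate > high and num_active > 1:
        return num_active - 1   # cache is over-provisioned: remove one
    return num_active
```

In this sketch the scaler would run on a timer, feeding each round's measured hit rate back into `scale` to converge on a tier size matching the workload.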
20150095445 | Dynamic Path Selection Policy for Multipathing in a Virtualized Environment - Particular embodiments change a current storage I/O path used by a host computer to access networked storage to an alternative storage I/O path by considering traffic load at a networked switch in the current storage I/O path. The host computer transmits a request to the networked switch in the current storage I/O path to provide network load information currently experienced by the networked switch. After receiving the network load information from the networked switch, the host computer evaluates whether the networked switch is overloaded based on the received network load information. Based on the evaluation, the host computer selects a new alternative storage I/O path to the networked storage that does not include the networked switch, and then forwards future storage I/O communications to the networked storage using the new alternative storage I/O path. | 04-02-2015 |
20150095446 | SYSTEM AND METHOD FOR INCREASING PHYSICAL MEMORY PAGE SHARING BY WORKLOADS - System and method for increasing physical memory page sharing by workloads executing on different host computing systems are described. In one embodiment, workloads executing on different host computing systems that access physical memory pages having identical contents are identified. Further, migration to consolidate the identified workloads on a single host computing system such that the physical memory pages can be shared using a page sharing mechanism is recommended. | 04-02-2015 |
20150095447 | SERVING METHOD OF CACHE SERVER, CACHE SERVER, AND SYSTEM - A serving method of a cache server relates to the field of communications, and can reduce bandwidth consumption of an upstream network and alleviate network pressure. The method includes: receiving first request information sent by multiple user equipments, where the first request information indicates data separately required by the multiple user equipments and request points for the data separately required; if it is determined that same data is indicated in the first request information sent by at least two user equipments among the multiple user equipments and the same data has not been cached in the cache server, selecting one request point from request points falling within a preset window; and sending second request information to a source server, where the second request information indicates the uncached data and the selected request point. | 04-02-2015 |
20150113090 | SELECTING A PRIMARY STORAGE DEVICE - In a method for determining a primary storage device and a secondary storage device for copies of data, one or more processors determine metrics data for at least two storage devices in a computing environment. The one or more processors adjust the metrics data. The one or more processors determine an I/O throughput value based on the adjusted metrics data for each of the at least two storage devices. The one or more processors compare the determined I/O throughput values for each of the at least two storage devices. The one or more processors select a storage device of the at least two storage devices with the lowest determined I/O throughput as a primary storage device. | 04-23-2015 |
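The selection step in the entry above — adjust per-device metrics, derive an I/O throughput value for each device, and pick the device with the lowest value as primary — could look roughly like this. The metric field names and the scalar adjustment factor are assumptions for illustration; the filing does not specify them.

```python
def select_primary(devices: dict, adjustment: float = 1.0):
    """Pick the storage device with the lowest adjusted I/O throughput as primary.

    `devices` maps a device name to its raw metrics; the read/write
    fields and the multiplicative adjustment are illustrative only.
    Returns (primary_name, list_of_secondary_names).
    """
    # Determine an I/O throughput value from the adjusted metrics data.
    throughput = {
        name: (m["read_mbps"] + m["write_mbps"]) * adjustment
        for name, m in devices.items()
    }
    # Compare the values and select the device with the lowest throughput.
    primary = min(throughput, key=throughput.get)
    secondaries = [name for name in devices if name != primary]
    return primary, secondaries
```

Choosing the least-utilized device as primary (as the abstract describes) leaves the most headroom for the primary copy's I/O; the remaining devices hold the secondary copies.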
20150113091 | MASTERLESS CACHE REPLICATION - In an example of masterless cache replication, a processor of a server of a plurality of servers hosting a distributed application can receive a local cache event for a local data item stored in an application cache of the server. The processor can determine whether the local cache event is from another server. The processor can also determine whether a remote cache event of the other server is different from the local cache event and whether the local cache event is in conflict with at least one other cache event for the local data item. The processor can also determine whether the local cache event has a higher priority over the at least one other cache event and direct performance of the local cache event amongst the plurality of servers. | 04-23-2015 |
20150113092 | METHOD AND APPARATUS FOR DISTRIBUTED ENTERPRISE DATA PATTERN RECOGNITION - An apparatus for accessing data in an enterprise data storage system. The apparatus includes memory for storing data, a storage controller, a secure hypervisor, and an interface. The storage controller is coupled to the memory and is configured for managing data stored in the memory. The controller is also configured to receive a command from a client device to access specified data in the memory. The secure virtualized hypervisor within the memory is configured for deploying an operating system of the storage controller for purposes of secure operation by the storage controller. The interface is configured for communicating with the storage controller and initiates the storage controller to perform the command on the specified data that is fetched into the secure virtualized hypervisor, wherein results of the command are transmitted over a network to the client device. | 04-23-2015 |
20150127767 | RESOLVING CACHE LOOKUP OF LARGE PAGES WITH VARIABLE GRANULARITY - A method, system, and computer program product for resolving cache lookup of large pages with variable granularity are provided in the illustrative embodiments. A number of unused bits in an available number of bits is identified. The available number of bits is configured to address a page of data in memory, wherein the page exceeding a threshold size, and the page comprising a set of parts. The unused bits are mapped to the plurality of parts such that a value of the unused bits corresponds to existence of a subset of the set of parts in a memory. A virtual address is translated to a physical address of a requested part in the set of parts. A determination is made, using the unused bits, whether the requested part exists in the memory. | 05-07-2015 |
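The mapping in the entry above — spare address bits recording which parts of a large page are resident, so a lookup can test a bit instead of probing memory — can be illustrated with a small bitmask sketch. The part count and the bit layout are illustrative assumptions, not the filing's actual encoding.

```python
class PartPresence:
    """Track which parts of a large page are resident using otherwise-unused bits.

    Bit i of `presence_bits` is set when part i of the page exists in
    memory; the choice of 4 parts here is an illustrative assumption.
    """

    def __init__(self, num_parts: int = 4):
        self.num_parts = num_parts
        self.presence_bits = 0          # all parts absent initially

    def load_part(self, part: int) -> None:
        """Mark part `part` as resident in memory."""
        self.presence_bits |= (1 << part)

    def evict_part(self, part: int) -> None:
        """Mark part `part` as no longer resident."""
        self.presence_bits &= ~(1 << part)

    def is_resident(self, part: int) -> bool:
        """Answer the post-translation check: does the requested part exist?"""
        return bool(self.presence_bits & (1 << part))
```

After address translation yields the requested part's index, a single bit test resolves the lookup, which is the point of packing residency state into the unused bits.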
20150312370 | Screen Sharing Cache Management - In one embodiment, a managed cache system, includes a cache memory to receive storage units via an uplink from a transmitting client, each storage unit including a decodable video unit, each storage unit having a priority, and enable downloading of the storage units via a plurality of downlinks to receiving clients, and a controller processor to purge the cache memory of one of the storage units when all of the following conditions are satisfied: the one storage unit is not being downloaded to any of the receiving clients, the one storage unit is not currently subject to a purging exclusion, and another one of the storage units now residing in the cache, having a higher priority than the priority of the one storage unit, arrived in the cache after the one storage unit. Related apparatus and methods are also described. | 10-29-2015 |
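The three purge conditions in the entry above combine into a simple predicate, sketched below. The attribute names on the storage-unit objects are illustrative assumptions, not taken from the filing.

```python
def may_purge(unit, cache) -> bool:
    """Return True only when a cached storage unit meets all purge conditions.

    Conditions (per the abstract): not being downloaded, not under a
    purging exclusion, and a higher-priority unit arrived afterwards.
    Attribute names here are illustrative.
    """
    if unit.active_downloads > 0:   # still being sent to a receiving client
        return False
    if unit.purge_excluded:         # currently subject to a purging exclusion
        return False
    # A strictly higher-priority unit must now reside in the cache
    # and must have arrived after this unit did.
    return any(
        other.priority > unit.priority and other.arrival > unit.arrival
        for other in cache
        if other is not unit
    )
```

The third condition is what keeps the cache from discarding its most valuable units: a unit is only displaced once something more important has arrived behind it.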
20150319241 | ACCOUNTABLE CONTENT STORES FOR INFORMATION CENTRIC NETWORKS - A set of Content Store nodes of an information-centric network (ICN) can cache data, and can process an Interest for this data based on a domain assigned to the requested data. During operation, a CS node can receive a Content Object that is to be cached, and processes the Content Object by determining a domain associated with the Content Object. The CS node selects a storage repository associated with the domain, and stores the Content Object in the selected repository. The CS node can also receive an Interest for a piece of content, and processes the Interest by performing a lookup operation for a rule associated with the Interest's name. The rule can include a set of commands for performing a programmatic operation. Then, if the CS node finds a matching rule, the CS node can execute the rule's commands to perform the programmatic operation. | 11-05-2015 |
20150319247 | MESH-MANAGING DATA ACROSS A DISTRIBUTED SET OF DEVICES - Data files, applications and/or corresponding user interfaces may be accessed at a device that collaborates in a mesh. The mesh may include any number or type of devices that collaborate in a network. Data, applications and/or corresponding user interfaces may be stored within a core object that may be shared over the mesh. Information in the core object may be identified with a corresponding user such that a user may use any collaborating device in the mesh to access the information. In one example, the information is stored remotely from a device used to access the information. A remote source may store the desired information or may determine the storage location of the desired information in the mesh and may further provide the desired information to a corresponding user. | 11-05-2015 |
20150331794 | SYSTEMS AND METHODS FOR CACHE COHERENCE PROTOCOL - The present disclosure relates to systems, methods, and computer program products for keeping multiple caches updated, or coherent, on multiple servers when the multiple caches contain independent copies of cached data. Example methods may include receiving a request to write data to a block of a first cache associated with a first server in a clustered server environment. The methods may also include identifying a second cache storing a copy of the block, where the second cache is associated with a second server in the clustered environment. The methods may further include transmitting a request to update the second cache with the received write data, and upon receiving a subsequent request to write subsequent data, identifying a third cache for invalidating based on access patterns of the blocks, where the third cache is associated with a third server in the clustered environment. | 11-19-2015 |
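The write path in the entry above — write into one server's cache, identify which peer caches hold a copy of the same block, and push the update to them — can be sketched with an in-memory directory. The class and field names are illustrative assumptions; the filing's actual protocol (including the access-pattern-based invalidation of a third cache) is more involved.

```python
class CoherentCache:
    """Write-update coherence across peer caches (illustrative sketch).

    A shared `directory` maps each block to the set of caches holding
    an independent copy; a write to one cache is propagated to every
    other holder so all copies stay coherent.
    """

    def __init__(self, name: str, directory: dict):
        self.name = name
        self.data = {}
        self.directory = directory      # block -> set of CoherentCache

    def read(self, block: str):
        return self.data.get(block)

    def write(self, block: str, value) -> None:
        # Write the data into this cache's copy of the block.
        self.data[block] = value
        holders = self.directory.setdefault(block, set())
        holders.add(self)
        # Transmit the update to every peer holding a copy of the block.
        for peer in holders:
            if peer is not self:
                peer.data[block] = value
```

A real clustered implementation would replace the in-process directory with messages between servers, and might invalidate rather than update rarely-read copies, which is the trade-off the abstract's third-cache invalidation hints at.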
20150341458 | METHOD OF ADAPTIVELY DEPLOYING CACHE POSITIONED AT SUBSCRIBER NETWORK, AND SYSTEM THEREFOR - Disclosed is an adaptive cache transformation architecture for a cache deployed forward to minimize duplicated transmission, by automatically storing content in a subscriber network area. The system for adaptively deploying a cache positioned at a subscriber network includes a cache service group configured to store all or a part of the pieces of content serviced from one or more content providing apparatuses to one or more terminals and including a plurality of caches deployed at a subscriber network between the content providing apparatus and the terminal in a distributed manner, and a resource manager configured to transform a deployment structure of the plurality of caches forming the cache service group, based on at least one of an increase rate in the number of pieces of content requested by the one or more terminals and a reutilization rate for each content. | 11-26-2015 |
20150350364 | FEDERATED CACHE NETWORK - A system and method, called federated cache network (FCN), places and optimizes content caches located in an access or backhaul network of a network service provider or multiple system operator. A FCN comprises a trunk cache and a plurality of branch caches and leaf caches. In a mobile carrier network, the mobility management entity (MME) informs a TCP termination device that a mobile device is moving toward a new base station. The live and terminated TCP sessions associated with a mobile terminal that is moving to a new base station are copied and re-terminated at the new base station, in anticipation of a possible handoff. | 12-03-2015 |
20150363355 | FINE-GRAINED STREAM-POLICING MECHANISM FOR AUTOMOTIVE ETHERNET SWITCHES - A system and method for monitoring a plurality of data streams is disclosed. At a first processing stage, a first memory area is associated to an element of a plurality of data streams. Upon arrival of a frame associated with one of the plurality of data streams, a second memory area is associated to the arrived frame based on the element. In the second memory area, data indicating the arrival of the frame is recorded, and on a successful recording, the frame is forwarded to a second processing stage. An independent process executes at a preselected time interval to erase contents of the first memory area. | 12-17-2015 |
20150381727 | STORAGE FUNCTIONALITY RULE IMPLEMENTATION - One or more techniques and/or systems are provided for storage functionality rule implementation on behalf of external client agents. For example, a network storage controller may be configured to perform storage operations on behalf of clients, such as providing read/write access to storage devices. The network storage controller may receive a storage functionality rule (e.g., a rule that tracing is to be enabled for write operations by user (B)) from an external client agent hosted on a client device. Responsive to identifying a storage operation context that corresponds to the storage functionality rule (e.g., user (B) may attempt to perform a write operation), the network storage controller may implement the storage functionality rule for the storage operation context on behalf of the external client agent. In this way, network bandwidth and/or processing latency otherwise associated with obtaining storage operation processing instructions from the external client agent may be mitigated. | 12-31-2015 |
20150381755 | CACHE MANIFEST FOR EFFICIENT PEER ASSISTED STREAMING - A method for delivering content in a communication network includes receiving, by a cache, a request message requesting content to be served. The method includes storing multiple cache manifests, each indicating the content and capabilities of a respective one of a plurality of caches and listing descriptions of the content stored in that cache. The method includes determining, based on information in the plurality of cache manifests, to serve the requested content by selecting a cache from which to serve the requested content. The method includes, in response to the determination, instructing the selected cache to transmit the requested content to a client device that generated the request message; and alternatively determining to not serve the requested content, based on the information in the plurality of cache manifests, and forwarding the request message to a higher-level device. | 12-31-2015 |
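The decision step in the entry above — consult the stored manifests to find a cache that advertises the requested content, and fall back to a higher-level device when none does — reduces to a lookup like the one below. Representing each manifest as a flat set of content names is an illustrative simplification; real manifests would also carry capability information.

```python
def pick_cache(request_name: str, manifests: dict):
    """Select a cache able to serve the requested content, or None.

    `manifests` maps a cache id to the set of content names that
    cache advertises; this flat structure is an illustrative
    assumption standing in for a full cache manifest.
    """
    for cache_id, contents in manifests.items():
        if request_name in contents:
            # Instruct this cache to transmit the content to the client.
            return cache_id
    # No manifest lists the content: the caller forwards the request
    # message to a higher-level device instead.
    return None
```

Because the decision is made entirely from locally stored manifests, no cache needs to be probed before the request is routed, which is what makes the peer-assisted serving efficient.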
20160014203 | STORAGE FABRIC ADDRESS BASED DATA BLOCK RETRIEVAL | 01-14-2016 |
20160088114 | INTRANET DISTRIBUTED CACHING - A routing device capable of performing application layer data caching is described. Application data caching at a routing device can alleviate the bottleneck that an application data host may experience during high demands for application data. Requests for the application data can also be fulfilled faster by eliminating the network delays for communicating with the application data host. The techniques described can also be used to perform analysis of the underlying application data in the network traffic transiting through a routing device. | 03-24-2016 |
20160100027 | MECHANISM FOR UNIVERSAL PARALLEL INFORMATION ACCESS - Inventive aspects include one or more local servers each including a local universal access logic section, one or more remote servers each including a remote universal access logic section, and a coherency node to provide coherent access to first data that is stored on the one or more local servers to the one or more remote servers, and to provide coherent access to second data that is stored on the one or more remote servers to the one or more local servers. Embodiments of the inventive concept herein can use hardware and/or software mechanisms to unify direct and remote attached devices via command, data, status, and completion memory queues. Applications and operating systems can be presented with a uniform access interface for sharing data and resources across multiple disparately situated servers and nodes. | 04-07-2016 |
20160112513 | VIRTUAL STORAGE APPLIANCE GATEWAY - A network connection is established between a virtual storage appliance (VSA) in a virtual machine and a storage server system. The virtual machine can run on a computing device remote to the storage server system. Access is provided to a second shared namespace of data at the VSA over the network connection. The second shared namespace is a policy-defined subset of a first shared namespace of the storage server system. Data in the second shared namespace is accessible at the storage server system by at least one other computing device communicatively coupled to the storage server system. The data in the second shared namespace at the VSA is replicated to create a local copy at the computing device. Changes to the local copy are synchronized with the data in the second shared namespace at the storage server system. | 04-21-2016 |
20160150018 | Method and System for Storing and Providing Electronic Records of Individual Users - A method and system for electronically sharing user memories, the method comprising receiving user events data at a server, generating, via the server, user memory data from the user events data, and distributing the user memory data from the server to one or more user devices at given times based on a distribution configuration, the distribution configuration including one or more dates specifying distribution of the user memory data to users of the one or more user devices. | 05-26-2016 |