Patent application number | Description | Published |
--- | --- | --- |
20110161802 | Methods, processes and systems for centralized rich media content creation, customization, and distributed presentation - The present invention is related to methods, processes, and systems that enable web users to quickly create, customize, and publish rich media contents via the Internet. Web addresses and attributes with regard to the published rich media contents are also generated. The published rich media contents, web addresses and attributes are stored in a centralized place, but they can be called by any geographically distributed third-party websites or remote web users, and then be presented on the third-party websites or the terminal devices of the remote web users. Furthermore, the present invention also enables web users to quickly create and customize personal online stores at a centralized place, and then list the published rich media contents in their personal online stores. These listed rich media contents can also be referenced and called by any geographically distributed third-party websites or remote web users, and then be presented on the third-party websites or the terminal devices of the remote web users. | 06-30-2011 |
20130346513 | MIGRATING A CHAT MESSAGE SERVICE PROVIDED BY A CHAT SERVER TO A NEW CHAT SERVER - Migrating a chat messaging service provided for a chat user is disclosed. At a second chat server from a first chat server, static information associated with a chat user is received. The static information is received before the chat user is indicated as being associated with a migration state. At the second chat server from the first chat server, dynamic information associated with the chat user is received. At least a portion of the dynamic information is received after the chat user is indicated as being associated with the migration state. After the chat user is no longer indicated as being associated with the migration state, a chat message for the chat user is received at the second chat server. | 12-26-2013 |
20130346587 | METHODS AND SYSTEMS FOR ADAPTIVE CAPACITY MANAGEMENT - Techniques to adaptively manage service requests within a multi-server system. In one embodiment, a service request and a service rule associated with the service request are received. Data about operating parameters of at least one server in a multi-server system are also received as part of a feedback loop. A response to the service request based on the service rule and the operating parameters is determined. Execution of the service request may be modified according to a tiered service rule based on the at least one server reaching a capacity threshold. The modification includes omitting an action in execution of the service request. | 12-26-2013 |
20140068198 | STATISTICAL CACHE PROMOTION - Storing data in a cache is disclosed. It is determined that a data record is not stored in a cache. A random value is generated using a threshold value. It is determined whether to store the data record in the cache based at least in part on the generated random value. | 03-06-2014 |
20150081974 | STATISTICAL CACHE PROMOTION - Storing data in a cache is disclosed. It is determined that a data record is not stored in a cache. A random value is generated using a threshold value. It is determined whether to store the data record in the cache based at least in part on the generated random value. | 03-19-2015 |
20150370718 | STATISTICAL CACHE PROMOTION - Storing data in a cache is disclosed. It is determined that a data record is not stored in a cache. A random value is generated using a threshold value. It is determined whether to store the data record in the cache based at least in part on the generated random value. | 12-24-2015 |
20160048185 | DYNAMICALLY RESPONDING TO DEMAND FOR SERVER COMPUTING RESOURCES - Embodiments are described for dynamically responding to demand for server computing resources. The embodiments can monitor performance of each of multiple computing systems in a data center, identify a particular computing system of the multiple computing systems for allocation of additional computing power, determine availability of an additional power supply to allocate to the identified computing system, determine availability of a capacity on a power distribution line connected to the particular computing system to provide the additional power supply to the particular computing system, and allocate the additional computing power to the identified computing system as a function of the determined availability of the additional power supply and the determined availability of the capacity on the power distribution line. | 02-18-2016 |
20160048342 | REDUCING READ/WRITE OVERHEAD IN A STORAGE ARRAY - Techniques, systems, and devices are disclosed for reducing data read/write overhead in a storage array, such as a redundant array of independent disks (RAID), by dynamically configuring stripe sizes in disk drives. In one aspect, each disk drive is configured with multiple stripe sizes based on statistical file sizes of incoming data traffic. For example, a preconfigured disk drive can include a set of different stripe sizes wherein a stripe size is consistent with the size of a common file type in the historical or predicted data traffic. Moreover, the allocation of disk space for each stripe size may be consistent with the composition percentage of the associated file type in the historical or predicted data traffic. As a result, reads/writes of large data files in the storage array predominantly take place on a single disk drive rather than on multiple drives, thereby reducing read/write overheads. | 02-18-2016 |
20160048345 | ALLOCATION OF READ/WRITE CHANNELS FOR STORAGE DEVICES - Embodiments are disclosed for improving channel performance in a storage device, such as a flash memory or a flash-based solid state drive, by dynamically provisioning available data channels for both write and read operations. In one aspect, a set of available data channels on a storage device is partitioned into a set of write channels and a set of read channels according to a read-to-write ratio. Next, when an incoming data stream of mixed read requests and write requests arrives at the storage device, the allocated read channels process the read requests on a first group of memory blocks, which does not include garbage collection and write amplification on the first group of memory blocks. In parallel, the allocated write channels process the write requests on a second group of memory blocks, which does include garbage collection and write amplification on the second group of memory blocks. | 02-18-2016 |
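The chat-migration abstract (20130346513) describes a three-phase handoff: static user data is copied before the user enters a migration state, dynamic data while in it, and new chat messages reach the second server only after the state is cleared. A minimal sketch of that state machine, assuming in-memory dicts for the servers; all names (`ChatMigration`, `deliver`, the state strings) are illustrative, not from the patent:

```python
class ChatMigration:
    """Sketch of migrating one user's chat service between two servers.

    States: "before" -> "migrating" -> "done". Static info is copied
    before the migration state is set, dynamic info while it is set,
    and messages are routed to the new server only once it is cleared.
    """

    def __init__(self):
        self.state = "before"
        self.new_server = {"static": None, "dynamic": None, "messages": []}

    def copy_static(self, static_info):
        assert self.state == "before"
        self.new_server["static"] = static_info
        self.state = "migrating"          # user now marked as migrating

    def copy_dynamic(self, dynamic_info):
        assert self.state == "migrating"
        self.new_server["dynamic"] = dynamic_info
        self.state = "done"               # migration state cleared

    def deliver(self, message):
        # Only after the migration state is cleared do messages
        # arrive at the second (new) chat server.
        if self.state == "done":
            self.new_server["messages"].append(message)
            return "new_server"
        return "old_server"
```

The key invariant is that message routing flips only when the migration state is cleared, so no message can land on a half-populated server.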
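The adaptive capacity management abstract (20130346587) modifies execution under a tiered service rule by omitting an action once a server crosses a capacity threshold. A hedged sketch of that idea, assuming each action in a request is tagged as required or optional; the function name and the 0.8 threshold are assumptions for illustration:

```python
def execute_request(request_actions, load, capacity_threshold=0.8):
    """Run a request's actions, omitting optional ones under load.

    request_actions: list of (action_name, required) pairs.
    load: current server utilization in [0, 1], fed back from monitoring.
    Returns the list of actions actually performed.
    """
    overloaded = load >= capacity_threshold
    performed = []
    for action, required in request_actions:
        if overloaded and not required:
            continue  # tiered rule: omit non-essential action at capacity
        performed.append(action)
    return performed
```

Under this scheme the request still succeeds when the server is hot; it simply does less work, which is one plausible reading of "omitting an action in execution of the service request".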
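The statistical cache promotion abstract (shared by 20140068198, 20150081974, and 20150370718) admits a missing record to the cache based on a random value and a threshold. A minimal sketch, assuming a plain dict as the cache; `maybe_promote` and the injectable `rng` parameter are illustrative, not from the patents:

```python
import random

def maybe_promote(cache, key, value, admit_probability=0.1, rng=random.random):
    """On a cache miss, admit the record only with some probability.

    Drawing a random value and comparing it to a threshold means records
    requested many times eventually get cached, while one-hit wonders
    usually do not. Returns True if the record was promoted.
    """
    if key in cache:
        return False                      # already cached: nothing to decide
    if rng() < admit_probability:         # random value vs. threshold
        cache[key] = value
        return True
    return False
```

Passing a deterministic `rng` makes the behavior testable; in production the default `random.random` supplies the random draw.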
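The power-allocation abstract (20160048185) grants additional power as a function of two availabilities: spare capacity in the power supply and spare capacity on the distribution line feeding the server. One simple such function is the minimum of the request and both headrooms; this sketch, including the name `grant_power`, is an assumption, not the patent's actual allocation policy:

```python
def grant_power(request_w, supply_headroom_w, line_headroom_w):
    """Grant additional watts to a server, capped by both constraints.

    The grant can never exceed the spare power-supply capacity or the
    spare capacity on the power distribution line, whichever is tighter.
    """
    return min(request_w, supply_headroom_w, line_headroom_w)
```

Taking the minimum makes the binding constraint explicit: a request for 100 W against 150 W of supply headroom but only 80 W of line headroom yields an 80 W grant.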
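The stripe-size abstract (20160048342) allocates disk space for each stripe size in proportion to that file type's share of historical or predicted traffic. A hedged sketch of the proportional-allocation step, assuming traffic composition is given as fractions keyed by stripe size; the function name is illustrative:

```python
def plan_stripe_allocation(traffic_composition, disk_bytes):
    """Reserve disk space per stripe size by traffic share.

    traffic_composition: {stripe_size_bytes: fraction_of_traffic}, e.g.
    observed statistics of incoming file sizes. Returns the number of
    bytes reserved for each stripe size, proportional to its share.
    """
    total = sum(traffic_composition.values())
    return {size: int(disk_bytes * share / total)
            for size, share in traffic_composition.items()}
```

With allocations matched to the traffic mix, a large file tends to fit within stripes on a single drive, which is how the abstract argues read/write overhead is reduced.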
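The channel-allocation abstract (20160048345) partitions a storage device's data channels into read channels and write channels according to a read-to-write ratio. A minimal sketch of that partition step, assuming channels are an ordered list and that at least one channel is kept on each side; the rounding policy here is an assumption:

```python
def partition_channels(channels, read_write_ratio):
    """Split channels into (read_channels, write_channels) by ratio.

    A ratio of 3.0 aims for roughly three read channels per write
    channel. At least one channel is always reserved for each side.
    """
    n_read = max(1, round(len(channels) * read_write_ratio
                          / (read_write_ratio + 1)))
    n_read = min(n_read, len(channels) - 1)   # keep >= 1 write channel
    return channels[:n_read], channels[n_read:]
```

Once partitioned, the two sets can service the mixed request stream in parallel, with garbage collection and write amplification confined to the blocks behind the write channels, as the abstract describes.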