Patent application number | Description | Published |
20080208715 | METHOD, SYSTEM AND APPARATUS FOR PROVIDING A PERSONALIZED ELECTRONIC SHOPPING CENTER - According to embodiments of the present invention, a user's local storage system may be used to create a virtual personal mall comprising one or more virtual personal stores and configured for purchasing products from one or more providers. The virtual personal store and/or virtual personal mall may be organized into virtual shelves. Each virtual shelf may contain a group of products sharing one or more common properties, for example, books by a certain author, published by a certain publisher, and/or supplied by the same virtual personal store provider. The groups may be defined by the virtual personal store provider, by the user, and/or by a group of users. | 08-28-2008 |
20080215437 | System, apparatus and method for advertising using a data storage device - A system, method, and apparatus for downloading advertisements, storing advertisements on a storage device, selecting advertisements for presentation, and presenting selected advertisements. In some embodiments of the invention, advertisements may be dynamically associated and presented in coordination with content according to predefined parameters, stored information, and other criteria. Advertisement credits may be allocated in exchange for advertisement consumption. Advertisements and other information may be exchanged with remote servers. Other embodiments are described and claimed. | 09-04-2008 |
20080263130 | APPARATUS, SYSTEM AND METHOD OF DIGITAL CONTENT DISTRIBUTION - A system and apparatus for content delivery to storage. Delivery may be performed according to content types, such as a content object identifier, a flow of content objects, and store channel levels. Delivery may also be performed over a virtual network defined on top of a physical network infrastructure, using peer-to-peer, multicast, and/or unicast protocols. | 10-23-2008 |
20130227562 | SYSTEM AND METHOD FOR MULTIPLE QUEUE MANAGEMENT AND ADAPTIVE CPU MATCHING IN A VIRTUAL COMPUTING SYSTEM - A method and system for managing multiple queues that provide a communication path between a virtual machine and a hypervisor in a virtual machine system. The multiple queues are bundled together and identified on a polled list. When one of the queues on the polled list is used to communicate a request from the virtual machine to the hypervisor, a virtual machine exit is performed and virtual machine exits are disabled for all of the queues on the polled list. The queues on the polled list are assigned to an initial host CPU to service requests from the virtual machine. If a particular queue on the polled list experiences a load that exceeds a load threshold, that queue is removed from the polled list and assigned to a different host CPU. | 08-29-2013 |
20140359607 | Adjusting Transmission Rate of Execution State in Virtual Machine Migration - Systems and methods for adjusting the rate of transmission of the execution state of a virtual machine undergoing live migration. An example method may comprise: determining, by a migration agent executing on a computer system, a first rate being a rate of change of an execution state of a virtual machine undergoing live migration from an origin host computer system to a destination host computer system; determining a second rate being a rate of transferring the execution state of the virtual machine from the origin host computer system to the destination host computer system; determining that a ratio of the first rate to the second rate exceeds a threshold convergence ratio; and reducing the rate of transferring the execution state of the virtual machine from the origin host computer system to the destination host computer system. | 12-04-2014 |
20140365738 | Systems and Methods for Memory Page Offloading in Multi-Processor Computer Systems - Systems and methods for memory page offloading in multi-processor computer systems. An example method may comprise: detecting, by a computer system, a memory pressure condition on a first node; invalidating a page table entry for a memory page residing on the first node; copying the memory page to a second node; and updating the page table entry for the memory page to reference the second node. | 12-11-2014 |
20150248303 | PARAVIRTUALIZED MIGRATION COUNTER - An application associated with a processor reads a first value of a counter and a second value of the counter. The counter is indicative of a migration status of the application with respect to the processor; the migration status indicates the number of times the application has migrated from one processor to another. Responsive to determining that the first value of the counter does not equal the second value of the counter, the application ascertains whether a value of a hardware parameter associated with the processor has changed during a time interval. The application determines the validity of a value of a performance monitoring unit derived from the hardware parameter in view of whether the value of the hardware parameter changed during the time interval. | 09-03-2015 |
20150277962 | REDUCING OR SUSPENDING TRANSFER RATE OF VIRTUAL MACHINE MIGRATION WHEN DIRTYING RATE EXCEEDS A CONVERGENCE THRESHOLD - An example method for adjusting the rate of transfer of the execution state of a virtual machine undergoing live migration may comprise determining, by a processor, a first rate being a rate of change of an execution state of a virtual machine undergoing live migration from a first computer system to a second computer system. The example method may further comprise determining a second rate being a rate of transfer of the execution state of the virtual machine to the second computer system. The example method may further comprise, responsive to determining that a ratio of the first rate to the second rate exceeds a first threshold ratio, suspending the transfer of the virtual machine execution state to the second computer system. | 10-01-2015 |
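The two migration-rate abstracts above (20140359607 and 20150277962) describe the same convergence check: compare the rate at which the guest dirties its execution state against the rate at which that state is being transferred, and throttle or suspend the transfer when their ratio exceeds a threshold. A minimal sketch of that check follows; all names (`adjust_transfer_rate`, `dirty_rate`, `reduction_factor`, and the suspend-vs-reduce switch) are illustrative assumptions, not taken from either application or any real implementation.

```python
def adjust_transfer_rate(dirty_rate: float, transfer_rate: float,
                         threshold_ratio: float,
                         reduction_factor: float = 0.5,
                         suspend: bool = False) -> float:
    """Return an adjusted transfer rate for a VM live migration.

    dirty_rate:      rate of change of the VM's execution state (e.g. bytes/s)
    transfer_rate:   current rate of copying state to the destination host
    threshold_ratio: convergence threshold for dirty_rate / transfer_rate
    suspend:         if True, suspend the transfer (rate 0.0) instead of
                     reducing it, as in application 20150277962
    """
    if transfer_rate <= 0:
        raise ValueError("transfer rate must be positive")
    ratio = dirty_rate / transfer_rate
    if ratio > threshold_ratio:
        # Migration is not converging: either suspend the transfer outright
        # or reduce its rate, per the two abstracts above.
        return 0.0 if suspend else transfer_rate * reduction_factor
    # Converging: leave the transfer rate unchanged.
    return transfer_rate
```

For example, with a dirtying rate of 100 MB/s, a transfer rate of 50 MB/s, and a threshold ratio of 1.5, the ratio is 2.0 and the transfer is reduced (or suspended, with `suspend=True`); with a dirtying rate of 10 MB/s the ratio is 0.2 and the rate is left alone.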