Patent application number | Description | Published |
20090125918 | SHARED SENSING SYSTEM INTERFACES - Various interfaces such as application programming interfaces (APIs) are employed to allow developers to construct applications that use multiple shared sensors. In one instance, a coordinator can be utilized to facilitate coordination of sensor data contributors and applications desirous of utilizing such data. Standardized interfaces can be employed to aid interaction between all entities including contributors, applications and a coordinator, amongst others. | 05-14-2009 |
20090144011 | ONE-PASS SAMPLING OF HIERARCHICALLY ORGANIZED SENSORS - One-pass sampling is employed within a hierarchically organized structure to efficiently and expeditiously respond to sensor inquiries. Identification of relevant sensors and sampling of those sensors are combined and performed in a single pass. Oversampling can also be employed to ensure a target sample size is met where some sensors fail or are otherwise unavailable. Further yet, sensor data can be cached and utilized to hasten processing as well as compensate for occasional sensor unavailability. | 06-04-2009 |
20090222544 | FRAMEWORK FOR JOINT ANALYSIS AND DESIGN OF SERVER PROVISIONING AND LOAD DISPATCHING FOR CONNECTION-INTENSIVE SERVER - The claimed subject matter provides a system and/or a method that facilitates managing a number of active servers in a cluster. A forecast component can predict at least one of login rate or number of connections in the cluster at a future time. A dynamic load analysis component can evaluate dynamic behaviors in login rate and number of connections in the cluster as a result of load dispatching. Moreover, a provisioning component can determine a number of servers in the cluster needed based at least in part on the prediction and dynamic behavior analysis. In addition, the provisioning component can include an additional margin in the number of servers needed in accordance with multiplicative factors. | 09-03-2009 |
20090222562 | LOAD SKEWING FOR POWER-AWARE SERVER PROVISIONING - The claimed subject matter provides a system and/or a method that facilitates energy-aware connection distribution among a plurality of servers in a cluster. A set of busy servers in the cluster can be provided that each handle a high number of connections. In addition, a set of tail servers in the cluster can be managed that each maintain a low number of connections. A load skewing component gives priority to at least a subset of the set of busy servers when dispatching new connection requests from a plurality of users. In addition, the load skewing component controls the number of tail servers to maintain a sufficient number for energy-aware operation. | 09-03-2009 |
20090327376 | B-FILE ABSTRACTION FOR EFFICIENTLY ARCHIVING SELF-EXPIRING DATA - Systems and methods are provided for data processing and storage management. In an illustrative implementation an exemplary computing environment comprises at least one data store, a data processing and storage management engine (B-File engine) and at least one instruction set to instruct the B-File engine to process and/or store data according to a selected data processing and storage management paradigm. In an illustrative operation, the illustrative B-File engine can generate a B-File comprising multiple buckets and store sample items in a random bucket according to a selected distribution. When the size of the B-File grows to reach a selected threshold (e.g., maximum available space), the B-File engine can shrink the B-File by discarding the largest bucket. Additionally, the B-File engine can append data to existing buckets and explicitly cluster data when erasing data such that data can be deleted together into the same flash block. | 12-31-2009 |
20100030809 | MAINTAINING LARGE RANDOM SAMPLE WITH SEMI-RANDOM APPEND-ONLY OPERATIONS - Systems and methods are provided for online maintenance, processing, and querying of large random samples of data from a large/infinite data stream. In an illustrative implementation an exemplary computing environment comprises at least one data store and a data storage and management engine operable to process and/or store data according to a selected data processing and storage management paradigm on a cooperating data store (e.g., flash media). The exemplary data storage and management engine can deploy the exemplary sampling algorithm to perform and/or provide one or more of the following operations/features: the algorithm is operable for streaming data (or a single pass through the dataset); it allows semi-random data write operations; it avoids operations (e.g., in-place updates) that are expensive on flash storage media; and it is tunable to both the amount of flash storage and the amount of standard memory (DRAM) available to the algorithm. | 02-04-2010 |
20100325132 | QUERYING COMPRESSED TIME-SERIES SIGNALS - A system described herein includes a receiver component that receives a query that pertains to a raw time-series signal. A query executor component selectively executes the query over at least one of multiple available compressed representations of the raw time-series signal, wherein the query pertains to at least one of determining a trend pertaining to the raw time-series signal, generating a histogram pertaining to the raw time-series signal, or determining a correlation pertaining to the raw time-series signal. | 12-23-2010 |
20120110015 | SEARCH CACHE FOR DOCUMENT SEARCH - A method is described herein that includes receiving a query from a user at a computing device. The method also includes performing a search for one or more documents based at least in part upon the received query, wherein performing the search comprises causing a processor to perform the search through utilization of a search cache retained on the computing device, wherein the search cache comprises a results cache, an index cache, and a Boolean cache. | 05-03-2012 |
20120131009 | ENHANCING PERSONAL DATA SEARCH WITH INFORMATION FROM SOCIAL NETWORKS - The personal data search technique uses data that users of a social networking site input about a given user to enrich that user's personal data. The technique annotates personal data stored on a personal computing device or in a computing cloud with data obtained from social networking sites (for example, tags, comments, likes/dislikes, and so forth) provided by friends/other users in the given user's social network or networks. Such annotations can later be used by a search engine to enhance the search functionality and/or to improve the ranking of search results. Since the data is entered by actual human users it is very accurate, and since the data is already readily available on social networks it is very inexpensive to obtain. | 05-24-2012 |
20120246169 | QUERYING COMPRESSED TIME-SERIES SIGNALS - Technologies pertaining to compressing time-series signals are described herein. Groups of time-series signals are generated based upon similarities between time-series signals. Each group of time-series signals includes a respective base time-series signal. Ratio signals that are representative of time-series signals are computed, wherein the ratio signals are based upon the base time-series signal and other respective time-series signals in a group of time-series signals. | 09-27-2012 |
20130332442 | DEEP APPLICATION CRAWLING - The deep application crawling technique described herein crawls one or more applications, commonly referred to as “apps”, in order to extract information inside of them. This can involve crawling and extracting static data that are embedded within apps or resource files that are associated with the apps. The technique can also crawl and extract dynamic data that apps download from the Internet or display to the user on demand. This extracted static and/or dynamic data can then be used by another application or an engine to perform various functions. For example, the technique can use the extracted data to provide search results in response to a user query entered into a search engine. Alternately, the extracted static and/or dynamic data can be used by an advertisement engine to select application-specific advertisements. Or the data can be used by a recommendation engine to make recommendations for goods/services. | 12-12-2013 |
20140279026 | ENERGY-EFFICIENT MOBILE ADVERTISING - Various technologies described herein pertain to prefetching advertisements for mobile advertising. A prediction model for estimating a number of advertisements that a mobile client is likely to request during an upcoming prediction time period can be employed. An estimated total amount of time of likely interaction with application(s) executed by the mobile client can be predicted; based upon such prediction, a number of advertisement slots likely to be available and a probability of each of the advertisement slots being available can be computed. Moreover, an ad server can allocate advertisements in a pending advertisement queue and/or disparate advertisements collected from an ad exchange to the mobile client based upon the number of advertisement slots likely to be available, the probability of each of the advertisement slots being available, and aggregated probabilities of the pending advertisements in the pending advertisement queue being displayed prior to corresponding deadlines for expiration. | 09-18-2014 |
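The bucket-discard idea in application 20090327376 can be illustrated with a minimal sketch. The bucket count, uniform bucket placement, and size accounting below are assumptions made for illustration only; the abstract specifies merely "a selected distribution" and a size threshold, and says nothing about the details chosen here.

```python
import random


class BFile:
    """Sketch of the B-File idea: sample items land in randomly chosen
    buckets, and when the total size exceeds a threshold, the largest
    bucket is discarded to shrink the structure."""

    def __init__(self, max_items, num_buckets=8):
        self.max_items = max_items
        self.buckets = [[] for _ in range(num_buckets)]

    def add(self, item):
        # Place the item in a random bucket (uniform here for simplicity;
        # the abstract only says "a selected distribution").
        random.choice(self.buckets).append(item)
        if len(self) > self.max_items:
            # Shrink by discarding the contents of the largest bucket,
            # mirroring the abstract's "discarding the largest bucket".
            max(self.buckets, key=len).clear()

    def __len__(self):
        return sum(len(b) for b in self.buckets)
```

On flash media this scheme only ever appends to buckets or erases a whole bucket, which matches the abstract's goal of clustering deletions into the same flash block.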
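The grouping-and-ratio idea in application 20120246169 admits a similarly minimal sketch. Choosing the first signal of a group as its base, and computing ratios by element-wise division, are assumptions for illustration; the abstract does not specify how base signals are selected or how ratio signals are computed.

```python
def to_ratio_signals(group):
    """Represent each signal in a group of similar time-series signals as
    a ratio signal against the group's base signal (assumed here to be
    the first signal in the group, with no zero samples)."""
    base = group[0]
    ratios = [[x / b for x, b in zip(sig, base)] for sig in group[1:]]
    return base, ratios


def reconstruct(base, ratios):
    """Recover the original group from the base signal and ratio signals."""
    return [base] + [[r * b for r, b in zip(ratio, base)] for ratio in ratios]
```

When the signals in a group really are similar, the ratio signals are nearly constant and therefore compress far better than the raw signals, which is the premise behind grouping by similarity.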