Class / Patent application number | Description | Number of patent applications / Date published |
707764000 | For parallel processing system | 20 |
20100094893 | QUERY INTERFACE CONFIGURED TO INVOKE AN ANALYSIS ROUTINE ON A PARALLEL COMPUTING SYSTEM AS PART OF DATABASE QUERY PROCESSING - Techniques are disclosed for invoking an analysis routine running on a parallel computer system to analyze query results. An interface used to build and execute a database query may be used to invoke a complex analysis routine on a parallel computer system to analyze query results obtained by executing the database query. Alternatively, a user may build a query that includes specific conditions evaluated by an analysis routine on the parallel computer system (as opposed to selecting an analysis routine after receiving query results). | 04-15-2010 |
20100094894 | Program Invocation From A Query Interface to Parallel Computing System - Techniques are disclosed for invoking an analysis routine running on a parallel computer system to analyze query results. An interface used to build and execute a database query may be used to invoke a complex analysis routine on a parallel computer system to analyze query results obtained by executing the database query. Alternatively, a user may build a query that includes specific conditions evaluated by an analysis routine on the parallel computer system (as opposed to selecting an analysis routine after receiving query results). | 04-15-2010 |
20100131540 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR OPTIMIZING PROCESSING OF DISTINCT AND AGGREGATION QUERIES ON SKEWED DATA IN A DATABASE SYSTEM - A system, method, and computer-readable medium for optimization of query processing in a parallel processing system are provided. Skewed values and non-skewed values are treated differently to improve upon conventional DISTINCT and aggregation query processing. Skewed attribute values on which a DISTINCT selection or group by aggregation is applied are allocated entries in a hash table. In this manner, a processing module may consult the hash table to determine if a skewed attribute value has been encountered during the query processing in a manner that precludes repetitive redistribution of rows with highly skewed attribute values on which a DISTINCT selection or group by aggregation is applied. | 05-27-2010 |
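The skew-handling idea in the abstract above can be sketched in a few lines: rows whose attribute value appears in a table of known heavy hitters are deduplicated locally instead of being redistributed over the network. This is an illustrative sketch only; the value set, module count, and function names are assumptions, not taken from the patent text.

```python
# Hedged sketch of skew-aware DISTINCT processing: rows whose attribute
# value is a pre-identified heavy hitter are deduplicated locally, while
# non-skewed rows are hash-redistributed to processing modules as usual.
# All names and values here are illustrative.

SKEWED_VALUES = {"US", "CN"}        # hypothetical pre-identified skewed values
NUM_MODULES = 4                     # hypothetical number of processing modules

def route_for_distinct(rows):
    """Split attribute values into a local dedup set (skewed values)
    and per-module redistribution buckets (non-skewed values)."""
    local_seen = set()              # local hash table; avoids redistribution
    redistribute = [[] for _ in range(NUM_MODULES)]
    for value in rows:
        if value in SKEWED_VALUES:
            local_seen.add(value)   # skewed value handled locally
        else:
            redistribute[hash(value) % NUM_MODULES].append(value)
    return local_seen, redistribute
```

The point of the split is that highly skewed values never cross the network at all, which is the repetitive-redistribution cost the abstract says the hash table precludes.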
20100198855 | PROVIDING PARALLEL RESULT STREAMS FOR DATABASE QUERIES - A system and method for providing parallel result streams for database queries is provided. The system includes a network including a client, a server, and a database. The client executes an application and sends a query to the server. In response, the server compiles the query to produce a query plan, executes statements in the query plan and sends parallel result streams to the client. | 08-05-2010 |
20100241646 | SYSTEM AND METHOD OF MASSIVELY PARALLEL DATA PROCESSING - A system and method of massively parallel data processing are disclosed. In an embodiment, a method includes generating an interpretation of a customizable database request which includes an extensible computer process and providing an input guidance to available processors of an available computing environment. The method further includes automatically distributing an execution of the interpretation across the available computing environment operating concurrently and in parallel, wherein a component of the execution may be limited to at least a part of an input data. The method also includes automatically assembling a response using a distributed output of the execution. | 09-23-2010 |
20110047172 | MAP-REDUCE AND PARALLEL PROCESSING IN DATABASES - One embodiment is a method that uses MapReduce and Relation Valued Functions (RVFs) with parallel processing to search a database and obtain search results. | 02-24-2011 |
20110072032 | Transfer of Data Structures in Sub-Linear Time For Systems with Transfer State Awareness - A method for data transfer in a data processing system, and corresponding system and machine-readable medium. One method includes receiving by the data processing system a request for a data structure from a calling process, and splitting the data structure into a plurality of substructures by the data processing system. That method includes transferring the plurality of substructures to the calling process by the data processing system, wherein at least two of the substructures are transferred in parallel, and maintaining a transfer state for each substructure in the data processing system. | 03-24-2011 |
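The transfer scheme above (split a structure into substructures, send at least two of them in parallel, track a per-substructure transfer state) can be sketched with a thread pool. The splitting rule, state labels, and function names are my illustrative choices, not the patent's.

```python
# Illustrative sketch: split a large structure into substructures,
# transfer them concurrently, and maintain a per-substructure state.
from concurrent.futures import ThreadPoolExecutor

def split(data, n):
    """Split a list into at most n roughly equal substructures."""
    k = max(1, -(-len(data) // n))          # ceiling division
    return [data[i:i + k] for i in range(0, len(data), k)]

def transfer_all(data, n=4):
    state = {}                              # substructure index -> state
    def send(idx_part):
        idx, part = idx_part
        state[idx] = "in_transfer"
        received = list(part)               # stand-in for the actual transfer
        state[idx] = "done"
        return received
    parts = split(data, n)
    with ThreadPoolExecutor(max_workers=n) as pool:
        results = list(pool.map(send, enumerate(parts)))
    # reassemble the substructures in order on the receiving side
    return [x for part in results for x in part], state
```

Because `pool.map` preserves input order, reassembly is a simple concatenation even though the transfers overlap in time.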
20110087684 | POSTING LIST INTERSECTION PARALLELISM IN QUERY PROCESSING - Disclosed herein is parallel processing of a query, which uses inter-query parallelism in posting list intersections. A plurality of tasks, e.g., posting list intersection tasks, are identified for processing in parallel by a plurality of processing units, e.g., a plurality of processing cores of a multi-core system. | 04-14-2011 |
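A minimal sketch of the parallelism described above: each posting-list intersection is an independent task handed to a pool of workers. The merge-style intersection and the task granularity (one pair of lists per task) are illustrative assumptions.

```python
# Sketch of intra-query parallelism over posting-list intersections.
from concurrent.futures import ThreadPoolExecutor

def intersect(a, b):
    """Merge-intersect two sorted posting lists of document IDs."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

def parallel_intersections(tasks, workers=4):
    """tasks: list of (posting_list_a, posting_list_b) pairs,
    executed in parallel, one per processing unit."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: intersect(*t), tasks))
```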
20110153634 | METHOD AND APPARATUS FOR LOCATING SERVICES WITHIN PEER-TO-PEER NETWORKS - A capability is provided for supporting a service location capability in a peer-to-peer network (P2P), such as a Chord network or other P2P network. In one embodiment, a method for locating a service within a P2P network is provided. The P2P network includes a plurality of nodes, including a target node which performs the method for locating the service within the P2P network. The target node includes a search table including a plurality of entries identifying a respective plurality of nodes of the P2P network. The method includes detecting a request to search for the service within the P2P network and initiating, toward at least one of the nodes of the search table, a service search request. The service search request is a request to identify at least one node of the P2P network that supports the service. The service search request includes information indicative of the service and a search range for use by the node receiving the service search request. | 06-23-2011 |
20120036146 | APPARATUS FOR ELASTIC DATABASE PROCESSING WITH HETEROGENEOUS DATA - A database management system implemented in a cloud computing environment. Operational nodes are assigned as groups of controller-nodes, compute-nodes or storage-nodes. Queries specify one or more tables for an associated database operation, with each table being assigned to respective storage nodegroup(s). The number of nodes executing a given query may change, by (a) changing the compute-nodes associated with a connection, or (b) adding or removing nodes associated with a connection; and/or distributing data to a storage nodegroup based on a Distribution Method which may be either data dependent or data independent. A controller node further executes a Dynamic Query Planner (DQP) process that develops a query plan. | 02-09-2012 |
20120109992 | Query Rewrite With Auxiliary Attributes In Query Processing Operations - Techniques are provided for rewriting queries during a database query processing operation to include auxiliary attributes not included in the original query, thus improving processing efficiency. For example, a technique for rewriting a query in a query processing operation includes the following steps. The query is processed in accordance with at least a portion of a data set, producing query results. Data attributes from the query results are analyzed. At least one new predicate from at least one auxiliary data attribute is appended to the query. | 05-03-2012 |
20130007033 | SYSTEM AND METHOD FOR PROVIDING ANSWERS TO QUESTIONS - Providing answers to questions based on any corpus of data implements a method that generates a number of candidate passages from the corpus that answer an input query, and finds the correct resulting answer by collecting supporting evidence from the multiple passages. By analyzing all retrieved passages and each passage's metadata in parallel, a plurality of output data structures including candidate answers is generated. Then, supporting passage retrieval operations are performed on the set of candidate answers; for each candidate answer, the data corpus is traversed to find passages containing the candidate answer in addition to the query terms. All candidate answers are automatically scored by a plurality of scoring modules, each producing a module score. The module scores are processed to determine one or more query answers, and a query response is generated based on the one or more query answers. | 01-03-2013 |
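The scoring step in the abstract above is concrete enough for a tiny sketch: each scoring module assigns a score to every candidate answer, and the module scores are combined into a ranking. The weighted-sum combination is my assumption; the abstract does not specify how the scores are processed.

```python
# Minimal sketch of multi-module candidate-answer scoring. The modules,
# weights, and combination function are illustrative assumptions.

def score_candidates(candidates, modules, weights):
    """candidates: list of answer strings.
    modules: list of functions candidate -> float (one per scoring module).
    weights: per-module weights for the combined score.
    Returns candidates sorted by combined score, best first."""
    combined = []
    for cand in candidates:
        total = sum(w * m(cand) for m, w in zip(modules, weights))
        combined.append((total, cand))
    return [c for _, c in sorted(combined, reverse=True)]
```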
20130110860 | USER PIPELINE CONFIGURATION FOR RULE-BASED QUERY TRANSFORMATION, GENERATION AND RESULT DISPLAY | 05-02-2013 |
20130275452 | DISTRIBUTING AND PROCESSING STREAMS OVER ONE OR MORE NETWORKS - In an embodiment, a method for distributing and processing streams over wide area networks comprises receiving, at a unified data processing node, a continuous query; determining a parallel portion of the continuous query; sending the parallel portion to a plurality of distributed data processing nodes located in a plurality of data centers; at each distributed node in the plurality of distributed nodes, locally executing the parallel portion against independent data partitions, producing a partial summary data, sending the partial summary data to the unified node; continuously receiving, at the unified node, in real-time, the partial summary data. | 10-17-2013 |
20140095526 | Random Number Generator In A MPP Database - A random number generation process generates uncorrelated random numbers from identical random number sequences on the parallel processing database segments of an MPP database, without communication between the segments, by establishing a different starting position in the sequence on each segment using an identifier unique to each segment, query slice information, and the number of segments. A master node dispatches a seed value to initialize random number sequence generation on all segments, and dispatches the query slice information and the number of segments during the normal query plan dispatch process. | 04-03-2014 |
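One way to realize the scheme above is a "leapfrog" slicing of a shared sequence: every segment seeds an identical generator, then segment i consumes values i, i + n, i + 2n, ..., so the per-segment streams never overlap and no inter-segment communication is needed. The patent only says each segment starts at a different position; the leapfrog stride is my illustrative choice.

```python
# Hedged sketch: identical seed on every segment, different starting
# position per segment, disjoint leapfrog subsequences.
import random

def segment_stream(seed, segment_id, num_segments, count):
    rng = random.Random(seed)       # identical sequence on all segments
    for _ in range(segment_id):     # advance to this segment's start position
        rng.random()
    out = []
    for _ in range(count):
        out.append(rng.random())
        for _ in range(num_segments - 1):   # skip the other segments' draws
            rng.random()
    return out
```

With two segments, segment 0 sees the even-indexed values of the master sequence and segment 1 the odd-indexed ones, with no values shared.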
20140280283 | Database System with Data Organization Providing Improved Bit Parallel Processing - A database system provides vertical or horizontal pre-packing of database data elements according to the size of physical processor words in order to obtain improved parallel processing at the bit level. After processor words are populated with data from multiple data elements of the database, query operations are used which may process the multiple data elements in each data word simultaneously in the computer's arithmetic logic unit. | 09-18-2014 |
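The bit-parallel idea above can be illustrated with SWAR ("SIMD within a register") arithmetic in pure Python: eight 8-bit column values are packed into one 64-bit word, and a single arithmetic expression tests all eight lanes against a constant at once. The packing layout and lane width are illustrative assumptions, not the patent's specific organization.

```python
# SWAR sketch: pack eight 8-bit values into one 64-bit word and test
# every lane for equality with a constant in one pass.

LO = 0x0101010101010101          # low bit of every byte lane
LO7F = 0x7F7F7F7F7F7F7F7F        # low 7 bits of every byte lane
MASK64 = (1 << 64) - 1

def pack(values):
    """Pack eight 8-bit values into one 64-bit word (lane 0 lowest)."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & 0xFF) << (8 * i)
    return word

def lanes_equal(word, target):
    """Return a bitmask with bit i set iff lane i equals target (0..255)."""
    x = (word ^ (target * LO)) & MASK64       # matching lanes become zero bytes
    # exact zero-byte locator: byte of y is 0x80 iff that byte of x is 0
    y = ~(((x & LO7F) + LO7F) | x | LO7F) & MASK64
    mask = 0
    for i in range(8):
        if (y >> (8 * i + 7)) & 1:
            mask |= 1 << i
    return mask
```

In hardware the same comparison costs a handful of ALU instructions for all eight elements, which is the per-word parallelism the abstract describes.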
20150095364 | System and Methods for Caching and Querying Objects Stored in Multiple Databases - A method for organizing and searching objects from a plurality of databases includes querying an attribute of each entry stored in the plurality of databases; assigning a memory value for each of the attributes retrieved from each of the objects stored in the plurality of databases and storing the memory values for each of the attributes in a cache. At a client device, a search query is received and it is determined if the search query contains an attribute of the entry to be searched. Upon positive determination, a search is performed at the cache using the attribute contained in the search query; and upon negative determination, a search for the entry is performed at the plurality of databases. | 04-02-2015 |
20150356138 | DATASTORE MECHANISM FOR MANAGING OUT-OF-MEMORY DATA - According to some embodiments, a method for making input data available for processing by one or more processors comprises storing one or more parameters, wherein the one or more parameters comprise information identifying a location of the input data; and creating a datastore object using the one or more parameters, wherein the datastore object interfaces the input data and includes a read method for reading a chunk, the chunk being a subset of the input data, and having a size that does not exceed a memory size assigned to the one or more processors. According to some embodiments, the one or more parameters further comprise one or more of a type of the input data; a format of the input data; an offset for reading from the input data; a size of the chunk; a condition for determining the chunk; and a query for deriving the input data. | 12-10-2015 |
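The datastore object described above can be sketched as a small class with a chunked read method. The class and method names are mine, and the input "location" is modeled as an in-memory sequence purely for the sake of a runnable sketch; the real point is that each read returns a subset no larger than the memory budget.

```python
# Illustrative sketch of a datastore object with a bounded-size read.

class Datastore:
    """Stands in for a datastore object over out-of-memory input; the
    location parameter is modeled here as an in-memory sequence."""

    def __init__(self, data, chunk_size):
        self.data = data                  # stand-in for the input location
        self.chunk_size = chunk_size      # bounded by per-processor memory
        self._offset = 0

    def has_data(self):
        return self._offset < len(self.data)

    def read(self):
        """Return the next chunk, a subset of the input whose size does
        not exceed chunk_size, and advance the read offset."""
        chunk = self.data[self._offset:self._offset + self.chunk_size]
        self._offset += len(chunk)
        return chunk
```

A processor loops `while ds.has_data(): process(ds.read())`, so the working set never exceeds the assigned memory size.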
20160012107 | MAPPING QUERY OPERATIONS IN DATABASE SYSTEMS TO HARDWARE BASED QUERY ACCELERATORS | 01-14-2016 |
20160188669 | PARTITIONING AND REPARTITIONING FOR DATA PARALLEL OPERATIONS - A query that identifies an input data source is rewritten to contain data parallel operations that include partitioning and merging. The input data source is partitioned into a plurality of initial partitions. A parallel repartitioning operation is performed on the initial partitions to generate a plurality of secondary partitions. A parallel execution of the query is performed using the secondary partitions to generate a plurality of output sets. The plurality of output sets are merged into a merged output set. | 06-30-2016 |
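The partition/repartition/execute/merge pipeline above can be sketched on a toy group-by-key count: hash repartitioning guarantees all copies of a key land in one secondary partition, so the per-partition results can be merged without further shuffling. Function names and the counting workload are illustrative.

```python
# Sketch of partition -> hash-repartition -> parallel execute -> merge.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def repartition(initial_partitions, n):
    """Hash-repartition so all copies of a key land in one partition."""
    secondary = [[] for _ in range(n)]
    for part in initial_partitions:
        for key in part:
            secondary[hash(key) % n].append(key)
    return secondary

def parallel_count(key_partitions, workers=4):
    """Execute the (toy) query in parallel, one output set per partition."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(Counter, key_partitions))

def merge(output_sets):
    """Merge the per-partition output sets into one merged output set."""
    merged = Counter()
    for out in output_sets:
        merged.update(out)
    return merged
```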