Patent application number | Description | Published |
20130281105 | FAST EFFICIENT RESOURCE DISTRIBUTION IN LONG TERM EVOLUTION COMMUNICATION NETWORK SCHEDULING - A method and system for determining, for a transmission time interval (TTI), an amount of resource units to be allocated to each cell served by at least one module in a wireless communication network are disclosed. The allocation is based on an estimated resource consumption of each cell, a resource limit of each cell and resource limits of the at least one module. For each TTI, a prioritized list of queues having information to be transmitted is determined, where each queue belongs to a cell of the plurality of cells. Also, for each TTI, in an order of priority of the queues, a number of resource units to be consumed by a queue is determined based at least in part on a number of resource units required to empty the queue. | 10-24-2013 |
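The priority-ordered, budget-capped allocation this abstract describes can be sketched as a greedy loop. Everything below (the tuple layout, dict-based limits, a single module budget) is an illustrative assumption, not the patent's actual data model:

```python
def allocate_tti(queues, cell_limits, module_limit):
    """Greedy per-TTI allocation sketch: walk queues in priority order and
    grant each queue up to the units needed to empty it, capped by the
    remaining budget of its cell and of the serving module.

    queues: list of (cell_id, priority, units_needed_to_empty), assumed
        pre-sorted so highest-priority queues come first.
    cell_limits: dict cell_id -> max units that cell may consume this TTI.
    module_limit: total units the serving module can supply this TTI.
    Returns dict queue_index -> units granted.
    """
    grants = {}
    cell_used = {cell: 0 for cell in cell_limits}
    module_used = 0
    for i, (cell, _prio, needed) in enumerate(queues):
        cap = min(needed,
                  cell_limits[cell] - cell_used[cell],
                  module_limit - module_used)
        if cap <= 0:
            continue  # cell or module budget exhausted for this queue
        grants[i] = cap
        cell_used[cell] += cap
        module_used += cap
    return grants
```

With a module budget of 8 units, two cells, and three queues in priority order, the highest-priority queues are emptied first and the last queue gets nothing once both budgets run out.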
20130308469 | SHARED CELL RECEIVER FOR UPLINK CAPACITY IMPROVEMENT IN WIRELESS COMMUNICATION NETWORKS - A wireless communication method and system are provided in which an uplink data stream that has uplink data associated with a user device is received. Channel performance data based at least in part on a portion of the uplink data stream is determined. A determination is made whether the channel performance data meets a predetermined performance level. The portion of the uplink data stream is discarded when the channel performance data does not meet the predetermined performance level. The portion of the uplink data stream is tagged for additional processing when the channel performance data meets the predetermined performance level. | 11-21-2013 |
20140086062 | DETERMINING HEARABILITY IN A HETEROGENOUS COMMUNICATION NETWORK - A method for determining whether a radio unit in a shared cell configuration is hearable by a user equipment is provided. Transmission of a probe message to the user equipment by the radio unit is caused. The probe message invokes a probe response from the user equipment if the radio unit is hearable by the user equipment on a downlink channel. The radio unit is hearable by the user equipment if the downlink channel performance between the radio unit and the user equipment meets predetermined signal criteria. An uplink channel associated with the user equipment is monitored for the probe response from the user equipment after transmission of the probe message. Hearability data associated with the user equipment is determined based on the monitored uplink channel. The hearability data indicates whether the radio unit is hearable by the user equipment on the downlink channel. | 03-27-2014 |
20150023235 | Flexible Downlink Subframe Structure for Energy-Efficient Transmission - Disclosed herein are methods for using new energy-saving subframe structures, as well as corresponding apparatus that are configured to exploit these energy-saving subframes. In these energy-saving subframes, some, but not all, of the individual OFDM symbols in a subframe are inactive, meaning that no signal is transmitted during at least a part of the symbol time of the blanked symbols. An example method according to these techniques may be implemented in a radio transceiver, such as an LTE eNodeB, and comprises transmitting, in a first symbol time of a downlink subframe that comprises a plurality of symbol times, a codeword indicating that the downlink subframe includes at least one inactive symbol time. The method further includes transmitting data during at least one but fewer than all of the remaining ones of the plurality of symbol times in the downlink subframe. Corresponding techniques for receiving the energy-saving subframes are also disclosed. | 01-22-2015 |
20150092548 | SHARED CELL RECEIVER FOR UPLINK CAPACITY IMPROVEMENT IN WIRELESS COMMUNICATION NETWORKS - A wireless communication method and system are provided in which an uplink data stream that has uplink data associated with a user device is received. Channel performance data based at least in part on a portion of the uplink data stream is determined. A determination is made whether the channel performance data meets a predetermined performance level. The portion of the uplink data stream is discarded when the channel performance data does not meet the predetermined performance level. The portion of the uplink data stream is tagged for additional processing when the channel performance data meets the predetermined performance level. | 04-02-2015 |
20150163022 | Method and Network Node for Allocating Resources of an Uplink Subframe - A method of allocating resources of a first uplink subframe being part of a radio frame is presented, each resource being a combination of a frequency range, a time slot, and a code. The method is performed in a network node and comprises: determining a first set of resources allocated for Hybrid Automatic Repeat Request (HARQ) feedback in the first uplink subframe; determining a second set of resources allocated for HARQ feedback in a second uplink subframe being part of the radio frame; identifying free resources in the first uplink subframe by identifying resources of the second set which have no correspondence in the first set; and allocating, when a free resource is found, at least part of the free resources to a use other than HARQ feedback. A corresponding network node is also presented. | 06-11-2015 |
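The core identification step above is a set difference over resource positions. A minimal sketch, assuming each resource is modeled as a `(frequency_range, time_slot, code)` tuple (the tuple model is an assumption for illustration):

```python
def free_resources(first_harq, second_harq):
    """Resources reserved for HARQ feedback in the second subframe that
    have no counterpart in the first subframe's HARQ set; per the method
    above, those positions are free in the first subframe for other uses."""
    return set(second_harq) - set(first_harq)
```

For example, if the second subframe reserves one extra code that the first does not, that single resource comes back as free.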
20150223084 | AUTONOMOUS DETERMINATION OF OVERLAPPING COVERAGE IN HETEROGENEOUS NETWORKS - Systems and methods are disclosed for autonomously determining overlapping coverage in a cellular communications network. In one embodiment, the cellular communications network is a heterogeneous cellular communications network. In one embodiment, a network node of a cellular communication system obtains information (e.g., pilot reports) indicative of a perceived coverage of one or more covering cells at wireless devices within a measuring cell over a measurement interval. The network node determines overlapping coverage of the measuring cell and the one or more covering cells based on the information indicative of the perceived coverage of the one or more covering cells at the wireless devices. | 08-06-2015 |
20090144235 | METHOD FOR AUTOMATED DESIGN OF RANGE PARTITIONED TABLES FOR RELATIONAL DATABASES - A workload specification, detailing specific queries and a frequency of execution of each of the queries, and a set of partitions, are obtained for the database, as inputs. A number of candidate tables are identified for the database, the tables having a plurality of attributes. A chosen attribute is allocated for each of the tables, to obtain a set of tables and a set of appropriate partitions for each of the tables. | 06-04-2009 |
20090144303 | SYSTEM AND COMPUTER PROGRAM PRODUCT FOR AUTOMATED DESIGN OF RANGE PARTITIONED TABLES FOR RELATIONAL DATABASES - A workload specification, detailing specific queries and a frequency of execution of each of the queries, and a set of partitions, are obtained for the database, as inputs. A number of candidate tables are identified for the database, the tables having a plurality of attributes. A chosen attribute is allocated for each of the tables, to obtain a set of tables and a set of appropriate partitions for each of the tables. | 06-04-2009 |
20140032851 | RANDOMIZED PAGE WEIGHTS FOR OPTIMIZING BUFFER POOL PAGE REUSE - In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool is determined. A random number between 0 and 1 is generated for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool. | 01-30-2014 |
20140032852 | RANDOMIZED PAGE WEIGHTS FOR OPTIMIZING BUFFER POOL PAGE REUSE - In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool is determined. A random number between 0 and 1 is generated for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool. | 01-30-2014 |
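The coin-flip policy in the two applications above is simple enough to sketch directly. Function and parameter names here are invented for illustration:

```python
import random

def mark_pages(pages, persist_fraction, rng=random):
    """Randomized eviction sketch: each cached page draws a uniform
    value in [0, 1); pages drawing below the persist fraction stay in
    the buffer pool, the rest become eviction candidates."""
    persisted, candidates = [], []
    for page in pages:
        if rng.random() < persist_fraction:
            persisted.append(page)
        else:
            candidates.append(page)
    return persisted, candidates
```

In expectation, a fraction `persist_fraction` of the scanned pages survive, without tracking any per-page access history.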
20140074818 | MULTIPLICATION-BASED METHOD FOR STITCHING RESULTS OF PREDICATE EVALUATION IN COLUMN STORES - A system joins predicate evaluated column bitmaps having varying lengths. The system includes a column unifier for querying column values with a predicate and generating an indicator bit for each of the column values that is then joined with the respective column value. The system also includes a bitmap generator for creating a column-major linear bitmap from the column values and indicator bits. The column unifier also determines an offset between adjacent indicator bits. The system also includes a converter for multiplying the column-major linear bitmap with a multiplier to shift the indicator bits into consecutive positions in the linear bitmap. | 03-13-2014 |
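The abstract above does not give the exact bit layout or multiplier, but the general technique of using one multiplication to shift scattered indicator bits into consecutive positions has a classic instance: gathering the most significant bit of each byte of a 64-bit word. The layout and magic constant below illustrate that instance, not necessarily the patent's encoding:

```python
def gather_msbs(word):
    """Gather the MSB of each byte of a 64-bit word into 8 consecutive
    low-order bits with a single multiplication.

    The indicator bits sit at positions 7, 15, ..., 63 (one per byte).
    Multiplying by MAGIC shifts byte k's indicator to bit 56 + k; the
    shifted copies land on distinct bit positions, so no carries occur,
    and the final right shift compacts them into bits 0..7.
    """
    MASK = 0x8080808080808080        # keep only the per-byte MSBs
    MAGIC = 0x0002040810204081       # bits at 0, 7, 14, ..., 49
    return (((word & MASK) * MAGIC) & 0xFFFFFFFFFFFFFFFF) >> 56
```

For example, a word whose bytes 0 and 2 have their MSB set yields the bit pattern `0b101`. The `& 0xFFFF...` emulates 64-bit wraparound, which the trick relies on in hardware.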
20140214795 | DYNAMICALLY DETERMINING JOIN ORDER - A weight is determined for each of a plurality of join predicates for a join between one or more first database objects and one or more second database objects based on a join selectivity for each of the plurality of join predicates. The plurality of join predicates are sorted based on the determined weights. The join operation is performed joining the one or more first database objects with the one or more second database objects in accordance with an order of the sorted plurality of join predicates. | 07-31-2014 |
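The weighting-and-sorting step above reduces, in the simplest reading, to ordering predicates by estimated selectivity so the most selective predicate filters first. A hedged sketch (the pair representation is an assumption):

```python
def order_join_predicates(predicates):
    """Sort join predicates by weight before evaluating the join.

    predicates: list of (name, selectivity) pairs, where selectivity in
    (0, 1] estimates the fraction of row pairs a predicate retains.
    Ranking the lowest-selectivity predicate first means later predicates
    run against fewer surviving rows.
    """
    return sorted(predicates, key=lambda p: p[1])
```

A real optimizer would derive the selectivities from catalog statistics; here they are given directly.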
20140280372 | PARTITIONING DATA FOR PARALLEL PROCESSING - According to one embodiment of the present invention, a system partitions data for parallel processing and comprises one or more computer systems with at least one processor. The system partitions data of a data object into a plurality of data partitions within a data structure based on a plurality of keys. The data structure includes a plurality of dimensions and each key is associated with a corresponding different dimension of the data structure. Portions of the data structure representing different data partitions are assigned to the computer systems for parallel processing, and the assigned data structure portions are processed in parallel to perform an operation. Embodiments of the present invention further include a method and computer program product for partitioning data for parallel processing in substantially the same manner described above. | 09-18-2014 |
20140372411 | ON-THE-FLY ENCODING METHOD FOR EFFICIENT GROUPING AND AGGREGATION - Embodiments include a method and computer program product for encoding data while it is being processed as part of a query. The method includes receiving a query request and determining a set of values associated with data to be encoded for completing the query request. The method also includes encoding those values such that any subsequent processing operations can be performed on the encoded values to complete the requested query. After performing the subsequent processing operations to complete the requested query, each value is decoded back to its original value. | 12-18-2014 |
20140372470 | ON-THE-FLY ENCODING METHOD FOR EFFICIENT GROUPING AND AGGREGATION - Embodiments include a system for encoding data while it is being processed. The system includes a processor, an encoder and a decoder. The processor is configured to process a query request by determining a set of values. The encoder is configured for encoding the set of values, such that a subsequent processing operation can be performed on the encoded values. The processor performs the subsequent processing operations. Once the processor completes the requested query, the decoder decodes each value back to its value prior to being encoded. | 12-18-2014 |
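A common shape for the encode-process-decode pipeline the two abstracts above describe is on-the-fly dictionary encoding of group keys: assign each distinct key a dense integer code, aggregate on the codes, then decode at the end. This sketch assumes a grouped-sum query; the function is hypothetical:

```python
def grouped_sum(keys, values):
    """Encode group keys to dense integer codes on the fly, aggregate on
    the codes, then decode the codes back to the original key values."""
    encode = {}   # original key -> dense code, built as keys arrive
    decode = []   # code -> original key
    sums = []     # per-code running aggregate
    for key, val in zip(keys, values):
        code = encode.get(key)
        if code is None:
            code = len(decode)
            encode[key] = code
            decode.append(key)
            sums.append(0)
        sums[code] += val   # all intermediate work happens on the code
    # Decode each code back to its original value for the final result.
    return {decode[c]: s for c, s in enumerate(sums)}
```

The win is that the hot aggregation loop touches small integers rather than wide or variable-length key values.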
20140379985 | MULTI-LEVEL AGGREGATION TECHNIQUES FOR MEMORY HIERARCHIES - Embodiments include a method, system, and computer program product for providing an aggregation hierarchy related to memory hierarchies. In one embodiment, the method includes determining the capacity of a first-level memory of a memory hierarchy for processing data relating to completion of an aggregation process, and generating a per-thread local look-up table in the first-level memory upon determining that capacity. Upon the first-level memory reaching capacity, a plurality of per-thread partitions is generated in a second-level memory of the memory hierarchy to store the remaining data needed to complete the aggregation process, such that each of the per-thread partitions includes an identical amount of data on each thread. The method also includes storing the per-thread partitions in the second-level memory and providing a single global look-up table for each of the identical data portions. | 12-25-2014 |
20150032780 | PARTITIONING DATA FOR PARALLEL PROCESSING - According to one embodiment of the present invention, a system partitions data for parallel processing and comprises one or more computer systems with at least one processor. The system partitions data of a data object into a plurality of data partitions within a data structure based on a plurality of keys. The data structure includes a plurality of dimensions and each key is associated with a corresponding different dimension of the data structure. Portions of the data structure representing different data partitions are assigned to the computer systems for parallel processing, and the assigned data structure portions are processed in parallel to perform an operation. Embodiments of the present invention further include a method and computer program product for partitioning data for parallel processing in substantially the same manner described above. | 01-29-2015 |
20150213071 | BUFFERING INSERTS INTO A COLUMN STORE DATABASE - Embodiments relate to database systems. An aspect includes deferring row insert operations until occurrence of a triggering event. One method includes receiving a row insert for a tuple into a column group store table, where the tuple includes one or more tuplets and each of the tuplets corresponds to a column group in the column group store table. The method also includes copying at least one of the tuplets into an insert buffer that is specific to one of the column groups in the column group store table. The method also includes deferring the row insert into the column group store table until an occurrence of one or more triggering events. The method also includes flushing the row insert into storage associated with the column group store table, in response to the occurrence of the one or more triggering events. | 07-30-2015 |
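The deferred-insert flow above can be sketched with per-column-group buffers and a size-based triggering event. The class, the dict-of-lists storage, and the capacity trigger are all illustrative assumptions; the patent also contemplates other triggering events:

```python
class ColumnGroupBuffer:
    """Buffer tuplets per column group and defer the physical insert
    until a triggering event (here: a buffer reaching a size cap)."""

    def __init__(self, column_groups, capacity, storage):
        self.buffers = {cg: [] for cg in column_groups}
        self.capacity = capacity
        self.storage = storage   # column group -> list of flushed tuplets

    def insert_row(self, tuplets):
        # Split the incoming tuple into per-column-group tuplets and
        # copy each into the insert buffer for its column group.
        for cg, tuplet in tuplets.items():
            self.buffers[cg].append(tuplet)
            if len(self.buffers[cg]) >= self.capacity:   # triggering event
                self.flush(cg)

    def flush(self, cg):
        # Flush the deferred inserts into the column group's storage.
        self.storage.setdefault(cg, []).extend(self.buffers[cg])
        self.buffers[cg].clear()
```

Until the trigger fires, rows live only in the in-memory buffer; the backing store sees one batched write instead of many row-at-a-time writes.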
20150213072 | PARALLEL LOAD IN A COLUMN-STORE DATABASE - In one embodiment, a method includes adding, by a computer processor, two or more compressed columns to one or more pages of a database. The adding is performed in parallel by a plurality of page-formatter threads. Each page-formatter thread adds data to the database from no more than a single compressed column. | 07-30-2015 |
20160034527 | ACCURATE PARTITION SIZING FOR MEMORY EFFICIENT REDUCTION OPERATIONS - Embodiments of the invention relate to processing data records and to a multi-phase partitioned data reduction. The first phase relates to processing data records and partitioning the records into a first partition of records having a common characteristic and a second partition of records that are not members of the first partition. The data records in each partition are subject to intra-partition data reduction responsive to a resource constraint. The data records in each partition are also subject to an inter-partition data reduction, also referred to as an aggregation, to reduce the footprint for storing the records. Partitions and/or individual records are logically aggregated, and a data reduction operation for the logical aggregation of records takes place in response to available resources. | 02-04-2016 |
20130325900 | INTRA-BLOCK PARTITIONING FOR DATABASE MANAGEMENT - A method for storing database information includes storing a table having data values in a column major order. The data values are stored in a list of blocks. The method also includes assigning a tuple sequence number (TSN) to each data value in each column of the table according to a sequence order in the table. The data values that correspond to each other across a plurality of columns of the table have equivalent TSNs. The method also includes assigning each data value to a partition based on a representation of the data value. The method also includes assigning a tuple map value to each data value. The tuple map value identifies the partition in which each data value is located. | 12-05-2013 |
20130325901 | INTRA-BLOCK PARTITIONING FOR DATABASE MANAGEMENT - A method for storing database information, including: storing a table having data values in a column major order, wherein the data values are stored in a list of blocks; assigning a tuple sequence number (TSN) to each data value in each column of the table according to a sequence order in the table, wherein data values that correspond to each other across a plurality of columns of the table have equivalent TSNs; assigning each data value to a partition based on a representation of the data value; and assigning a tuple map value to each data value, wherein the tuple map value identifies the partition in which each data value is located. | 12-05-2013 |
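The TSN/tuple-map bookkeeping in the two applications above can be sketched for a single integer column. The partitioning rule used here (byte width of the value's representation) is an illustrative stand-in for whatever representation-based rule an implementation would choose:

```python
def partition_column(values):
    """Assign each value a tuple sequence number (its position in the
    column) and a partition chosen from the value's representation;
    here, as an assumed rule, the number of bytes needed to store it.
    The tuple map records which partition holds each TSN."""
    partitions = {}   # partition id -> list of (tsn, value)
    tuple_map = []    # tsn -> partition id
    for tsn, value in enumerate(values):
        part = (value.bit_length() + 7) // 8 or 1   # byte width
        partitions.setdefault(part, []).append((tsn, value))
        tuple_map.append(part)
    return partitions, tuple_map
```

Values that correspond across columns would share the same TSN, so the tuple map lets a scan reassemble a row even though its values landed in different partitions.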
20140372389 | Data Encoding and Processing Columnar Data - Aspects of the invention are provided for accessing a plurality of data elements. A page of column data is stored in a format that includes compressed and/or non-compressed elements, with the format including a plurality of arrays and a vector. Each of the arrays stores elements with common characteristics, with the vector functioning as a mapping to the stored data elements. The vector is leveraged to identify an array and determine an offset to support access to one or more of the data elements. | 12-18-2014 |
20160070730 | Data Encoding and Processing Columnar Data - The embodiments described herein relate to accessing a plurality of data elements. A page of column data is compressed and stored in a format that includes a collection of data elements. A tuple map is stored, and the collection of data elements is indexed via the tuple map. A query is processed based on the compressed page by identifying a set of tuple identifiers mapping to stored data in support of the query. Each tuple identifier corresponds to a location of a respective tuple of the compressed page. | 03-10-2016 |
20080228831 | METHOD, SYSTEM AND PROGRAM FOR PRIORITIZING MAINTENANCE OF DATABASE TABLES - There is disclosed a data processing system implemented method, a data processing system, and an article of manufacture for directing a data processing system to maintain a database table associated with an initial maintenance scheduling interval. The data processing system implemented method includes selecting a randomizing factor, and selecting a new maintenance scheduling interval for the database table based on the initial maintenance scheduling interval and the selected randomizing factor. | 09-18-2008 |
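The interval-randomization idea above amounts to jittering a base interval so that tables sharing the same schedule do not all come due at once. The spread parameter and uniform jitter below are assumptions for illustration:

```python
import random

def next_interval(base_interval, spread=0.25, rng=random):
    """Pick the next maintenance scheduling interval from the initial
    interval and a randomizing factor, spreading maintenance of tables
    that share a base interval across time."""
    factor = 1.0 + rng.uniform(-spread, spread)
    return base_interval * factor
```

With the default spread, a 100-hour base interval yields a next interval somewhere between 75 and 125 hours.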
20080263563 | METHOD AND APPARATUS FOR ONLINE SAMPLE INTERVAL DETERMINATION - In one embodiment, functional system elements are added to an autonomic manager to enable automatic online sample interval selection. In another embodiment, a method for determining the sample interval by continually characterizing the system workload behavior includes monitoring the system data and analyzing the degree to which the workload is stationary. This makes the online optimization method less sensitive to system noise and capable of being adapted to handle different workloads. The effectiveness of the autonomic optimizer is thereby improved, making it easier to manage a wide range of systems. | 10-23-2008 |
20090006049 | SYSTEM FOR ESTIMATING STORAGE REQUIREMENTS FOR A MULTI-DIMENSIONAL CLUSTERING DATA CONFIGURATION - A storage requirements estimating system estimates the storage required for a proposed multidimensional clustering data configuration by modeling wasted space. The amount of wasted space is modeled by calculating the cardinality of the unique values of the clustering key for the proposed configuration. Cardinality may be determined by estimation techniques. Specific values for wasted space and total space may be determined in response to the determined cardinality. Comparison of estimates for different proposed clustering configurations facilitates a selection among proposed multidimensional clustering data configurations. | 01-01-2009 |
20090055609 | SYSTEMS FOR DYNAMICALLY RESIZING MEMORY POOLS - There are disclosed systems and computer program products for dynamically resizing memory pools used by database management systems. In one aspect, if a decrease in allocation to the memory pool is required, at least one page grouping that may be freed from the memory pool is identified as a candidate based on its position in a list of page groupings. If the page grouping contains any used memory blocks, the used memory blocks may be copied from a candidate page grouping to another page grouping in the list in order to free the candidate page grouping. Once the candidate page grouping is free of used memory blocks, the candidate page grouping may be freed from the memory pool. As an example, this system or computer program product may be used for dynamically resizing locklists or lock memory. | 02-26-2009 |
20090089306 | Method, System and Article of Manufacture for Improving Execution Efficiency of a Database Workload - Disclosed is a data processing system implemented method, a data processing system and an article of manufacture for improving execution efficiency of a database workload to be executed against a database. The database includes database tables, and the database workload identifies at least one of the database tables. The data processing system includes an identification module for identifying candidate database tables in the database workload, the identified candidate database tables being eligible for organization under a clustering schema; a selection module for selecting the identified candidate tables according to whether execution of the database workload is improved if the selected identified candidate table is organized according to the clustering schema; and an organization module for organizing the clustering schema of the selected identified candidate tables prior to the database workload being executed against the database. | 04-02-2009 |
20130013785 | METHOD AND APPARATUS FOR ONLINE SAMPLE INTERVAL DETERMINATION - In one embodiment, functional system elements are added to an autonomic manager to enable automatic online sample interval selection. In another embodiment, a method for determining the sample interval by continually characterizing the system workload behavior includes monitoring the system data and analyzing the degree to which the workload is stationary. This makes the online optimization method less sensitive to system noise and capable of being adapted to handle different workloads. The effectiveness of the autonomic optimizer is thereby improved, making it easier to manage a wide range of systems. | 01-10-2013 |