Class / Patent application number | Description | Number of patent applications / Date published |
707700000 | Range checking | 27 |
20100223241 | Method, System and Computer Program Product for Certifying a Timestamp of a Data Processing System - The disclosed embodiments present a system, method, and computer program product for certifying a timestamp generated by a data processing system. In some embodiments, the method includes receiving a request to certify a timestamp generated by a trusted data processing system, analyzing historical data related to a system time of the data processing system, and certifying the timestamp in response to determining that the historical data indicates a trustworthy system time of the data processing system when the timestamp was generated. | 09-02-2010 |
20100228705 | Conditional commit for data in a database - A database comprises a database interface and a database updater. The database interface enables a reading of a first set of information from the database. The database updater updates a second set of information in the database based at least in part on one or more conditions. The one or more conditions limit changes allowable to the first set of information from the database that occurred after the reading of the first set of information from the database. | 09-09-2010 |
20100228706 | Dependent commit queue for a database - A database comprises a database interface and a database updater. The database interface receives a first set of information and a second set of information to be updated in the database. The database updater updates a second set of information in the database based at least in part on a condition that a first set of information in the database has been previously updated. | 09-09-2010 |
20110016101 | Stopping Functions For Grouping And Differentiating Files Based On Content - Methods and apparatus teach a digital spectrum of a data file. The digital spectrum is used to map a file's position in multi-dimensional space. This position relative to another file's position reveals closest neighbors. Certain of the closest neighbors are grouped together, while others are differentiated. Grouping ceases upon application of a stopping function so that rightly sized, optimum numbers of file groups are obtained. Embodiments of stopping functions relate to curve types in a mapping of numbers of groups per sequential rounds of grouping, recognizing whether groups have overlapping file members or not, and/or determining whether groups meet predetermined numbers of members, to name a few. Properly grouped files can then be further acted upon. | 01-20-2011 |
20110218980 | DATA VALIDATION IN DOCKETING SYSTEMS - A data validation system and method for a fully or partially automated docket management solution. The system may require single-user double entry and/or double user data re-entry for validation and confirmation of data content. Un-validated/un-confirmed data may be quarantined or otherwise hidden from part or all of the rest of the docket management system. | 09-08-2011 |
20110295821 | Servicing Daemon for Live Debugging of Storage Systems - A servicing daemon is described herein for providing servicing of a running computer system (such as a filer). The servicing daemon resides and executes on the operating system of the filer and communicates across a network with a debugger that resides and executes on a remote administering computer. A debugging session is performed that complies with a protocol relating to the remote accessing of files. The debugging session provides live servicing of an application executing on the filer without requiring an actual corefile (having copied filer memory data) to be created. Rather, the servicing daemon creates a simulated corefile header that is sent to the debugger, receives requests from the debugger, and maps addresses specified in the requests to filer memory addresses. The servicing daemon then reads and retrieves data directly from filer memory at the determined filer memory addresses and sends the data to the debugger for analysis. | 12-01-2011 |
20120036114 | METHOD AND APPARATUS USING A HIERARCHICAL SEARCHING SCHEME AMONG VIRTUAL PRIVATE COMMUNITIES - Provided is a member or content search method in a virtual private community (VPC) network including at least one of a first VPC including communication devices owned by a predetermined user, a second VPC that may be positioned in an upper layer of the first VPC, and a third VPC that may be positioned in an upper layer of the second VPC, the method including receiving, by one of the communication devices, a search request comprising one of VPC identifiers of a user, verifying a VPC corresponding to the VPC identifiers that may be included in the search request, in response to the search request, and searching for members included in the verified VPC, a VPC positioned in a lower layer of the verified VPC, or contents owned by the members included in the verified VPC. | 02-09-2012 |
20120173498 | Verifying Correctness of a Database System - The invention provides a method for verifying correctness of a database system, comprising: receiving an SQL instruction; extending access paths of the received SQL instruction; executing the SQL instruction by using the extended access paths; and verifying correctness of the database system according to the result of executing the SQL instruction. With the method and system of the invention, correctness of a database system may be verified by automatically extending the access paths of an SQL statement, and may be verified scientifically, effectively, and purposefully based on the ratio of errors or defects present in the database itself under various data manipulation approaches (different values of access path elements). | 07-05-2012 |
20120191678 | Providing Reconstructed Data Based On Stored Aggregate Data in Response to Queries for Unavailable Data - In an embodiment, a method comprises dividing collected data into data clusters based on proximity of the data and adjusting the clusters based on density of data in individual clusters. Based on first data points in a first cluster, a first average point in the first cluster is determined. Based on second data points in a second cluster, a second average point in the second cluster is determined. Aggregate data, comprising the first average point and the second average point, are stored in storage. Upon receiving a request to provide data for a particular coordinate, the reconstructed data point is determined by interpolating between the first average point and the second average point at the particular coordinate. Accordingly, aggregated data may be stored and when a request specifies data that was not actually stored, a reconstructed data point with an approximated data value may be provided as a substitute. | 07-26-2012 |
20120254137 | SYSTEMS AND METHODS TO FACILITATE MULTI-THREADED DATA RETRIEVAL - According to some embodiments, a data source is accessed from which data will be retrieved via a plurality of processing threads. The data source may have, for example, a plurality of records with each record being associated with a plurality of identifiers. Each of the plurality of identifiers may be dynamically evaluated as a potential range identifier, and the evaluation may be based at least in part on a number of distinct values present within each identifier. One of the potential range identifiers may be selected as a selected range identifier, and the plurality of records may be divided into ranges defined using the selected range identifier. | 10-04-2012 |
20120290546 | IDENTIFYING MODIFIED CHUNKS IN A DATA SET FOR STORAGE - Provided are a computer program product, system, and method for identifying modified chunks in a data set for storage. Modifications are received to at least one of the chunks in the data set. A determination is made of at least one range of at least one of the chunks including data affected by the modifications. A determination is made as to whether at least one chunk outside of the at least one range has changed. For each determined at least one chunk outside of the at least one range that has changed, a determination is made of at least one new chunk and a new digest of the at least one new chunk, and information is added on the at least one new chunk, including information to locate the new chunk in the data set. | 11-15-2012 |
20120310908 | METHOD OF PARSING OPTIONAL BLOCK DATA - A computer program product is provided and includes a tangible storage medium readable by a processing circuit and on which instructions are stored for execution by the processing circuit for initially verifying a presence of parameters passed to a parameter database and that a selected group of the parameters are greater than or equal to zero, parsing optional block data to validate the optional block data, determine a length thereof and a number of optional blocks contained therein, and proceeding with one of a secondary info-parsing and a secondary data-parsing operation with respect to the optional block data in accordance with content of the parameters passed to the parameter database. | 12-06-2012 |
20130091111 | Controlling Configurable Variable Data Reduction - Example apparatus, methods, and computers control configurable, variable data reduction. One example method includes identifying data reduction controlling attributes in an object to be data reduced by a configurable variable data reducer. The attributes provide information upon which decisions concerning whether and/or how to data reduce the object can be based. The example method also includes controlling a configurable variable data reducer to selectively data reduce the object based, at least in part, on the data reduction controlling attributes. The control exercised can determine whether, where, when, and/or how data reduction will proceed. | 04-11-2013 |
20130103658 | TIME SERIES DATA MAPPING INTO A KEY-VALUE DATABASE - A method for storing time series data in a key-value database includes receiving time series data relating to the occurrence of an event. An addressing scheme that defines attributes for inclusion in keys for the event is analyzed. The attributes include time granularity attributes of different sizes. The method generates a key corresponding to the time series data based on the analysis of the addressing scheme, including attributes specified in the addressing scheme that are related to the event, where one of the attributes represents one of the time granularity attributes. The method further issues a command to the key-value database to store a record of the occurrence of the event as a value in the key-value database, where stored values in the key-value database corresponding to keys may be used to satisfy queries relating to the event over a range of time. | 04-25-2013 |
20130325825 | Systems And Methods For Quantile Estimation In A Distributed Data System - In accordance with the teachings described herein, systems and methods are provided for estimating quantiles for data stored in a distributed system. In one embodiment, an instruction is received to estimate a specified quantile for a variate in a set of data stored at a plurality of nodes in the distributed system. A plurality of data bins for the variate are defined that are each associated with a different range of data values in the set of data. Lower and upper quantile bounds for each of the plurality of data bins are determined based on the total number of data values that fall within each of the plurality of data bins. The specified quantile is estimated based on an identified one of the plurality of data bins that includes the specified quantile based on the lower and upper quantile bounds. | 12-05-2013 |
20130332433 | COMPUTER PRODUCT, GENERATING APPARATUS, AND GENERATING METHOD - A computer-readable recording medium stores a program causing a computer to execute tabulating a number of character data types for each compression code length specified by an occurrence probability corresponding to an appearance rate of each character data in a file; determining an upper limit_N of compression code length assigned to the character data, among lengths from a minimum to a maximum compression code length and based on the total number of character data types; correcting the number of character data types for the upper limit_N, to the sum of the numbers of character data types for the compression code lengths at least equal to the upper limit_N; and constructing a 2 | 12-12-2013 |
20140040217 | Checking Compatibility of Extended and Core SAM Schemas Based on Complex Goals - Methods, systems, and computer-readable storage media for evaluating a validity of an extended status and action management (SAM) schema. In some implementations, actions include receiving the extended SAM schema, the extended SAM schema being stored as a computer-readable document in memory and being an extension of a core SAM schema, providing one or more goals, each goal representing an intention of the core SAM schema, the one or more goals being provided in a computer-readable document stored in memory and comprising one or more primary goals that each express an intention of a process underlying the core SAM schema, and processing the one or more goals using a computer-executable model checking tool for evaluating the validity of the extended SAM schema. | 02-06-2014 |
20150012509 | DATA QUALITY MONITORS - Systems and methods are presented for data quality monitoring. Data quality monitors may be created and configured to identify objects with specified data quality issues and/or property values. Objects identified by a data quality monitor can be presented to users for confirmation and resolution. Properties used by the data quality monitor to match objects may also be displayed to users. | 01-08-2015 |
20150095299 | MERGING METADATA FOR DATABASE STORAGE REGIONS BASED ON OVERLAPPING RANGE VALUES - Metadata for a plurality of database storage regions within memory are merged, where the metadata for each storage region comprises an interval including first and second interval values indicating a value range for values within that storage region. The first and second interval values are examined to identify overlapping storage regions and produce a sum of overlapped storage regions. The sum of overlapped storage regions is compared to a threshold and the metadata of the overlapped storage regions are merged based on the comparison. | 04-02-2015 |
20150302046 | HANDLING AN INCREASE IN TRANSACTIONAL DATA WITHOUT REQUIRING RELOCATION OF PREEXISTING DATA BETWEEN SHARDS - A method, system and computer program product for handling an increase in transactional data load without requiring the relocation of preexisting data. A range of attribute values and identifications of associated shards are stored in a data structure. In response to adding a new shard, the data structure is updated by associating a range of attribute values to the added shard while maintaining the same range of attribute values being associated with one of the pre-existing shards. As a result, the new data assigned within this range of attribute values will be stored in the newly added shard while the older data assigned within this range of attribute values will continue to be stored in one of the preexisting shards. In this manner, an increase in transactional data load can be handled by adding a new shard without requiring the relocation of preexisting data. | 10-22-2015 |
20150302047 | HANDLING AN INCREASE IN TRANSACTIONAL DATA WITHOUT REQUIRING RELOCATION OF PREEXISTING DATA BETWEEN SHARDS - A method, system and computer program product for handling an increase in transactional data load without requiring the relocation of preexisting data. A range of attribute values and identifications of associated shards are stored in a data structure. In response to adding a new shard, the data structure is updated by associating a range of attribute values to the added shard while maintaining the same range of attribute values being associated with one of the pre-existing shards. As a result, the new data assigned within this range of attribute values will be stored in the newly added shard while the older data assigned within this range of attribute values will continue to be stored in one of the preexisting shards. In this manner, an increase in transactional data load can be handled by adding a new shard without requiring the relocation of preexisting data. | 10-22-2015 |
20150347492 | REPRESENTING AN OUTLIER VALUE IN A NON-NULLABLE COLUMN AS NULL IN METADATA - According to embodiments of the present invention, methods, systems and computer-readable media are presented for accessing data within a database object, wherein an element of the database object is stored among a plurality of different storage regions with each storage region being associated with first and second range values indicating a value range for element values within that storage region. One or more element values within a storage region are identified residing outside a range of values of remaining elements within that storage region. Each identified element value is mapped to a second value. The first and second range values are determined for the storage region in accordance with the range of values of the remaining elements within that storage region. The storage region is scanned in accordance with a comparison of a requested value to at least one of the determined first and second range values of those storage regions. | 12-03-2015 |
20160012094 | FASTER ACCESS FOR COMPRESSED TIME SERIES DATA: THE BLOCK INDEX | 01-14-2016 |
20160070733 | CONDITIONAL VALIDATION RULES - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating conditional validation rules. One of the methods includes rendering a plurality of cells arranged in a two-dimensional grid having a first axis and a second axis, the two-dimensional grid including one or more subsets of the cells, each subset associated with a respective field of an element of the dataset, and multiple subsets of the cells extending in a direction along the second axis of the two-dimensional grid, one or more of the multiple subsets associated with a respective validation rule. The method includes applying one or more validation rules to an element of the dataset based on user input received from at least some of the cells. A condition cell associated with a field includes an input element for receiving input. | 03-10-2016 |
20160179866 | METHOD AND SYSTEM TO SEARCH LOGS THAT CONTAIN A MASSIVE NUMBER OF ENTRIES | 06-23-2016 |
20160253341 | MANAGING A BINARY OBJECT IN A DATABASE SYSTEM | 09-01-2016 |
20220138173 | FASTER ACCESS FOR COMPRESSED TIME SERIES DATA: THE BLOCK INDEX - A system and method for faster access for compressed time series data. A set of blocks are generated based on a table stored in a database of the data platform. The table stores data associated with multiple sources of data provided as consecutive values, each block containing index vectors having a range of the consecutive values. A block index is generated for each block having a field start vector representing a starting position of the block relative to the range of consecutive values, and a starting value vector representing a value of the block at the starting position. The field start vector of the block index is accessed to obtain the starting position of a field corresponding to a first block and to the range of the consecutive values of the first block. The starting value vector is then determined from the block index to determine an end and a length of the field of the first block. | 05-05-2022 |
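The block-index idea in the final entry above can be illustrated briefly. Below is a minimal Python sketch, not the application's actual implementation: it assumes each block carries an entry in a field start vector (its starting position within the run of consecutive values) and a starting value vector (the value at that position), so a position lookup reduces to a binary search over block starts. The class name `BlockIndex` and the field names are illustrative.

```python
import bisect


class BlockIndex:
    """Illustrative block index over a run of consecutive values.

    field_starts[i] is the starting position of block i within the full
    run of consecutive values; start_values[i] is the value stored at
    that starting position.
    """

    def __init__(self, field_starts, start_values):
        self.field_starts = field_starts
        self.start_values = start_values

    def locate(self, position):
        # Binary-search for the last block whose start is <= position;
        # that block's range covers the requested position.
        i = bisect.bisect_right(self.field_starts, position) - 1
        return i, self.start_values[i]


# Three blocks starting at positions 0, 4, and 9 in the run.
idx = BlockIndex(field_starts=[0, 4, 9], start_values=[100, 140, 190])
print(idx.locate(5))  # → (1, 140): position 5 falls in block 1
```

In this sketch the length of a block's field follows from the next entry in the field start vector (or the end of the run for the last block), which mirrors how the abstract derives the end and length of a field from the starting-position information.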