Entries |
Document | Title | Date |
20100076938 | PROTOCOL MISMATCH DETECTION SYSTEM, PROTOCOL MISMATCH DETECTION METHOD, AND PROTOCOL MISMATCH DETECTION PROGRAM - A protocol mismatch detection system ( | 03-25-2010 |
20100076939 | INFORMATION PROCESSING SYSTEM, DATA UPDATE METHOD AND DATA UPDATE PROGRAM - An information processing system, a data update method and a data update program are disclosed. In a data base system of master-slave configuration, the update result can be accessed also on slave side with the access request immediately after the particular update. The data base system DBS includes a master DB computer and at least a slave DB computer. The slave DB computer judges from the count on an update counter table whether the update log received from the master DB computer is to be reflected in a duplicate data base or not. Thus, the lost update problem is solved while at the same time realizing a high-speed process. | 03-25-2010 |
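The slave-side decision described in this abstract can be sketched as follows. This is a minimal illustration, not the patent's actual design: the class name, the per-table counter layout, and the rule "apply only if the log entry's counter exceeds the last applied counter" are all assumptions made for the example.

```python
# Hypothetical sketch: a slave replica uses an update-counter table to
# decide whether a master's update log entry should be reflected in its
# duplicate database, avoiding the lost-update problem.

class SlaveReplica:
    def __init__(self):
        self.update_counter = {}   # table name -> last applied update count
        self.duplicate_db = {}     # table name -> {key: value}

    def apply_log(self, table, counter, key, value):
        """Reflect a master update log entry only if it is newer than what
        this slave has already applied; otherwise discard it."""
        last = self.update_counter.get(table, 0)
        if counter <= last:
            return False                      # stale or duplicate entry
        self.duplicate_db.setdefault(table, {})[key] = value
        self.update_counter[table] = counter
        return True

replica = SlaveReplica()
print(replica.apply_log("accounts", 1, "a", 100))  # True: first update
print(replica.apply_log("accounts", 1, "a", 999))  # False: already applied
print(replica.apply_log("accounts", 2, "a", 150))  # True: newer update
print(replica.duplicate_db["accounts"]["a"])       # 150
```

An access request arriving immediately after an update can then check the counter to know whether the update has already been reflected locally.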
20100094812 | Dynamically Defining and Using a Delete Cascade Trigger Firing Attribute - A method, computer program product or computer system for dynamically controlling the firing of a trigger for a DELETE CASECADE referential constraint in a database management system, which includes defining a DELETE CASCADE Trigger Fire Attribute (DCTFA) for each dependent file of the DELETE CASCADE referential constraint in a database, initializing each DCTFA with a value corresponding to enabling trigger firing or disabling trigger firing, and firing the trigger according to the value of the DCTFA of each dependent file during the DELETE CASCADE. | 04-15-2010 |
20100114841 | Referential Integrity, Consistency, and Completeness Loading of Databases - A method is provided for loading data from a source database to a target database that includes at least one table. Prior to loading the data from the source database into the target database, at least one referential integrity constraint and/or at least one consistency requirement regarding the data is automatically identified. A subset of the data that satisfies the at least one referential integrity constraint and/or consistency requirement is then automatically identified. The identified subset of the data is then loaded into the target database as a unit of work. | 05-06-2010 |
20100125557 | ORIGINATION BASED CONFLICT DETECTION IN PEER-TO-PEER REPLICATION - Systems and methods that enable conflict detection in a peer-to-peer replication by embedding origination information in data records. A tracing component can track embedded information in form of peer ID and transaction ID, wherein conflicts can be detected by comparing a pre-version (prior to current version) of data on the source node—with—a current version of the data on the destination node. | 05-20-2010 |
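The origination-based comparison above reduces to checking whether the sender's pre-version of a record still matches the destination's current version. A minimal sketch, assuming the embedded origination information is a (peer ID, transaction ID) pair as the abstract suggests; the function and variable names are illustrative:

```python
# Hypothetical sketch of origination-based conflict detection: a replicated
# change conflicts when the source's pre-version (the version it changed
# on top of) no longer matches the destination's current version.

def detect_conflict(incoming_pre_version, destination_current_version):
    """Compare the origin's pre-version of a record with the destination's
    current version; a mismatch means both peers changed the record."""
    return incoming_pre_version != destination_current_version

# destination's current version of a record: (peer ID, transaction ID)
current = ("peer-A", "txn-17")

# a change made on another peer on top of ("peer-A", "txn-17"): no conflict
print(detect_conflict(("peer-A", "txn-17"), current))  # False

# a change based on an older version: conflict detected
print(detect_conflict(("peer-A", "txn-9"), current))   # True
```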
20100131473 | Method and System for Health Scoring Information Systems, Users, and Updates - A method and system are disclosed for monitoring the status of a system by providing a health score. A health scoring module accesses a configuration management database (CMDB) comprising a plurality of configuration items referencing physical, service and process information. A target value is determined for each configuration item, and the current value of each item is then collected. Comparison operations are then performed between each configuration item's current and target values, and a health subscore is generated. The resulting health subscore is then indexed to its corresponding configuration item. Once indexed, a health score is generated from a predetermined plurality of health subscores. | 05-27-2010 |
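One plausible form of the subscore-then-score computation described above is sketched below. The abstract does not specify the comparison function or the aggregation, so the capped-ratio formula and equal weighting here are assumptions, as are all names:

```python
# Hypothetical health scoring: each configuration item's current value is
# compared with its target to give a subscore, subscores are indexed by
# item, and the overall health score is their mean.

def health_subscore(current, target):
    """Subscore in [0, 100]: full marks at or above target."""
    if target == 0:
        return 100.0
    return min(100.0, 100.0 * current / target)

def health_score(config_items):
    """config_items: iterable of (name, current, target).
    Returns (subscores indexed by configuration item, overall score)."""
    subscores = {name: health_subscore(cur, tgt)
                 for name, cur, tgt in config_items}
    overall = sum(subscores.values()) / len(subscores)
    return subscores, overall

subs, overall = health_score([
    ("cpu_headroom_pct", 30, 40),     # physical item
    ("ticket_closure_rate", 95, 90),  # process item
    ("sla_uptime_pct", 99.5, 99.9),   # service item
])
print(subs["cpu_headroom_pct"])  # 75.0
print(round(overall, 1))
```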
20100138398 | Update management apparatus and update management method - To ensure consistency with a structure of a schema set stored in a database, i.e., an original source of an update request, the following processing is performed: storing reference relation for specifying another database storing a related schema set to be updated in accordance with an update of a structure of a schema set stored in a database, i.e., a request source of the update request; deciding whether a content of an update request is related to a structure change of the schema set; extracting, when the content of the update request is related to the structure change of the schema set, based on the reference relation, the other database storing an associated schema set to be updated to ensure consistency; deciding whether to update the extracted database; and sending an update approval/disapproval decision result to the database, i.e., the original source of the update request. | 06-03-2010 |
20100153345 | Cluster-Based Business Process Management Through Eager Displacement And On-Demand Recovery - Methods and apparatus, including computer program products, are provided for transporting processes within a distributed computing system, such as a cluster. In one aspect, the computer-implemented method may receive an event at a first node. The event may correspond to a process instance for handling the received event. The process instance may be transported from a second node to the first node. The process instance may be transported from persistence when the process instance is inactive and, when the process instance is active, the process instance may be persisted to enable transport to the first node. Related apparatus, systems, methods, and articles are also described. | 06-17-2010 |
20100153346 | DATA INTEGRITY IN A DATABASE ENVIRONMENT THROUGH BACKGROUND SYNCHRONIZATION - Systems, methods and computer program products for maintaining data integrity in a database environment are described. In operation, a synchronization process is initiated in a remote database system for synchronization of remote data from the remote database system with consolidated data of a consolidated database. Metadata for each row of the remote data is utilized to allow transactional access to the remote data while the synchronization process occurs. | 06-17-2010 |
20100161566 | USING RELATIONSHIPS IN CANDIDATE DISCOVERY - Techniques are disclosed for adding entities to a group of entity resolution candidates by selecting entities that have a minimum threshold of similarity to a candidate, allowing a greater number of resolutions in an entity resolution system. To resolve an incoming identity record, an initial group of candidates may be selected from known entities by identifying entities that match a candidate building attribute of the incoming identity record. Additional candidates may be selected by identifying entities with some information that is similar to one of the candidate entities. | 06-24-2010 |
20100174686 | Generating Equivalence Classes and Rules for Associating Content with Document Identifiers - A system of reducing the possibility of crawling duplicate document identifiers partitions a plurality of document identifiers into multiple clusters, each cluster having a cluster name and a set of document parameters. The system generates an equivalence rule for each cluster of document identifiers, the rule specifying which document parameters associated with the cluster are content-relevant. Next, the system groups each cluster of document identifiers into one or more equivalence classes in accordance with its associated equivalence rule, each equivalence class including one or more document identifiers that correspond to a document content and having a representative document identifier identifying the document content. | 07-08-2010 |
20100174687 | SYSTEMS AND METHODS FOR VALIDATING DESIGN META-DATA - A metadata validation process that allows for deferring object model validation until after the objects are created. The process also allows for multi-threaded processing of the validation rules, thus increasing overall performance. Validation is performed by enforcing a series of validation rules on an appropriate subject. Rules are specified according to the subject that they are validating (i.e., attribute level, association level, object level or collection level). The metadata driven validation process implements several validation types on different validation units. Correctness validation rule types ensure that a validation unit satisfies all semantic rules defined for it. Completeness validation rule types ensure that a validation unit contains all the necessary data and is ready for further use. At design time, only correctness type validation is performed. Thus, the present invention advantageously allows for incomplete objects to be created at design time. The developer, however, in this case may opt to perform completeness validation at any time. In general, a developer may opt to perform completeness and/or correctness validation at any time independent of deployment processing. In another aspect, full validation (e.g., completeness and correctness) is automatically performed on the objects during the process of creating a configuration prior to deployment. | 07-08-2010 |
20100205156 | Remote Access Agent for Caching in a SAN File System - A system and method are disclosed for maintaining, in a Storage Area Network (SAN), the consistency of a local copy of a remote file system sub-tree obtained from a remote source. The directory structure of the remote file system sub-tree is mapped to a remote container attached to the SAN, and each remote object of the remote file system sub-tree is represented as a local object component of the remote container. Next, each of the local objects is labeled with attributes associated with the represented remote object, and metadata describing each of the local objects is stored in a metadata server. Also, a consistency policy is associated with each of the local objects in the remote container (wherein the policy defines conditions for checking the freshness of said labeled attributes), and the local object components of the remote container are updated in accordance with the consistency policy. | 08-12-2010 |
20100205157 | Log Data Store and Assembler for Large Objects in Database System - A mechanism works in conjunction with a DB2® Log and an analysis tool, such as BMC's Log Master™, to handle logged data for Large Objects (LOBs) stored in tables of a DB2 database system. A plurality of controls track data logged for the LOBs. The mechanism reads log records from a DB2 Log and uses the controls to determine which of the tracked LOBs is associated with the log records and obtains data from those associated log records. The mechanism builds keys to index the data and stores the keys and the data in a Virtual Storage Access Method store having Key Sequenced Data Sets maintained separate from the log record store for the DB2 Log. When requested by the analysis tool, the data in the store can be reassembled using the keys and map records in the first store that map the logged data for the tracked LOBs. | 08-12-2010 |
20100217752 | DATA INTEGRITY VALIDATION IN STORAGE SYSTEMS - Data validation systems and methods are provided. Data is recorded in N data chunks on one or more storage mediums. A first validation chunk independently associated with said N data chunks comprises first validation information for verifying accuracy of data recorded in said N data chunks. The first validation chunk is associated with a first validation appendix comprising second validation information, wherein the first validation appendix is stored on a first storage medium independent of said one or more storage mediums. | 08-26-2010 |
20100223235 | SYSTEMS AND METHODS FOR PROVIDING NONLINEAR JOURNALING - In one embodiment, systems and methods are provided for nonlinear journaling. In one embodiment, groups of data designated for storage in a data storage unit are journaled into persistent storage. In one embodiment, the journal data is recorded nonlinearly. In one embodiment, a linked data structure records data and data descriptors in persistent storage. | 09-02-2010 |
20100235330 | Electronic linkage of associated data within the electronic medical record - The present invention provides a mechanism to define an association between different data elements from disparate sources of data and databases, and different database elements, and to track that association over time. This mechanism tracks multiple related data elements throughout the continuum of an individual patient's medical record and identifies consistent data relationships across large patient populations. | 09-16-2010 |
20100250500 | USING A HEARTBEAT SIGNAL TO MAINTAIN DATA CONSISTENCY FOR WRITES TO SOURCE STORAGE COPIED TO TARGET STORAGE - Provided are a method, system, and program for using a heartbeat signal to maintain data consistency for writes to source storage copied to target storage. A copy relationship associates a source storage and target storage pair, wherein writes received at the source storage are transferred to the target storage. A determination is made whether a signal has been received from a system within a receive signal interval. A freeze operation is initiated to cease receiving writes at the source storage from an application in response to determining that the signal has not been received within the receive signal interval. A thaw operation is initiated to continue receiving write operations at the source storage from applications after a lapse of a freeze timeout in response to the freeze operation, wherein after the thaw operation, received writes completed at the source storage are not transferred to the target storage. | 09-30-2010 |
20100274771 | Information Processing Apparatus, and Information Processing Method, Program, and Recording Medium - A method of verifying the consistency in a hierarchical database includes: generating a pointer record by acquiring a reference point stored in the hierarchical database and associating a first reference point identification value determined from the storage location of the reference point with pointer information retained at the reference point; generating a segment record by acquiring a segment stored in the hierarchical database and associating verification data with the retention address of the acquired segment, the verification data giving a second reference point identification value computed by a calculation module that calculates, for a segment in the hierarchical database, the reference point identification value used to identify the reference point which points to the segment; and verifying the consistency of a chain formed in the hierarchical database from the reference point to the segment by comparing the segment record with the pointer record. | 10-28-2010 |
20100299315 | DATA ARCHIVING SYSTEM - An encrypted file storage solution consists of a cluster of processing nodes, external data storage, and a software agent (the “File System Watcher”), which is installed on the application servers. Cluster sizes of one node up to many hundreds of nodes are possible. There are also remote “Key Servers” which provide various services to one or more clusters. The preceding describes a preferred embodiment, though in some cases it may be desirable to “collapse” some of the functionality into a smaller number of hardware devices, typically trading off cost versus security and fault-tolerance. | 11-25-2010 |
20110010346 | PROCESSING RELATED DATA FROM INFORMATION SOURCES - Systems and methods for managing data are disclosed. Embodiments of the present invention may allow attribute values associated with data records to be assembled and presented in a unified manner. More particularly, embodiments of the present invention may utilize a set of locally stored identity information associated with a data record to determine a set of logical procedures operable to retrieve values for one or more non-identity attributes from a remote location. Furthermore, other embodiments of the present invention may apply a logical procedure to the values of the attributes corresponding to data records to select one or more values of one or more attributes of the data records. | 01-13-2011 |
20110035364 | SYSTEM AND METHOD OF COORDINATING CONSISTENCY OF KEY TERMS THROUGHOUT A PLURALITY OF DOCUMENTS - Provided is a method and system for coordinating consistency of key terms throughout a plurality of documents. The method includes identifying at least one key term in a first document provided by a first third-party application. The key term has at least a text element and a numerical element. Each additional instance of the key term in the first document is then linked. Each instance of the key term in at least one second document provided by a second third-party application is then also linked. An index is established for each identified key term and all instances of each key term in each document, the index permitting navigation to any specific instance of the key term in the first or second document. Each instance of each key term has selectable visibility for both the text element and the numerical element, such that in different instances both elements are visible, only the text element is visible, or only the numerical element is visible. An associated system is also provided. | 02-10-2011 |
20110055169 | LOGICAL CONFLICT DETECTION - Systems, methods, and other embodiments associated with detecting and avoiding logical conflicts between long duration transactions are described. One example method includes generating conflict keys for long transactions using conflict queries that operate on data being manipulated to return a conflict key to be associated with the transaction. The conflict keys may be used to detect or avoid logical conflicts that occur in long duration transactions running concurrently. | 03-03-2011 |
20110060728 | Operator-specific Quality Management and Quality Improvement - Methods and systems for improving a data processing operation based on operator-specific quality management and/or monitoring. For example, operator-specific frequency of errors, error rates, error patterns and/or root causes may be identified. Operator-specific actions may then be taken based on these. | 03-10-2011 |
20110060729 | METHOD FOR DATA MANAGEMENT IN A COLLABORATIVE SERVICE-ORIENTED WORKSHOP - A method for data management in a collaborative service-oriented workshop for processing objects associated with data representing real data or processes. After accessing at least one datum representing real data or processes stored in a remote device, at least one characteristic piece of information is extracted from the at least one datum according to a predetermined parameter. The at least one characteristic piece of information and a link to the at least one datum are then stored in an object associated with the at least one datum, the object being stored in a centralized storage area. | 03-10-2011 |
20110066601 | INFORMATION LIFECYCLE CROSS-SYSTEM RECONCILIATION - In the described systems and methods for information lifecycle cross-system reconciliation, a number of reconciliation indicators for a certain type of data are defined. A first set of values of the reconciliation indicators are calculated at a first computer system based on data stored in a memory of the first computer system. A second set of values of the reconciliation indicators are calculated at a second computer system based on data transferred from the first computer system. The two sets of values are received at a reconciliation cockpit and stored in a reconciliation data structure. Further, the reconciliation data structure is examined to identify inconsistency between the data stored in the memory of the first computer system and the data transferred to the second computer system. If such an inconsistency is identified, the data transfer is cancelled. If no inconsistency is identified, the data transfer is confirmed. | 03-17-2011 |
20110066602 | MAPPING DATASET ELEMENTS - Mapping one or more elements of an input dataset to one or more elements of an output dataset includes: receiving in an interface one or more mapped relationships between a given output and one or more inputs represented by input variables, at least one of the mapped relationships including a transformational expression executable on a data processing system, the transformational expression defining an output of a mapped relationship based on at least one input variable mapped to an element of an input dataset; receiving in the interface identification of elements of an output dataset mapped to outputs of respective mapped relationships; generating output data from the data processing system according to the transformational expression based on input data from the input dataset associated with the element of the input dataset mapped to the input variable; determining validation information in response to the generated output data based on validation criteria defining one or more characteristics of valid values associated with one or more of the identified elements of the output dataset; and presenting in the interface visual feedback based on the determined validation information. | 03-17-2011 |
20110078123 | MANAGING DATA CONSISTENCY BETWEEN LOOSELY COUPLED COMPONENTS IN A DISTRIBUTED COMPUTING SYSTEM - Embodiments of the present invention provide a method, system and computer program product for maintaining distributed state consistency in a distributed computing application. In an embodiment of the invention, a method for maintaining distributed state consistency in a distributed computing application can include registering a set of components of a distributed computing application, starting a transaction resulting in changes of state in different ones of the components in the registered set and determining in response to a conclusion of the transaction whether or not an inconsistency of state has arisen amongst the different components in the registered set in consequence of the changes of state in the different ones of the components in the registered set. If an inconsistency has arisen, each of the components in the registered set can be directed to rollback to a previously stored state. Otherwise a committal of state can be directed in each of the components in the registered set. | 03-31-2011 |
20110078124 | Information creating apparatus, recording medium in which an information creating program is recorded, information creating method, node apparatus, recording medium in which a node program is recorded, and retrieval method - An information creating apparatus creates leaf page information including one or more records, each with key information to be compared with retrieval key information inputted for retrieval of a record. Based on the key information of the records included in the leaf page information, the apparatus creates judgment information used to judge the possibility that the leaf page information located as a child of the node page information (which lies between the root and leaf page information), or in lower positions, includes the record to be retrieved with the retrieval key information, and creates the node page information including the judgment information. The apparatus creates the root page information including the judgment information included in the node page information located in the positions of the children of the root page information, and stores the root, node and leaf page information in a tree structure. | 03-31-2011 |
20110087639 | METHOD AND APPARATUS FOR AUTOMATICALLY ENSURING CONSISTENCY AMONG MULTIPLE SPECTRUM DATABASES - An apparatus and method of providing accurate and consistent open spectrum results for secondary devices from different geo-location databases is presented. The results, which may be independently derived by each database, are independent of the database queried. The comparison permits some amount of latitude in spatial and temporal consistency between the databases as errors are only indicated if the temporal or spatial discrepancies are pervasive. In addition, large percentages of different locations showing discrepancies when compared also lead to corrective action being taken. Corrective actions that may be taken include forcing problematic databases to update, shunting requests by secondary devices in the problematic locations to acceptable databases or shutting down the problematic databases entirely. | 04-14-2011 |
20110113017 | Supporting Internal Consistency Checking with Consistency Coded Journal File Entries - Example systems, methods, and apparatus economize generating and processing incremental journal files while maintaining internal consistency. One example method determines whether a sequence number associated with a first inode description in a disaster recovery (DR) journal entry is out of sequence with a second corresponding inode description in a DR metadump. The example method controls a DR journal process to provide a file system inconsistency signal and to suspend application of the DR journal entry to the DR metadump. The suspending and signaling can occur upon determining that a first access time independent verification code computed from the first inode description does not match a second access time independent verification code computed from the second inode description. | 05-12-2011 |
20110119238 | IMAGING APPARATUS - An imaging apparatus is capable of recording a first image file and a second image file which differs from the first image file in a recording format and which needs to be managed by a management file. The imaging apparatus includes an imaging unit that converts a subject optical image into an image signal, a signal processor that creates based on the image signal the first image file, or image data including the second image file and a management file associated with the second image file, and a controller that controls the signal processor. The controller checks consistency between the management file and the second image file, and controls the signal processor such that, when the management file is not consistent with the second image file, creation of the image data is inhibited but creation of the first image file is allowed. | 05-19-2011 |
20110145206 | ATOMIC DELETION OF DATABASE DATA CATEGORIES - A device maintains, in a database, a plurality of data items, each data item of the plurality of data items being associated with a respective category. The device associates, in the database, a first counter value with each data item, the first counter value indicating a number of times the respective category has been deleted from the database at a time when the data item was stored in the database. The device associates, in the database or another database, a second counter value with the respective category, the second counter value indicating a current value for a number of times the respective category has been deleted from the database. The device selectively deletes, from the database, one or more data items of the plurality of data items from the database based on the first counter values and the second counter value. | 06-16-2011 |
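The two-counter scheme above can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation: category deletion becomes an O(1) increment of the category's counter, and items stamped with an older counter value are treated as deleted and can be purged lazily. All names are made up for the example:

```python
# Hypothetical two-counter atomic category deletion: each item stores the
# category's deletion count at insertion time (first counter); the category
# itself keeps its current deletion count (second counter). An item whose
# stamp is older than the category's counter is logically deleted.

class CategoryStore:
    def __init__(self):
        self.category_deletions = {}   # category -> times deleted (2nd counter)
        self.items = {}                # item id -> (category, stamp, value)

    def put(self, item_id, category, value):
        stamp = self.category_deletions.get(category, 0)   # 1st counter
        self.items[item_id] = (category, stamp, value)

    def delete_category(self, category):
        # "delete all items of this category" is a single counter increment
        self.category_deletions[category] = \
            self.category_deletions.get(category, 0) + 1

    def get(self, item_id):
        category, stamp, value = self.items[item_id]
        if stamp < self.category_deletions.get(category, 0):
            self.items.pop(item_id)    # lazily purge a logically deleted item
            return None
        return value

store = CategoryStore()
store.put("i1", "logs", "old entry")
store.delete_category("logs")          # i1 is now logically deleted
store.put("i2", "logs", "new entry")   # stamped with the new counter value
print(store.get("i1"))  # None
print(store.get("i2"))  # new entry
```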
20110153575 | SYSTEM AND METHOD FOR RULE-DRIVEN CONSTRAINT-BASED GENERATION OF DOMAIN-SPECIFIC DATA SETS - A data generation system provides for generating domain-specific, context-sensitive data collections as synthetic data for testing the performance of data processing systems. Within the data generation system, a composition module defines a data generation template containing a plurality of fields each capable of holding one or more values according to specifications defined for predetermined data types. An evaluation module sorts the fields in an order of dependency so that fields whose values affect the values in other of the fields are ordered before the fields whose values are affected by values in other fields. A data generation module populates the fields with values and retrieves a subset of the values populating the plurality of fields for generating each of a plurality of data sets, which are written into memory and made accessible for use in testing data processing systems. | 06-23-2011 |
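The dependency ordering described above is essentially a topological sort of the template fields. A minimal sketch under that reading, using the standard-library `graphlib`; the field names and generator functions are invented for illustration:

```python
# Hypothetical dependency-ordered data generation: fields whose values feed
# other fields are populated first, so each field's generator may read
# values already produced for the row.

from graphlib import TopologicalSorter  # Python 3.9+

def generate_row(generators, depends_on):
    """generators: field -> fn(row_so_far) producing a value.
    depends_on: field -> set of fields whose values it reads."""
    order = TopologicalSorter(depends_on).static_order()
    row = {}
    for field in order:
        row[field] = generators[field](row)
    return row

generators = {
    "country": lambda row: "DE",
    "currency": lambda row: {"DE": "EUR", "US": "USD"}[row["country"]],
    "amount": lambda row: 100,
    "display": lambda row: f'{row["amount"]} {row["currency"]}',
}
depends_on = {
    "country": set(),
    "currency": {"country"},   # currency depends on country
    "amount": set(),
    "display": {"amount", "currency"},
}
print(generate_row(generators, depends_on)["display"])  # 100 EUR
```

Populating in this order guarantees that a context-sensitive field like `currency` never reads a value that has not been generated yet.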
20110167049 | FILE SYSTEM MANAGEMENT TECHNIQUES FOR COMPUTING ENVIRONMENTS AND SYSTEMS - Disclosed file system management techniques can augment and/or enhance a file management system (e.g., a conventional file system) provided for organizing data stored in computer readable storage medium (e.g., a HDD). Data and metadata can be written to a file system space of a file system without using a file management system and without incorporating the data into the file system. However, the metadata can include information allowing the written data to be (later) incorporated into the file system and without having to use the file system, thereby allowing write performance to be enhanced. Generally, metadata can provide additional information including data (e.g., integrity data) that cannot be provided or efficiently provided by the file management system to augment a limited or reduced file system. Integrity data can be especially useful for error recovery (e.g., after a system failure). | 07-07-2011 |
20110178994 | CONTACTLESS IC MEMORY ON REMOVEABLE MEDIA - Method, system, and computer program product embodiments for recording data on a contactless integrated circuit (IC) memory associated with a data storage cartridge are provided. In one exemplary embodiment, an index of a plurality of files to be recorded on a storage media of the data storage cartridge is parsed with a table of contents (TOC) profile file to build a table of contents (TOC) specific to an owning application of the plurality of files. The TOC is written to the contactless IC memory. | 07-21-2011 |
20110184919 | SYSTEM AND METHOD FOR PRESERVING ELECTRONICALLY STORED INFORMATION - A system and method are disclosed for collecting electronically stored information (ESI) from Windows-based desktops and laptops that are under the control of remote custodians. The system and method include an external persistent memory storage device and a software application tool that is loaded onto the persistent memory storage device. The external persistent memory storage device is connected to the computer system to be examined, for example, by way of a USB or Ethernet port. Once connected, a Quick Start program, when opened, allows the required processing to be performed methodically. Documentation is provided for completing information regarding the chain of custody of the external persistent memory storage device. The documentation may be imprinted on a security receptacle for receiving the external persistent memory storage device. The security receptacle is configured to protect the persistent memory storage device from electrostatic discharge and to indicate whether the bag or container was tampered with after it was sealed. | 07-28-2011 |
20110184920 | SYSTEM AND METHOD FOR PROVIDING HIGH AVAILABILITY DATA - An embodiment relates to a computer-implemented data processing system and method for storing a data set at a plurality of data centers. The data centers and hosts within the data centers may, for example, be organized according to a multi-tiered ring arrangement. A hashing arrangement may be used to implement the ring arrangement to select the data centers and hosts where the writing and reading of the data sets occurs. Version histories may also be written and read at the hosts and may be used to evaluate causal relationships between the data sets after the reading occurs. | 07-28-2011 |
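The hashing arrangement mentioned above can be illustrated with a single-tier consistent-hash ring (the abstract describes a multi-tiered arrangement over data centers and hosts, which this sketch collapses into one ring for brevity). Host names, the replica count, and the use of MD5 as the ring hash are all assumptions:

```python
import hashlib
from bisect import bisect_right

# Hypothetical consistent-hashing sketch: hosts sit on a hash ring, and a
# data set's key is hashed to pick the N successive distinct hosts where
# writes and reads for that data set occur.

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, hosts, replicas=3):
        self.replicas = replicas
        self.ring = sorted((_hash(h), h) for h in hosts)

    def preference_list(self, key):
        """First `replicas` distinct hosts clockwise from the key's point."""
        points = [p for p, _ in self.ring]
        i = bisect_right(points, _hash(key)) % len(self.ring)
        chosen = []
        while len(chosen) < min(self.replicas, len(self.ring)):
            host = self.ring[i][1]
            if host not in chosen:
                chosen.append(host)
            i = (i + 1) % len(self.ring)
        return chosen

ring = Ring(["dc1-host1", "dc1-host2", "dc2-host1", "dc2-host2"])
print(ring.preference_list("user:1234"))  # three distinct hosts
```

Because the mapping from key to hosts is deterministic, a later read of the same data set lands on the same preference list, where version histories can then be compared for causal relationships.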
20110191304 | SYSTEM AND METHOD FOR EXPORT AND IMPORT OF METADATA LOCATED IN METADATA REGISTRIES - A method for transferring metadata including: separating, using a processor, objects in a metadata registry into system-defined objects and user-defined objects, identifying, using the processor, a consistent set of the user-defined objects to export based on relationships of the user-defined objects with other objects, and exporting, using the processor, the consistent set of user-defined objects. A method for transferring metadata may also include: receiving, using a processor, a consistent set of user-defined objects for import into a metadata registry; and importing, using the processor, the set of user-defined objects into the metadata registry, the importing comprising validating the consistency of the set of user-defined objects. | 08-04-2011 |
20110196847 | CONFLICT MANAGEMENT IN A VERSIONED FILE SYSTEM - Multiple files in a versioned file system are grouped to form a fusion unit on a server. The fusion unit is exposed to a client as a browsable folder having separate files. When the server receives an indication of a change to a file belonging to the fusion unit, the server determines whether the change to the file causes a conflict on the fusion unit. If the change does cause a conflict, the conflict is reported; otherwise the fusion unit is updated to incorporate the change. | 08-11-2011 |
20110225126 | Method, Apparatus and Software for Maintaining Consistency Between a Data Object and References to the Object Within a File - A technique for maintaining consistency between a data object and references to the object in a file. An indication that a source object has changed is received. One or more of the changes made to the source object are identified. A file comprising one or more references related to the source object is analyzed to identify those references that may be inconsistent with the changes made to the source object. | 09-15-2011 |
20110238632 | VALIDATING AGGREGATE DOCUMENTS - Embodiments described herein are directed to validating an aggregate document. An instance signature can be generated for a first instance of a data page retrieved for inclusion in the aggregate document and can be compared to a baseline signature associated with a second instance of the data page. A similarity value can be calculated in response to the comparison. The similarity value indicates a degree of similarity between the first instance and the second instance of the data page. Based on the similarity value it can be determined whether to delete or bypass the data page in the aggregate document. | 09-29-2011 |
20110238633 | ELECTRONIC FILE COMPARATOR - The invention concerns a method of comparing, by a comparator tool, a pair of electronic data files each comprising a plurality of data elements, the method comprising: identifying at least one data element in each of said files; replacing the values of said at least one identified data element in each of said files with the same reference value; comparing the files to detect differences between values of the data elements; and generating an output report indicating said differences. | 09-29-2011 |
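The comparison method in the entry above can be sketched in Python; this is a minimal, non-authoritative sketch that assumes comma-separated records and a hypothetical `volatile_indices` parameter naming the identified fields to mask with the same reference value before diffing:

```python
import difflib

def normalize(lines, volatile_indices, reference="<MASKED>"):
    """Replace the identified (volatile) data elements in each record
    with the same reference value, so they cannot register as diffs."""
    out = []
    for line in lines:
        fields = line.split(",")
        for i in volatile_indices:
            if i < len(fields):
                fields[i] = reference
        out.append(",".join(fields))
    return out

def compare_files(lines_a, lines_b, volatile_indices):
    """Diff the two normalized files and report only genuine differences,
    i.e. changed lines rather than diff headers or context lines."""
    a = normalize(lines_a, volatile_indices)
    b = normalize(lines_b, volatile_indices)
    return [d for d in difflib.unified_diff(a, b, lineterm="")
            if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]
```

With a timestamp field masked, only changes outside that field appear in the report.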
20110252005 | DISTRIBUTED SYSTEM HAVING A SHARED CENTRAL DATABASE - A system for managing electronic information in a distributed system includes a shared central database for which a plurality of servers transmits data for storage. The shared central database is configured to store central schema information used for accessing the one or more data stores of the central database. Local databases each reference at least a portion of the central schema information for accessing the central database. Upon receiving a request for information, a local database directs the request to the central database based on the referenced schema information. The central database processes the request and transmits the data to the local database from which data was requested. | 10-13-2011 |
20110258166 | Removal of Invisible Data Packages in Data Warehouses - In accordance with one embodiment of the disclosed technology, inconsistencies are detected between various records relating to data that has been associated with an identification tag. Data packages associated with the inconsistencies may then be removed. In accordance with another aspect of the disclosed technology, requests relating to data packages associated with inconsistencies in the various stored records are identified and removed. The disclosed technology may be implemented in data warehouses. | 10-20-2011 |
20110258167 | XBRL SERVICE SYSTEM AND METHOD - Relationships between XBRL hypercubes, including implicit relationships, may be automatically determined based on shared dimensions. Once such relationships are understood, “generic” software (software that is not specific to a particular taxonomy) may be built to provide some or all of the following functionalities: determine, enforce, and/or encourage referential integrity; deduce (graph) ordered relationships between hypercubes; make inferences about those relationships; “join” or “split” hypercubes; create isomorphic and/or homomorphic views of a hypercube for user presentation; and/or assemble and order primary items (attached to hypercubes) in a logical (graph) order. | 10-20-2011 |
20110270806 | CHECKING OF A COMMUNICATION SYSTEM FOR AN AIRCRAFT UNDER DEVELOPMENT - The invention relates to a method and a device for checking a communication system ( | 11-03-2011 |
20110270807 | Method In A Database Server - Method, apparatus and computer program for storing, by a database server, a plurality of data entries containing data related to a plurality of applications. The database server receives a request to modify the content of a data entry and checks a validation rule related to the entry before performing the modification. The validation rule contains information usable for determining valid data contents that can be stored in said entry, and information for building the validation rule is received from one or more application servers serving said applications. When a back-end database server system stores data related to a plurality of applications, data validation check mechanisms are simplified, as the information for performing validity checks is received from the application servers in the database system. | 11-03-2011 |
20110282848 | METHOD FOR ESTABLISHING A DATA SEQUENCE FOR GENERATING A TRIP - Method for establishing, by means of a computer, a data sequence for generating a trip, the said method comprising the use of a first set of N data, each data item (POI | 11-17-2011 |
20110295815 | Proactive Detection of Data Inconsistencies in a Storage System Point-in-Time Copy of Data - Embodiments of the invention relate to testing a storage system point-in-time copy of data for consistency. An aspect of the invention includes receiving system and application event information from systems and applications associated with point-in-time copies of data. The system and application event information is associated with each of point-in-time copies of data. At least one point-in-time copy of data is selected for testing. The system and application event information is compared with inconsistency classes to determine tests for testing the point-in-time copy of data. The point-in-time copy of data is tested. | 12-01-2011 |
20110295816 | ALTERATION DETECTING APPARATUS AND ALTERATION DETECTING METHOD - According to one embodiment, an alteration detecting apparatus includes an input unit, a storage unit, an output unit, and an alteration detecting unit. The input unit inputs a file. The storage unit stores the file. The output unit outputs the file. The alteration detecting unit produces first alteration detection data that is uniquely determined, from the file on the basis of an alteration detection data production process in response to an input of the file, stores the file and the first alteration detection data in the storage unit, produces second alteration detection data that is uniquely determined, from the file stored in the storage unit on the basis of the alteration detection data production process in response to an output request for the file, compares the first alteration detection data with the second alteration detection data and detects alteration of the file on the basis of the compared result. | 12-01-2011 |
20110307454 | System And Method For Independent Verification And Validation - This invention addresses the high cost of the IV&V process by simplifying the complex logistics of project management. This invention is an intelligent project management system that can serve multiple projects. The system comprises at least one computer system with at least one operating system. The system is remotely accessible by a user via a computing device. The system contains at least one web server, at least one email server, and at least one search engine facility. Several components execute online or offline in the system. The system can generate various views of a particular project. The system continuously monitors and periodically reports the state of a project. It automatically validates the mapping between multiple project documents. The system automatically provides authentication and authorization of the user. It provides online and offline intelligent assistance to the user on matters related to a project. The system assures the security of its computer(s) and the integrity of the project(s) it is serving. The system includes databases which store project data, project documents, and user information. | 12-15-2011 |
20110313978 | PLAN-BASED COMPLIANCE SCORE COMPUTATION FOR COMPOSITE TARGETS/SYSTEMS - A method and apparatus for plan-based compliance score computation is provided. Compliance-specific target results are stored. The compliance results include, for each target, a subset of target-specific compliance results for a rule subset of compliance rules. Each target-specific compliance result of the result subset includes a compliance value. The compliance value represents compliance to a compliance rule of the rule subset. An execution plan is generated. The execution plan generates data that measures compliance to a first compliance standard. For each target-specific compliance result, an execution plan step is generated for computing the compliance value of the respective compliance rule of the respective target. | 12-22-2011 |
20110313979 | PROCESSING RELATED DATASETS - Processing related datasets includes receiving over an input device or port records from multiple datasets, the records of a given dataset having one or more values for one or more respective fields; and processing records from each of the multiple datasets in a data processing system. The processing includes: analyzing at least one constraint specification stored in a data storage system to determine a processing order for the multiple datasets, the constraint specification specifying one or more constraints for preserving referential integrity or statistical consistency among a group of related datasets that includes the multiple datasets, applying one or more transformations to records from each of the multiple datasets in the determined processing order, where the transformations are applied to records from a first dataset of the multiple datasets before the transformations are applied to records from a second dataset of the multiple datasets, and the transformations applied to the records from the second dataset are applied based at least in part on results of applying the transformations to the records from the first dataset and at least one constraint between the first dataset and the second dataset specified by the constraint specification, and storing or outputting results of the transformations to the records from each of the multiple datasets. | 12-22-2011 |
20110320412 | USING REPEATED INCREMENTAL BACKGROUND CONSISTENCY CHECKING TO DETECT PROBLEMS WITH CONTENT CLOSER IN TIME TO WHEN A FAILURE OCCURS - Provided are techniques for identifying an incremental consistency checking job. During a run of the incremental consistency checking job, one or more queries are issued for a set of content holding objects in an object repository. For each of the issued one or more queries, whether content in the set of content holding objects in the object repository and associated content elements in the content repository is consistent is verified; in response to determining that content is not consistent, one or more inconsistencies are recorded; in response to determining that a desired number of content elements to process in each time interval has been reached and not all of the content holding objects in the object repository have been processed, the incremental consistency checking job is scheduled for a subsequent run; and, in response to determining that all of the content holding objects in the object repository have been processed, the incremental consistency checking job is marked as complete and a new incremental consistency checking job is scheduled. | 12-29-2011 |
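The scheduling logic in the entry above (process a bounded batch per run, record inconsistencies, reschedule until every content-holding object has been checked) can be sketched as follows; a minimal sketch with hypothetical names (`run_incremental_check`, `verify`, `state`), not the patent's actual implementation:

```python
def run_incremental_check(objects, verify, state, batch_size):
    """One run of an incremental consistency-checking job.

    objects:    ordered list of content-holding object ids
    verify:     callable returning True when an object's content and its
                associated content elements are consistent
    state:      dict carrying the resume cursor and recorded inconsistencies
    batch_size: desired number of objects to process per run
    Returns "rescheduled" while objects remain, else "complete".
    """
    start = state.get("cursor", 0)
    end = min(start + batch_size, len(objects))
    for obj in objects[start:end]:
        if not verify(obj):
            # Record the inconsistency rather than aborting the job.
            state.setdefault("inconsistencies", []).append(obj)
    state["cursor"] = end
    return "complete" if end == len(objects) else "rescheduled"
```

Repeated runs close in time to a failure keep the recorded inconsistencies fresh, which is the point of the incremental approach.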
20110320413 | Detection of Obscured Copying Using Discovered Translation Files and Other Operation Data - Systems and methods that automatically compare sets of files to determine what has been copied even when sophisticated techniques for hiding or obscuring the copying have been employed. The file compare system comprises a file compare program that uses various operational data and user interface options to detect illicit copying, highlight and align matching lines, and produce a formatted report. A discovered-translations file is used to match translated tokens. Other operational data files specify rules that the file compare program then uses to improve its results. The generated report contains statistics and full disclosures of the discovered translations used and the other methods used in creating the exhibits. The system includes a bulk compare program that automatically detects likely file pairings and candidates for validation as suspected translations, which can be used on iterative runs. The user is given full control over the final output, and the system automatically reforms the reports and recalculates the statistics for consistent and accurate final presentation. | 12-29-2011 |
20110320414 | METHOD, SYSTEM AND COMPUTER-READABLE STORAGE MEDIUM FOR DETECTING TRAP OF WEB-BASED PERPETUAL CALENDAR AND BUILDING RETRIEVAL DATABASE USING THE SAME - The present disclosure relates to a method, system and software executable by a processor associated with a non-transitory computer-readable storage medium for detecting a trap of web-based calendar pages and building a retrieval database. According to an aspect of the disclosure, detecting a trap of web-based calendar pages includes clustering, by a clustering module, URLs corresponding to web pages stored in a database according to a predetermined standard, generating a regular expression by analyzing a date pattern included in a clustering result, and detecting a cluster suspected of being a trap of web-based perpetual calendar pages using the generated regular expression. | 12-29-2011 |
20120030182 | HIERARCHICAL MULTIMEDIA PROGRAM COMPOSITION - A computer-based method for media composition of a family of related time-based media programs. The method involves creating a master program with time-based elements of video and/or audio as well as time-based and non-time-based metadata, creating a derivative program that includes derivative elements, defining an inheritance relationship between the master program and the derivative program that specifies elements of the master program to be inherited by the derivative program, and causing the derivative program to inherit the specified elements from the master program in accordance with the inheritance relationship. User interfaces are provided for creating, editing, and viewing hierarchical trees of related programs. | 02-02-2012 |
20120036112 | System of and Method for Entity Representation Splitting Without The Need for Human Interaction - Disclosed is a system for, and method of, determining whether records and entity representations should be delinked. The system and method need no human interaction to calculate the parameters and apply the formulas used for the delinking decisions. | 02-09-2012 |
20120066185 | SYSTEM AND METHOD FOR DATABASE INTEGRITY CHECKING - A method is disclosed for checking the integrity of a database through a test of database integrity information provided in the database and integrity information provided external to the database. The integrity information may be provided in a configuration file. | 03-15-2012 |
20120095970 | IDENTIFYING UNREFERENCED FILE SYSTEM COMPONENTS - A list of data structures (e.g., inodes) can be accessed, and the data structures in the list can be examined. If a data structure is examined, a counter value associated with the data structure is changed to a generation number that is associated with the examination. Subsequently, the counter values can be used to identify unreferenced data structures. More specifically, the counter value for an unreferenced data structure will be different from the generation number for the most recently performed examination. | 04-19-2012 |
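The generation-number technique in the entry above amounts to a mark-and-sweep pass: stamp every data structure reached during an examination with that examination's generation number, then treat any structure whose counter still differs as unreferenced. A minimal sketch, with inodes modeled as a hypothetical dict from inode id to last-seen generation:

```python
def find_unreferenced(inodes, reachable, generation):
    """Mark phase: stamp each examined (reachable) data structure with the
    current examination's generation number.
    Sweep phase: any inode whose counter differs from this generation was
    never reached, so it is unreferenced."""
    for ino in reachable:
        inodes[ino] = generation
    return [ino for ino, gen in inodes.items() if gen != generation]
```

Because only the counter comparison matters, no separate "referenced" bitmap has to be reset between examinations; bumping the generation number implicitly clears the previous marks.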
20120095971 | ONLINE FILE SYSTEM CONSISTENCY CHECK - A lock is acquired on a data structure. Content in the data structure is read and verified while the lock is held. The lock is then released, and then the file system components that are referred to by the data structure are verified. In essence, a file system consistency check of the file system components is performed offline in the background while the data structure remains accessible. | 04-19-2012 |
20120109903 | HALLOWEEN PROTECTION IN A MULTI-VERSION DATABASE SYSTEM - Mitigating problems related to the Halloween problem, in which update operations potentially allow a record to be visited more than once during the operation. A method includes accessing an instance of a data store operation statement. The instance of the data store operation statement is executed, causing an update or delete of an old version of a data store record or creation of a data store record, resulting in a new version of the data store record in the case of an update or creation, and a deleted version of the data store record in the case of a delete. The instance of the data store operation statement is correlated with the new version of the data store record or the deleted version of the data store record. | 05-03-2012 |
20120109904 | Media File Storage - Methods, systems and program products for replacing a master media file. Data indicates characteristics of a first user's multiple media files. At least one of the multiple media files matches content in a master media file. The content in the matching media file is of a second quality that is higher than the first quality of the master media file. A server system stores the matching media file in place of the master media file. The server system receives a request from a second user for content matching the master media file, and accesses quality parameters that indicate the second user can access a version of the content at a third quality that is less than the second quality. A media file that contains the requested content at the third quality is generated and sent to the second user. | 05-03-2012 |
20120109905 | IDENTIFYING AND REPRESENTING CHANGES BETWEEN EXTENSIBLE MARKUP LANGUAGE (XML) FILES - This disclosure is directed to techniques for comparing first and second XML files to one another. According to these techniques, a computing device (e.g., a version control service executing on the computing device) may be configured to generate at least two edit transcripts that each include one or more operational changes that may be applied to data elements of the first XML file to arrive at data elements of the second XML file (or vice versa). The computing device may select at least one optimal edit transcript based on the number of operational changes in each of the at least two edit transcripts. | 05-03-2012 |
20120109906 | METHOD FOR IDENTIFYING LOGICAL DATA DISCREPANCIES BETWEEN DATABASE REPLICAS IN A DATABASE CLUSTER USING ENHANCED TRANSACTION LOGGING - A method and system for monitoring and maintaining the consistency of replicated databases in a shared-nothing database cluster architecture is presented. In order to improve the ability of the system to maintain data consistency between the various database replicas in the cluster, an enhanced relational database management system is described that: (a) tags each data change record in the transaction log for a given managed database with a unique transaction identifier that is associated with the transaction request that initiated the data change; and, (b) tags each data change record in the transaction log for a given managed database with a client identifier that identifies the client that submitted the transaction request that initiated the data change. The enhanced relational database management system also includes an extended client interface that makes the unique transaction identifier for each transaction request available to the client application that submitted the transaction request. | 05-03-2012 |
20120117035 | FILE SYSTEM CONSISTENCY CHECK ON PART OF A FILE SYSTEM - A file system that includes multiple logical devices can be subdivided into multiple containers. The containers each include respective non-overlapping sets of the logical devices. An amount of memory allocated to a container is dynamic. A set of the containers can be selected for a file system consistency check. The file system consistency check is performed on only the set of the containers instead of on the entire file system. | 05-10-2012 |
20120124010 | COMPRESSION SCHEME FOR IMPROVING CACHE BEHAVIOR IN DATABASE SYSTEMS - The apparatuses and methods described herein may operate to identify, from an index structure stored in memory, a reference minimum bounding shape that encloses at least one minimum bounding shape. Each of the at least one minimum bounding shape may correspond to a data object associated with a leaf node of the index structure. Coordinates of a point of the at least one minimum bounding shape may be associated with a set of first values to produce a relative representation of the at least one minimum bounding shape. The set of first values may be calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each of the set of first values comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point. | 05-17-2012 |
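The relative representation in the entry above stores each minimum bounding shape's coordinates as offsets from a reference point of the enclosing reference shape, so each value needs fewer significant bits than its absolute counterpart. A minimal sketch under assumed 2-D rectangles `(x1, y1, x2, y2)` with the reference point taken as the reference rectangle's lower-left corner (names are illustrative, not from the patent):

```python
def relativize(mbrs, reference):
    """Re-express each minimum bounding rectangle as offsets from the
    reference MBR's lower-left corner instead of absolute coordinates."""
    rx, ry = reference[0], reference[1]
    return [(x1 - rx, y1 - ry, x2 - rx, y2 - ry)
            for (x1, y1, x2, y2) in mbrs]

def bits_needed(value):
    """Significant bits required to represent a non-negative offset."""
    return max(1, value.bit_length())
```

Because child shapes lie inside the reference shape, the offsets are small, which is what lets them fit in fewer bits and improves cache behavior.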
20120130960 | ESTIMATION OF ERRORS IN ATTRIBUTE VALUES OF AGGREGATED DATABASES - An apparatus ( | 05-24-2012 |
20120136838 | MECHANISM FOR PERFORMING AUTOMATED DATA INTEGRITY VERIFICATION TESTING FOR FILE SYSTEMS - A mechanism for performing automated data integrity verification testing for file systems is described. A method of embodiments of the invention includes initiating a temporary termination of connection between a computer system and a storage medium that is coupled to a file system. The method further includes restoring the connection between the computer system and the storage medium, transmitting data records including transactions indicating data blocks reported to have been committed to the storage device, and facilitating data verification testing at the computer system, the data verification testing including reconciling the data records with contents of files of the file system. The data records represent data blocks that are notified as being committed to the storage medium, and the contents of the files represent data blocks actually committed to the storage device. | 05-31-2012 |
20120136839 | User-Driven Conflict Resolution Of Concurrent Updates In Snapshot Isolation - Devices, methods and systems for reconciling data conflicts between concurrent updates made in snapshot isolation are disclosed. Conflict resolution between first and second user transactions may be performed by determining that at least a portion of second user data is in conflict with at least a portion of the first user data, identifying the specific data from each of the first and second user data that is in conflict, displaying the specific data in conflict on a user interface of the second user and allowing the second user to resolve the conflict by choosing which of the specific data in conflict is correct. Upon the second user choosing which data is correct, the user interface and the database may be updated to reflect this selection. Related systems, methods, and articles of manufacture are also described. | 05-31-2012 |
20120136840 | UNIVERSAL DATA DISCERNMENT - A contextual artificial intelligence system is disclosed. Intelligent business objects enable dynamic data object interaction and encapsulation of user context. Data is rationalized, and data objects evolve by way of an artificial-intelligence-assisted process of self-discovery. Significant data is identified based upon factors such as cost, revenue and outcome, and contextually significant result sets are automatically generated for users. | 05-31-2012 |
20120150820 | SYSTEM AND METHOD FOR TESTING DATA AT A DATA WAREHOUSE - A system and method for performing testing of data at a data warehouse is provided. The methodology of the invention describes steps to develop and further invoke one or more data quality-accuracy test cases from a framework. The data quality-accuracy test cases check the sanity of the data stored at the data warehouse. The one or more data quality-accuracy test cases are developed based on at least one predefined strategy, which in turn are stored in the framework. The methodology further executes the developed one or more data quality-accuracy test cases as either batch or independently, based on the requirements of the test. Thereafter, the methodology maintains traceability of the executed test at the data warehouse, incorporating details from the development of the one or more data quality-accuracy test cases to the final output of the test. | 06-14-2012 |
20120150821 | CONFIGURATION INFORMATION MANAGEMENT DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND CONFIGURATION INFORMATION MANAGEMENT METHOD - When receiving a change to property information that is a key for performing property information integration, an FCMDB refers to the information stored in a property management information DB. When the key property information after the change is identical to the property information before the change in the same configuration item, the FCMDB maintains the property information of the property information DB with respect to that configuration item. When the two are not identical, the FCMDB integrates property information for each configuration item on the basis of the key after the change and registers the result in a property information storage unit. | 06-14-2012 |
20120150822 | METHOD AND SYSTEM FOR PERSONALITY COMPARISION VIA PUBLIC CONSENSUS - A method and system of comparing data sets related to personality traits to identify various comparison results. The method and system include determining, by a processing device, a plurality of data sets. The data sets include information related to a self-evaluation report for a first user based upon the first user's answers to a set of questions. The data sets also include information related to anonymous, aggregated data received from other users. The first user may select a context for performing a comparison of two or more of the data sets, including a self-evaluation report, an aggregated public perception of the first user, or an aggregated public perception of another user. The system performs the comparison to produce comparison results. The comparison results provide the first user with information related to their individual personality and/or information related to an existing or potential relationship between the first user and another user. | 06-14-2012 |
20120173495 | Computer Readable Medium, Systems, and Methods of Detecting a Discrepancy in a Chain-of-title of an Asset - A computer readable medium includes instructions that, when executed by a processing system, cause the processing system to receive data corresponding to an asset from a data source. The data indicates a chain-of-title of the asset. The computer readable medium further includes instructions that, when executed by the processor, cause the processor to process the data to detect a discrepancy in the chain-of-title in response to receiving the data and generate an output in response to detecting a discrepancy in the chain-of-title. | 07-05-2012 |
20120185445 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IDENTIFYING IDENTICAL FILES - A system, method and computer program product for identifying identical files using content signatures are provided. A content signature is generated within an indexed archive system for a file received at an information source client in a network. The generated content signature is compared with content signatures associated with files that already exist within the network. It is then determined whether the content signature for the received file matches that of an existing file in the network. Where there is a match, the metadata for the received file is examined to determine if the received file was independently created from the existing file with matching content signature. If the metadata confirms the independent creation, a control action is taken. | 07-19-2012 |
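The flow in the entry above (signature over content, lookup against existing signatures, then a metadata check to decide whether the match was independently created) can be sketched in Python; the SHA-256 signature, the `creator` metadata field, and the string return values are illustrative assumptions, not details from the patent:

```python
import hashlib

def content_signature(data: bytes) -> str:
    """Signature computed over file content only; metadata is excluded."""
    return hashlib.sha256(data).hexdigest()

def check_new_file(data, metadata, index):
    """index maps content signature -> metadata of the existing file.
    Returns a control-action label when an independently created
    duplicate of existing content is detected."""
    sig = content_signature(data)
    existing = index.get(sig)
    if existing is None:
        index[sig] = metadata
        return "stored"
    if existing.get("creator") != metadata.get("creator"):
        # Same content, different origin: candidate for a control action.
        return "flag-independent-duplicate"
    return "duplicate"
```

Separating the content signature from the metadata check is what lets identical bytes from different origins be treated differently from an ordinary copy.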
20120191666 | 3D DATA RECOVERY DEVICE, 3D DATA RECOVERY METHOD, AND 3D DATA RECOVERY PROGRAM - A problem addressed is that, when a left-eye image (e.g., first data) and a right-eye image (e.g., second data) constituting 3D data are recorded to different recording media, and one of the recording media is then formatted so that its management information is deleted, the 3D image cannot be properly reproduced. A 3D data recovery device comprises interfaces | 07-26-2012 |
20120209818 | Incremental testing of a navigation database - A navigation system utilizes a testing package tailor-made for an incremental update to a map database. An incorrect incremental update may corrupt a navigation database. Testing an incrementally updated database after updating allows a corrupted database to be detected before it is used by the map database system. Map tiles associated with a list of recompiled objects are used to populate a table. A test script is created from the list of map tiles and, when executed, checks the validity of references in the map database associated with the map tiles. The test script generates a return value that indicates whether errors occurred, the type of the errors, the quantity of errors, or any combination thereof. The navigation system analyzes the errors and determines whether to finalize or roll back the update. | 08-16-2012 |
20120209819 | METHOD OF MANUFACTURING AN INFORMATION HANDLING SYSTEM - A method of manufacturing an information handling system having at least one hardware component, e.g. motherboard, bearing a unique identifier (component ID) in software-readable form. The method comprises generating a digital identifier (system trackcode) which defines the hardware and software configuration of the item, storing the system trackcode in association with the component ID in a manufacturing database such that the component ID can be used as a key to retrieve the associated system trackcode. During manufacture the component ID is read from the motherboard and used to retrieve the associated system trackcode from the database. | 08-16-2012 |
20120215747 | DATA UPLOADING METHOD, DATA DOWNLOADING METHOD, AND DATA SYSTEM - The present invention provides a data uploading method, a data downloading method, and a data system. The uploading method includes: receiving a data uploading request of a user and obtaining a content ID of the data to be uploaded; determining, according to the content ID, whether the data to be uploaded is already stored; and, if the data to be uploaded is not stored, uploading it to a local data center and storing it. According to the embodiments of the present invention, the data traffic load between different networks is reduced and response efficiency is increased; uniform management and quick query of content copies in different networks are realized, and the number of copies of the same content distributed in the system's networks is reduced. | 08-23-2012 |
20120221533 | HIERARCHICAL DATA COMPRESSION TESTING - A hierarchical compression tester and associated method stored in a computer readable medium employs a grid-based storage capacity wherein a storage unit is defined by a grouping of data blocks. Each data block is stored in one of a plurality of storage devices. Each stored data block has a data portion and a data integrity field (DIF) including a data reliability qualifier (DRQ) indicating whether the respective data portion is valid. The tester also has a logical device allocation map that includes a storage unit descriptor array that identifies one or more storage units corresponding to a selected logical address. The logical device allocation map has a DIF array that identifies whether any of the data blocks in the one or more storage units corresponding to the selected logical address includes invalid data. | 08-30-2012 |
20120226668 | MANAGING DATABASE RECOVERY TIME - Managing database recovery time. A method includes receiving user input specifying a target recovery time for a database. The method further includes determining an amount of time to read a data page of the database from persistent storage. The method further includes determining an amount of time to process a log record of the database to apply changes specified in the log record to a data page. The method further includes determining a number of dirty pages that presently would be read in recovery if a database failure occurred. The method further includes determining a number of log records that would be processed in recovery if a database failure occurred. The method further includes adjusting at least one of the number of dirty pages that presently would be read in recovery or the number of log records that would be processed in recovery to meet the specified target recovery time. | 09-06-2012 |
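The adjustment this abstract describes reduces to simple arithmetic: estimated recovery time is (dirty pages × per-page read time) + (log records × per-record apply time), and the system flushes dirty pages or processes log until the estimate meets the target. A minimal sketch, assuming millisecond cost figures and dirty-page flushing as the only adjustment knob (all names are hypothetical, not from the patent):

```python
import math

def estimated_recovery_ms(dirty_pages, log_records, page_read_ms, log_apply_ms):
    """Estimated crash-recovery time: re-read every dirty page plus apply every log record."""
    return dirty_pages * page_read_ms + log_records * log_apply_ms

def pages_to_flush(dirty_pages, log_records, page_read_ms, log_apply_ms, target_ms):
    """Number of dirty pages to write out now so that a crash at this instant
    could be recovered within target_ms, holding the log backlog constant."""
    current = estimated_recovery_ms(dirty_pages, log_records, page_read_ms, log_apply_ms)
    if current <= target_ms:
        return 0
    # Each flushed page removes one page read from the recovery path.
    return min(dirty_pages, math.ceil((current - target_ms) / page_read_ms))
```

Flushing one page removes one page read from the recovery path, so the count is the ceiling of the excess time divided by the per-page read cost.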
20120246122 | INTEGRATING DATA-HANDLING POLICIES INTO A WORKFLOW MODEL - A method and system for integrating data-handling policies into a computer-implemented workflow model is provided. In one embodiment, a workflow editor implemented using one or more processors may include a privacy manager module configured to permit a business process designer to integrate data handling policies into a workflow model. A privacy manager module, or simply a privacy manager, may also be configured to execute a consistency check with respect to newly-created and existing data handling policies to determine whether there is a conflict among any of the data-handling policies associated with tasks and data objects of the workflow model. | 09-27-2012 |
20120254130 | SYSTEM AND METHOD FOR MAINTAINING CONSISTENT POINTS IN FILE SYSTEMS USING A PRIME DEPENDENCY LIST - According to one embodiment, a request is received for obtaining a consistent point of data stored in a file system of a storage system having a plurality of storage units. In response to the request, retrieving a prime dependency list from a first prime segment stored in a first of the storage units, where the prime dependency list includes information identifying at least a second prime segment stored in a second of the storage units. The first and second prime segments collectively form a prime segment representing a consistent view of the file system. Each of the prime segments listed in the prime dependency list is ascertained in an attempt to generate the consistent point of data. | 10-04-2012 |
20120284238 | METHOD AND SYSTEM FOR DATA REDUCTION - A “forward” delta data management technique uses a “sparse” index associated with a delta file to achieve both delta management efficiency and to eliminate read latency while accessing history data. The invention may be implemented advantageously in a data management system that provides real-time data services to data sources associated with a set of application host servers. A host driver embedded in an application server connects an application and its data to a cluster. The host driver captures real-time data transactions, preferably in the form of an event journal that is provided to the data management system. In particular, the driver functions to translate traditional file/database/block I/O into a continuous, application-aware, output data stream. A given application-aware data stream is processed through a multi-stage data reduction process to produce a compact data representation from which an “any point-in-time” reconstruction of the original data can be made. | 11-08-2012 |
20120296878 | FILE SET CONSISTENCY VERIFICATION SYSTEM, FILE SET CONSISTENCY VERIFICATION METHOD, AND FILE SET CONSISTENCY VERIFICATION PROGRAM - A check code generating means 10 generates, based on metadata of files satisfying a designated condition, a first check code uniquely representing a characteristic of a first file set whose components are files satisfying the condition. Moreover, the check code generating means | 11-22-2012 |
20120330900 | DATABASE SAMPLING - The present subject matter relates to systems and methods for database sampling. The method comprises identifying at least one query table and one or more associated tables amongst a plurality of tables in a production database, based on filtering criteria. Further, the method comprises generating a key value list for the at least one query table and each of the one or more associated tables based on an order indicated by an order list. Based on the generated key value list, the sample data is extracted in a reverse order indicated by the order list, from the at least one query table and each of the one or more associated tables. | 12-27-2012 |
20120330901 | VALIDATION OF INGESTED DATA - Methods and systems for validating ingested data are disclosed. In accordance with the methods and systems, data elements can be received for storage in slots of an individual descriptor in a storage medium. In addition, at least one validation test can be selected based on a weighting of the data elements that indicates a respective degree of importance of the data elements. The selected validation test or tests can be applied to the data elements stored in the slots to generate respective validation results. Further, a validation score indicating a sufficiency of the stored data elements can be generated based on the validation results. | 12-27-2012 |
20130018850 | System And Method For Product Customization Synchronization - A computer-implemented method for configuring multiple products within a defined grouping of user-configurable products including providing a computer-implemented database of user-configurable products at a first computer, each user-configurable product including design parameters distinguishing the user-configurable product from other user-configurable products, each design parameter including a range of values, providing a listing of user-configurable products based on one or more product configuration selections provided by a user, receiving a selection of a plurality of user-configurable products from the listing of user-configurable products, the selection including identifying a grouping to be associated with each user-configurable product to create at least two virtual product groups. Each grouping includes at least two user-configurable products. The method further includes receiving a user change to at least one design parameter for user-configurable products of a selected grouping and modifying the configuration of all user-configurable products within the selected grouping to modify the design parameters. | 01-17-2013 |
20130024430 | Automatic Consistent Sampling For Data Analysis - A method, computer program product, and system for analyzing data within one or more databases, comprising selecting one or more databases for analysis, each database comprising one or more database objects comprising one or more data values, applying a function to each data value in each database object within the one or more databases, where the function produces function values limited to a predetermined range, identifying for analysis the data values producing a certain function value within the predetermined range to form a sampled data set, and analyzing the sampled data set to determine relationships between the database objects within and across the one or more databases. | 01-24-2013 |
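The key idea above — applying a fixed function to every data value and keeping only values whose function result falls in a predetermined range — guarantees that the same value lands in (or out of) the sample in every table and every database, so joins between samples remain meaningful. A sketch using a cryptographic hash as the function (the abstract leaves the function open; the 10% rate and all names are hypothetical):

```python
import hashlib

def in_sample(value: str, sample_pct: float = 10.0) -> bool:
    """Deterministically decide sample membership: hash the value into
    [0, 100) and compare against the sampling percentage. The same value
    is always sampled consistently, across tables and databases."""
    h = int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")
    return (h % 10000) / 100.0 < sample_pct

# Hypothetical key column: the same customer IDs appearing in any table
# would yield the same sample.
rows = [f"cust-{i}" for i in range(1000)]
sample = [r for r in rows if in_sample(r)]
```

Because membership depends only on the value, a customer ID sampled from an orders table is also sampled from a payments table, preserving cross-table relationships in the sampled data set.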
20130031061 | FRAUD ANALYSIS IN A CONTACT DATABASE - A system and method of identifying fraudulent data in a contact database is disclosed herein. In some embodiments, a set of contact records is received where each of the contact records includes a set of contact field values corresponding to a set of contact fields. Some embodiments determine whether a similar content pattern exists in the contact records using at least one of the set of contact field values. In some embodiments, a determination is made as to whether an unusual content pattern exists in the contact records using at least one of the set of contact field values. The set of contact records is flagged when at least one of the similar content pattern or the unusual content pattern is determined to exist in the contact records. | 01-31-2013 |
20130036097 | DATA FINGERPRINTING FOR COPY ACCURACY ASSURANCE - Systems and methods are disclosed for efficiently creating a data fingerprint to identify or characterize contents of a data object by using a selection function to select a plurality of non-contiguous regions from the data object, the selected regions each having a small number of bytes relative to the number of bytes in the data object and being distributed throughout the data object so that the selected regions comprise a sparse subset of the data of the data object yet provide a significant probability of including bytes that change if the data object were modified; and performing a hash operation on the data to produce a fingerprint based on the sparse subset of the data object. The data fingerprint thereby efficiently provides an indication of the contents of the data object, so that comparing data fingerprints can determine that two data objects differ whenever their fingerprints differ. | 02-07-2013 |
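A sketch of the sparse fingerprint, assuming evenly spaced regions as the selection function and SHA-256 as the hash (the abstract leaves both open); the object length is folded in so that size-only changes are also detected:

```python
import hashlib

def sparse_fingerprint(data: bytes, num_regions: int = 8, region_len: int = 4) -> str:
    """Fingerprint `data` by hashing a sparse subset of non-contiguous regions
    spread evenly through the object (a hypothetical selection function)."""
    if len(data) <= num_regions * region_len:
        sample = data  # small object: just hash everything
    else:
        stride = len(data) // num_regions
        sample = b"".join(data[i * stride : i * stride + region_len]
                          for i in range(num_regions))
        sample += len(data).to_bytes(8, "big")  # guard against size-only changes
    return hashlib.sha256(sample).hexdigest()
```

Note the one-sided guarantee the abstract states: differing fingerprints prove the objects differ, but matching fingerprints only make equality probable, since a modification may fall entirely outside the sampled regions.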
20130036098 | SUCCESSIVE DATA FINGERPRINTING FOR COPY ACCURACY ASSURANCE - Systems and methods are disclosed for checking the data integrity of a data object copied between storage pools by comparing data fingerprints of data objects, comprising scheduling a series of successive copy operations over time for copying a data object from a source data store to a target data store; generating a partial fingerprint of the data object at the source data store that creates a fingerprint from a subset of the data object; sending the partial fingerprint of the data object to the target data store; sending any new data contents to the target data store; and creating a partial fingerprint of the data object at the target data store and comparing it to the received partial fingerprint to determine if they differ, thereby allowing incremental verification that the copy of the data object at the target data store is the same as at the source data store. | 02-07-2013 |
20130036099 | NAVIGATION SYSTEM WITH USER GENERATED CONTENT MECHANISM AND METHOD OF OPERATION THEREOF - A method of operation of a navigation system includes: receiving a change request with a proposed change for an item; verifying a validity of the change request based on a confidence level meeting or exceeding a change threshold with a control unit; and updating a target element of the item based on the validity of the proposed change for avoiding an incorrect update to the target element for displaying on a device. | 02-07-2013 |
20130041872 | CLOUD STORAGE SYSTEM WITH DISTRIBUTED METADATA - A method and system is disclosed for providing a cloud storage system supporting existing APIs and protocols. The method of storing cloud storage system (CSS) object metadata separates object metadata that describes each CSS object as a collection of named chunks with chunk locations specified as a separate part of the metadata. Chunks are identified using globally unique permanent identifiers that are never re-used to identify different chunk payload. While avoiding the bottleneck of a single metadata server, the disclosed system provides ordering guarantees to clients such as guaranteeing access to the most recent version of an object. The disclosed system also provides end-to-end data integrity protection, inline data deduplication, configurable replication, hierarchical storage management and location-aware optimization of chunk storage. | 02-14-2013 |
20130046738 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING CONFLICTING POINT OF INTEREST INFORMATION - A method to provide an improved method for providing first POI information and second POI information which conflicts with the first POI information, and providing an accuracy confidence level of each of the first POI information and the second POI information. Embodiments may further solicit feedback (e.g. a selection) from a user regarding the user's determination of which of the first POI information and the second POI information is accurate. The method may also include updating the accuracy confidence level of each of the first information and the second information in response to receiving the selection. | 02-21-2013 |
20130066840 | METHOD AND APPARATUS FOR RESOLVING INCOHERENT DATA ITEMS - An approach is provided for resolving incoherent data items of a user spanning multiple different groups and/or services. An incoherency platform determines an incoherency of a first data item and at least a second data item that are shared by a first user in a first group and at least a second group, respectively. The platform also processes the incoherency, the first data item, the at least a second data item, or a combination thereof to cause, at least in part, a generation of a resolved data item. The platform further causes, at least in part, a substitution of the resolved data item for the first data item, the at least a second data item, or a combination thereof. | 03-14-2013 |
20130073523 | Purity Analysis Using White List/Black List Analysis - Memoizable functions may be identified by analyzing a function's side effects. The side effects may be evaluated using a white list, black list, or other definition. The side effects may also be classified into conditions which may or may not permit memoization. Side effects that may have de minimis or trivial effects may be ignored in some cases where the accuracy of a function may not be significantly affected when the function may be memoized. | 03-21-2013 |
20130073524 | DATABASE SYNCHRONIZATION AND VALIDATION - Systems and methods for verifying data in a distributed database using different automated check operations at different times during the database read and update cycles. Various functions may be performed including executing a first check during update operations of the database. A second check may also be executed during the update operation of the database, and be implemented as an execution thread of an update daemon. A third check may be executed at a time interval between update functions of the update daemon. A fourth check may be executed during a time that the database is not being updated. Integrity of data in the database may be verified by a computer processor based on the first, second, third, and fourth checks. | 03-21-2013 |
20130080402 | METHOD AND SYSTEM FOR VEHICLE ON-BOARD PARAMETER VALIDATION - A system and method for validating data stored in one or more vehicle electronic units includes confirming that a data validation package is present in a first electronic control unit on-board the vehicle, comparing data in the validation package to data stored by at least one target electronic control unit on-board the vehicle, logging any discrepancies between the data in the validation package and the data stored by the at least one target electronic control unit, and wirelessly transmitting a message from the first electronic control unit identifying any discrepancies in the data stored by the at least one electronic control unit to a remote location. | 03-28-2013 |
20130080403 | FILE STORAGE APPARATUS, FILE STORAGE METHOD, AND PROGRAM - A file storage apparatus comprises: a duplication determination unit that determines whether a file supplied from a client apparatus and a file stored in a storage unit coincide with each other in the same format, and stores the file supplied from the client apparatus in the storage unit if the files do not coincide in the same format; and a storage management unit that, if the duplication determination unit determines that the files coincide in the same format, associates the format of the file supplied from the client apparatus with the file stored in the storage unit, reads the file stored in the storage unit in response to a file read request from the client apparatus, converts, if a format associated with the read file exists, the read file into that format, and provides the converted file. | 03-28-2013 |
20130086002 | ENFORCING TEMPORAL UNIQUENESS OF INDEX KEYS UTILIZING KEY-VALUED LOCKING IN THE PRESENCE OF PSEUDO-DELETED KEYS - Techniques are described for identifying conflicts between a prospective temporal key and an index of temporal keys, the index sorted based on a time value associated with each of the temporal keys. Embodiments determine whether a first temporal key within the index of temporal keys conflicts with the prospective temporal key. Here, the keys within the index may be sorted based upon a respective time value associated with each of the keys. Upon determining that the first temporal key conflicts with the prospective temporal key, the prospective temporal key is designated as conflicting with at least one existing temporal key in the index of temporal keys. | 04-04-2013 |
20130086003 | MERGING PLAYLISTS FROM MULTIPLE SOURCES - The present technology resolves playlist version conflicts resulting from modifications made to a playlist version, stored on a client device and in a cloud locker, when the client device and the cloud locker are in a disconnected state. The present technology is a heuristic for determining how to resolve such version conflicts. Upon reconnection of the client and cloud locker, the server, associated with cloud locker attempts to reconcile any version discrepancies resulting from user-initiated changes. In one embodiment, when the server determines that one of the playlists on the client or server is a superset of the other, the superset is selected and saved to both the client and cloud locker, while the subset version is deleted. | 04-04-2013 |
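The superset heuristic above can be sketched as follows; treating playlists as order-insensitive sets is a simplifying assumption of this sketch, and all names are hypothetical:

```python
def reconcile(client_playlist, server_playlist):
    """If one diverged playlist version is a superset of the other, keep the
    superset and drop the subset; otherwise report an unresolved conflict."""
    c, s = set(client_playlist), set(server_playlist)
    if c >= s:
        return list(client_playlist), "client-superset"  # save to both sides
    if s >= c:
        return list(server_playlist), "server-superset"
    return None, "conflict"  # neither contains the other; needs a real merge
```

The heuristic covers the common case where edits were made on only one side while disconnected; disjoint edits on both sides fall through to a fuller merge strategy.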
20130086004 | UPDATING A PERFECT HASH DATA STRUCTURE, SUCH AS A MULTI-DIMENSIONAL PERFECT HASH DATA STRUCTURE, USED FOR HIGH-SPEED STRING MATCHING - A representation of a new rule, defined as a set of a new transition(s), is inserted into a perfect hash table which includes previously placed transitions to generate an updated perfect hash table. This may be done by, for each new transition: (a) hashing the new transition; and (b) if there is no conflict, inserting the hashed new transition into the table. If, however, the hashed new transition conflicts with any of the previously placed transitions, either (A) any transitions of the state associated with the conflicting transition are removed from the table, the hashed new transition is placed into the table, and the removed transitions are re-placed into the table, or (B) any previously placed transitions of the state associated with the new transition are removed, and the transitions of the state associated with the new transition are re-placed into the table. | 04-04-2013 |
20130086005 | HASH POINTER CHECKING FOR HIERARCHICAL DATABASE LOGICAL RELATIONSHIP - A method of checking consistency of pointers in a hierarchical database includes reading segment information recorded on the hierarchical database and determining a type of each segment and pointer included in each segment. The method also includes extracting parent pointers and twin pointers from child segments and extracting a child pointer from the parent segment. The method also includes calculating a first hash value from a combination of a storage location address of the parent segment and a value of the child pointer and a combination of the values of the parent pointers and the twin pointers included in the child segments, and a second hash value from a combination of storage location addresses of the child segments and the values of the parent pointers included in the child segments. The method further includes indicating a consistency error when the first hash value and the second hash value differ. | 04-04-2013 |
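The core trick — folding the pointer pairs seen from the parent side and from the child side into two order-independent hash values and comparing them — can be sketched as below. The sketch assumes each segment records a `child` pointer (first child), a `twin` pointer (next sibling), and a `parent` pointer, which is a simplified model of the patent's segment layout; XOR keeps the fold order-independent:

```python
def pointer_hashes(segments):
    """segments: {addr: {"child": addr, "twin": addr, "parent": addr}} (keys optional).
    Returns (parent-side hash, child-side hash). In a consistent database both
    folds observe the same (parent, child) pairs and the hashes match; a
    dangling or mismatched pointer makes them differ."""
    h_parent = h_child = 0
    for addr, seg in segments.items():
        # Parent side: follow the first-child pointer, then the twin chain.
        c = seg.get("child")
        while c is not None:
            h_parent ^= hash((addr, c))
            c = segments.get(c, {}).get("twin")
        # Child side: each segment's own parent pointer.
        p = seg.get("parent")
        if p is not None:
            h_child ^= hash((p, addr))
    return h_parent, h_child
```

Comparing two accumulated hashes detects an inconsistency in one pass without materializing every pointer pair, at the cost of a vanishingly small collision probability.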
20130091101 | SYSTEMS AND METHODS FOR NETWORK ASSISTED FILE SYSTEM CHECK - Methods and a processing system directed to a network assisted file system checker are described. In one embodiment the checker system is a network assisted checker that employs virtual storage devices whose storage is backed by files, and optionally the files backing the virtual storage devices may be remote files accessed over a network through a network file sharing protocol such as, but not limited to, NFS or CIFS. A device driver module introduces the virtual storage device to the local operating system supporting the checker process, and the device driver maps read and write requests made by the checker process to the virtual storage device onto files being supported by the network file sharing system. | 04-11-2013 |
20130097123 | Method and System for Determining Eligible Communication Partners Utilizing an Entity Discovery Engine - A method and system for determining eligible communication partners utilizing an entity discovery engine is provided. The entity discovery engine coordinates the discovery of eligible communication partners. The entity discovery engine enables participants to discover other communication partners through the application of inputs. Starting with a data set of potential communication partners, the entity discovery engine uses inputs to identify eligible communication partners from the data set of potential communication partners. Inputs include policies that are applied broadly to limit categories of potential communication partners from being suggested as eligible communication partners. Identified eligible communication partners are suggested to enable communication relationships. Suggested eligible communication partners may be selected by a user or by an electronic communication device for initiating a communication relationship. In this manner, the entity discovery engine enables the discovery of new communication partners. | 04-18-2013 |
20130103653 | SYSTEM AND METHOD FOR OPTIMIZING THE LOADING OF DATA SUBMISSIONS - A system and method for detecting changes in data records based on summary values calculated on input data and existing data in a database is provided. An input data record including indicative data and financial data may be received. The indicative data may be normalized. A summary value may be calculated based on the normalized data to determine if any differences between the input record and existing data exist. If an existing summary value corresponding to the input record does not exist, the calculated summary value and financial data may be stored. If an existing summary value corresponding to the input record exists, the calculated summary value and the existing summary value may be compared to determine if they are equivalent. The calculated summary value and financial data may be stored if the summary values are not equivalent. The financial data may be stored if the summary values are equivalent. | 04-25-2013 |
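A sketch of the load path described above, assuming SHA-256 over normalized indicative data as the summary value (the abstract fixes neither the hash nor the normalization; all names are hypothetical):

```python
import hashlib

def normalize(indicative: dict) -> str:
    """Hypothetical normalization: sorted keys, trimmed, lower-cased values."""
    return "|".join(f"{k}={str(v).strip().lower()}" for k, v in sorted(indicative.items()))

def summary_value(indicative: dict) -> str:
    return hashlib.sha256(normalize(indicative).encode()).hexdigest()

def load_record(record, store):
    """store maps record id -> (summary, financial). Store the summary and
    financial data when the indicative data is new or changed; store only the
    financial data when the summaries are equivalent."""
    key = record["id"]
    s = summary_value(record["indicative"])
    existing = store.get(key)
    if existing is None or existing[0] != s:
        store[key] = (s, record["financial"])
        return "stored-summary-and-financial"
    store[key] = (s, record["financial"])
    return "stored-financial-only"
```

Comparing one hash per record avoids a field-by-field diff against the existing row, which is the point of the summary-value approach.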
20130117240 | ACCESSING CACHED DATA FROM A PEER CLOUD CONTROLLER IN A DISTRIBUTED FILESYSTEM - The disclosed embodiments provide a system that archives data for a distributed filesystem. Two or more cloud controllers collectively manage distributed filesystem data that is stored in one or more cloud storage systems; the cloud controllers cache and ensure data consistency for the stored data. During operation, a cloud controller receives a request from a client for a data block of a file stored in the distributed filesystem. Upon determining that the requested data block is not currently cached in the cloud controller, the cloud controller sends a peer cache request for the requested data block to a peer cloud controller in the distributed filesystem. | 05-09-2013 |
20130124485 | ELECTRONIC DEVICE, STORAGE MEDIUM, AND METHOD FOR DETECTING COMPATIBILITY OF FILES OF THE ELECTRONIC DEVICE - In a method for detecting compatibility of files of an electronic device, the method obtains a size and a format of an original file in response to moving the original file from an original storage area to a destination, and determines whether the original file and the destination are compatible with each other. If the original file and the destination are compatible with each other, the method moves the original file to the destination. If the original file and the destination are incompatible with each other, the method displays a prompt that the original file and the destination are incompatible on a display device of the electronic device. | 05-16-2013 |
20130132351 | COLLECTION INSPECTOR - A computer program product for providing a collection context includes computer-readable instructions embodied on tangible, non-transient media and operable when executed to identify a collection of items. An indication to inspect one or more items in the collection can be received, and an inspection interface for inspection of the one or more items can be provided, the inspection interface providing at least data about the one or more items and a list of the items in the collection. | 05-23-2013 |
20130138615 | SYNCHRONIZING UPDATES ACROSS CLUSTER FILESYSTEMS - Embodiments of the invention relate to synchronization of data in a shared pool of configurable computer resources. An image of the filesystem changes, including data and metadata, is captured in the form of a consistency point. Sequential consistency points are created, with changes to data and metadata in the filesystem between sequential consistency points captured and placed in a queue for communication to a target filesystem at a target site. The changes are communicated as a filesystem operation, with the communication limited to the changes captured and reflected in the consistency point. | 05-30-2013 |
20130138616 | SYNCHRONIZING UPDATES ACROSS CLUSTER FILESYSTEMS - Embodiments of the invention relate to synchronization of data in a shared pool of configurable computer resources. An image of the filesystem changes, including data and metadata, is captured in the form of a consistency point. Sequential consistency points are created, with changes to data and metadata in the filesystem between sequential consistency points captured and placed in a queue for communication to a target filesystem at a target site. The changes are communicated as a filesystem operation, with the communication limited to the changes captured and reflected in the consistency point. | 05-30-2013 |
20130151478 | VERIFYING CONSISTENCY LEVELS - A method for verifying a consistency level in a key-value store, in which a value is stored in a cloud-based storage system comprising a read/write register identified by a key. At a centralized monitor node, a history of operations including writes and reads performed at the key is created, and a distance between a read of a value at the key and a latest write to the key is determined. It can then be ascertained whether the distance satisfies a relaxed atomicity property. | 06-13-2013 |
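The distance check above can be sketched as follows: the staleness of a read is how many writes behind the latest write its value is, and a relaxed atomicity property holds when that distance is at most some bound k (the parameter name k and the list-of-writes representation of the history are assumptions of this sketch):

```python
def staleness(write_history, read_value):
    """write_history: ordered values written to the key (oldest -> newest).
    Returns how many writes behind the latest the read value is."""
    for dist, v in enumerate(reversed(write_history)):
        if v == read_value:
            return dist
    raise ValueError("read returned a value that was never written")

def satisfies_relaxed_atomicity(write_history, read_value, k):
    """A k-relaxed atomicity property: reads may be at most k versions stale."""
    return staleness(write_history, read_value) <= k
```

With k = 0 this reduces to strict atomicity (a read must return the latest write); larger k tolerates the bounded staleness eventual-consistency stores exhibit in practice.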
20130151479 | METHOD FOR VERIFYING CONVERSION, APPARATUS AND PROGRAM OF THE SAME - Apparatus for verifying conversion | 06-13-2013 |
20130151480 | MANAGING DATABASE RECOVERY TIME - Managing database recovery time. A method includes receiving user input specifying a target recovery time for a database. The method further includes determining an amount of time to read a data page of the database from persistent storage. The method further includes determining an amount of time to process a log record of the database to apply changes specified in the log record to a data page. The method further includes determining a number of dirty pages that presently would be read in recovery if a database failure occurred. The method further includes determining a number of log records that would be processed in recovery if a database failure occurred. The method further includes adjusting at least one of the number of dirty pages that presently would be read in recovery or the number of log records that would be processed in recovery to meet the specified target recovery time. | 06-13-2013 |
20130159259 | Providing Feedback Regarding Validity of Data Referenced in a Report - In an embodiment, a method is provided for providing feedback regarding validity of data referenced in a report. In this method, the report is accessed, and this report references data stored in a data source separate from the report. A profile of metadata associated with referencing the data source is read from the report, and this profile is verified against a current state of the data source. A mismatch is detected based on the verification and a digital watermark is added to a rendering of the report. This digital watermark indicates the mismatch associated with the referenced data. | 06-20-2013 |
20130159260 | EMBEDDING CONTROLLERS AND DEVICES WITH DATA TO FACILITATE UP-TO-DATE CONTROL AND CONFIGURATION INFORMATION - An industrial automation system comprising a processor with an updating component coupled to automation devices via a network. The updating component reads control information from machine readable representations of the devices and populates a data structure with the control information. The updating component also updates configuration information of a device from data stored in a file object and/or the data structure, further allowing this transfer to be fragmented into a plurality of messages if the configuration information exceeds a threshold. As well, a vendor deployment methodology is provided that embeds devices and firmware for devices with a Device Type Manager (DTM) prior to deployment and can optionally allow post deployment updates to the DTM. | 06-20-2013 |
20130166514 | Verifying Authenticity of Input Using a Hashing Algorithm - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for verifying a message based on application of a hashing algorithm. In one aspect, a method includes identifying a file and a key value and processing the file using multiple nonlinear functions to generate an output hash value, where the operations performed by the nonlinear functions are modified based on the key value. The file can then be verified based on the output hash value. | 06-27-2013 |
20130166515 | GENERATING VALIDATION RULES FOR A DATA REPORT BASED ON PROFILING THE DATA REPORT IN A DATA PROCESSING TOOL - In one embodiment, the method includes profiling a data file comprising one or more fields of data. The one or more fields of data contain an item of data; that is, a character, or group of characters that are related. Further, the method includes generating one or more profiling attributes based on profiling the data file. In an example, the one or more profiling attributes refer to profiling information relating to pattern, structure, content and format of data. Further, the method includes selecting at least one of the generated one or more profiling attributes and generating a validation rule based on the selected at least one profiling attribute. | 06-27-2013 |
20130166516 | APPARATUS AND METHOD FOR COMPARING A FIRST VECTOR OF DATA ELEMENTS AND A SECOND VECTOR OF DATA ELEMENTS - A data processing apparatus includes a comparison unit configured to perform an element comparison process that compares a first data element at a first index in the first vector with a second data element at a second index in the second vector. A hazard vector generation unit is configured to populate a hazard vector at an index determined by the first index with a value determined by the second index. The comparison unit performs the element comparison process by iteratively comparing data elements of the first vector with each element of a subset of the second vector. It then determines the subset of the second vector as those data elements at indices in the second vector which are less than a current index of the first vector and which are greater than previously determined values of the second index for which the match condition was true. | 06-27-2013 |
20130185265 | METHOD FOR HORIZONTAL SCALE DELTA ENCODING - Data can be transferred between computers at remote sites by transferring the data itself, or by transferring files showing how data at an originating site can be recreated from data already present at a receiving site. As part of the data transfer, a determination can be made as to what is the most appropriate way for the transfer to take place. Further, in cases where data is not transferred directly between originating and receiving sites, it is possible that some preparatory steps might be performed to improve the efficiency of the transfers to the receiving sites when they do take place. Additional efficiencies can be obtained in some cases by using the parallel processing capabilities provided by a cloud based architecture. | 07-18-2013 |
20130204846 | ENFORCING TEMPORAL UNIQUENESS OF INDEX KEYS UTILIZING KEY-VALUED LOCKING IN THE PRESENCE OF PSEUDO-DELETED KEYS - Techniques are described for identifying conflicts between a prospective temporal key and an index of temporal keys, the index sorted based on a time value associated with each of the temporal keys. Embodiments determine whether a first temporal key within the index of temporal keys conflicts with the prospective temporal key. Here, the keys within the index may be sorted based upon a respective time value associated with each of the keys. Upon determining that the first temporal key conflicts with the prospective temporal key, the prospective temporal key is designated as conflicting with at least one existing temporal key in the index of temporal keys. | 08-08-2013 |
20130226879 | Detecting Inconsistent Data Records - A computer-implemented method for detecting a set of inconsistent data records in a database including multiple records, comprises selecting a data quality rule representing a functional dependency for the database, transforming the data quality rule into at least one rule vector with hashed components, selecting a set of attributes of the database, transforming at least one record of the database selected on the basis of the selected attributes into a record vector with hashed components, computing a dot product of the rule and record vectors to generate a measure representing violation of the data quality rule by the record. | 08-29-2013 |
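The functional-dependency check this abstract describes can be approximated in a short sketch. The hashed rule-vector/record-vector dot product is simplified here to direct comparison of hashed determinant and dependent values; the function and field names (`violations`, `lhs`, `rhs`) are illustrative, not the patent's implementation:

```python
def violations(records, lhs, rhs):
    """Find records that violate the functional dependency lhs -> rhs.

    Hashes the determinant (lhs) and dependent (rhs) attribute values;
    two records with equal lhs hashes but differing rhs hashes violate
    the dependency.
    """
    seen = {}  # hash of lhs values -> hash of rhs values first observed
    bad = []
    for rec in records:
        k = hash(tuple(rec[a] for a in lhs))
        v = hash(tuple(rec[a] for a in rhs))
        if k in seen and seen[k] != v:
            bad.append(rec)  # same determinant, different dependent value
        else:
            seen.setdefault(k, v)
    return bad
```

For example, with the rule zip → city, a record that reuses a known zip code with a new city name is flagged as inconsistent.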
20130226880 | INFORMATION PROCESSING SYSTEM, MEMORY DEVICE, INFORMATION PROCESSING APPARATUS, AND METHOD OF CONTROLLING INFORMATION PROCESSING SYSTEM - A shared memory device transmits information indicating the validity of data to be stored to each information processing apparatus, and acquires information indicating the validity of data that each shared memory device stores. Further, the shared memory device acquires information indicating the on-line state between each information processing device and each shared memory device, and determines whether the multiplexing of the data that the shared memory device stores is guaranteed based on the acquired information. | 08-29-2013 |
20130232123 | DRIFT DETECTION AND NOTIFICATION - A drift condition, or change, in a data structure can be detected and communicated to one or more subscribers. The data structure can be monitored by periodic configurable polling of a data source or by on-demand polling. Upon detection of a change in the data structure, subscribers can be notified of the change and optionally other information such as the identity of the object that changed and the nature of the change. | 09-05-2013 |
20130238566 | STORAGE DEVICE, HOST DEVICE, AND STORAGE SYSTEM - A storage device includes a first storage area in which data can be read out and rewritten and file data is stored, a second storage area in which data can be read out and appended to an unwritten area and a first calculated value for detecting falsification which is calculated from the file data, and a controller that performs access control on the first storage area and the second storage area. The controller includes a frontend unit that receives a command from an external host device and accesses the first storage area and the second storage area, and a falsification detection notification unit that determines, without reading out the first calculated value to the host device, whether the first calculated value matches a second calculated value for detecting falsification which is calculated from the file data and notifies the host device of the determination result. | 09-12-2013 |
20130238567 | METHODS AND APPARATUS FOR COMPLEMENTING USER ENTRIES ASSOCIATED WITH EVENTS OF INTEREST THROUGH CONTEXT - Data validation techniques are provided. For example, such techniques complement user entries associated with events of interest through context. In one aspect of the invention, a technique for processing one or more user entries associated with one or more events of interest includes the following steps/operations. Context associated with the one or more events of interest is obtained. At least a portion of the obtained context is associated with one or more user entries representing events of interest. At least a portion of the one or more user entries is evaluated, responsive to at least a portion of the context. An indication of the one or more events of interest is provided, responsive to the evaluation. | 09-12-2013 |
20130262402 | Validating Data - A method of validating data that includes receiving a bill of material and a part provisioning dataset and extracting at least one of part information from the bill of material or part provisioning information from the part provisioning dataset. The method further includes comparing the part information to the part provisioning information and determining a compatibility between the part information and the part provisioning information. | 10-03-2013 |
20130268494 | DATA GOVERNANCE MANAGER FOR MASTER DATA MANAGEMENT HUBS - Improved data governance solutions for enterprise-level master data storage hubs are provided by implementing data governance functionality with regard to a master data hub. Data governance functionality is provided by giving visibility into the quality of an enterprise's data. | 10-10-2013 |
20130290274 | ENHANCED RELIABILITY IN DEDUPLICATION TECHNOLOGY OVER STORAGE CLOUDS - Methods and systems for enhancing reliability in deduplication over storage clouds are provided. A method includes: determining a weight for each of a plurality of duplicate files based on parameters associated with a respective storage device of each of the plurality of duplicate files; and designating one of the plurality of duplicate files as a master copy based on the determined weight. | 10-31-2013 |
20130297568 | System and Method for Organizing Data - A system and method for organizing raw data from one or more sources uses an improved mechanism for identifying duplicate data between fields (e.g., columns) in the databases. The fields may be similar fields within a single database or similar or identical fields within a pair of databases, and are organized as arrays or field vectors. The present invention sorts each of the field vectors and, if necessary, partitions them by common value. The number of comparisons required to identify the duplicate data between the field vectors is reduced by feeding back the difference between the compared values. This difference is used to adjust indices into the field vectors for subsequent comparison. | 11-07-2013 |
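The index-adjustment idea in this abstract resembles a merge-style scan over the two sorted field vectors: the outcome of each comparison decides which index advances, so far fewer than all pairs are compared. A minimal sketch (names are illustrative, and the feedback here is simply the sign of the comparison):

```python
def common_values(a, b):
    """Find values present in both field vectors.

    After sorting, the result of each comparison advances the index
    into whichever vector holds the smaller value, instead of
    comparing every pair of elements.
    """
    a, b = sorted(a), sorted(b)
    i = j = 0
    dupes = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            dupes.append(a[i])  # duplicate found in both vectors
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1  # a's value is smaller: advance a's index
        else:
            j += 1  # b's value is smaller: advance b's index
    return dupes
```

This brings the comparison count down from O(n·m) for the naive pairwise approach to O(n + m) after sorting.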
20130304710 | METHOD AND SYSTEM FOR ANOMALY DETECTION IN DATA SETS - A method and apparatus for detecting and segmenting anomalous data in an input data set such as an image is described, which makes use of a normalised distance measure referred to as a zeta distance score. A test data point from an input test data set is compared with its corresponding nearest neighbouring standard data points in standard data sets representing variation in normal or expected data values, and the average distance from the test data point to the standard data points is found. An additional average distance measure representing the average distance between the different nearest neighbouring corresponding standard data points is also found, and a normalised distance measure is obtained by finding the difference between the average distance from the test data point to the standard points and the average distance between the nearest neighbouring standard data points themselves. Where the input test data set is an image, a zeta distance score map can be found. By then thresholding the zeta distance scores obtained for the input data set using an appropriate threshold, anomalous data in the data set with a high zeta distance score can be identified and segmented. | 11-14-2013 |
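The zeta distance score described above can be sketched directly: the mean distance from the test point to its k nearest standard points, minus the mean pairwise distance among those same standard points. This is a simplified single-point version under Euclidean distance (the abstract's per-pixel map and threshold step are omitted; names are illustrative):

```python
import itertools
import math

def zeta_score(test_point, standard_points, k=3):
    """Normalised anomaly score: mean distance from the test point to
    its k nearest standard points, minus the mean pairwise distance
    among those k standard points themselves."""
    dist = math.dist  # Euclidean distance (Python 3.8+)
    nearest = sorted(standard_points, key=lambda s: dist(test_point, s))[:k]
    # Average distance from the test point to its nearest standard points.
    d_test = sum(dist(test_point, s) for s in nearest) / k
    # Average distance among the nearest standard points themselves.
    pairs = list(itertools.combinations(nearest, 2))
    d_std = sum(dist(p, q) for p, q in pairs) / len(pairs)
    return d_test - d_std
```

A point that sits inside the normal cluster scores near (or below) zero, while a distant outlier scores high, so a single threshold can separate the two.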
20130304711 | IDENTIFYING A COMPROMISED ENCODED DATA SLICE - A method begins with a processing module, in response to a read command, issuing at least a read threshold number of read requests regarding a set of encoded data slices and receiving at least the read threshold number of encoded data slices. The method continues where the processing module selects a unique combination of encoded data slices and decodes the unique combination to produce a recovered data segment. The method continues where the processing module verifies an integrity value for the recovered data segment and indicates whether the unique combination is valid. The method continues where the processing module selects other combinations, producing more recovered data segments for further validity verification. The method continues where the processing module utilizes a verified recovered data segment as a response to the read command and identifies a compromised encoded data slice. | 11-14-2013 |
20130325817 | LINEAR SWEEP FILESYSTEM CHECKING - A filesystem checker identifies a metadata block in a filesystem and determines a number of pointers pointing to the metadata block and a number of pointers embedded in the metadata block. The filesystem checker records the number of pointers pointing to the metadata block and the number of pointers embedded in the metadata block in a filesystem checker array. The filesystem checker verifies a consistency of the filesystem using data recorded in the filesystem checker array. | 12-05-2013 |
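The pointer-counting consistency check in this abstract can be illustrated with a small sketch: one linear sweep records how many pointers each metadata block embeds and how many reference it, then those counts are verified. The tree-shaped filesystem assumption and all names (`fsck_linear_sweep`, `blocks`, `root`) are illustrative, not the patent's design:

```python
from collections import Counter

def fsck_linear_sweep(blocks, root):
    """Single sweep over metadata blocks.

    blocks maps block id -> list of child block ids it points to.
    Consistency check: every pointer targets an existing block, and
    every non-root block is referenced exactly once (a tree-shaped
    filesystem is assumed for this sketch).
    """
    inbound = Counter()
    for blk, children in blocks.items():
        for child in children:
            inbound[child] += 1  # count pointers pointing to each block
    for blk, children in blocks.items():
        if any(c not in blocks for c in children):
            return False  # dangling pointer to a missing block
        expected = 0 if blk == root else 1
        if inbound[blk] != expected:
            return False  # orphaned or multiply-referenced block
    return True
```

The sweep touches each block once, which is what makes the check "linear" compared with recursive traversal.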
20130325818 | LOGO-ENABLED INTERACTIVE MAP INTEGRATING SOCIAL NETWORKING APPLICATIONS - A logo-enabled interactive map integrating social networking applications is provided. The interactive map may be configured to help end users discover and share information (e.g., events, deals, news occurrences, etc.) associated with a plurality of venues. | 12-05-2013 |
20130325819 | AUGMENTING METADATA USING USER ENTERED METADATA - In one embodiment, a method obtains metadata associated with a media program. The method receives user entered metadata from a first user for an object in a frame of the media program and compares the user entered metadata from the first user with user entered metadata from second users for the object. Then, the method verifies that the user entered metadata from the first user and the second users should be associated as augmenting metadata for the object in the media program based on the comparison. Upon verifying, the method performs: determining metadata storage including metadata for one or more other objects in the media program and storing the user entered metadata for the object in the media program in the metadata storage for the media program as the augmenting metadata. | 12-05-2013 |
20130332426 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processing apparatus includes a nonvolatile memory, a calculation module and a storage module. The nonvolatile memory has a data region as a subject of falsification detection and a hash value storage region in which a hash value of the data region is written. The calculation module calculates the hash value from the data. The storage module stores the calculated hash value in the hash value storage region. According to another embodiment, an information processing method includes: providing a nonvolatile memory which has a data region as a subject of falsification detection and a hash value storage region in which a hash value of the data region is written; calculating the hash value from the data; and storing the calculated hash value in the hash value storage region. | 12-12-2013 |
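The two-region scheme above reduces to a familiar pattern: store a hash of the data region alongside the data, and later recompute and compare it to detect falsification. A minimal sketch using SHA-256 (the abstract does not name a specific hash function; the dict-based "memory" and function names are illustrative):

```python
import hashlib

def write_with_hash(memory, data):
    """Write the data region and the hash value storage region together."""
    memory["data"] = data
    memory["hash"] = hashlib.sha256(data).hexdigest()

def is_tampered(memory):
    """Recompute the hash of the data region and compare it with the
    stored hash value; a mismatch indicates falsification."""
    return hashlib.sha256(memory["data"]).hexdigest() != memory["hash"]
```

Any modification of the data region without a corresponding update of the hash region is detected on the next check.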
20130346375 | Equivalence Classes Over Parameter State Space - An approach is provided in which an equivalence class generator selects a configurable module that includes control points and configuration parameters. The configuration parameters define a parameter state space of the configurable module. The equivalence class generator utilizes the control points to generate equivalence classes, which include class representatives that indicate values for the configuration parameters. Next, one of the class representatives is selected and verified from each of the equivalence classes. In turn, the verification of the class representatives verifies the parameter state space of the configurable module. | 12-26-2013 |
20140006360 | MANAGEMENT APPARATUS AND MANAGEMENT METHOD | 01-02-2014 |
20140012818 | DATA PROCESSING - Disclosed are methods and apparatus for processing correlated metadata (e.g., programmatic metadata relating to one or more episodes of a television show). Mappings, or correlations, between chunks of the metadata that originated from a particular data source and the metadata clusters may be determined and displayed, e.g., on a graphical user interface. Using this display, a user (i.e., a human operator) may detect inconsistencies in the correlated metadata. An inconsistency may be an incorrect mapping, the mapping of more than one of the metadata chunks that originated from the same data source to the same metadata cluster, or that one or more of the metadata chunks have not been mapped to a metadata cluster. The mappings may then be edited so as to remove detected inconsistencies. | 01-09-2014 |
20140012819 | Automatic Consistent Sampling For Data Analysis - A method, computer program product, and system for analyzing data within one or more databases, comprising selecting one or more databases for analysis, each database comprising one or more database objects comprising one or more data values, applying a function to each data value in each database object within the one or more databases, where the function produces function values limited to a predetermined range, identifying for analysis the data values producing a certain function value within the predetermined range to form a sampled data set, and analyzing the sampled data set to determine relationships between the database objects within and across the one or more databases. | 01-09-2014 |
20140019423 | DATA LINEAGE ACROSS MULTIPLE MARKETPLACES - Tracking lineage of data. A method may be practiced in a network computing environment including a plurality of interconnected systems where data is shared between the systems. A method includes accessing a dataset. The dataset is associated with lineage metadata. The lineage metadata includes data indicating the original source of the data, one or more intermediary entities that have performed operations on the dataset, and the nature of operations performed on the dataset. A first entity performs an operation on the dataset. As a result of performing a first operation on the dataset, the method includes updating the lineage metadata to indicate that the first entity performed the operation on the dataset. The method further includes providing functionality for determining if the lineage metadata has been compromised in that the lineage metadata has been at least one of removed from association with the dataset, is corrupted, or is incomplete. | 01-16-2014 |
20140019424 | IDENTIFIER VALIDATION AND DEBUGGING - Methods, systems, and apparatus, including computer programs encoded on a computer-readable storage medium, and including a method are provided where the method includes receiving information from a web publisher related to a presentation of a web property responsive to a user request, the information including an identifier associated with the content presented and an identifier associated with the user that viewed the web property; validating the information, including comparing the information to a separate information source that is provided by the web publisher including determining when the information is properly encoded and/or formatted; and when the information is unable to be validated, storing a record indicative of any invalid information. | 01-16-2014 |
20140025643 | MAINTAINING OBJECT AND QUERY RESULT CONSISTENCY IN A TRIPLESTORE DATABASE - A database management data processing system is provided. The system can include a host computing system that includes at least one server with memory and at least one processor. The system further includes a database coupled to the host computing system and a database management system (DBMS) executing in the host computing system and managing access to the database through a statement table implemented as a triplestore. Finally, the system includes a triplestore management module coupled to the DBMS. The module includes program code enabled to retrieve from the triplestore a record for a number of rows provided for a common subject in order to validate consistency of data read from the statement table for the particular subject. | 01-23-2014 |
20140025644 | GARBAGE COLLECTION AWARE DEDUPLICATION - Mechanisms are provided for improving the efficiency of garbage collection in a deduplication system by intelligently managing storage of deduplication segments. When a duplicate segment is identified, a reference count for an already maintained segment is incremented only if the already maintained segment has the same lifecycle as the identified duplicate segment. In some instances, an already maintained segment is assumed to have the same lifecycle if it is not stale or its age is not significantly different from the age of the newly identified duplicate. If the already maintained segment has a different lifecycle, the new segment is stored again even though duplicates are already maintained. | 01-23-2014 |
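The lifecycle test in this abstract can be sketched with a simple age-gap heuristic: a duplicate only increments the reference count of an existing segment when the two writes are close enough in time to plausibly share a lifecycle; otherwise a fresh copy is stored. The class, the `max_age_gap` threshold, and the timestamp-based interface are all illustrative assumptions, not the patented mechanism:

```python
class DedupStore:
    """Garbage-collection-aware deduplication sketch."""

    def __init__(self, max_age_gap=3600.0):
        # fingerprint -> list of [created_at, refcount] per stored copy
        self.segments = {}
        self.max_age_gap = max_age_gap

    def store(self, fingerprint, now):
        """Store a segment; returns True if the write was deduplicated
        against an existing copy with a matching lifecycle."""
        for entry in self.segments.setdefault(fingerprint, []):
            if now - entry[0] <= self.max_age_gap:  # same lifecycle
                entry[1] += 1
                return True
        # No copy with a matching lifecycle: store the segment again.
        self.segments[fingerprint].append([now, 1])
        return False
```

Keeping same-lifecycle duplicates on one copy means the whole copy tends to become garbage at once, which is what makes later garbage collection cheap.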
20140032504 | PROVIDING A MEASURE REPRESENTING AN INSTANTANEOUS DATA CONSISTENCY LEVEL - Based on events corresponding to operations performed with respect to a data store, a measure is computed that represents an instantaneous consistency level, at a point in time, of data that is subject to the operations. | 01-30-2014 |
20140046910 | SMART CONTENT OPTIMIZATIONS BASED UPON ENTERPRISE PORTAL CONTENT META-MODEL - The disclosure generally describes computer-implemented methods, software, and systems for optimizing enterprise portal content. One computer-implemented method includes receiving a content analysis request associated with a content repository, analyzing, using at least one computer, content objects associated with the content repository for inconsistencies with a meta-model, receiving content optimization suggestion data, modifying, by operation of at least one computer, the content repository content objects using the content optimization suggestion data, and receiving optimization status data. | 02-13-2014 |
20140059013 | Ensuring integrity of security event log upon download and delete - A cloud deployment appliance includes a mechanism to enable permitted users to move event records reliably from an internal event log of the appliance to a data store located external to the appliance while ensuring the integrity of event records. The mechanism ensures that the event records are not tampered with in storage or during download. Further, the approach ensures that no event records can be removed from the appliance internal storage before being successfully downloaded to the external data store. | 02-27-2014 |
20140059014 | VALIDATING SYSTEM AND METHOD - A computer reads an element information list from a file stored in a database of the computer, and generates first element identifiers according to the element information list. The computer marks a second element identifier in the content of the file, in response to a determination that a first element identifier differs from its corresponding second element identifier. | 02-27-2014 |
20140067771 | Management of a Scalable Computer System - A method and system for remotely managing a scalable computer system is provided. Elements of an associated tool are embedded on a server and associated console. A service processor for each partition is provided, wherein the service processor supports communication between the server and the designated partition. An operator can discover and validate availability of elements in a computer system. In addition, the operator may leverage data received from the associated discovery and validation to configure or re-configure a partition in the system that supports a projected workload. | 03-06-2014 |
20140067772 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR ACHIEVING EVENTUAL CONSISTENCY BETWEEN A KEY VALUE STORE AND A TEXT INDEX - An apparatus for reconciling data inconsistencies between indexes may include a processor and memory storing executable computer code causing the apparatus to at least perform operations including retrieving first metadata from a key value store in response to receipt of a request for data associated with a user. The computer program code may further cause the apparatus to retrieve second metadata from a text index in response to querying the text index for the second metadata. The second metadata may correspond to the first metadata of the key value store. The computer program code may further cause the apparatus to evaluate the first metadata of the key value store and the second metadata of the text index to determine whether there are any differences between the first metadata and the second metadata. Corresponding methods and computer program products are also provided. | 03-06-2014 |
20140089270 | METHODS FOR DETERMINING EVENT COUNTS BASED ON TIME-SAMPLED DATA - A method for determining event counts for a database system includes capturing samples for the active sessions based on a pre-defined sampling frequency and identifying events from the captured samples. The method further includes determining the wait time for each of the identified events and determining an event count for the active sessions using a harmonic mean. The harmonic mean is a summation of the maximum of one and the ratio of the sampling frequency to the determined wait time for each of the identified events. | 03-27-2014 |
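The summation this abstract describes reduces to a one-liner: each identified event contributes max(1, sampling period / wait time) occurrences, so events shorter than the sampling period are counted multiple times and longer events at least once. Interpreting the "ratio of the sampling frequency to the wait time" as the sampling interval divided by the wait time is an assumption of this sketch, as are the names:

```python
def event_count(wait_times, sampling_interval):
    """Estimate an event count from time-sampled wait data.

    Each identified event contributes max(1, sampling_interval / wait)
    occurrences: short waits imply several events happened between
    samples, while long waits still count as at least one event.
    """
    return sum(max(1.0, sampling_interval / w) for w in wait_times)
```

For example, with one sample per second, an event with a 0.25 s wait contributes 4 occurrences and an event with a 2 s wait contributes 1.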
20140089271 | MEMORY ADDRESS ALIASING DETECTION - Method and apparatus to efficiently detect violations of data dependency relationships. A memory address associated with a computer instruction may be obtained. A current state of the memory address may be identified. The current state may include whether the memory address is associated with a read or a store instruction, and whether the memory address is associated with a set or a check. A previously accumulated state associated with the memory address may be retrieved from a data structure. The previously accumulated state may include whether the memory address was previously associated with a read or a store instruction, and whether the memory address was previously associated with a set or a check. If a transition from the previously accumulated state to the current state is invalid, a failure condition may be signaled. | 03-27-2014 |
20140108357 | SPECIFYING AND APPLYING RULES TO DATA - Validation rules are specified for validating data included in fields of elements of a dataset. Cells are rendered in a two-dimensional grid that includes: one or more subsets of the cells extending in a direction along a first axis, each associated with a respective field, and multiple subsets of the cells extending in a direction along a second axis, one or more of the subsets associated with a respective validation rule. Validation rules are applied to at least one element based on user input received from at least some of the cells. Some cells, associated with a field and a validation rule, can each include: an input element for receiving input determining whether or not the associated validation rule is applied to the associated field, and/or an indicator for indicating feedback associated with a validation result based on applying the associated validation rule to data included in the associated field. | 04-17-2014 |
20140108358 | SYSTEM AND METHOD FOR SUPPORTING TRANSIENT PARTITION CONSISTENCY IN A DISTRIBUTED DATA GRID - A system and method can support transient partition consistency in a distributed data grid. A cluster node in the distributed data grid can maintain a storage data structure and an index data structure. The storage data structure can store data in one or more partitions maintained on the cluster node, and the index data structure contains a plurality of indexes, wherein each index supports indexing at least one data grid operation on the one or more partitions. Furthermore, the distributed data grid ensures consistency between the storage data structure and the index data structure for the data stored in the one or more partitions maintained on the cluster node. | 04-17-2014 |
20140114931 | MANAGEMENT OF ANNOTATED LOCATION AWARE ASSETS - According to one general aspect, a method may include storing, in a memory device, a plurality of floor maps, each floor map indicating the structural layout of a respective predefined physical location. The method may include storing, in a memory device, a plurality of point-of-interest (POI) data structures. Each POI data structure may include a physical location of an associated POI. The method may include receiving a floor map request from a client computing device, wherein the floor map request includes a requested location. The method may include based upon the location included by the floor map request, selecting a selected floor map and a selected subset of the plurality of POI data structures. The method may include transmitting, to the client computing device, a response to the floor map request based upon the selected floor map and the selected POI data structures. | 04-24-2014 |
20140122444 | FIRST TOUCH CONFIGURATION - A method of customization of software configuration includes generating and saving user information relating to software features when the software features are requested by a user for the first time. The computer system executes instructions to allow the user to input and adjust the user information. The user information is reviewed, and adjustments are made to the configurations of the software features based on the saved user information. Then, the computer system executes the software features requested by the user according to the implemented adjustments to the configurations of the software features. | 05-01-2014 |
20140122445 | DATABASE ANALYZER AND DATABASE ANALYSIS METHOD - A database analyzer includes a data sorting unit sorting a data group acquired from an analysis target database based on data values in a table column and storing it as analysis target data in a storage unit; a data pattern creation processing unit creating a group for each data value based on differences between the data values and storing a data pattern in the storage unit; a data pattern judgment processing unit for judging validity of the data pattern; and a data pattern transformation processing unit for reconstructing the data pattern with respect to constituent elements of each group included in the data pattern by transforming each group in accordance with a specified conversion rule for converting the constituent elements, which are conceptually similar to each other, into the same constituent element, and storing it in the storage unit if a negative result is obtained for the validity judgment. | 05-01-2014 |
20140129526 | VERIFYING DATA STRUCTURE CONSISTENCY ACROSS COMPUTING ENVIRONMENTS - According to one aspect of the present disclosure a system and technique for verifying data structure consistency across computing environments is disclosed. The system includes: a processor and a compatibility tool. The compatibility tool is executable by the processor to: generate a first signature for a data structure corresponding to a first computing environment; and generate a second signature for the data structure corresponding to a second computing environment. The processor is operable to compare the first and second signatures and, responsive to a disparity between the first and second signatures, indicate a change to the data structure between the first and second computing environments. | 05-08-2014 |
20140129527 | VERIFYING DATA STRUCTURE CONSISTENCY ACROSS COMPUTING ENVIRONMENTS - According to one aspect of the present disclosure, a method and technique for verifying data structure consistency across computing environments is disclosed. The method includes: generating a first signature for a data structure corresponding to a first computing environment; generating a second signature for the data structure corresponding to a second computing environment; comparing the first and second signatures; and responsive to a disparity between the first and second signatures, indicating a change to the data structure between the first and second computing environments. | 05-08-2014 |
20140149360 | Usage of Filters for Database-Level Implementation of Constraints - Embodiments relate to methods and apparatuses implementing database-level consistency checking in a declarative manner. A consistency engine within the database layer may access one or more consistency rules in the form of a table or executable program code. Based upon application of these consistency rules to records comprising combinations of data characteristics, the consistency engine may determine the validity or invalidity of those records. Consistency rules may implement a ‘check’ method, and also a ‘derive’ method allowing derivation of data characteristics (targets) in a record from other characteristics (sources) in the record. Filters may be used to split data records into sets of records having all fields assigned, and those having ‘not assigned’ fields. Consistency rules used for derivation methods can be nested. Also, in certain embodiments a consistency engine may use filtering techniques for constraint checking including multi-level derivations in a declarative way. | 05-29-2014 |
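The 'check'/'derive' duality in this abstract can be sketched with one small function: a rule maps source characteristics to a target characteristic, and the same rule either fills a 'not assigned' target (derive) or verifies an assigned one (check). The tuple-based rule encoding and all names are illustrative assumptions:

```python
def apply_rule(record, rule):
    """Apply a consistency rule (sources, target, fn) to a record.

    If the target field is 'not assigned' (None), derive it from the
    source fields; otherwise check that the assigned value matches
    the derived value. Returns True when the record is consistent.
    """
    sources, target, fn = rule
    derived = fn(*(record[s] for s in sources))
    if record.get(target) is None:   # 'not assigned' field: derive
        record[target] = derived
        return True
    return record[target] == derived  # assigned field: check
```

Splitting records by whether the target is assigned, as the abstract's filters do, decides which of the two paths each record takes.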
20140149361 | CONFLICT MARKUP TOLERANT INPUT STREAM - A device receives a conflicted file, with a structured data format, that includes a conflict marker that does not comply with the structured data format. The conflict marker identifies first edited information and second edited information included in the conflicted file. The first edited information and the second edited information comply with the structured data format, and include information that has been modified in different versions of a shared file to create the conflicted file. The device detects that the conflicted file includes the conflict marker, and identifies, based on the detected conflict marker, the first edited information and the second edited information. The device determines that at least one of the first edited information or the second edited information is to be provided to the application for processing, and provides, based on the determining, the first edited information or the second edited information to the application for processing. | 05-29-2014 |
20140156604 | Method and System for Maintaining Derived Data Sets - A first data set is derived from a second data set. The first data set is stored in a database of derived data sets. The second data set is updated without updating the first data set, such that the first data set and the second data set are inconsistent. The first data set is deleted or updated during batch processing of the database of the derived data sets. | 06-05-2014 |
20140201165 | REWRITING RELATIONAL EXPRESSIONS FOR DIFFERENT TYPE SYSTEMS - A computer determines that the type of one or more of a relational operator and operands of a relational expression originated in a first type system, and determines the sign of at least one of the operands. The computer rewrites the relational expression based on the sign of at least one of the operands, sends the rewritten relational expression for evaluation in a second type system, and receives the evaluated rewritten relational expression after evaluation in the second type system. The computer can rewrite the relational expression by generating a group of terms joined disjunctively, as well as by generating a group of conjunctive terms joined disjunctively. | 07-17-2014 |
20140201166 | REWRITING RELATIONAL EXPRESSIONS FOR DIFFERENT TYPE SYSTEMS - A computer determines that the type of one or more of a relational operator and operands of a relational expression originated in a first type system, and determines the sign of at least one of the operands. The computer rewrites the relational expression based on the sign of at least one of the operands, sends the rewritten relational expression for evaluation in a second type system, and receives the evaluated rewritten relational expression after evaluation in the second type system. The computer can rewrite the relational expression by generating a group of terms joined disjunctively, as well as by generating a group of conjunctive terms joined disjunctively. | 07-17-2014 |
20140236904 | BATCH ANALYSIS - A computing device is configured to receive a parameter from a user device. The parameter may include a requirement for a batch, stored by the computing device, to properly process batch information. The computing device is configured to test the batch by using the parameter to generate a test result before processing the batch information; and store the test result. | 08-21-2014 |
20140244597 | HALLOWEEN PROTECTION IN A MULTI-VERSION DATABASE SYSTEM - Mitigating problems related to the Halloween problem, in which update operations potentially allow a record to be visited more than once during the operation. A method includes accessing an instance of a data store operation statement. The instance of the data store operation statement is executed, causing an update or delete to an old version of a data store record, or creation of a data store record; this results in a new version of the data store record in the case of an update or creation, and a deleted version of the data store record in the case of a delete in the data store. The instance of the data store operation statement is correlated with the new version of the data store record or the deleted version of the data store record. | 08-28-2014 |
20140258242 | File System and Method of Operating Thereof - A method for maintaining consistency among metadata elements (MDEs) of a logical object includes: configuring a child MDE to include a first correlation value uniquely indicative of a parent MDE, the parent MDE including a reference to the child MDE; determining an order of performing at least two write operations included in a transaction related to the logical object: at least one write operation with respect to the parent MDE and at least one write operation with respect to the child MDE, the determined order assuring that the child MDE is indicated as existing and includes the first correlation value as long as the parent MDE exists; upon a first access to the parent MDE, subsequent to the transaction, verifying consistency between the parent MDE and the child MDE using the first correlation value; and deleting the parent MDE if the verifying of consistency is unsuccessful. | 09-11-2014 |
20140258243 | ONLINE SYSTEM, APPARATUS, AND METHOD FOR OBTAINING OR APPLYING FOR INFORMATION PROGRAMS, SERVICES AND/OR PRODUCTS - Methods, systems, and apparatuses, including computer programs encoded on computer-readable media, for receiving, from a user, user data associated with one or more fields of a user profile associated with the user. The user profile includes a plurality of predefined fields. A request to create a requisition is received from a producer. The request includes requisition fields that include one or more of the plurality of predefined fields of the user profile. A request for the requisition is received via a smart link from the user. Requisition user data from the user profile corresponding to the requisition fields is determined. The requisition user data includes a portion of the received user data. The requisition and the requisition user data are sent to the user. An indication giving permission for the producer to access the requisition user data is received. The requisition user data is provided to the producer. | 09-11-2014 |
20140279938 | ASYNCHRONOUS ERROR CHECKING IN STRUCTURED DOCUMENTS - Systems and methods are described for performing asynchronous error checking on a structured document. In accordance with the systems/methods, a first thread, such as a main application thread of a document editor, parses the document to identify one or more new elements included therein and create copies of the one or more new elements. A second thread, such as a background thread, applies error checking to the copies of the one or more new elements to generate error results corresponding to the one or more new elements. The first thread then uses the error results to indicate errors in association with the one or more new elements. | 09-18-2014 |
20140279939 | METHOD FOR PROPAGATING INFORMATION BETWEEN A BUILDING INFORMATION MODEL AND A SPECIFICATION DOCUMENT - A method includes receiving, at a computer system, data exported from a Building Information Model (BIM). The exported data is indicative of a first plurality of sets of units and values, each set associated uniquely with a different element defined in the BIM. The computer system parses a specification document to identify a second plurality of sets of units and values, each set associated uniquely with a different element listed in the specification document. The computer system compares the first plurality of sets of units and values to the second plurality of sets of units and values, and automatically identifies a set of units and values that is a member of only one of the first plurality of sets of units and values and the second plurality of sets of units and values. An inconsistency between the elements listed in the specification document and the elements defined in the BIM is registered automatically. | 09-18-2014 |
20140279940 | SELF-GUIDED VERIFICATION OF AN ITEM - A method of providing a level of certification of an attribute of an item is disclosed. A requirement is determined for a level of certification for an attribute of an item. A notification is provided of an evidence item that is to be submitted to evaluate the level of certification of the attribute of the item. The evidence item is received. The level of certification of the attribute of the item is determined based on the received evidence item. | 09-18-2014 |
20140279941 | Managing Multiple Sets of Metadata - Apparatuses, systems, and methods are disclosed for managing multiple sets of metadata. A method includes maintaining a first set of metadata on a volatile recording medium and a second set of metadata on a non-volatile recording medium. The first and second sets of metadata are associated with one or more logical addresses for data stored on the non-volatile recording medium. The first and second sets of metadata relate to a state of the data. A method includes updating the second set of metadata in response to a first operation performed on the data. The second set may be updated based on the first operation. A method includes updating the first set of metadata in response to a subsequent operation performed on the data. The first set may be updated based on the first operation. | 09-18-2014 |
20140279942 | STRUCTURING DATA - Among other things, a machine-based method is described. The method comprises recording object classes of an object model, producing an object representation for data of two or more data sources based on a mapping of data formats of the data sources to the object classes of the object model, and producing mapped data from the data sources. The mapped data is available in objects of the object classes and is comparable in the object representation. At least two of the data sources have different data formats. | 09-18-2014 |
20140279943 | FILE SYSTEM VERIFICATION METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus includes an identifying unit and a verifying unit. The identifying unit identifies, among a plurality of unit storage areas in a volume storing therein one or more pieces of management object information managed by a file system and one or more pieces of management information corresponding one-to-one with the management object information pieces and used to manage the corresponding management object information pieces, one or more unit storage areas whose information has been updated within a predetermined time frame. The verifying unit verifies the consistency between the management object information pieces and the management information pieces in the file system using the information of the identified unit storage areas. | 09-18-2014 |
20140279944 | SQL QUERY TO TRIGGER TRANSLATION FOR MAINTAINING CONSISTENCY OF CACHE AUGMENTED SQL SYSTEMS - An SQL query-to-procedure translation system may be used in connection with a relational database management system (RDBMS) that is augmented by a cache and a cache management system that manages the cache. The query-to-procedure translation system may include a data processing system that has at least one computer hardware processor and a configuration that, in response to a query issued by an application program for data from the relational database management system: intercepts the query; generates code that determines if data requested by the query that may be in the cache has changed; and registers the code as a procedure with the RDBMS. | 09-18-2014 |
20140279945 | MATCHING TRANSACTIONS IN MULTI-LEVEL RECORDS - Identifying matching transactions. First and second log files contain operation records of transactions in a transaction workload, each file recording a respective execution of the transaction workload. A first record location in the first file and an associated window of a defined number of sequential second record locations in the second file are advanced one record location at a time. Whether each operation record of a complete transaction at a first record location has a matching operation record at one of the record locations in the associated window of second record locations is determined. If so, the complete transaction in the first file and the transaction that includes the matching operation records in the second file are identified as matching transactions. | 09-18-2014 |
20140317066 | METHOD OF ANALYSING DATA - The present disclosure provides a method of analysing data. In a first step a plurality of data records is provided, each data record having a plurality of data elements and having a property. At least some data elements of each data record are selected. In a next step, the selected data elements are grouped in a plurality of groups such that each group has data elements that are a part of one of the data records and such that for a group that has data elements of more than one data record, each data element or property is similar or identical to at least one of the data elements or properties, respectively, of each other data record of that group. A group of interest and a reference group are determined from the plurality of groups. The group of interest has at least one data element of interest and the reference group has data elements or properties that are similar or identical with data elements or properties, respectively, of the group of interest. In a further step, the group of interest is compared with the reference group such that from the reference group information concerning the data element of interest can be derived. | 10-23-2014 |
20140324786 | Anomaly detection in chain-of-custody information - A method includes receiving first vehicle log data related to modification of a first software part at a first vehicle. The method also includes receiving first ground log data of a first ground system. The first ground log data indicates first chain-of-custody information regarding the first software part. The method further includes analyzing the first vehicle log data and the first ground log data based on baseline data to detect an anomaly. The method also includes sending a notification in response to detecting the anomaly. | 10-30-2014 |
20140324787 | Analyzing Large Data Sets to Find Deviation Patterns - Operations, such as data processing operations, can be improved by applying clustering and statistical techniques to observed behaviors in the data processing operations. | 10-30-2014 |
20140330792 | APPLICATION OF TEXT ANALYTICS TO DETERMINE PROVENANCE OF AN OBJECT - A computer identifies a first source of information that includes unstructured text and one or more keywords associated with an object. The computer retrieves the unstructured text included in the first source. The computer identifies provenance information of the object that is included in one or more segments of the unstructured text. The computer adds the identified provenance information of the object to a timeline. | 11-06-2014 |
20140379668 | AUTOMATED PUBLISHED DATA MONITORING SYSTEM - An automated published data monitoring system implements a content validation service capable of validating published data in accordance with programmable criteria. A root data location is provided and validation of such data includes crawling a hierarchical organization of additional data. Deserializers are specific to identified collections of data and deserialize data into strongly typed data structures that are programmatically validatable. Deserializers register themselves to handle collections of data identified based upon the location and domain of such data. Additionally, validators are specific to types of data structures and programmatically validate such data structures including validating their type and their correctness, the latter as compared to statically or dynamically defined limits. Validators register themselves to handle specified types of data structures originating from specific data collections. Content can be validated in accordance with either a depth-first or breadth-first validation. | 12-25-2014 |
20140379669 | Feedback Optimized Checks for Database Migration - Example systems and methods of database migration optimized by feedback are presented. In one example, a migration of database data from a first to a second database by multiple concurrent processes may be initiated on a computing system. Processing time of at least some of the processes may be monitored during the migration. Based on this monitoring, at least one portion of the database data being migrated by one of the concurrent processes may be segmented into multiple segments, wherein each of the multiple segments may be migrated by a separate one of the concurrent processes. Also, a load on the computing system may be monitored during the migration. Based on this monitoring, a number of the concurrent processes may be adjusted. In other examples, consistency checking for subsequent database migrations may be based on consistency checking results for the current migration. | 12-25-2014 |
20150012500 | SYSTEM AND METHOD FOR READING FILE BLOCKS - A system and method for reading file blocks includes reading an inode associated with the file from the file system, the inode including one or more first block pointers, determining a height of a file tree associated with the file, and determining whether a value of a second block pointer selected from the one or more first block pointers is consistent with the file having been stored using a block allocation pattern. When the value of the second block pointer is consistent with the file having been stored using the block allocation pattern the method further includes pre-fetching a plurality of file blocks based on the block allocation pattern, verifying that the pre-fetched file blocks are consistent with the file tree, and retrieving one or more data blocks of the file. In some examples, the block allocation pattern corresponds to the file being stored in streaming order to consecutively and contiguously located blocks. | 01-08-2015 |
20150012501 | METHOD AND APPARATUS FOR PROVIDING DATA CORRECTION AND MANAGEMENT - An approach is provided for determining at least one entity specified in at least one data record. The approach further involves determining one or more data sources available from the at least one entity. The approach further involves processing and/or facilitating a processing of the one or more data sources to determine information for a verification, an update, or a combination thereof of the at least one data record. | 01-08-2015 |
20150026134 | FAST PCA METHOD FOR BIG DISCRETE DATA - This disclosure is related to further approximating multiple data vectors of a dataset. The multiple data vectors are initially approximated by one or more stored principal components. A processor performs multiple iterations of determining an updated estimate of a further principal component based on the multiple data vectors that are initially approximated by the one or more stored principal components. The processor performs this step such that the updated estimate of the further principal component further approximates the dataset. In each iteration the processor constrains the updated estimate of the further principal component to be orthogonal to each of the one or more stored principal components. The data vectors of the dataset are not manipulated but remain the same data vectors that are approximated by the stored principal components. | 01-22-2015 |
20150032701 | ENFORCING TEMPORAL UNIQUENESS OF INDEX KEYS UTILIZING KEY-VALUED LOCKING IN THE PRESENCE OF PSEUDO-DELETED KEYS - Techniques are described for identifying conflicts between an index of temporal keys and a prospective temporal key. The prospective temporal key specifies a prospective range of time. Embodiments scan the index to identify a first temporal key that potentially conflicts with the prospective temporal key. The first temporal key specifies a first range of time and is identified based on a comparison between the first range of time and the prospective range of time. Embodiments determine whether the prospective temporal key conflicts with any temporal keys in the index, where the prospective temporal key conflicts with the first temporal key if the first range of time overlaps with the prospective range of time and the first temporal key is not a pseudo-deleted key, and such that the prospective temporal key does not conflict with any temporal keys if it does not conflict with the first temporal key. | 01-29-2015 |
20150039567 | PROTECTING STORAGE DATA DURING SYSTEM MIGRATION - Provided are techniques for determining whether a character code point value of a first plurality of character code point values corresponds to a second character code point value from a second plurality of character code point values, the first value being associated with a first encoding version and the second value with a second encoding. In response to the first value not corresponding to any of the second character code point values, a determination is made as to whether the value corresponds to a third character code point value of a third plurality of code point values stored in a character value record table (CVRT). In response to the value corresponding to the third value, an entry is made in the CVRT that associates the character with the third value; and the character is stored in conjunction with an application associated with the second encoding using the third value. | 02-05-2015 |
20150039568 | Low-Overhead Enhancement of Reliability of Journaled File System Using Solid State Storage and De-Duplication - A mechanism is provided in a data processing system for reliable asynchronous solid-state device based de-duplication. Responsive to receiving a write request to write data to the file system, the mechanism sends the write request to the file system, and in parallel, computes a hash key for the write data. The mechanism looks up the hash key in a de-duplication table. The de-duplication table is stored in a memory or a solid-state storage device. Responsive to the hash key not existing in the de-duplication table, the mechanism writes the write data to a storage device, writes a journal transaction comprising the hash key, and updates the de-duplication table to reference the write data in the storage device. | 02-05-2015 |
20150046406 | METHOD AND DEVICE FOR DATA MINING ON COMPRESSED DATA VECTORS - A method for data mining on compressed data vectors by a certain metric being expressible as a function of the Euclidean distance is suggested. In a first step, for each compressed data vector, positions and values of such coefficients having the largest energy in the compressed data vector are stored. In a second step, for each compressed data vector, the coefficients not having the largest energy in the compressed data vector are discarded. In a third step, for each compressed data vector, a compression error is determined in dependence on the discarded coefficients in the compressed data vector. In a fourth step, at least one of an upper and a lower bound for the certain metric is retrieved in dependence on the stored positions and the stored values of the coefficients having the largest energy and the determined compression errors. | 02-12-2015 |
20150058300 | Report Generation for a Navigation-Related Database - Systems, devices, features, and methods for updating a geographic database, such as a navigation-related database, and/or reporting discrepancies associated with geographic data of the geographic database are disclosed. For example, one method comprises capturing a photograph of an observed geographic feature in a geographic region. Comment information corresponding to the observed geographic feature may be stored. The comment information is indicative of a discrepancy between the observed geographic feature and the geographic data corresponding to the geographic region. The comment information may be associated with the photograph to generate a report, and the report is transmitted. | 02-26-2015 |
20150066870 | Correlation of Maximum Configuration Data Sets - A method of correlating data for multiple product configurations is provided comprising enhancing, by a processor, data set definition to accommodate data models of data sets describing multiple product configurations. The method also comprises comparing, by the processor, values of the data sets utilizing at least one matching algorithm and effectivity expressions identifying relevant rows for comparison in the data sets. The method also comprises enhancing, by the processor, the at least one matching algorithm to identify perfect and partial matches between the data sets wherein values of all data contained in the data sets are compared in one single operation comprising simultaneous validation of engineering data for the multiple product configurations. | 03-05-2015 |
20150074063 | METHODS AND SYSTEMS FOR DETECTING DATA DIVERGENCE AND INCONSISTENCY ACROSS REPLICAS OF DATA WITHIN A SHARED-NOTHING DISTRIBUTED DATABASE - Methods and systems are disclosed for detecting data divergence or inconsistency across replicas of data maintained in replica nodes in a shared-nothing distributed computer database system. The replica nodes communicate with a coordinator node over a computer network. The method includes the steps of: (a) receiving an operation at the coordinator node; (b) transmitting the operation to the replica nodes to be executed by each replica node to generate an operation result and a hash representation of the operation or of the operation result; (c) receiving the operation result and the hash representation generated by each of the replica nodes; and (d) determining whether the operation resulted in data divergence or inconsistency by detecting when the hash representations received from the replica nodes are not all the same. | 03-12-2015 |
20150081648 | Method of Composing an Integrated Ontology - A method of composing an integrated ontology is provided. The method makes use of feature models for the process of modular development of ontologies. The feature models are adapted to assist the user in the process of selection of appropriate and combinable ontologies from an ontology repository for a given use case scenario. The usage of a feature model provides a view of all possible combinations, i.e. all combinations that do not result in an inconsistent ontology. The feature model specifies dependencies between different ontologies, thereby expediting the selection of appropriate ontologies. The usage of reasoning techniques provides for an in-situ consideration of restrictions for the selection of ontologies. | 03-19-2015 |
20150088835 | FACILITATING DETERMINATION OF RELIABILITY OF CROWD SOURCED INFORMATION - Reliability of data reports can be determined by a device that receives a number of reports from different sources. One method includes: receiving data reports from devices. The data reports are associated with an occurrence of an event. The method also includes determining reliability data representing reliability of the data reports. The reliability can be determined based on one or more different defined characteristics such as the location at which a data report was generated relative to the location of the event, whether the data report was the most recently-received data report and/or the number of data reports reporting that an event is ongoing relative to the number of data reports reporting that the event is no longer ongoing. The method can also include determining whether a data report includes information indicative of a false positive report or a false negative report. | 03-26-2015 |
20150088836 | DATA ENRICHMENT USING HETEROGENEOUS SOURCES - A data enrichment system may include an attribute relevance module to measure relevance of an attribute to a data object to be enriched. The data object may include the attribute including a known or an unknown value. An output value confidence module may calculate a confidence of an output value of a source used for enrichment of the data object. The output value may represent the known and/or unknown values of the attribute. The system may use the measured relevance of the attribute and the calculated confidence of the output value to determine assignment of the known or unknown values to the attribute. | 03-26-2015 |
20150106342 | SYSTEM AND METHOD OF DETECTING CACHE INCONSISTENCIES - A system and method of detecting cache inconsistencies among distributed data centers is described. Key-based sampling captures a complete history of a key for comparing cache values across data centers. In one phase of a cache inconsistency detection algorithm, a log of operations performed on a sampled key is compared in reverse chronological order for inconsistent cache values. In another phase, a log of operations performed on a candidate key having inconsistent cache values as identified in the previous phase is evaluated in near real time in forward chronological order for inconsistent cache values. In a confirmation phase, a real time comparison of actual cache values stored in the data centers is performed on the candidate keys identified by both the previous phases as having inconsistent cache values. An alert is issued that identifies the data centers in which the inconsistent cache values were reported. | 04-16-2015 |
20150120678 | AUTOMATICALLY CORRECTING INVALID SCRIPTS IN WEB APPLICATIONS - According to an aspect, a method for correcting an invalid script in a web application includes determining an invalid reference in an invalid script. A storage location is determined in a database corresponding to the invalid reference based on a data relationship mapping, wherein the data relationship mapping indicates the correspondence between the reference and a storage location in the database. An up-to-date value at the storage location is queried and the queried up-to-date value is determined to be the correct value of the invalid reference. | 04-30-2015 |
20150120679 | SYSTEM AND METHOD FOR IDENTIFYING AN INDIVIDUAL FROM ONE OR MORE IDENTITIES AND THEIR ASSOCIATED DATA - One or more non-transitory computer readable storage media storing one or more sequences of instructions are provided. The instructions, when executed by one or more processors, cause (i) obtaining associated data of an individual from one or more identities, (ii) extracting information from the associated data to obtain an extracted information, (iii) standardizing the extracted information to obtain a standardized extracted information, (iv) obtaining additional information associated with the one or more identities based on the standardized extracted information, (v) calculating a confidence level for the additional information, (vi) comparing the additional information with trustworthy information from a database to verify an accuracy of the additional information, and (vii) identifying the individual from the one or more identities and the associated data based on the confidence level and the accuracy. | 04-30-2015 |
20150127620 | OBJECT LOSS REPORTING IN A DATA STORAGE SYSTEM - In response to receiving a request from a client to store an object, a key-durable storage system may assign the object to a volume in its data store, generate a key for the object (e.g., an opaque identifier that encodes information for locating the object in the data store), store the object on one disk in the assigned volume, store the key redundantly in the assigned volume (e.g., using a replication or erasure coding technique), and may return the key to the client. To retrieve the object, the client may send a request including the key, and the system may return the object to the client. If a disk fails, the system may determine which objects were lost, and may return the corresponding keys to the appropriate clients in a notification. The system may be used to back up a more expensive object-redundant storage system. | 05-07-2015 |
20150134622 | System, Method and Computer Program Product for Identification of Anomalies in Performance Indicators of Telecom Systems - A system and method for identifying anomalies in indicators, such as key performance indicators (KPIs) of a telecom system are disclosed. The method can learn the behavior of the indicator over time and can statistically identify what should be considered anomalous. Learning can be performed on a per indicator basis, as each indicator presents different statistical qualities. The method can associate the indicator to a profile, such as one of several statistical distributions, and can operate accordingly. Association may be determined by the correlation of the indicator to a statistical distribution. The method can identify correlations between indicators when identifying the statistical distribution and especially when the associated statistical distribution is an unidentified profile. The method can include comparison of actuals versus prediction and sending alerts when anomalies are found. The system can be configured to receive data points respective of indicators and implement the method while continuously determining data points constituting anomalies. | 05-14-2015 |
20150149420 | CENTRALIZED METHOD TO RECONCILE DATA - Embodiments of the invention are directed to a system, method, and computer program product for centralized data reconciliation. The system typically includes a memory, a processor, and a data reconciliation module configured to compare data feed metadata received from a source application and a target application to determine a successful data transmission. In some embodiments, the system is configured to receive metadata from a source application and target application; compare the metadata received from the source application and the metadata received from the target application; determine if there is a mismatch between the metadata received from the source application and the metadata received from the target application based on comparing the metadata received from the source application and the metadata received from the target application; and initiate the presentation of the mismatch to a user. | 05-28-2015 |
20150149421 | VALIDATION OF WEB-BASED DATABASE UPDATES - A system includes reception of a request to modify the data of a database, the request including first data, execution of processing to fulfill the request, determination, during execution of the processing, that a validation exit is associated with a current state of the processing, storage of the first data in a local temporary table in response to the determination, passage of the local temporary table to the validation exit, and execution of the validation exit to validate the first data based on the local temporary table and on the data of the database. | 05-28-2015 |
20150293965 | Method, Apparatus and Computer Program for Detecting Deviations in Data Sources - The present disclosure describes a method and an apparatus for detecting deviations in data sources, each data source comprising a plurality of data posts, each data post comprising a number of data values. The method comprises identifying ( | 10-15-2015 |
20150302039 | METHODS AND SYSTEMS OF ARCHIVING MEDIA FILES - Methods and systems of archiving media files are provided. Media files may be archived such that only the difference between a media file and a base media file is stored. The base media file is archived. The media file to be archived and the base media file may have common attributes such as video codec, resolution, frame rate, and/or color space. A media file to be archived may be compared to the base media file to determine any difference. Bit-to-bit analysis or frame-to-frame analysis may be performed to identify the differences between a media file to be archived and a base media file. The differences may be extracted from the media file to be archived. A difference media file may be created to store the difference of a media file with respect to the base media file. A record may be created to store the actual location where the difference is extracted from a media file to be archived. | 10-22-2015 |
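The difference-based archiving described in the entry above can be sketched at the byte level. This is an illustrative simplification, not the filing's method: it assumes the file to archive is at least as long as the base file, and it stores the diff as (offset, value) pairs plus any trailing bytes.

```python
def diff_against_base(base: bytes, media: bytes):
    """Byte-to-byte comparison: record (offset, value) pairs where the
    media file differs from the base, plus any trailing bytes."""
    diffs = [(i, media[i]) for i in range(len(base)) if media[i] != base[i]]
    return diffs, media[len(base):]

def restore(base: bytes, diffs, tail: bytes) -> bytes:
    """Rebuild the archived media file from the base plus the stored diff."""
    out = bytearray(base)
    for offset, value in diffs:
        out[offset] = value
    return bytes(out) + tail

base = b"frame-data-v1"
media = b"frame-dataXv1-extra"
diffs, tail = diff_against_base(base, media)
print(restore(base, diffs, tail) == media)  # True
```

A real archiver would compare frames or blocks rather than single bytes, but the record-and-restore round trip is the same idea.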
20150302042 | DATA ANALYSIS APPARATUS AND DATA ANALYSIS METHOD - The present invention aims to provide a data analysis apparatus capable of clustering appropriately even when there is an exceptional datum resulting from an experimental error or the like. In the data analysis apparatus according to the invention, a cluster range parameter for stretching a cluster boundary is determined in advance according to the range of an experimental error which an experimental error datum describes. In the process of clustering, an exceptional datum which does not belong to any cluster is determined to belong to a cluster when an area at a distance determined by the cluster range parameter from the exceptional datum is contained in the cluster, and the exceptional datum is determined to form an independent cluster when even the area at the distance is not contained in any cluster (see FIG. | 10-22-2015 |
20150302045 | Determining Whether a Data Storage Is Encrypted - A method, program and/or system for determining whether a data storage is encrypted. A file is written through a first path to the data storage. The file is read through a second path from the data storage. First data known to have been written in the file is compared to second data that has been read from the file. When the first data matches the second data, the first path is determined not to have encrypted the file when writing to the data storage. When the first data does not match the second data, the first path is determined to have encrypted the file when writing to the data storage. | 10-22-2015 |
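The write-then-read check in the entry above can be sketched directly. The pass-through and XOR "encrypting" write paths below are stand-ins invented for illustration, not anything from the filing:

```python
import os
import tempfile

def path_encrypts(write_via_path, read_raw, payload: bytes) -> bool:
    """Write known bytes through one path, read them back through another,
    and infer encryption from whether the stored bytes still match."""
    fd, name = tempfile.mkstemp()
    os.close(fd)
    try:
        write_via_path(name, payload)
        # A mismatch means the write path transformed the data on the way in.
        return read_raw(name) != payload
    finally:
        os.remove(name)

def plain_write(name, data):   # hypothetical non-encrypting write path
    with open(name, "wb") as f:
        f.write(data)

def xor_write(name, data):     # hypothetical "encrypting" write path
    with open(name, "wb") as f:
        f.write(bytes(b ^ 0x5A for b in data))

def raw_read(name):            # second path: the raw stored bytes
    with open(name, "rb") as f:
        return f.read()

print(path_encrypts(plain_write, raw_read, b"known-plaintext"))  # False
print(path_encrypts(xor_write, raw_read, b"known-plaintext"))    # True
```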
20150324414 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A data storage unit ( | 11-12-2015 |
20150324418 | STORAGE DEVICE AND DATA MIGRATION METHOD - As a method for migrating data of a volume adopting a snapshot function to a new storage system, in order to perform migration without depending on a method for compressing snapshot data of a migration source storage system, and without stopping transmission and reception of data between the host computer and the storage system, at first, after migrating data of a volume being the source of the snapshot (PVOL), migration is performed sequentially from newer generations. At this time, migration target data of each SVOL is all the data within the migration source storage system. The SVOL data copied to a migration destination storage is compared with one-generation-newer SVOL data within the migration destination storage system, and based on the comparison result, difference management information is created. If there is a difference, a VOL allocation management table is updated, and difference data is stored in the area allocated within the pool. | 11-12-2015 |
20150331895 | CONTROL LOGIC ANALYZER AND METHOD THEREOF - A control logic analyzer for controlling a plurality of devices is provided. The control logic analyzer comprises: a control logic decomposer configured to analyze control logics from different sources to identify devices involved in the control logics from the plurality of devices, and decompose the control logics into control instructions to be executed by the identified devices; and a potential conflict searcher configured to search a database for storing decomposed control logics and determine whether there is any potential conflict between the current control logic and the control logics previously stored in the database. | 11-19-2015 |
20150331899 | DATA STORAGE RESOURCE ALLOCATION BY PERFORMING ABBREVIATED RESOURCE CHECKS OF CERTAIN DATA STORAGE RESOURCES TO DETERMINE WHETHER DATA STORAGE REQUESTS WOULD FAIL - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan. | 11-19-2015 |
20150363425 | SOLID STATE DISK, DATA MANAGEMENT METHOD AND SYSTEM THEREFOR - The present invention is applied to the technical field of solid-state storage, and provided are a solid state disk and a data management method and system. The data management method for a solid state disk comprises: saving the written data in a solid state disk after adding a timestamp to the written data; receiving a mark command for marking the data to be invalid, marking an address range corresponding to the invalid data, and saving the marked information in the solid state disk after adding a timestamp to the marked information; comparing the timestamp of the marked information with the timestamp of the data in the marked address range after starting the solid state disk; if the timestamp of the data in the marked address range is earlier than the timestamp of the marked information, marking the address range as invalid, otherwise not marking the address range as invalid. | 12-17-2015 |
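The startup-time timestamp comparison described in the entry above can be sketched as follows; the extent and mark records are hypothetical simplifications of the on-disk structures:

```python
from dataclasses import dataclass

@dataclass
class Extent:
    start: int
    length: int
    write_ts: int   # timestamp recorded when the data was written

@dataclass
class InvalidMark:
    start: int
    length: int
    mark_ts: int    # timestamp recorded when the mark command was saved

def apply_marks_on_startup(extents, marks):
    """Replay persisted invalidation marks at startup: an overlapping extent
    is treated as invalid only if it was written before the mark was saved."""
    invalid = set()
    for m in marks:
        for e in extents:
            overlaps = e.start < m.start + m.length and m.start < e.start + e.length
            if overlaps and e.write_ts < m.mark_ts:
                invalid.add((e.start, e.length))
    return invalid

extents = [Extent(0, 8, write_ts=100), Extent(8, 8, write_ts=300)]
marks = [InvalidMark(0, 16, mark_ts=200)]
# Only the first extent predates the mark, so only it becomes invalid;
# the second extent was rewritten after the mark and stays valid.
print(apply_marks_on_startup(extents, marks))  # {(0, 8)}
```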
20150370789 | MULTIMEDIA FILE STORAGE SYSTEM AND RELATED DEVICES - A multimedia file storage system, a related source-end device and a destination-end device are disclosed. The source-end device includes a source-end storage device for storing an original multimedia file; a source-end decoding circuit for decoding only a part of multiple B-pictures of the original multimedia file to form a source-end representative file; a source-end computing circuit for conducting a Hash algorithm computation based on the source-end representative file to generate a source-end check value; and a transmitting circuit for transmitting the source-end check value and the original multimedia file. | 12-24-2015 |
20150370845 | STORAGE DEVICE DATA MIGRATION - A method for migrating files from a source server to a target server is disclosed. The method includes determining file property information for one or more data files on the source server. One or more data file entries are created in a file property table with the file property information for each data file. A data file entry is selected from the file property table. The file property information of the selected data file entry from the file property table is compared to the file property information of the corresponding data file stored in the source server to determine whether there is a match. In response to determining a mismatch, the data file in the source server is copied to the target server. The data file copied to the target server is verified to be the same as the data file of the source server. | 12-24-2015 |
20150370849 | Methods and systems for automatic selection of classification and regression trees - The present invention provides a method and system for automatically identifying and selecting preferred classification and regression trees. The invention is used to identify a specific decision tree or group of trees that are consistent across train and test samples in node-specific details that are often important to decision makers. Specifically, for a tree to be identified as preferred by this system, the train and test samples must both agree on key measures for every terminal node of the tree. In addition to this node-by-node criterion, an additional tree selection method may be imposed. Under this additional criterion, the train and test samples must rank-order the nodes on a relevant measure in the same way. Both consistency criteria may be applied in a fuzzy manner in which agreement must be close but need not be exact. | 12-24-2015 |
20150379059 | ACCOMMODATING CONCURRENT CHANGES IN UNDERLYING DATA DURING REPORTING - Methods, systems, and computer-readable storage media for receiving user input indicating a value of a first setting of one or more settings, the first setting defining a data integrity scenario that is to be applied during a query session with a database system, the data integrity scenario defining data sources for reading data in response to one or more navigation requests, if a concurrent change occurs in the database system, receiving a query, reading data from one or more data sources based on the query and the first setting, selectively caching at least a portion of the data based on the first setting, and providing a result for display to a user that submitted the query. | 12-31-2015 |
20160004718 | USING BYTE-RANGE LOCKS TO MANAGE MULTIPLE CONCURRENT ACCESSES TO A FILE IN A DISTRIBUTED FILESYSTEM - The disclosed embodiments provide techniques for using byte-range locks to manage multiple concurrent accesses to a file in a distributed filesystem. Two or more cloud controllers collectively manage distributed filesystem data that is stored in the cloud storage systems; the cloud controllers ensure data consistency for the stored data, and each cloud controller caches portions of the distributed filesystem. During operation, a cloud controller receives from a first client a request to access a portion of the file. The cloud controller contacts the owning cloud controller for the portion of the file to request a byte-range lock for that portion of the file. The owning cloud controller returns a byte-range lock to the requesting cloud controller if no other clients of the distributed filesystem are currently locking the requested portion of the file with conflicting accesses. | 01-07-2016 |
20160012097 | CHECKING FRESHNESS OF DATA FOR A DATA INTEGRATION SYSTEM, DIS | 01-14-2016 |
20160012100 | PROFILING DATA WITH LOCATION INFORMATION | 01-14-2016 |
20160026648 | SYSTEM AND METHOD FOR ENSURING CODE QUALITY COMPLIANCE FOR VARIOUS DATABASE MANAGEMENT SYSTEMS - A system and computer-implemented method for ensuring code quality compliance for one or more Database Management Systems (DBMSs) is provided. The system comprises a user interface configured to prompt one or more users to select one or more options and provide information for configuring rules corresponding to coding standards and best practices. The system further comprises a rules registration module to register the configured rules in a repository for validation. Furthermore, the system comprises a source selector to provide options to the one or more users to select one or more DBMSs and a source manager to fetch database code from the one or more selected DBMSs. In addition, the system comprises one or more parsers to parse the fetched database code, a validator to validate the parsed code using the registered rules and a report manager to provide results of the validation to the one or more users. | 01-28-2016 |
20160026661 | SYSTEM AND METHOD FOR THE AUTOMATED GENERATION OF EVENTS WITHIN A SERVER ENVIRONMENT - A system and method is disclosed in which the buses of a server computer are monitored through server management software. A data structure for a monitored bus or group of buses is created and stored in a repository of data structures for other monitored devices within the server computer. As events, such as failure events, occur on one or more of the monitored buses, the event is recorded in an event log. Using the server management software, monitoring commands can be issued by the baseboard management controller to each monitored bus to check the status of the bus. | 01-28-2016 |
20160026675 | DATA PROCESSING METHOD, COORDINATOR, AND NODE DEVICE - Embodiments of the present invention provide a data processing method, a coordinator, and a node device. The coordinator receives a data frame sent by the node device, where the data frame includes service data; the coordinator determines whether the service data is within a set data confidence interval; and if the coordinator determines that the service data is beyond the data confidence interval, the coordinator sends a questioned-data frame to the node device, where the questioned-data frame carries the service data, so that the node device confirms correctness of the service data. | 01-28-2016 |
20160034493 | Systems and Methods for the Collection Verification and Maintenance of Point of Interest Information - Systems and methods for the collection, verification, and maintenance of point of interest information are provided. One example system includes a plurality of mobile collection devices respectively operated by a plurality of human collectors. Each of the mobile collection devices uploads to one or more intermediate servers information describing one or more attributes of a plurality of points of interest. The system includes a mobile verification device that receives, from the one or more intermediate servers, information uploaded by a first mobile collection device and provides an indication of an accuracy associated with the information received from the first mobile collection device. The system includes the one or more intermediate servers. The system includes one or more production servers. The information uploaded by the first mobile collection device is transcribed from the one or more intermediate servers to the one or more production servers. | 02-04-2016 |
20160034518 | DATA RESEARCH AND RISK MANAGEMENT ASSESSMENT APPLICATION - Systems, apparatus, and computer program products provide for a comprehensive platform in which users can gain access to data mapping and linkage information associated with multiple data sources, data systems, databases within the systems and the like. As such, the platform provides for time-efficient and reliable data management and research which aids the user in comprehending the connections between data from different data sources and included within different data systems, and the downstream impact (i.e., the impact of the data on other data fields) and upstream data source(s) (i.e., the secondary data fields used to calculate the data field) of such data. | 02-04-2016 |
20160034519 | SYSTEM AND METHOD FOR VERIFYING THE CONTENTS OF FORMS RELATIVE TO A SEPARATE DATASET - A method is provided for verifying the contents of forms, comprising: receiving a dataset from a client, the dataset associated with a transaction; transmitting the dataset to a document vendor to be entered into and complete a transaction document form; receiving the completed transaction document form from the document vendor; generating a code uniquely associating the completed document with the dataset; printing the code onto the completed document; transmitting the completed document to the client; after the document has been executed, receiving the executed document from the client and separately the current dataset; using the code on the executed document, retrieving the stored transaction dataset; comparing the stored dataset with the dataset separately delivered as the current dataset; identifying all inconsistencies between the two datasets and storing these results as separate data; and transmitting a message to the client with the result of the comparison. | 02-04-2016 |
20160034520 | Apparatus and Method for Maintaining and Storing a Log of Status Information - A control circuit maintains and stores in a memory, for a retail enterprise, a log of information comprising the current status of various activities, including data preparation, aggregation of values, execution of statistical models, and development of visualizations, for each of a plurality of items that are offered for retail sale within the retail enterprise. By one approach the plurality of items represents only a subset of all items that are offered for retail sale by this retail enterprise. By another approach the plurality of items represents all items that are offered for retail sale by this retail enterprise. If desired, the aforementioned log comprises such information for each of a plurality of hierarchical user levels in the retail enterprise. | 02-04-2016 |
20160034521 | METHOD, DEVICE AND SYSTEM FOR RETRIEVING DATA FROM A VERY LARGE DATA STORE - Systems, methods and devices are provided for deploying data from an operational database with multi-version-concurrency-control, the method comprising: deriving a single SQL query statement for retrieving large amounts of related, heterogeneous data as output where the large amounts of data are internally self-consistent; transforming and decorating the single SQL query output to obtain deployment data; and transferring the deployment data to the deployment target. | 02-04-2016 |
20160042017 | System Of And Method For Entity Representation Splitting Without The Need For Human Interaction - Disclosed is a system for, and method of, determining whether records and entity representations should be delinked. The system and method need no human interaction in order to calculate parameters and utilize formulas used for the delinking decisions. | 02-11-2016 |
20160048543 | SYSTEM AND METHOD FOR DETERMINING GOVERNANCE EFFECTIVENESS OF KNOWLEDGE MANAGEMENT SYSTEM - The present subject matter relates to a method, device and non-transitory computer readable medium for determining governance effectiveness of one or more knowledge artifacts. In one embodiment, governance effectiveness is determined by determining one or more parameters such as an Intellectual Property (IP) effectiveness index, audit rights index, collaboration index, and quality index of the knowledge artifacts. By determining the governance effectiveness of the knowledge artifacts, the system is able to continuously measure how the knowledge is being governed across various aspects like IP, Audit Rights, Collaboration and Quality on the knowledge artifacts. Further, the Knowledge management system is capable of adapting itself to future changes or needs and also ensuring that the processes are being followed in line with standard protocols followed on the Knowledge trade. | 02-18-2016 |
20160048552 | SYSTEMS AND METHODS FOR ADAPTIVELY IDENTIFYING AND MITIGATING STATISTICAL OUTLIERS IN AGGREGATED DATA - The disclosed embodiments include computerized methods and systems that facilitate automated detection and precision correction of aggregated data collected by multiple, geographically dispersed mobile communications devices. In one embodiment, an apparatus detects a data outlier within portions of the aggregated data having numerical and/or categorical values. The apparatus may transmit information identifying the data outlier and a portion of the aggregated data that includes the data outlier to an additional communications device, which may present the aggregated data portion to a user in a manner that visually distinguishes the data outlier from other elements of aggregated data. In response to a request from the additional communications device, the apparatus may modify portions of the aggregated data in an effort to mitigate the data outlier. | 02-18-2016 |
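As one concrete way to flag an outlier in aggregated numerical data, a z-score test works; both the statistic and the threshold below are illustrative choices, not details taken from the filing:

```python
import statistics

def find_outliers(values, z_threshold=2.0):
    """Flag values whose z-score against the aggregate exceeds a threshold.
    The statistic (z-score) and the threshold are illustrative choices."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:          # all readings identical: nothing to flag
        return []
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0]
print(find_outliers(readings))  # [42.0]
```

A production system would likely use a more robust statistic (e.g. median absolute deviation), since a single large outlier inflates the mean and standard deviation it is measured against.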
20160055196 | METHODS AND SYSTEMS FOR IMPROVED DOCUMENT COMPARISON - A method for placing a document into a document family, the method including the steps of: determining at least one score associated with one or more document families, each score indicating a level of similarity between the document and the associated document family; in response to identifying at least one threshold document family, the or each threshold document family corresponding to a document family with at least one associated score meeting a predefined threshold: placing the document into the, or one of the, threshold document families; in response to identifying that each score fails to meet a predefined threshold: creating a new document family; and placing the document into the new document family. | 02-25-2016 |
20160055198 | COMPUTER DEVICE AND STORAGE DEVICE - A computer device for controlling a storage device based on non-volatile memory is provided. The computer device includes a file modification detector configured to detect whether a data structure in a database file has been deleted using an identifier recorded in the database file to indicate whether the data structure is deleted or not; and a command generator configured to generate an advanced-trim command including information corresponding to the deleted data structure and to transmit the command to the storage device. | 02-25-2016 |
20160055199 | MANAGING TEST DATA IN LARGE SCALE PERFORMANCE ENVIRONMENT - A method of processing a database can include comparing, using a processor, a delta file with a risk assessment criterion, wherein the delta file is generated from a first schema and a second and different schema, assigning a risk level to a change specified within the delta file according to the comparing, and applying the change of the delta file to a test database conforming to the first schema according to the assigned risk level. | 02-25-2016 |
20160063050 | Database Migration Consistency Checker - Following migration of data from one database to another, the contents of the source and target databases may be checked for consistency based on checksums computed for corresponding portions of the two databases. The origin of discrepancies may be determined iteratively by computing checksums for increasingly smaller sub-portions of portions whose checksums do not match. | 03-03-2016 |
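The iterative narrowing described in the entry above amounts to a binary search over checksummed key ranges. A minimal sketch, assuming both databases are exposed as key-value maps sharing the same key space:

```python
import hashlib

def checksum(rows):
    """Checksum an ordered list of (key, value) rows."""
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

def find_discrepancies(source, target, lo, hi):
    """Compare checksums over the key range [lo, hi); on mismatch, split the
    range in half and recurse until the differing keys are isolated."""
    keys = [k for k in sorted(source) if lo <= k < hi]
    if checksum([(k, source[k]) for k in keys]) == checksum([(k, target.get(k)) for k in keys]):
        return []
    if hi - lo == 1:
        return [lo]
    mid = (lo + hi) // 2
    return (find_discrepancies(source, target, lo, mid)
            + find_discrepancies(source, target, mid, hi))

source = {k: f"row-{k}" for k in range(8)}
target = dict(source)
target[5] = "row-5-corrupted"
print(find_discrepancies(source, target, 0, 8))  # [5]
```

Only O(log n) levels of checksums are recomputed, and matching sub-ranges are pruned immediately, which is what makes the approach viable after a large migration.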
20160070735 | INCREMENTAL DYNAMIC DOCUMENT INDEX GENERATION - A contextual index compendium that includes contextual index item generation rules that define document index entry generation transforms usable to transform text of the documents into embedded document index entries of document indexes within the documents is obtained by a processor. Using the document index entry generation transforms defined within the contextual index item generation rules in association with a document that includes embedded document index entries that are both embedded at locations of associated text distributed throughout the document and added as part of a document index within the document, new text of the document is programmatically transformed into at least one new document index entry in response to determining that at least one portion of the new text includes candidate text that is not already indexed within the existing embedded document index entries and the document index within the document. | 03-10-2016 |
20160070742 | OPTIMIZED NARRATIVE GENERATION AND FACT CHECKING METHOD AND SYSTEM BASED ON LANGUAGE USAGE - An optimized fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The optimized fact checking system automatically monitors information, processes the information, fact checks the information in an optimized manner and/or provides a status of the information. In some embodiments, the optimized fact checking system generates, aggregates, and/or summarizes content. | 03-10-2016 |
20160070743 | OPTIMIZED SUMMARIZING METHOD AND SYSTEM UTILIZING FACT CHECKING - An optimized fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The optimized fact checking system automatically monitors information, processes the information, fact checks the information in an optimized manner and/or provides a status of the information. In some embodiments, the optimized fact checking system generates, aggregates, and/or summarizes content. | 03-10-2016 |
20160070744 | SYSTEM AND METHOD FOR READING FILE BLOCKS - A system and method for reading file blocks includes reading an inode associated with the file from the file system, the inode including one or more first block pointers, determining a height of a file tree associated with the file, and determining whether a value of a second block pointer selected from the one or more first block pointers is consistent with the file having been stored using a block allocation pattern. When the value of the second block pointer is consistent with the file having been stored using the block allocation pattern the method further includes pre-fetching a plurality of file blocks based on the block allocation pattern, verifying that the pre-fetched file blocks are consistent with the file tree, and retrieving one or more data blocks of the file. In some examples, the block allocation pattern corresponds to the file being stored in streaming order to consecutively and contiguously located blocks. | 03-10-2016 |
20160078026 | PARALLEL CONTAINER AND RECORD ORGANIZATION - Provided are techniques for parallel container and record organization using buckets. In response to receiving an update to an entity in a file plan, a date associated with a disposition of the entity is determined and a reference to the entity is added to a bucket associated with the date. | 03-17-2016 |
20160078051 | DATA PATTERN DETECTING DEVICE, SEMICONDUCTOR DEVICE INCLUDING THE SAME, AND OPERATING METHOD THEREOF - A pattern detecting device includes a length comparison unit suitable for comparing lengths of compressed input data and compressed reference data; and a data comparison unit suitable for comparing the compressed input data and the compressed reference data. | 03-17-2016 |
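The two comparison units described in the entry above compose naturally: a cheap length check gates the more expensive byte-by-byte comparison. A minimal sketch (the zlib-compressed blobs are just illustrative inputs):

```python
import zlib

def compressed_match(input_blob: bytes, reference_blob: bytes) -> bool:
    """Two-stage comparison: the length comparison unit runs first, so
    mismatched lengths never reach the data comparison unit."""
    if len(input_blob) != len(reference_blob):   # length comparison unit
        return False
    return input_blob == reference_blob          # data comparison unit

ref = zlib.compress(b"hello world")
print(compressed_match(zlib.compress(b"hello world"), ref))  # True
print(compressed_match(zlib.compress(b"hello there"), ref))  # False
```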
20160078100 | GENERATING DATA PATTERN INFORMATION - A data storage system stores at least one dataset including a plurality of records. A data processing system, coupled to the data storage system, processes the plurality of records to produce codes representing data patterns in the records, the processing including: for each of multiple records in the plurality of records, associating with the record a code encoding one or more elements, wherein each element represents a state or property of a corresponding field or combination of fields as one of a set of element values, and, for at least one element of at least a first code, the number of element values in the set is smaller than the total number of data values that occur in the corresponding field or combination of fields over all of the plurality of records in the dataset. | 03-17-2016 |
20160085791 | TREE COMPARISON TO MANAGE PROGRESSIVE DATA STORE SWITCHOVER WITH ASSURED PERFORMANCE - Technologies are generally provided for progressive key value store switchover by evaluating a maturity of a migrated data store and allowing piecewise switching of substructure area query servicing from an origin data store to a destination data store. In some examples, abstractions of origin and destination tree structures may be compared to each other in order to generate an evaluation metric at substantially reduced performance evaluation load. The evaluation metric may target performance sampling while assuring a desired performance level with localized query servicing switchover. Piecewise data transfer may also be optionally enabled such that overall storage can be similar to the storage of a single data store copy while reducing an impact on existing data store services. | 03-24-2016 |
20160085793 | MODEL-DRIVEN DATA ENTRY VALIDATION - In various embodiments, methods, systems, and non-transitory computer-readable media are disclosed that allow developers to place client-side validation rules on user interface components using a desktop integration framework. The validation rules can be tied to translatable resources or model metadata. In one aspect, the validation rules metadata is provided separately from the document to which the validation rules will eventually be tied. | 03-24-2016 |
20160085795 | GROUPING EQUIVALENT CONTENT ITEMS - Systems and methods for identifying equivalent content items. A computer system may receive a description of a first content item, the description of the first content item comprising a first set of values for a plurality of content item characteristics. The computer system may compare the first content item to each of a plurality of content items. The comparing may comprise, for each combination of the first content item and one of the plurality of content items, identifying any characteristics from the plurality of content item characteristics for which the first content item and the one of the plurality of content items have equivalent values. The computer system may identify at least one content item selected from the plurality of content items. The first content item and the at least one content item have equivalent values for a predetermined pattern of the plurality of content item characteristics. The computer system may write to the memory an indication of a group of equivalent content items comprising the first content item and the identified at least one content item. | 03-24-2016 |
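The pattern-based equivalence test described in the entry above reduces to checking agreement on a fixed set of characteristics. A minimal sketch with invented item attributes:

```python
def equivalent(item_a, item_b, pattern):
    """Items are equivalent when they share values for every characteristic
    named in the predetermined pattern."""
    return all(item_a[c] == item_b[c] for c in pattern)

def group_equivalents(first, candidates, pattern):
    """Collect the candidates that are equivalent to the first item."""
    return [c for c in candidates if equivalent(first, c, pattern)]

pattern = ("brand", "model")   # the predetermined pattern of characteristics
first = {"brand": "Acme", "model": "X1", "color": "red"}
candidates = [
    {"brand": "Acme", "model": "X1", "color": "blue"},
    {"brand": "Acme", "model": "X2", "color": "red"},
]
print(group_equivalents(first, candidates, pattern))
# only the first candidate matches on both brand and model
```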
20160085797 | ARCHIVAL DATA IDENTIFICATION - A request to retrieve a persistently stored data object is received, the request including a data object identifier that encodes at least storage location information and validation information related to the data object. The data object is retrieved using at least the storage location information to form a retrieved data object, and validation is performed using at least the validation information. | 03-24-2016 |
20160085798 | METHOD AND SYSTEM FOR STORING USER INFORMATION - According to an embodiment, in a first storing system, a record corresponding to an account number ID is stored. In the record, the primary key is the account number ID, and the value is an account number mode and an account number name of the account number. The second storing system obtains the account number and the account number ID from the first storing system and determines whether the account number satisfies a preset reverse-searching condition. If so, the second storing system generates and stores a record in which the primary key is the account number mode and the account number name of the account number, and the value is the account number ID. | 03-24-2016 |
20160092488 | CONCURRENCY CONTROL IN A SHARED STORAGE ARCHITECTURE SUPPORTING ON-PAGE IMPLICIT LOCKS - Presented systems and methods can facilitate efficient and effective information storage management. A system may include a plurality of nodes, shared storage and a centralized lock manager. A storage management method can include: receiving an access request to information, performing a lock resolution process; and performing an access operation (e.g., read, information update, etc.). The information can be associated with a shared storage component. The lock resolution process can include participating in a lock management process that manages a physical lock (P-lock), wherein the lock management process utilizes transaction information associated with an implicit lock process and proceeds without communication overhead associated with explicit requests for a logical lock. In one embodiment the lock resolution process includes participating in a conflict determination process to determine if there is a potential conflict with an information access request, wherein the conflict determination process utilizes the transaction information associated with the implicit lock process. | 03-31-2016 |
20160098442 | VERIFYING ANALYTICS RESULTS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for validating analytics results. One of the methods includes processing a subset of a dataset and polling an analytics system for a corresponding output subset and comparing the two subsets to validate the analytics system. | 04-07-2016 |
20160103867 | Methods for Identifying Denial Constraints - Computer implemented methods for identifying denial constraints are provided herein. The denial constraints can be used with a database schema R. A predicate space P can be generated for an instance I in the schema R. An evidence set Evi | 04-14-2016 |
20160110368 | DATA MANAGEMENT SYSTEM - In connection with processing asynchronous streams of aircraft telemetry data, data processing logic is developed to run on multiple aircraft, even if different avionics equipment is installed on the aircraft. An electronic inventory system tracks all data available on affected aircraft. A set of “global” data elements applicable to all aircraft in a fleet is defined and is tracked in the electronic inventory system, with relationship to the underlying native data elements and specific aircraft. The global units are derived as appropriate for each specific aircraft avionics environment. An interface enables definition of data processing logic that is integrated with the electronic inventory system and ensures the general validity of the defined logic. The data processing logic is deployed to one or more aircraft in a function integrated with the electronic inventory system, to ensure the validity of the data processing logic for each aircraft specified as a deployment target. | 04-21-2016 |
20160110404 | Real-time Abnormal Change Detection in Graphs - A method is provided for detecting abnormal changes in real-time in dynamic graphs. The method includes extracting, by a graph sampler, an active sampled graph from an underlying base graph. The method further includes merging, by a graph merger, the active sampled graph with graph updates within a predetermined recent time period to generate a merged graph. The method also includes computing, by a graph diameter computer, a diameter of the merged graph. The method additionally includes determining, by a graph diameter change determination device, whether a graph diameter change exists. The method further includes generating, by an alarm generator, a user-perceptible alarm responsive to the graph diameter change. | 04-21-2016 |
20160110406 | VALIDATION OF DATA ACROSS MULTIPLE DATA STORES - Examples of the present disclosure describe validation of data on a client having a plurality of data stores. A data consistency component of the client queries a plurality of data stores of the client to identify a portion of data from each of the data stores. The data consistency component compares portions of data obtained from the plurality of data stores using stored knowledge data, maintained by the data consistency component. Based on the comparison of the portions of data, the data consistency component identifies if inconsistency exists across the plurality of data stores. Inconsistency identified for any of the plurality of data stores is reported. | 04-21-2016 |
20160117337 | CONCURRENT ACCESS AND TRANSACTIONS IN A DISTRIBUTED FILE SYSTEM - Embodiments described herein provide techniques for maintaining consistency in a distributed system (e.g., a distributed secondary storage system). According to one embodiment of the present disclosure, a first set of file system objects included in performing the requested file system operation is identified in response to a request to perform a file system operation. An update intent corresponding to the requested file system operation is inserted into an inode associated with each identified file system object. Each file system object corresponding to the inode is modified as specified by the update intent in that inode. After modifying the file system object corresponding to the inode, the update intent is removed from that inode. | 04-28-2016 |
20160124989 | CROSS PLATFORM DATA VALIDATION UTILITY - The present invention is directed to a system that enables an associate (a data specialist, an agent, an analyst, or the like) to efficiently and accurately validate customer data (e.g., determine if customer data between two or more sets of customer data is consistently accurate). In this way, the system of the present invention is configured to enable the associate to run automated tests (e.g., trials) where first customer data from a first customer data set is compared to second customer data from a second customer data set to determine one or more differences between the first and second customer data. After the comparison is complete, the system of the present invention is configured to generate a file (e.g., a third customer data set) that identifies the determined differences, and provides a standardized report summarizing the determined differences. | 05-05-2016 |
20160124993 | MODIFICATION AND VALIDATION OF SPATIAL DATA - A method for validating data changes made to a database is disclosed. The changes are made in the context of a transaction, and validation is performed using a rules database storing a plurality of rules. The method includes identifying a set of data entities affected by one or more data changes made in the context of the transaction. In response to an instruction to commit the transaction, data entities in the set of affected data entities are validated using rules from the rules database. The transaction is committed in dependence on the outcome of the validation. | 05-05-2016 |
20160125016 | MAINTAINING STORAGE PROFILE CONSISTENCY IN A CLUSTER HAVING LOCAL AND SHARED STORAGE - A per device state is introduced that indicates whether a storage device is shared clusterwide or not. The state may be populated by default based on detected device locality. Devices detected as local and those shared by only a subset of host machines in a cluster of machines may have the state set to “FALSE.” Devices which are shared by all the machines in a cluster may have the state set to “TRUE.” Locality of storage devices in a cluster may be modified using such state information. Operations upon other storage device state may be modified depending upon device sharing state. | 05-05-2016 |
20160132547 | APPARATUS AND METHOD FOR MANAGING APK FILE IN AN ANDROID PLATFORM - The present invention relates to an apparatus for managing an APK file in the Android platform in order to forestall an executable file in an APK file from being analyzed by reverse engineering or decompiling, that comprises a file reader that reads an original .dex file in the APK file, a file modifier that modifies the original .dex file the file reader has read and stores the modified .dex file in a readable folder in the APK file, a file creator that accesses the folder to read and restore the original .dex file, creates a temporary .dex file that can be loaded onto memory, and adds the temporary .dex file to the APK file in order to create a protected APK file, and a file executer that, if the Android platform requests the protected APK file to be executed, reads from the folder and restores the modified original .dex file by executing the temporary .dex file and loads the restored original .dex file onto memory in order to execute the protected APK file. | 05-12-2016 |
20160140151 | Data Resource Anomaly Detection - Anomaly detection is provided. A first component of a first data resource of a plurality of data resources is identified. Each data resource of the plurality of data resources includes one or more components. A score of the first component is determined based, at least in part, on underlying data of the first component and underlying data of one or more other components of data resources of the plurality of data resources that correspond to the first component. An interest level of the first data resource is determined. A relationship between the score of the first component and the interest level of the first data resource is modeled. | 05-19-2016 |
20160140163 | System and method for identifying non-event profiles - A system for avoidance records comprises an interface and a processor. An interface is configured to receive an abbreviated record associated with a non-event profile identifier. A processor is configured to determine a counter value associated with the non-event profile identifier and, in the event that the counter value is greater than a predetermined threshold, create and store an avoidance record. | 05-19-2016 |
20160140164 | COMPLEX EVENT PROCESSING APPARATUS AND COMPLEX EVENT PROCESSING METHOD - When a detecting complex event condition expression is changed, a rule comparing unit compares the complex event condition expressions before and after the change. The changed portion identifying unit identifies the changed portion based on the comparison result, and the parallel operating unit operates the complex event condition expressions before and after the change in parallel for the detecting complex event condition expression including the identified changed portion. In this manner, the complex event processing apparatus disclosed therein can dynamically change a detecting complex event condition expression used in the complex event processing. | 05-19-2016 |
20160147817 | DATA CREDIBILITY VOUCHING SYSTEM - A system, method and program product are provided for implementing a credibility vouching system (CVS). A CVS is disclosed that includes: credibility vouching system (CVS), comprising: a data aggregation system interface that provides a communication pathway for receiving event metadata (EM) records from a data aggregation system; a service provider interface and inquiry system that provides a communication pathway with a plurality of third party service providers to facilitate identification of a set of candidate nodes potentially responsible for a submitted EM record in the data aggregation system; a vouching request routing system for generating a vouching request and tasking at least one third party service provider to forward the vouching request to the set of candidate nodes; and a credibility scoring system that generates a credibility score for the submitted EM record based on a set of vouching responses received from the set of candidate nodes. | 05-26-2016 |
20160154814 | SYSTEMS AND METHODS FOR IMPROVING STORAGE EFFICIENCY IN AN INFORMATION HANDLING SYSTEM | 06-02-2016 |
20160171001 | SOURCE-TO-PROCESSING FILE CONVERSION IN AN ELECTRONIC DISCOVERY ENTERPRISE SYSTEM | 06-16-2016 |
20160171038 | TRACKING MODEL ELEMENT CHANGES USING CHANGE LOGS | 06-16-2016 |
20160179827 | ISOLATION ANOMALY QUANTIFICATION THROUGH HEURISTICAL PATTERN DETECTION | 06-23-2016 |
20160179859 | WALL ENCODING AND DECODING | 06-23-2016 |
20160179868 | METHODOLOGY AND APPARATUS FOR CONSISTENCY CHECK BY COMPARISON OF ONTOLOGY MODELS | 06-23-2016 |
20160179872 | System and Method for Providing High Availability Data | 06-23-2016 |
20160179874 | METHOD AND APPARATUS FOR PROVIDING MAP UPDATES FROM DISTANCE BASED BUCKET PROCESSING | 06-23-2016 |
20160188623 | SCAN OPTIMIZATION USING BLOOM FILTER SYNOPSIS - An illustrative embodiment for optimizing scans using a Bloom filter synopsis defines metadata to encode distinct values in a range of values associated with a particular portion of a managed object in a database management system into a probabilistic data structure of a Bloom filter. The Bloom filter stores an indicator, encoded in a fixed size bit map with one or more bits, indicating whether an element of the particular portion of the managed object is a member of the set of values summarized in the Bloom filter (a value of 1) or definitely not in the set (a value of 0). The Bloom filter is compressed to create a compressed Bloom filter. The Bloom filter is added to the metadata associated with the managed object and used when testing for values associated with predicates. | 06-30-2016 |
20160203057 | RESOURCE PLANNING FOR DATA PROTECTION VALIDATION | 07-14-2016 |
20160378793 | DATABASE COMPARISON SYSTEM - Embodiments of the present invention disclose a method, computer program product, and system for detecting changes in database schema. The embodiments may include receiving a first database schema. The embodiments may include creating a first value corresponding to the first database schema by utilizing a compressed value algorithm. The compressed value algorithm may create a single value corresponding to each database schema. The embodiments may include receiving a second database schema. The embodiments may include creating a second value corresponding to the second database schema by utilizing the compressed value algorithm. The embodiments may include determining whether there is a difference between the first database schema and the second database schema by comparing the first value and the second value. | 12-29-2016 |
20160378794 | DATABASE COMPARISON SYSTEM - Embodiments of the present invention disclose a method, computer program product, and system for detecting changes in database schema. The embodiments may include receiving a first database schema. The embodiments may include creating a first value corresponding to the first database schema by utilizing a compressed value algorithm. The compressed value algorithm may create a single value corresponding to each database schema. The embodiments may include receiving a second database schema. The embodiments may include creating a second value corresponding to the second database schema by utilizing the compressed value algorithm. The embodiments may include determining whether there is a difference between the first database schema and the second database schema by comparing the first value and the second value. | 12-29-2016 |
20160378816 | SYSTEM AND METHOD OF VERIFYING PROVISIONED VIRTUAL SERVICES - This disclosure relates to various systems, methods, architectures, mechanisms, or apparatuses for verifying or auditing that virtual services are correctly instantiated at a data center. Verification that virtual services are correctly instantiated at a data center may include comparing normalized datasets representing the actual instantiated services associated with a virtual services provisioning entity to normalized datasets representing the expected instantiated services associated with the provisioning entity. Verification that virtual services are correctly instantiated at a data center may include monitoring one or more communication channels of a virtual services control entity to identify a provisioning command intended for the provisioning entity, determining whether the identified provisioning command intended for the provisioning entity has been received by the provisioning entity, and generating an alert based on a determination that the identified provisioning command intended for the provisioning entity has not been received by the provisioning entity. | 12-29-2016 |
20160378817 | SYSTEMS AND METHODS OF IDENTIFYING DATA VARIATIONS - Systems and methods are provided for identifying data variations, which can include normalizing and validating data. In some embodiments, normalizing data may include converting the data from a first format into a second selected format. Data may also be validated by comparing the data to a rule set. Normalized data may be examined on a line-by-line basis, with each line of the normalized data checked for compliance with rules of the rule set. Compliance data identifying the results of comparing the data against the rule set may be generated and output. | 12-29-2016 |
20170235771 | SYSTEMS AND METHODS FOR ELECTRONIC MAIL COMMUNICATION BASED DATA MANAGEMENT | 08-17-2017 |
20170235785 | Systems and Methods for Robust, Incremental Data Ingest of Communications Networks Topology | 08-17-2017 |
20170235970 | SCALABLE DATA VERIFICATION WITH IMMUTABLE DATA STORAGE | 08-17-2017 |
20180025044 | UNMANNED VEHICLE DATA CORRELATION, ROUTING, AND REPORTING | 01-25-2018 |
20180025045 | ACCURACY OF LOW CONFIDENCE MATCHES OF USER IDENTIFYING INFORMATION OF AN ONLINE SYSTEM | 01-25-2018 |
20190146754 | OPTIMIZED CONSTRUCTION OF A SAMPLE IMPRINT FOR SELECTING A SAMPLE DATASET FOR COMPARISON TESTING | 05-16-2019 |
20190146965 | CROWDSOURCED VALIDATION OF ELECTRONIC CONTENT | 05-16-2019 |
20220138176 | ANALYSIS INFORMATION MANAGEMENT DEVICE AND ANALYSIS INFORMATION MANAGEMENT METHOD - A receiver receives selection of a batch file that causes an analysis device to successively analyze samples. Batch analysis data that represents an analysis result and corresponds to the selected batch file is acquired from a database device. Standard information for verifying the validity of an analysis performed by the analysis device, corresponding to the selected batch file, is acquired by a standard information acquirer from the database device. Based on the acquired batch analysis data and the acquired standard information, a creator creates a report that describes the analysis result represented by the batch analysis data and an evaluation result regarding the validity of the analysis performed by the analysis device. | 05-05-2022 |
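The Bloom filter synopsis described in entry 20160188623 above summarizes a set of values in a fixed-size bit map: a membership test returns "possibly in the set" or "definitely not in the set", letting a scan skip portions of a managed object whose synopsis definitely excludes a predicate value. The sketch below is a minimal illustration of that general technique, not code from the patent; the class name, bit-map size, and hash count are all illustrative assumptions.

```python
import hashlib


class BloomFilter:
    """Fixed-size bit map summarizing a set of values (illustrative sketch)."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits      # size of the bit map
        self.num_hashes = num_hashes  # number of bit positions per value
        self.bits = 0                 # Python int used as the bit map

    def _positions(self, value):
        # Derive k bit positions from salted SHA-256 digests of the value.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{value}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        # True means "possibly in the set"; False means "definitely not".
        return all(self.bits >> pos & 1 for pos in self._positions(value))


# A scan can skip a data page whose synopsis definitely excludes a value.
synopsis = BloomFilter()
for v in ("alpha", "beta", "gamma"):
    synopsis.add(v)
print(synopsis.might_contain("beta"))   # True (no false negatives)
print(synopsis.might_contain("omega"))  # almost certainly False; false
                                        # positives are possible but rare
```

Because a Bloom filter never produces false negatives, a negative answer is a safe reason to skip a scan; a positive answer only means the portion must still be read and checked.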