Entries |
Document | Title | Date |
20100114838 | PRODUCT RELIABILITY TRACKING AND NOTIFICATION SYSTEM AND METHOD - Methods and apparatus are provided for tracking product reliability. A product removal database having removal data stored therein that are associated with one or more products is periodically accessed at a user-specified periodicity. An aircraft flight-hours database having time-in-flight data stored therein that are associated with each product is periodically accessed at the user-specified periodicity. One or more user-selected algorithms are executed, using at least a portion of the periodically accessed removal data, to determine whether the criterion for a user-specified reliability parameter is met. If it is determined that the criterion for the user-specified reliability parameter is not met, then an alert is transmitted to a preset destination. | 05-06-2010 |
20100114839 | Identifying and remedying secondary privacy leakage - Secondary leakage of private information is identified and remedied. Internet activity of a first party can result in such secondary leakage of private information of a second party. Information about the second party that would not otherwise be known becomes public based simply on related information that has been placed on a public site of a third party by the first party. Such disclosure is detected and the victim may be notified about the location. The victim can then decide if such secondary leakage is acceptable. If not, the first party or the third party may be notified, the activity may be stopped and the offending information can be removed. | 05-06-2010 |
20100131471 | Correlating subjective user states with objective occurrences associated with a user - A computationally implemented method includes, but is not limited to: acquiring subjective user state data including at least a first subjective user state and a second subjective user state; acquiring objective context data including at least a first context data indicative of a first objective occurrence associated with a user and a second context data indicative of a second objective occurrence associated with the user; and correlating the subjective user state data with the objective context data. In addition to the foregoing, other method aspects are described in the claims, drawings, and text forming a part of the present disclosure. | 05-27-2010 |
20100138397 | SYSTEM AND METHOD FOR PROVIDING E-BOOK - A system for providing at least one electronic book is disclosed. The system can adjust the format of the electronic book in accordance with the specification of the reader and thereby make the electronic book compatible with the reader. Moreover, a method for providing at least one electronic book is also disclosed in the specification. | 06-03-2010 |
20100145916 | MONITORING UPDATES INVOLVING DATA STRUCTURES ACCESSED IN PARALLEL TRANSACTIONS - The embodiments described herein provide techniques for monitoring updates involving data structures accessed in parallel transactions. In an example, objects may be stored in one of the data structures and accessed in multiple, parallel transactions. Counters are maintained in another data structure to track the stored objects. In an illustrative embodiment, each counter is based on a checksum that is derived from a sub key that uniquely identifies an object within a group of objects. | 06-10-2010 |
20100211550 | METHOD ALLOWING VALIDATION IN A PRODUCTION DATABASE OF NEW ENTERED DATA PRIOR TO THEIR RELEASE - A method of ensuring the integrity of a plurality of updates brought in real-time to a production database concurrently used by one or more software applications is described. The production database includes a plurality of products participating in the definition of objects. The method first includes requesting the issuance of a unique filing number associated with a draft state version of the plurality of updates while keeping them invisible to the end-users of the production database. Then, a set of product items identified as a whole by the unique filing number are created or copied from the production database and gathered in the form of a meta-product on which the plurality of updates is applied. After updating, the meta-product is successively set into a customizable flow of one or more validation states in order to perform a cross-validation of the plurality of updates. After validation, the meta-product is set into a production state where the uniquely identified meta-product becomes immediately visible and useable by the end-users. | 08-19-2010 |
20100211551 | METHOD, SYSTEM, AND COMPUTER READABLE RECORDING MEDIUM FOR FILTERING OBSCENE CONTENTS - The present invention relates to a method and a system for filtering harmful content(s), which includes a filter group for providing an optimized filter for each category; a matching engine for monitoring harmfulness of the content by matching the content with existing lewd contents recorded in a pornographic content database and/or advertising contents recorded in an advertising content database; and an interface part which provides a user with information on a degree of similarity between the inputted content and the harmful contents recorded in the above-mentioned databases, calculated through the matching process in order to increase the filtering accuracy; information on a degree of harmfulness of the content calculated in the filter group; and information on a user who created or distributed the content. Accordingly, it is possible to filter adult contents or advertising contents with much higher accuracy by three harmful content blocking steps. | 08-19-2010 |
20100211552 | Apparatus, Method and System for Tracking Information Access - An apparatus, method and system to track information access over a communications network. The present disclosure teaches how to associate access credentials with a content accesser in a global and persistent manner. Both content and people are registered with a Digital Object Identifier (DOI) handle system. | 08-19-2010 |
20100223234 | SYSTEM AND METHOD FOR PROVIDING S/MIME-BASED DOCUMENT DISTRIBUTION VIA ELECTRONIC MAIL MECHANISMS - A content or document management system includes a content or document repository; a dedicated e-mail account; and a mail agent associated with the dedicated e-mail account. The mail agent processes a received e-mail message to determine a sender's identity; authenticates the identity of the sender and an authorization of the sender with respect to the content or document repository; parses, when a sender is authenticated and authorized, document request information from the e-mail message; and either stores a document to or retrieves a document from the content or document repository. The mail agent may authenticate the identity of the sender using a digital signature, or may authenticate the identity of the sender using an e-mail address in a FROM header field of the received e-mail message header. The mail agent may decrypt encrypted messages from the sender, and may sign and encrypt responses to the sender. | 09-02-2010 |
20100235329 | SYSTEM AND METHOD OF EMBEDDING SECOND CONTENT IN FIRST CONTENT - Apparatus and methods of aggregating content are disclosed. A data storage device includes a host interface, a controller coupled to the host interface, and a memory array coupled to the controller. The host interface is configured to enable the data storage device to be operatively coupled to the host device. First content includes a reference to a source of second content to be embedded in the first content. The first content is retrievable via access to a resource. Upon retrieval, the reference is replaced by the second content such that the second content is embedded in the first content. The controller is configured to receive data of the resource, such received data including the second content embedded in the first content. The controller is also configured to store the received data at the memory array and, when the data storage device is operatively coupled to the host device, provide the second content embedded in the first content to the host device in response to receiving a request for the first content. | 09-16-2010 |
20100268692 | VERIFYING DATA SECURITY IN A DISPERSED STORAGE NETWORK - An integrity record is appended to data slices prior to being sent to multiple slice storage units. Each of the data slices includes a different encoded version of the same data segment. An integrity indicator of each data slice is computed, and the integrity record is generated based on each of the individual integrity indicators, and may be, for example, a list or a hash of the combined integrity indicators. When retrieving data slices from storage, the integrity record can be stripped off, a new integrity indicator of each data slice calculated, and a new integrity record created. The new integrity record can be compared to the original integrity record, and used to verify the integrity of the data slices. | 10-21-2010 |
20100299314 | IDENTIFYING AND USING CRITICAL FIELDS IN QUALITY MANAGEMENT - Methods and systems for identifying critical fields in documents, for example so that quality improvement efforts can be prioritized on the critical fields. One aspect of the invention concerns a method for improving quality of a data processing operation in a plurality of documents. A set of documents is sampled. An error rate for fields in the documents is estimated based on the sampling. Critical fields are identified based on which fields have error rates higher than a threshold. | 11-25-2010 |
20100332460 | METHOD AND APPARATUS FOR MANAGING FILE EXTENSIONS IN A DIGITAL PROCESSING SYSTEM - Methods and apparatuses for managing file extensions in a processing system. An exemplary method of managing file extensions in a digital processing system involves a user interface and a plurality of files, each file having a name that comprises a filename and an extension. The method includes associating a file with an indicator which is user selectable for a single file in a plurality of files in said digital processing system and which indicates how to display an extension of the file, and assigning a value to the indicator, and displaying a displayed name of the file in the user interface in a style determined by the indicator. | 12-30-2010 |
20100332461 | SYSTEM AND METHOD OF MASSIVELY PARALLEL DATA PROCESSING - A system and method of massively parallel data processing are disclosed. In an embodiment, a method includes generating an interpretation of a customizable database request which includes an extensible computer process and providing an input guidance to available processors of an available computing environment. The method further includes automatically distributing an execution of the interpretation across the available computing environment operating concurrently and in parallel, wherein a component of the execution may be limited to at least a part of an input data. The method also includes automatically assembling a response using a distributed output of the execution. | 12-30-2010 |
20110035362 | TERMINAL, WEB APPLICATION OPERATING METHOD AND PROGRAM - A terminal stores in a storage section, in association with each other, content distributed from a server and a data access power for deleting a service that differs from the service to which the content belongs. The terminal determines, when the stored content requests the deletion of the differing service indicated by a statement contained in the content, whether or not the content and the data access power are stored in the storage section in association with each other. When the terminal has determined that the content that requested the deletion of the differing service and the data access power are stored in the storage section in association with each other, the terminal deletes content that belongs to the differing service from the storage section. | 02-10-2011 |
20110040732 | APPROACH FOR SECURING DISTRIBUTED DEDUPLICATION SOFTWARE - The various embodiments of the present invention include techniques for securing the use of data deduplication activities occurring in a source-deduplicating storage management system. These techniques are intended to prevent fake data backup, target data contamination, and data spoofing attacks initiated by a source. In one embodiment, one technique includes limiting chunk querying to authorized users. Another technique provides detection of attacks and unauthorized access to keys within the target system. Additional techniques include the combination of validating the existence of data from the source by validating the data chunk, validating a data sample of the data chunk, or validating a hash value of the data chunk. A further embodiment involves the use of policies to provide authorization levels for chunk sharing and linking within the target. These techniques separately and in combination provide a comprehensive strategy to avoid unauthorized access to data within the target storage system. | 02-17-2011 |
20110055166 | FINGERPRINTING A DATABASE - A method comprising fingerprinting, by at least one processor, a first copy of a database with a fingerprint. The fingerprint has at least one part in common with another fingerprint used in another copy of the database, and at least one part unique to the first copy of the database. The fingerprinting comprises swapping attributes between multiple records in the first copy of the database. | 03-03-2011 |
20110055167 | Apparatus, System, and Method for Identifying Redundancy and Consolidation Opportunities in Databases and Application Systems - Apparatuses, computer program products, and methods for identifying redundancy and consolidation opportunities in databases and application systems are disclosed. In one embodiment, the apparatus may include at least one meta data scanner. The apparatus may also include an enterprise meta data source. The apparatus may further include a meta data repository. The meta data repository receives system-specific meta data from the at least one meta data scanner. The meta data repository may also receive enterprise canonical data model meta data from the enterprise meta data source. The meta data repository may be configured to generate at least one individual system CRUD matrix that may then be used to produce an enterprise canonical model CRUD matrix. The enterprise canonical model CRUD matrix may be analyzed by a data mining clustering algorithm. The clustering algorithm may group together modules and database elements that may be redundant and may indicate opportunities for consolidation. | 03-03-2011 |
20110060725 | SYSTEMS AND METHODS FOR GRID-BASED DATA SCANNING - A computing grid for performing scanning operations on electronic data in a networked computing environment. The data scanning operations may include scanning data for viruses or other malicious software code. The computing grid for performing data scanning operations may include one or more event detectors to detect data scanning events and one or more grid scanning elements to perform the data scanning operations. The computing grid may also include a grid coordinator to monitor the grid configuration, perform necessary updates to the grid, and to take pre-determined actions based on the results of the data scans. | 03-10-2011 |
20110071987 | FILE HANDLING FOR NAMING CONFLICTS - A file operations engine is provided that manages a user's interactions with their files via a computer system. The operations engine may provide a user with the option to keep both files that have a file name conflict. It may further permit the user to rename a file involved in a file name conflict. The operations engine may also automatically rename one of the files of a file name conflict by appending a character to a root of the filename. The character may include the lowest integer available for the root in a destination for the files. The operations engine may provide the option to keep both files as part of a pre-calculation of potential errors for a requested operation. The operations engine may place file name conflicts in an error queue and permit the user to select an option to keep both files after the conflict is encountered. | 03-24-2011 |
20110078122 | Data set selection mechanism - A system for quickly identifying correlated data with a set of matching criteria in structured data. The data is presented as a user interface mechanism such as a slider that can be integrated into existing applications. The user can use the slider to identify the correlated data the user wishes to see, which is then output to the user through the application or a set of analytical tools. | 03-31-2011 |
20110087638 | FEED VALIDATOR - Methods, systems, and computer-readable media for generating feed schemas and validating feeds are provided. A user interface may be provided that displays the schema in one pane, while providing drop-down menus for defining new schema nodes in a separate pane. An interface for validating the schema may show the feed as it will be displayed on a webpage utilizing the feed. | 04-14-2011 |
20110106772 | DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, PROGRAM, AND INTEGRATED CIRCUIT - A data processing apparatus. | 05-05-2011 |
20110119237 | ADAPTATION DATA FILE PROVIDING APPARATUS, ADAPTATION DATA FILE PROCESSING APPARATUS, AND AIR TRAFFIC CONTROL SYSTEM AND METHOD - An adaptation data file providing apparatus is operated by at least one subsystem among the subsystems configuring an air traffic control system that provides an air traffic control service. The apparatus includes: a data input unit for receiving adaptation data required to provide the air traffic control service; and a file generation unit for generating an adaptation file in a form transmittable to the subsystems, the file containing the adaptation data input through the data input unit together with metadata generated for use in checking errors in the adaptation data. | 05-19-2011 |
20110125718 | Public Electronic Document Dating List - Systems and methods are disclosed which enable the establishment of file dates and the absence of tampering, even for documents held in secrecy and those stored in uncontrolled environments, but which does not require trusting a timestamping authority or document archival service. A trusted timestamping authority (TTSA) may be used, but even if the TTSA loses credibility or a challenger refuses to acknowledge the validity of a timestamp, a date for an electronic document may still be established. Systems and methods are disclosed which enable detection of file duplication in large collections of documents, which can improve searching for documents within the large collection. | 05-26-2011 |
20110131188 | METHOD AND SYSTEM FOR REAL TIME SYSTEM LOG INTEGRITY PROTECTION - A method and system for managing the integrity of system log file data. The system comprises a first component which, using a hook in the kernel of an operating system, allows interception of a write operation by a file system on at least one log file; the first component then detects a change in the security context in which the record is written in the log file. At each change detected, the first component adds information to the log file including the context information. The system further comprises a second component which reads the log file and, using the information added by the first component, detects whether the change of context is due to a malicious write operation on the log file, for instance one performed by an unauthorized user or process. | 06-02-2011 |
20110137873 | SYSTEMS AND METHODS OF PROFILING DATA FOR INTEGRATION - The present invention is generally directed to systems and methods for gathering information about nonnative data, comparing nonnative data elements to information defining nonnative data, comparing native data elements to information defining native data, establishing transformation rules, and integrating the nonnative and native data. | 06-09-2011 |
20110145205 | Packet Boundary Spanning Pattern Matching Based At Least In Part Upon History Information - An embodiment may include circuitry to determine, at least in part, based at least in part upon history information, whether one or more reference patterns are present in a data stream in a packet flow. The data stream may span at least one packet boundary in the packet flow. The history information may include a beginning portion of a packet in the data stream, an ending portion of the packet, and another portion of the data stream. The circuitry may overwrite the another portion of the history information with a respective portion of the data stream to be examined by the circuitry depending, at least in part, upon whether the circuitry determines, at least in part, whether the one or more reference patterns are present in the data stream. The respective portion may be relatively closer than the another portion is to a beginning of the data stream. | 06-16-2011 |
20110153573 | SYSTEM AND METHOD FOR VALUING AN IP ASSET BASED UPON PATENT QUALITY - A comprehensive platform for merchandising intellectual property (IP) and conducting IP transactions is disclosed. A standardized data collection method enables IP assets to be characterized, rated and valuated in a consistent manner. Project management, workflow and data security functionality enable consistent, efficient and secure interactions between the IP Marketplace participants throughout the IP transaction process. Business rules, workflows, valuation models and rating methods may be user defined or based upon marketplace, industry or technology standards. | 06-23-2011 |
20110153574 | METHOD FOR SAFEGUARDING THE INTEGRITY OF A RELATIONAL DATABASE IN CASE OF STRUCTURAL TRANSACTION EXECUTION - A method enables an administration of resources (content) in web packages. By automatically adding a prefix to a resource name causing a name conflict, even resources having the same name can be handled when installing a new web package by a virtual file system mapping the resources to which a prefix has been added to the physical content required for the web application. | 06-23-2011 |
20110161302 | Distributed File System and Data Block Consistency Managing Method Thereof - A distributed file system and a data block consistency managing method thereof are disclosed. The method comprises: a file location register generates the values of the counters corresponding to CHUNKs and the values of the counters are simultaneously stored in file access servers and a file location register; when writing data into a CHUNK, a file access client writes data into both the main and standby file access servers and revises the values of counters of CHUNKs in the file access servers into which data is written normally; the file location register takes the CHUNK whose counter has the maximal value as the normal and valid one according to the corresponding values of the counters of corresponding CHUNK reported by the main and standby file access servers. | 06-30-2011 |
20110167048 | METHOD AND SYSTEM FOR CLEARING LOG FILES OF SERVICE SYSTEM - A method and system for clearing log files of a service system are disclosed in the present invention. The method includes the following steps: configuring a log processing task item for each service of multiple services respectively; adding a log management task to a task list of an operating system; and the log management task clearing a log file of each service in turn according to the log processing task item. According to the present invention, each time a new service is added to a service system, or new log files are added to a current service, there is no need to specially develop a log processing module, modify the existing content about log processing, or test the log management module. Therefore, efficiency of development and testing can be improved effectively, and system update and maintenance can be facilitated. | 07-07-2011 |
20110173161 | DATA STORAGE SYSTEM AND METHOD BY SHREDDING AND DESHREDDING - A system and method for data storage by shredding and deshredding of the data allows for various combinations of processing of the data to provide various resultant storage of the data. Data storage and retrieval functions include various combinations of data redundancy generation, data compression and decompression, data encryption and decryption, and data integrity by signature generation and verification. Data shredding is performed by shredders and data deshredding is performed by deshredders that have some implementations that allocate processing internally in the shredder and deshredder either in parallel to multiple processors or sequentially to a single processor. Other implementations use multiple processing through multi-level shredders and deshredders. Redundancy generation includes implementations using non-systematic encoding, systematic encoding, or a hybrid combination. Shredder based tag generators and deshredder based tag readers are used in some implementations to allow the deshredders to adapt to various versions of the shredders. | 07-14-2011 |
20110196844 | MANAGING STORAGE OF INDIVIDUALLY ACCESSIBLE DATA UNITS - Managing data includes: receiving at least one group of individually accessible data units over an input device or port, each data unit identified by a key value, with key values of the received data units being sorted such that the key value identifying a given first data unit that is received before a given second data unit occurs earlier in a sort order than the key value identifying the given second data unit; and processing the data units for storage in a data storage system. The processing includes: storing a plurality of blocks of data, each of one or more of the blocks being generated by combining a plurality of the data units; providing an index that includes an entry for each of the blocks, wherein one or more of the entries enable location, based on a provided key value, of a block that includes data units corresponding to a range of key values that includes the provided key value; and generating one or more screening data structures associated with the stored blocks for determining a possibility that a data unit that includes a given key value was included in the group of individually accessible data units. | 08-11-2011 |
20110196845 | ELIMINATION OF REDUNDANT OBJECTS IN STORAGE SYSTEMS - Provided are a method, system, and article of manufacture, wherein a data structure corresponding to a set of client nodes selected from a plurality of client nodes is generated. Objects from the selected set of client nodes are stored in the data structure. A determination is made that an object corresponding to a client node of the selected set of client nodes has to be stored. An additional determination is made as to whether the object has already been stored in the data structure by any client node of the selected set of client nodes. The object is stored in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes. | 08-11-2011 |
20110202508 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR VALIDATING ONE OR MORE METADATA OBJECTS - In accordance with embodiments, there are provided mechanisms and methods for creating, exporting, viewing and testing, and importing custom applications in a multi-tenant database environment. These mechanisms and methods can enable embodiments to provide a vehicle for sharing applications across organizational boundaries. The ability to share applications across organizational boundaries can enable tenants in a multi-tenant database system, for example, to easily and efficiently import and export, and thus share, applications with other tenants in the multi-tenant environment. | 08-18-2011 |
20110208701 | Computer-Implemented Systems And Methods For Flexible Definition Of Time Intervals - Systems and methods are provided for segmenting time-series data stored in data segments containing one or more data records. A combined segment error measure is determined based on a proposed combination of two candidate segments. An error cost to merge the two candidate segments is determined based on a difference between the combined segment error measure and a segment error measure of one of the segments. The two candidate segments are combined when the error cost to merge meets a merge threshold to generate a combined segment. | 08-25-2011 |
20110208702 | Method and System for Verifying Geographical Descriptiveness of Media File - The invention relates to a system for verifying geographical descriptiveness of a media file, such as a picture. The system comprises storage means adapted to store a media file, communication means adapted to send said media file to a plurality of user terminals, and to receive, from the user terminals, data indicating a guessed geographical position with which the user of each user terminal associates the media file. The system also comprises calculation means adapted to determine whether the media file is indicative of a geographical position based on the guessed geographical positions received from the plurality of user terminals. | 08-25-2011 |
20110213757 | SYSTEM AND METHOD FOR AUTOMATIC STANDARDIZATION AND VERIFICATION OF SYSTEM DESIGN REQUIREMENTS - A novel automatic standardization and verification process for system design requirements in a product development project is disclosed. In one embodiment, a method for automatic standardization and verification of system design requirements in a product development project using a standardization and verification tool embedded in a computer aided design (CAD) application includes obtaining a desired standardized requirement from a requirements database, retrieving compliance criteria from the standardized requirement, obtaining one or more components associated with the standardized requirement from one or more data sources, and obtaining relevant extracted and derived attributes from the one or more components, associated with the standardized requirement. The method further includes comparing the relevant extracted and derived attributes with the compliance criteria, determining whether the relevant extracted and derived attributes substantially meet the compliance criteria based on the outcome of the comparison, and generating a verification report based on the determination. | 09-01-2011 |
20110231373 | Taxonomy Mapping - Systems and methods for mapping extension taxonomy elements to a standard base taxonomy and thereafter making use thereof are provided. According to one embodiment, a list of base taxonomy elements is displayed on a display device. A taxonomy map is also displayed on the display device. The taxonomy map includes information regarding one or more extended taxonomy elements of a reporting entity that are not mapped to any base taxonomy elements. Responsive to one or more user input events corresponding to a selection of a base taxonomy element and corresponding to a request to map an extended taxonomy element to the selected base taxonomy element, the compatibility of the selected base taxonomy element with the extended taxonomy element is validated. If the compatibility is affirmed, then an association is formed between the extended taxonomy element and the selected base taxonomy element. | 09-22-2011 |
20110238631 | SUBMISSION OF METADATA CONTENT AND MEDIA CONTENT TO A MEDIA DISTRIBUTION SYSTEM - The disclosed embodiments relate generally to the submission of metadata content and media content to a media distribution system. The media content can include, for example, audio, video, image, or podcast data. In accordance with one embodiment, a client submitting metadata content can validate the metadata content prior to submission of the metadata content and/or associated media content. A media distribution system receiving metadata content can also validate the metadata content. | 09-29-2011 |
20110258165 | AUTOMATIC VERIFICATION SYSTEM FOR COMPUTER VIRUS VACCINE DATABASE AND METHOD THEREOF - The present invention relates to a method and system for automatically verifying a computer vaccine database and, more particularly, to a method and system for automatically verifying a computer vaccine database, which is capable of automatically verifying and modifying a vaccine database mounted on a vaccine engine so that a normal program is not recognized as a virus or malicious code, by storing information about the normal program in the vaccine database in order to remove computer viruses or malicious codes. According to the present invention, a file set of the latest vaccine database can be rapidly collected and processed, and problems in a vaccine database file provided by a vendor can be checked in advance. Accordingly, there are advantages in that a function of raising alarms on error conditions and a process of reporting errors in the vaccine database update process can be automated. | 10-20-2011 |
20110264631 | METHOD AND SYSTEM FOR DE-IDENTIFICATION OF DATA - A method and system for de-identification of data comprising a plurality of data elements. The method involves identifying one or more portions of the data based on a predefined identification condition. The predefined identification condition is expressed in terms of, but is not limited to, one or more characteristics of the data. Further, one or more de-identification data elements are generated corresponding to the one or more data elements of the one or more identified portions of the data. The one or more de-identification data elements are generated based on the one or more characteristics of the one or more portions of the data. Thereafter, the one or more portions of the data are replaced with the one or more de-identification data elements respectively. As a result, the format of the one or more de-identification data elements remains identical to the format of the one or more data elements. | 10-27-2011 |
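The format-preserving replacement described in the de-identification entry above can be sketched minimally in Python. The function name and the per-character-class rule are illustrative assumptions, not the patented method:

```python
import random
import string

def de_identify(value, seed=0):
    """Illustrative sketch: replace each character with a random one of
    the same class (digit/upper/lower) while keeping separators, so the
    de-identified element retains the format of the original element."""
    rng = random.Random(seed)
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isupper():
            out.append(rng.choice(string.ascii_uppercase))
        elif ch.islower():
            out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(ch)  # punctuation and spaces pass through unchanged
    return "".join(out)
```

For example, a social-security-style value keeps its digit groups and dashes after replacement, so downstream format checks still pass.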
20110270805 | CONCURRENT LONG SPANNING EDIT SESSIONS USING CHANGE LISTS WITH EXPLICIT ASSUMPTIONS - An approach is provided that receives a change request from a requestor. The change request includes metadata regarding the change, one or more changes, and one or more change assumptions corresponding to at least one of the changes. The change request is stored in a data store of pending requests. One or more systems are identified that correspond to each of the change assumptions. The identified systems are automatically queried with queries that correspond to the change assumptions. Query responses in response to the querying are received from the identified systems. The validity of each of the change assumptions is determined based on the received query responses. If the change assumptions are valid, then the changes included in the change request are processed. On the other hand, if at least one of the change assumptions is invalid, then the change request is rejected. | 11-03-2011 |
20110276541 | Information processing system - An information processing system for recording operational information in a log includes a log generating unit configured to generate the log in such a manner that a conversion target character string included in the log is recognizable; a log converting unit configured to convert the conversion target character string to an irrecoverable and unique character string; a log outputting unit configured to output the log including the converted character string; and a log collecting unit configured to collect the output log. | 11-10-2011 |
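One common way to obtain the "irrecoverable and unique" replacement string described in the log-conversion entry above is a one-way hash. The sketch below assumes that approach; the function name and token length are hypothetical:

```python
import hashlib

def mask_in_log(line, target):
    """Replace a conversion-target string in a log line with an
    irrecoverable yet unique token: a truncated SHA-256 digest.
    The same target always maps to the same token, so logs stay
    correlatable without exposing the original string."""
    token = hashlib.sha256(target.encode("utf-8")).hexdigest()[:16]
    return line.replace(target, token)
```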
20110282846 | DATA INTEGRITY MECHANISM FOR EXTERNAL STORAGE DEVICES - A method for maintaining data integrity of a storage device is provided. A request is received to create an access monitoring session for a data range on a volume of the storage device. A session identification (ID) is determined for the access monitoring session for the data range on the volume. An entry is created in an access monitoring session table for the session ID, and the entry adds the access monitoring session with session ID for the data range on the volume to the access monitoring session table. Request parameters are included in the request to create the access monitoring session. The request parameters denote access to the data range on the volume for the session ID and are stored in the access monitoring session table. Access is controlled to the data range on the volume for the session ID based on request parameters stored in the access monitoring session table. | 11-17-2011 |
20110289060 | INFORMATION PROCESSING DEVICE AND DATA SHREDDING METHOD - An object is to achieve efficient shredding of recording media. | 11-24-2011 |
20110295814 | Methods and Systems for Detecting Skewed Data in a Multitenant Database Environment - Detection of skew in an on-demand database services environment is provided. A request is generated to scan a multitenant database for skew indicated by relationship depth exceeding an expected limit. A database crawler calculates skew per tenant identifier for a particular table in the database. Any skew that is detected is identified for later resolution. | 12-01-2011 |
20110307452 | PERFORMING CODE ANALYSIS IN A MULTI-TENANT DATABASE SYSTEM - A system and method for performing code analysis in a database system. In one embodiment, a method includes receiving a request to scan code for a software application. The method further includes fetching metadata associated with a user, fetching the code for the software application, and scanning the code. | 12-15-2011 |
20110307453 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus including: a database in which content information, content-related information, and unit information are registered; a unit information receiving unit receiving the unit information, which was generated by connecting the content-related information to the content information, from an operation terminal; an overlapping registration confirming unit confirming whether unit information that overlaps the received unit information is registered in the database; an overlapping registration notifying unit notifying, via the operation terminal, an operator of the confirmation result; a credibility confirming unit operable when overlapping registration has not been found, to confirm credibility of the unit information based on verification information registered in one of the database and an external database; a credibility notifying unit notifying, via the operation terminal, the operator of the confirmation result; and a unit information registering unit registering the unit information whose credibility has been confirmed in the database. | 12-15-2011 |
20110313975 | VALIDATING FILES USING A SLIDING WINDOW TO ACCESS AND CORRELATE RECORDS IN AN ARBITRARILY LARGE DATASET - Data records in files may be validated by sequentially accessing the data records while allowing random data access within a sliding window. The data records may also be validated by caching record values. Variable-length record lists in one or more files may be reduced to fixed length record lists while accessing arbitrary record list items. | 12-22-2011 |
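The sliding-window validation entry above can be sketched minimally: records are consumed sequentially, and cross-record checks may only reference records still inside a bounded window. The duplicate-record rule below is an invented example, not the patented rule set:

```python
from collections import deque

def validate_records(records, window_size=3):
    """Validate records sequentially; random access is limited to a
    sliding window of the most recent window_size records."""
    window = deque(maxlen=window_size)  # old records fall off automatically
    errors = []
    for i, rec in enumerate(records):
        # example correlation rule: a record must not duplicate
        # any record still inside the window
        if rec in window:
            errors.append(i)
        window.append(rec)
    return errors
```

Because the window is bounded, memory stays constant no matter how large the dataset grows, which is the point of the approach.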
20110313976 | METHOD AND SYSTEM FOR PARTIAL SHADOW MIGRATION - A method for migrating files including receiving, from a client, a first FS operation request for a target FS, making a first determination that migration for a source FS is not complete and making a second determination that the first FS operation request specifies a directory and that a directory level attribute for the directory on the target FS specifies that the directory on the target FS is un-migrated. In response to the first and second determinations, obtaining, from the source FS, meta-data for content in the directory and creating, using the meta-data, a directory entry for a file in the directory on the target FS. The method further includes creating an on-disk space map for the file, creating an in-memory space map for the file, and servicing, after creating the on-disk space map and in-memory space map, the first FS operation request using the target FS. | 12-22-2011 |
20110320410 | SYSTEM AND METHOD FOR GENERATING DYNAMIC QUERIES - A first query is retrieved by a computing device. A second query is retrieved by the computing device, wherein the second query is linked to the first query. A derivative query is generated by the computing device based, at least in part, upon merging at least a portion of the second query with at least a portion of the first query, wherein generating the derivative query includes retrieving the first query and the second query prior to generation of the derivative query. The computing device determines whether the derivative query contains one or more conflicts. If it is determined that the derivative query contains one or more conflicts, the one or more conflicts in the derivative query are resolved by the computing device. | 12-29-2011 |
20110320411 | SYSTEM AND METHOD FOR A COMPUTER BASED FORMS LANGUAGE - A computational platform and related methods that generally combines the object model and the programming model into a single set of constructs (e.g., Forms, relations, entities, relationships). These constructs provide the characteristics of inheritance, linkage, immutability, versioning, and substitution in a single structure that can store the objects, processes, and instructions/programs, and provide for convergence and divergence of information in information streams, a database graph, or a database web distributed across a set of nodes. | 12-29-2011 |
20120005169 | METHOD AND SYSTEM FOR SECURING DATA - Disclosed are methods and a computer program product for securing data corresponding to one or more data fields of a form by providing data integrity, confidentiality and non-repudiation. The present invention includes providing one or more controls for enabling selection of at least one security type for each of the data fields corresponding to the form. Further, at least one security routine is implemented for the data fields to produce corresponding secured data. The at least one security routine corresponds to the selected at least one security type. Further, a system for securing the data is also disclosed. | 01-05-2012 |
20120005170 | System and method for improving integrity of internet search - A system and method are provided to receive a search query from a user, typically via a web browser, the Internet, and a web server. A search engine obtains a set of potential search results based on the search query. For each Internet domain or web site mentioned in the search results, a set of data sources is accessed to obtain information concerning the legitimacy of the business associated with the Internet domain or web site. The legitimacy information is used to reorder, change, or augment the appearance or presentation of the search result for the Internet domain or web site. The processed search results are returned to the user. | 01-05-2012 |
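The reordering step in the search-integrity entry above reduces to sorting results by a per-domain legitimacy score. A minimal sketch, in which the score scale and the neutral default for unknown domains are assumptions:

```python
def rerank_by_legitimacy(results, legitimacy, neutral=0.5):
    """Reorder search-result domains so those with higher legitimacy
    scores come first; domains with no score get a neutral default."""
    return sorted(results,
                  key=lambda domain: legitimacy.get(domain, neutral),
                  reverse=True)
```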
20120011103 | SYSTEM AND METHOD FOR PROVIDING SEARCH SERVICE - Provided are a system and a method for providing a search service. The system and the method for providing the search service select a search attribute with respect to documents to be searched for based on a request from a user and provide a search service based on the selected search attribute. | 01-12-2012 |
20120011104 | SYSTEM AND METHODS FOR ASSISTING BUSINESSES IN COMPLIANCE WITH GAS EMISSIONS REQUIREMENTS - A system and method for calculating a value indicative of the amount of an undesirable constituent of a volatile gas stream that is removed from the atmosphere. Data received at a higher sampling rate is subjected to a plurality of validation processes and data that is determined to be faulty is then quarantined. Quarantined data can be replaced; however, an audit trail is generated to indicate what data has been replaced and the underlying rationale for the replacement data. | 01-12-2012 |
20120023071 | CONVERTING TWO-TIER RESOURCE MAPPING TO ONE-TIER RESOURCE MAPPING - Methods, systems and computer program products are provided for converting a two-tier resource mapping to a one-tier resource mapping. A first mapping from intermediate data buffer to a data destination may be determined. A second mapping from a data source to the intermediate data buffer may also be determined. An optimized mapping from the data source to the data destination may be generated based on the first and second mappings. The optimized mapping may then be used instead of the first and second mappings to collect data from the data source to the data destination, thereby resulting in a one-tier resource mapping. In some instances, the mappings are sets of one or more queries. | 01-26-2012 |
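The two-tier-to-one-tier conversion above amounts to composing the two mappings so the intermediate buffer drops out. A dictionary-based sketch, with names assumed for illustration:

```python
def compose_mappings(source_to_buffer, buffer_to_dest):
    """Collapse source -> intermediate-buffer -> destination into a
    direct source -> destination mapping; sources whose buffer slot
    has no destination are dropped as dangling."""
    return {src: buffer_to_dest[buf]
            for src, buf in source_to_buffer.items()
            if buf in buffer_to_dest}
```

After composition, data collection can consult the optimized mapping alone, skipping the intermediate buffer entirely.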
20120036110 | Automatically Reviewing Information Mappings Across Different Information Models - A computer-implemented method, system, and program product for automatically reviewing a mapping between information models. The method includes: receiving a mapping from an element in the first information model to an element in the second information model. Each element is associated with an element identifier and an element value, and the mapping signifies a relationship between the element in the first information model and the element in the second information model. The method further includes comparing the received mapping against one or more known indications of suspicious mappings to determine if the received mapping resembles one of the indications of suspicious mappings. If the received mapping is determined to be suspicious, identifying the received mapping as one that requires review. | 02-09-2012 |
20120036111 | VIRTUAL COLUMNS - Techniques are described herein for performing column functions on virtual columns in database tables. A virtual column is defined by the database to contain results of a defining expression. Statistics are collected and maintained for virtual columns. Indexing is performed on virtual columns. Referential integrity is maintained between two tables using virtual columns as keys. Join predicate push-down operations are also performed using virtual columns. | 02-09-2012 |
20120059802 | METHOD ALLOWING VALIDATION OF LARGE VOLUME OF UPDATES IN A LARGE PRODUCTION DATABASE OF NEW ENTERED DATA PRIOR TO THEIR RELEASE - A method of ensuring the integrity of a plurality of updates brought in real-time to a large production database concurrently used by one or more software applications is described. The large production database includes a plurality of products participating in the definition of objects. The method first comprises the step of requesting the issuance of a unique filing number associated with a draft-state version of the plurality of updates while keeping them invisible to the end-users of the large production database. Then, a set of product items, identified as a whole by the unique filing number and to which the updates apply, is created or copied in the large production database and gathered under the form of a meta-product to which the plurality of updates is applied. When updating is complete, the meta-product is successively set into a customizable flow of one or more validation states in order to perform a cross-validation of the plurality of updates. Finally, when validation is complete, the meta-product is set into a production state where the uniquely identified meta-product becomes immediately visible and usable by the end-users of the one or more software applications. | 03-08-2012 |
20120066184 | SPECULATIVE EXECUTION IN A REAL-TIME DATA ENVIRONMENT - Techniques are described for speculatively executing operations on data in a data stream in parallel in a manner that increases the efficiency of the stream-based application. In addition to executing operations in parallel, embodiments of the invention may determine whether certain results produced by the parallel operations are valid results and discard any results determined to be invalid. | 03-15-2012 |
20120072399 | NON-INTRUSIVE DATA LOGGING - Mediums, methods, and systems are provided for efficiently logging data. A model may include one or more logging points which process data, the data being stored in a log associated with the logging point. The logging point may request that a logging object store the data point. The logging object may include a reference to a vector for storing the data point. When two or more logging objects are associated with the same logged data points, the two or more logging objects may share the same vector. If an object logs a point which is not present in a shared vector, the object may update the object's reference so that the object references a different existing vector, or the object may create a new vector. The vectors may be compressed and/or made circular to achieve improved efficiency. | 03-22-2012 |
20120084266 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR VALIDATING ONE OR MORE METADATA OBJECTS - In accordance with embodiments, there are provided mechanisms and methods for creating, exporting, viewing and testing, and importing custom applications in a multitenant database environment. These mechanisms and methods can enable embodiments to provide a vehicle for sharing applications across organizational boundaries. The ability to share applications across organizational boundaries can enable tenants in a multi-tenant database system, for example, to easily and efficiently import and export, and thus share, applications with other tenants in the multi-tenant environment. | 04-05-2012 |
20120084267 | METHOD AND SYSTEM FOR CONTENT MANAGEMENT - Systems and methods are described which facilitate content management in a network environment. Content types can be modeled by end users based on data usage and automatically generated by a content management system based on a user-defined data model. From these content types, content type objects may be generated. The data may then be examined to acquire a key set, and a content instance object generated for each datum found which matches a content type. This content instance object can then be associated with the datum using one or more key values, saved, and subsequently used to manage the data. These methods and systems allow data to be migrated to a content management system without any modification to the existing data repository or its associated structures. | 04-05-2012 |
20120089577 | NONDISRUPTIVE OVERFLOW AVOIDANCE OF TUPLE VALIDITY TIMESTAMPS IN TEMPORAL DATABASE SYSTEMS - A first epoch column pair includes a first global identification (ID) having a first maximum value. A second epoch column pair includes a second global ID having a second maximum value. The first epoch column pair receives first snapshots, and the first global ID increases with each of the first snapshots. When the first global ID reaches the first maximum value minus 1, processing switches to the second epoch column pair. The second epoch column pair receives second snapshots, and the second global ID increases with each of the second snapshots. The first global ID and the first epoch column pair are reset, based on conditions. When the second global ID reaches the second maximum value minus 1, processing switches back to the first epoch column pair. The first epoch column pair again receives first snapshots, and the first global ID increases with each of the first snapshots. The second global ID and the second epoch column pair are reset, based on conditions. | 04-12-2012 |
20120102002 | AUTOMATIC DATA VALIDATION AND CORRECTION - Techniques disclosed herein include systems and methods for data validation and correction. Such systems and methods can reduce costs, improve productivity, improve scalability, improve data quality, improve accuracy, and enhance data security. A data manager can execute such data validation and correction. The data manager identifies one or more anomalies from a given data set using both contextual information and validation rules, and then automatically corrects any identified anomalies or missing information. Identification of anomalies includes generating similar data elements, and correlating against contextual information and validation rules. | 04-26-2012 |
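Correcting an anomalous field by matching it against similar valid elements, as in the entry above, might look like the following sketch. Here `difflib` stands in for the patent's similarity machinery, and the function name is hypothetical:

```python
import difflib

def correct_field(value, valid_values):
    """Return value unchanged if it validates against the known-good
    set; otherwise substitute the closest valid value, or None when
    nothing is similar enough to correct to."""
    if value in valid_values:
        return value
    matches = difflib.get_close_matches(value, valid_values, n=1)
    return matches[0] if matches else None
```

Returning `None` for hopeless values, rather than guessing, keeps the correction step conservative.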
20120109900 | MARKETIZATION ANALYSIS - Various embodiments provide techniques for analyzing the marketization of products. In at least some embodiments, a marketized version of a product (e.g., a software application) is associated with a configuration file that indicates actual product element settings for the marketized version. According to some embodiments, techniques are provided for determining if the product element settings (e.g., expected behaviors) indicated in the configuration file match product element settings in a specification for the product and/or vice-versa. In at least some embodiments, techniques are provided for generating a specification file from a configuration file for a marketized version of a product. For example, product elements and product element settings can be selected from the configuration file and used to generate the specification file. The specification file can then be used to validate the product and/or other versions of the product, e.g., subsequent builds and/or marketizations of the product. | 05-03-2012 |
20120109901 | CONTENT CLASSIFICATION APPARATUS, CONTENT CLASSIFICATION METHOD, AND CONTENT CLASSIFICATION PROGRAM - Event occurrence information storing means stores event occurrence information in which an event into which a content is classified is associated with photographic acquisition information including shooting date information indicative of the date when the content was shot. Event occurrence information correcting means corrects the event occurrence information based on shooting date information for multiple years and a base year. On condition that the shooting date information on the content to be classified corresponds to the date of the event occurrence information corrected by the event occurrence information correcting means, event determination means determines the event deemed most likely, among the events corresponding to the date of the event occurrence information, to be the event into which the content should be classified. | 05-03-2012 |
20120117034 | CONTEXT-AWARE APPARATUS AND METHOD - Disclosed herein is a context-aware apparatus and method. The context-aware apparatus includes a microblog monitoring unit, a web information collection unit, a microblog information collection unit, and a context-aware information creation unit. The microblog monitoring unit monitors the written information of one or more microblogs, and extracts at least one keyword corresponding to a set topic from the written information. The web information collection unit collects web information corresponding to the keyword from webpages. The microblog information collection unit collects microblog information corresponding to the written information including the keyword from the microblogs. The context-aware information creation unit creates context-aware information using the web information and the microblog information. | 05-10-2012 |
20120124008 | SYSTEM AND METHOD FOR GENERATING COLLECTION OF MEDIA FILES - A media collection generating system includes a location module, a scanning module, a collection generator, and a verifying module. The location module specifies one or more locations where media files are stored. The scanning module scans the media files in the one or more locations. The collection generator generates a collection of the scanned media files. The verifying module verifies the media files in the collection and if a media file is invalid, deletes the media file from the collection. | 05-17-2012 |
20120130958 | HETEROGENEOUS FILE OPTIMIZATION - Techniques are described herein that are capable of heterogeneously optimizing a file. Heterogeneous optimization involves optimizing regions of a file non-uniformly. For example, the regions of the file may be optimized to different extents. In accordance with this example, a different optimization technique may be used to optimize each region or subset of the regions. In one aspect, optimization designations are assigned to respective regions of a file based on access patterns that are associated with the respective regions. The file may be a database file, a virtualized storage file, or other suitable type of file. Each optimization designation indicates an extent to which the respective region is to be optimized. Each region may be optimized to the extent that is indicated by the respective optimization designation that is assigned to that region. | 05-24-2012 |
20120130959 | METHOD FOR CONTROLLING TIMES OF REFRESHING ETHERNET FORWARDING DATABASE - The present invention discloses a method for controlling the times of refreshing an Ethernet Forwarding Database, and the method includes: pre-configuring the time of an address refresh pause timer, receiving a first request for refreshing the Ethernet Forwarding Database; refreshing the Ethernet Forwarding Database and starting the address refresh pause timer; and receiving subsequent requests for refreshing the Ethernet Forwarding Database successively, detecting, each time when receiving the request for refreshing the Ethernet Forwarding Database, whether the address refresh pause timer expires, if yes, then proceeding to step | 05-24-2012 |
20120143829 | NOTIFICATION OF CONFIGURATION UPDATES IN A CLUSTER SYSTEM - A second node receives a message from a first node in a cluster environment. The message includes a unique identifier of a shared data storage device including a cluster configuration database that defines membership of nodes in a cluster. In response to receiving the message, the second node attempts to find the shared data storage device. In response to finding the shared data storage device, the second node locates and reads the cluster configuration database on the shared data storage device. The second node then assimilates a cluster configuration update indicated by the cluster configuration database. | 06-07-2012 |
20120143830 | INTERACTIVE PROOF TO VALIDATE OUTSOURCED DATA STREAM PROCESSING - A method for validating outsourced processing of a data stream arriving at a streaming data warehouse of a data service provider includes a proof protocol. A verifier acting on behalf of a data owner of the data stream may interact with a prover acting on behalf of the data service provider. The verifier may calculate a first root hash value of a binary tree during single-pass processing of the original data stream with limited computational effort. A second root hash value may be calculated using the proof protocol between the verifier and the prover. The prover is requested to provide certain queried values before receiving random numbers used to generate subsequent responses dependent on the provided values. The proof protocol may be used to validate the data processing performed by the data service provider. | 06-07-2012 |
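The verifier's single-pass root-hash computation in the entry above operates over a binary hash tree. A minimal sketch of such a root computation follows; SHA-256 and the rule of promoting an unpaired node unchanged are assumptions, and the patent's actual tree construction may differ:

```python
import hashlib

def merkle_root(leaves):
    """Root hash of a binary hash tree over a list of byte strings.
    Pairs of nodes are hashed together level by level; an unpaired
    node is promoted unchanged to the next level."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd node count: carry the last one up
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()
```

Any change to any leaf changes the root, which is what lets the verifier's independently computed root be compared against the prover's.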
20120143831 | AUTOMATIC CONVERSION OF MULTIDIMENTIONAL SCHEMA ENTITIES - A system and method for conversion of multidimensional schema entities from one type to another type are described. In various embodiments, a system receives a multidimensional schema entity of a first type and converts the multidimensional schema entity to a second type. The system receives user input and converts the multidimensional schema entity to the second type based on the input received from the user. In various embodiments, the system creates multidimensional schema entities automatically. In various embodiments, a method for converting multidimensional schema entities from one or more types to one or more other types is described. In various embodiments, a first multidimensional schema entity is analyzed and converted to a different type based on the analysis. In various embodiments, a multidimensional schema entity is created automatically based on input from two other multidimensional schema entities. In various embodiments, two multidimensional schema entities are merged in one multidimensional schema entity. | 06-07-2012 |
20120150819 | Trash Daemon - A method of managing a database system that includes a swarm database with nodes of processors and memory. The memory stores programs that can be executed on the processors. The method includes determining data files to delete, moving the data files to a trash directory, truncating, using a trash daemon, larger files into smaller sized file pieces, and deleting the smaller sized file pieces by the local operating system. | 06-14-2012 |
20120158667 | ASSET MANAGER - A method may include automatically receiving content and metadata; automatically identifying a source metadata format of the metadata; automatically identifying a target metadata format; automatically selecting a data map to perform validation of the metadata and at least one of transforming or translating of the metadata based on the identifying of the source metadata format and the identifying of the target metadata format, wherein the transforming includes converting the metadata to the target metadata format and the translating includes converting a file type of the metadata to a target metadata file type; and automatically attempting to validate the metadata based on the data map; automatically performing the at least one of the transforming or the translating of a validated metadata when the metadata is validated based on the data map, wherein the transforming includes converting the validated metadata to the target metadata format including one or more extendible fields. | 06-21-2012 |
20120158668 | STRUCTURING UNSTRUCTURED WEB DATA USING CROWDSOURCING - A crowdsourcing data structuring system and method for capturing unstructured data from the Web and adding structure by placing the data in a document that is accessible by others in a cloud computing environment. Using crowdsourcing, the unstructured data is annotated, amended, and verified to add structure to the unstructured data. An anchor and update module convert the data to a pointer that links the document to the data at an information source and stores the pointer in the document rather than the data itself. The data displayed in the document is updated whenever the information source is updated. A contribution module allows users to add data to the document, a validation module allows users to determine the validity of the data linked to in the document, and an expert ranking module allows users to rank the expert or contributor of the data in the document. | 06-21-2012 |
20120166397 | DEVICE AND METHOD FOR MANAGING ENVIRONMENT OF SYSTEM - A method in which a system environment management device manages user environment information includes: connecting to a first user terminal; searching for and reading personal information and environment setting information of the first user terminal; analyzing the personal information and environment setting information; determining whether the analyzed personal information and environment setting information is a common element; and storing, if the analyzed personal information and environment setting information is a common element, the analyzed personal information and environment setting information in a common profile storage unit of the system environment management device. | 06-28-2012 |
20120173492 | AUTOMATICALLY DETECTING THE ABILITY TO EXECUTE PROCESSING LOGIC AFTER A PARSER OR VALIDATION ERROR - In an embodiment of the invention, a method for error handling during document processing is provided. The method includes receiving a well-defined document as input to a computer program executing in memory of a computer, parsing the well-defined document and validating the well-defined document as conforming with a defined plan for the well-defined document, and responsive to detecting an error during parsing and validating, permitting use of the well-defined document to proceed notwithstanding the detected error if enough of the well-defined document conforms to the defined plan to satisfy programmatic input needs of the computer program, but otherwise terminating use of the well-defined document in the computer program. | 07-05-2012 |
20120173493 | METHOD AND APPARATUS FOR PROVIDING SAFEGUARDING AGAINST MALICIOUS ONTOLOGIES - A method for providing a mechanism for safeguarding against malicious ontologies may include causing examination of a received file associated with an ontology to determine a namespace marking for subjects, predicates and objects of each triple of the file that are to be stored in a database, utilizing relationship data corresponding to the namespace marking to identify triples whose subjects or objects do not correspond to the ontology, and determining whether the relationship data enables the triples whose subjects or objects do not correspond to the ontology to be considered as a valid data set for storage in the database. A corresponding apparatus and computer program product are also provided. | 07-05-2012 |
20120173494 | METHOD FOR DERIVING A HIERARCHICAL EVENT BASED DATABASE OPTIMIZED FOR PHARMACEUTICAL ANALYSIS - A computer implemented method for inferring a probability of a first inference absent from a database at which a query regarding the inference is received. Each datum of the database is conformed to the dimensions of the database. Each datum of the plurality of data has associated metadata and an associated key. The associated metadata includes data regarding cohorts associated with the corresponding datum, data regarding hierarchies associated with the corresponding datum, data regarding a corresponding source of the datum, and data regarding probabilities associated with integrity, reliability, and importance of each associated datum. The query is used as a frame of reference for the search. The database returns a probability of the correctness of the first inference based on the query and on the data. | 07-05-2012 |
20120179657 | HUMAN RESOURCES MANAGEMENT SYSTEM AND METHOD INCLUDING PERSONNEL CHANGE REQUEST PROCESSING - A method for processing personnel change requests (PCRs), the method including: identifying a wizard configured for a particular PCR of the PCRs, the PCR having multiple steps; invoking the wizard for guiding a user through first ones of the steps, the first ones of the steps prompting the user for a first set of data associated with the PCR; storing the first set of data in a temporary storage device; invoking the wizard for guiding another user through second ones of the steps, the second ones of the steps prompting the another user for a second set of data associated with the PCR; storing the second set of data in the temporary storage device; monitoring a status of the PCR; and transferring the first and second sets of data from the temporary storage device to a database in response to detecting a particular status of the PCR. | 07-12-2012 |
20120185441 | EFFICIENT DATA COLLECTION MECHANISM IN MIDDLEWARE RUNTIME ENVIRONMENT - A mechanism for efficient collection of data is described for runtime middleware environments. Two frequencies are used, a collection frequency (CF) to collect the data and an aggregation frequency (AF) to aggregate and persist the data in a repository. The collection cycle is a shorter time interval than the aggregation cycle. An agent residing in the container periodically collects a set of data upon every collection cycle from the components of the middleware system and caches the set of data locally. Upon every aggregation cycle, the agent applies an aggregation function to the collected set of data and persists the set of data into a repository after the aggregation function has been applied. The aggregation function is such that its resulting data represents the behavior of the runtime environment over the total duration of the aggregation cycle. | 07-19-2012 |
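As a rough illustration of the two-frequency scheme in the entry above, the sketch below samples a metric on every collection tick and aggregates and persists once per aggregation cycle. The `CollectionAgent` name, the simulated clock, the mean as the aggregation function, and the list-backed repository are all illustrative assumptions, not the patent's implementation.

```python
import statistics

class CollectionAgent:
    """Two-frequency collector sketch: sample at every collection tick,
    aggregate and persist once per aggregation cycle."""

    def __init__(self, collect_every, aggregate_every, source, repository):
        assert aggregate_every % collect_every == 0  # AF is a multiple of CF
        self.collect_every = collect_every
        self.aggregate_every = aggregate_every
        self.source = source          # callable returning the current metric value
        self.repository = repository  # list standing in for the persistent store
        self.cache = []               # locally cached samples for this cycle

    def tick(self, now):
        """Called once per time unit; collects a sample and, on cycle
        boundaries, aggregates the cached samples into one persisted value."""
        if now % self.collect_every == 0:
            self.cache.append(self.source())
        if now % self.aggregate_every == 0 and self.cache:
            # the aggregate (here: mean) summarizes the whole cycle
            self.repository.append(statistics.mean(self.cache))
            self.cache.clear()
```

With a collection cycle of 1 tick and an aggregation cycle of 4, twelve samples collapse into three persisted aggregates, each representing one full aggregation cycle.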
20120185442 | Write Failure Protection for Hierarchical Integrity Schemes - A method for data integrity protection includes arranging in an integrity hierarchy a plurality of data blocks, which contain data. The integrity hierarchy includes multiple levels of signature blocks containing signatures computed respectively over lower levels in the hierarchy, wherein the levels culminate in a top-level block containing a top-level signature computed over the hierarchy. A modification to be made in the data stored in a given data block is received. One or more of the signatures is recomputed in response to the modification, including the top-level signature. Copies of the given data block, and of the signature blocks, including a copy of the top-level block, are stored in respective locations in a storage medium. An indication that the copy is a valid version of the top-level block is recorded in the copy of the top-level block. | 07-19-2012 |
20120185443 | CONFIGURABLE FLAT FILE DATA MAPPING TO A DATABASE - Disclosed are a method and framework for mapping data from a data source to a data destination. The method comprises the step of providing a plurality of components for performing defined functions to map the data from the source to the destination. These plurality of components perform the steps of (i) reading data from the source, (ii) processing the read data according to a set of rules, and (iii) loading the processed data into the destination. Preferably the plurality of components perform the further steps of (iv) verifying the integrity of the read data, and (v) logging results into a file. Each of the components operates independently of the other of the components. | 07-19-2012 |
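The five pipeline steps named in the entry above (read, process by rules, load, verify, log) can be sketched as independent components. The CSV source, rule dictionary, list-backed "database", and JSON log line are assumptions for illustration only.

```python
import csv, io, json

# Independent components of a flat-file-to-database mapping pipeline.

def read_source(text):
    """(i) read rows from a flat file (CSV here)."""
    return list(csv.DictReader(io.StringIO(text)))

def verify(rows, required):
    """(iv) integrity check: keep only rows carrying all required fields."""
    return [r for r in rows if all(r.get(f) for f in required)]

def transform(rows, rules):
    """(ii) apply a set of per-field rules to each row."""
    return [{k: rules.get(k, lambda v: v)(v) for k, v in r.items()} for r in rows]

def load(rows, destination):
    """(iii) load processed rows into the destination."""
    destination.extend(rows)

def log_results(rows, logfile):
    """(v) log the outcome of the run."""
    logfile.write(json.dumps({"loaded": len(rows)}) + "\n")
```

Because each step is a separate function operating only on its inputs, any component can be replaced without touching the others, mirroring the claim that each component operates independently.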
20120191665 | Integrated Distribution Management System Channel Adapter - Disclosed are various embodiments for communicating with an integrated distribution management system (IDMS). An IDMS often employs a communications protocol that is incompatible with a service oriented architecture. Accordingly, embodiments of the disclosure can allow utility computing systems in a service oriented architecture or in a messaging based environment to communicate with an IDMS. | 07-26-2012 |
20120197848 | VALIDATION OF INGESTED DATA - Methods and systems for validating ingested data are disclosed. In accordance with the methods and systems, data elements can be received for storage in slots of an individual descriptor in a storage medium. In addition, at least one validation test can be selected based on a weighting of the data elements that indicates a respective degree of importance of the data elements. The selected validation test or tests can be applied to the data elements stored in the slots to generate respective validation results. Further, a validation score indicating a sufficiency of the stored data elements can be generated based on the validation results. | 08-02-2012 |
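A minimal sketch of the weighted-selection idea above: only slots whose importance weight clears a threshold are tested, and the validation score is the weighted pass rate. The 0.5 threshold, the test predicates, and the scoring formula are assumptions, not the patent's method.

```python
def validate(slots, weights, tests, threshold=0.5):
    """slots: {name: value}; weights: {name: importance in [0, 1]};
    tests: {name: predicate}. Slots weighted at or above the threshold
    are tested; the score is the weight of passing tests over the
    weight of all selected tests."""
    selected = {n: tests[n] for n, w in weights.items()
                if w >= threshold and n in tests}
    results = {n: bool(t(slots.get(n))) for n, t in selected.items()}
    total = sum(weights[n] for n in results)
    score = (sum(weights[n] for n, ok in results.items() if ok) / total
             if total else 0.0)
    return results, score
```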
20120197849 | RETRIEVING INFORMATION FROM A RELATIONAL DATABASE USING USER DEFINED FACETS IN A FACETED QUERY - A method, system and computer program product for retrieving information from a relational database using user defined facets in a faceted query may include receiving a faceted query and receiving at least one user defined facet group query. The method may also include filtering out facets in the faceted query that relate to metadata in the relational database. The method may additionally include associating each remaining facet in the faceted query with a corresponding user defined facet group query of the at least one user defined facet group query to provide a set of user defined facet groups. An SQL query may be generated for the faceted query using the set of user defined facet groups. Information from the relational database may be retrieved responsive to the SQL query. | 08-02-2012 |
20120197850 | SYSTEM AND METHOD FOR GENERATING DYNAMIC QUERIES - A first query is retrieved by a computing device. A second query is retrieved by the computing device, wherein the second query is linked to the first query. A derivative query is generated by the computing device based, at least in part, upon merging at least a portion of the second query with at least a portion of the first query, wherein generating the derivative query includes retrieving the first query and the second query prior to generation of the derivative query. The computing device determines whether the derivative query contains one or more conflicts. If it is determined that the derivative query contains one or more conflicts, the one or more conflicts in the derivative query are resolved by the computing device. | 08-02-2012 |
20120203743 | RE-ESTABLISHING TRACEABILITY - A traceability link establishing method and system. The method includes retrieving by a computing system, mapping data comprising data associating elements of a source model to elements of a target model. The computing system retrieves the target model and elements of the target model. The computing system processes an element of the elements. The computing system retrieves first traceability links from the element. The computing system processes the traceability links. The computing system retrieves supplier data associated with the traceability links. The supplier data comprises data associated with a first supplier. The computing system verifies if the supplier comprises a valid supplier. The computing system stores results of the verifying process. The results indicate if the supplier comprises a valid supplier. | 08-09-2012 |
20120203744 | MAINTAINING DATA INTEGRITY ACROSS EXECUTION ENVIRONMENTS - Current computing solutions often involve the sharing of data across multiple computer implemented processes. To ensure data integrity throughout the execution environment, an executing process can make a request for data from a Data Provider. In response to the request, the Data Provider can bundle the data and one or more Validation Objects in a Data Object. The Data Object can be passed between executing processes, and at any point in the execution, an executing process can verify the integrity of the data by making a request to the Data Object. To facilitate the passing of Data Objects throughout a heterogeneous execution environment, a Data Object can create a representation of itself specific to the target system. The Data Objects are advantageous in that all of the necessary validation checks are centralized, thus decreasing maintenance costs and the possibility of error. | 08-09-2012 |
20120221530 | METHOD AND APPARATUS FOR VERIFYING STORED DATA - According to one aspect of the present invention, there is provided a method of verifying stored data that is associated with an owner. The method comprises selecting stored data to verify, generating, for an item of the selected data, a unique key, associating the generated key with the corresponding data item and sending a communication to the owner associated with a selected data item, the communication including the generated key associated with that selected data item. The method further comprises receiving a response to the communication, the response identifying a key, determining from the response whether the data associated with the received key is valid; and associating the determination with the data in the database. | 08-30-2012 |
20120221531 | BOTTOM-UP OPTIMISTIC LATCHING METHOD FOR INDEX TREES - Methods, systems and computer program products for concurrency control in a hierarchical arrangement of nodes of a data structure by traversing a single search path in a hierarchical arrangement of nodes of a data structure, recording a version number for each node in the search path, identifying at least one node in the search path to be updated, latching the at least one node, reading a version number of the latched at least one node and comparing the recorded version number of the latched at least one node to the read version number of the latched at least one node. | 08-30-2012 |
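The two phases described above (latch-free traversal recording versions, then latching and re-checking before the update) can be sketched as follows. The `Node` class, the retry-on-mismatch convention, and latching only the leaf are illustrative assumptions about one plausible shape of an optimistic latching scheme.

```python
import threading

class Node:
    """Minimal index-tree node with a version counter and a latch."""
    def __init__(self, keys=None):
        self.keys = keys or []
        self.version = 0
        self.latch = threading.Lock()

def record_versions(path_nodes):
    """Phase 1: latch-free traversal notes each node's version."""
    return [(n, n.version) for n in path_nodes]

def try_insert(recorded, leaf, key):
    """Phase 2: latch the node to be updated and validate the recorded
    versions; a mismatch means a concurrent writer intervened and the
    caller must retry from the root."""
    with leaf.latch:
        if any(n.version != v for n, v in recorded):
            return False        # validation failed: retry
        leaf.keys.append(key)
        leaf.version += 1       # publish the change
        return True
```

The optimistic bet is that version mismatches are rare, so the common case pays for one leaf latch instead of latching the whole search path.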
20120221532 | INFORMATION APPARATUS - The present invention enables a unified way of accessing files generated by application programs configured to store contents in files in different formats, without using a conversion program. | 08-30-2012 |
20120233132 | METHODOLOGY TO ESTABLISH TERM CO-RELATIONSHIP USING SENTENCE BOUNDARY DETECTION - A method and system for splitting a text document into individual sentences using sentence boundary detection, and establishing co-relationships between terms which are present in the same sentence. A document corpus, or collection of text records, is provided, containing text with terms to be extracted. The text records in the document corpus are divided into individual sentences, using a set of rules for sentence boundary detection. The individual sentences are then analyzed to extract and correlate terms, such as parts and symptoms, symptoms and actions, or parts and failure modes. The correlated terms are then validated based on frequency of occurrence, with term pairs being considered valid if their frequency of occurrence exceeds a minimum frequency threshold. The validated term correlations can be used for fault model development, document classification, and document clustering. | 09-13-2012 |
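The pipeline above (sentence splitting, same-sentence term pairing, frequency-threshold validation) can be sketched briefly. The regex boundary rule and substring term matching are simplifications; the patent's actual boundary-detection rule set is richer.

```python
import re
from collections import Counter
from itertools import combinations

def correlate_terms(corpus, vocabulary, min_freq=2):
    """Split each record into sentences (simple rule: '.', '!' or '?'
    followed by whitespace), count vocabulary terms co-occurring in the
    same sentence, and keep pairs meeting the frequency threshold."""
    pairs = Counter()
    for record in corpus:
        for sentence in re.split(r"(?<=[.!?])\s+", record):
            found = sorted({t for t in vocabulary if t in sentence.lower()})
            for a, b in combinations(found, 2):
                pairs[(a, b)] += 1
    return {p: n for p, n in pairs.items() if n >= min_freq}
```

On a small maintenance-text corpus, only the part/symptom pair that recurs across sentences survives the threshold, which is the validation step the abstract describes.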
20120239629 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR VALIDATING ONE OR MORE METADATA OBJECTS - In accordance with embodiments, there are provided mechanisms and methods for creating, exporting, viewing and testing, and importing custom applications in a multitenant database environment. These mechanisms and methods can enable embodiments to provide a vehicle for sharing applications across organizational boundaries. The ability to share applications across organizational boundaries can enable tenants in a multi-tenant database system, for example, to easily and efficiently import and export, and thus share, applications with other tenants in the multi-tenant environment. | 09-20-2012 |
20120246119 | SCALABLE COMPUTER ARRANGEMENT AND METHOD - A scalable computer arrangement and method enables the accessing of stored information by utilizing algorithms. The validity of the algorithms and/or retrieved data are determined by a validity management module. The algorithm and/or the retrieved data may be updated, whereby self correction occurs dynamically over time with changing stored information. In another embodiment, the computer arrangement and method enable networked computer systems each including hyper objects employing embedded algorithms or rules for accessing information across the network in a standardized manner, even though the networked computer system databases may employ different schema and formats. | 09-27-2012 |
20120246120 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR VALIDATING ONE OR MORE METADATA OBJECTS - In accordance with embodiments, there are provided mechanisms and methods for creating, exporting, viewing and testing, and importing custom applications in a multitenant database environment. These mechanisms and methods can enable embodiments to provide a vehicle for sharing applications across organizational boundaries. The ability to share applications across organizational boundaries can enable tenants in a multi-tenant database system, for example, to easily and efficiently import and export, and thus share, applications with other tenants in the multi-tenant environment. | 09-27-2012 |
20120246121 | SYSTEM FOR DISPLAYING GRAPHICAL NARRATIONS - An online network collects a dataset of an individual's information through a computer-implemented method. An individual enters a dataset of the information and a plurality of an individual's life events into a computer system. The dataset is arranged and converted into a graphical representation for display. The online database receives and stores the dataset. The database associates the dataset with the plurality of the individual's life events and then the dataset and the life events are outputted into the graphical representation for display for a witness. | 09-27-2012 |
20120254126 | SYSTEM AND METHOD FOR VERIFYING CONSISTENT POINTS IN FILE SYSTEMS - According to one embodiment, in response to a request for verifying a first prime representing a consistent point of a file system of a storage system having a plurality of storage units, each of a plurality of prime segments collectively representing the first prime is examined to determine whether the corresponding prime segment has been previously verified. Each of the prime segments is stored in one of the storage units, respectively. At least a first of the prime segments that has not been previously verified is verified, without verifying a second of the prime segments that has been previously verified. The first prime, when at least the first prime segment has been successfully verified, can be used to construct the consistent point of the file system. | 10-04-2012 |
20120254127 | COMPUTER-IMPLEMENTED METHOD OF DETERMINING VALIDITY OF A COMMAND LINE - A method of determining a command line validity includes maintaining a block network address database including block network address information; receiving a command line from a terminal of a user; extracting network address information included in the command line; determining whether the network address information is the block network address information, with reference to the block network address database; generating log information associated with the command line in case that the network address information is not the block network address information as the result of the determination, in which the log information comprises at least one of the network address information included in the command line, input time point information with respect to the input time point of the command line, and request content information; recording the log information in a log database; and determining the validity of the command line by using the log information. | 10-04-2012 |
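A compact sketch of the flow above: extract network addresses from a command line, reject it if any address is in the block database, and otherwise record a log entry carrying the address, input time, and request content. The dotted-quad regex, the in-memory blocklist, and the log schema are assumptions for illustration.

```python
import re
from datetime import datetime, timezone

BLOCKLIST = {"203.0.113.7", "198.51.100.9"}  # example block network addresses

def check_command_line(command, log):
    """Extract IPv4 addresses from the command line; reject it if any is
    blocklisted, otherwise append a log entry for later validity analysis."""
    addresses = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", command)
    if any(a in BLOCKLIST for a in addresses):
        return False  # blocked: no log entry is generated
    log.append({
        "addresses": addresses,
        "input_time": datetime.now(timezone.utc).isoformat(),
        "request": command,
    })
    return True
```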
20120265734 | INCREMENTAL COMPILATION OF OBJECT-TO-RELATIONAL MAPPINGS - Aspects of the subject matter described herein relate to incrementally modifying schemas and mappings. In aspects, an indication of a change to a client schema is received and a compilation directive is received. The compilation directive may indicate how one or more entities or associations in the client schema are to be mapped to the store schema. In response to receiving the indication of the change and the compilation directive, mapping data and storage schema may be incrementally modified with incremental revalidation and incremental updating of query and update views. | 10-18-2012 |
20120265735 | METHODS AND APPARATUS TO GENERATE A TAG FOR MEDIA CONTENT - Example methods and apparatus to generate identifying tags for media content as described herein. An example method includes obtaining an identifier value associated with at least one of audio or video of received media content by at least one of: extracting the identifier value from at least one of the audio or the video or determining the identifier value based on inherent information of at least one of the audio or the video, generating a tag including the identifier value, and storing the tag with the media content to cause the tag to be distributed to a presentation location along with the media content. | 10-18-2012 |
20120284237 | METHODS AND SYSTEMS FOR VALIDATING INPUT DATA - Methods and systems for use in validating input data in a computing system. Input data associated with a destination software application, such as a database, is received at a computing system. The input data is forwarded to an intermediate software application, such as a web application. When the input includes one or more patterns, a query produced by the intermediate software application based on the input data is validated, such as by comparing the structure of the query to one or more expected query structures. If the validation succeeds, the query is forwarded to the destination software application. Otherwise, the query is discarded. | 11-08-2012 |
20120290542 | MANAGING LARGE DATASETS OBTAINED THROUGH A SURVEY-DATA-ACQUISITION PROCESS - The invention generally relates to enabling the management of survey data. One embodiment includes providing an upload description that describes characteristics of survey data to be uploaded, assigning a thread to process a group of files that store aspects of the survey data, dividing the file into data chunks, deriving from a given data chunk a corresponding data-integrity value and respectively associating the same with the given data chunk, communicating the data chunks to a remote storage device, utilizing the corresponding data-integrity values to ensure successful communication of the data chunk, and spatially storing the survey data such that it is retrievable upon a request that describes a geographic area of interest. | 11-15-2012 |
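The chunking-with-integrity-values step above can be sketched as a pair of functions: the sender derives a digest per chunk, and the receiver recomputes each digest to confirm successful communication. SHA-256 is an assumed choice of integrity value; the abstract does not name a specific digest.

```python
import hashlib

def make_chunks(data, chunk_size):
    """Divide a byte payload into chunks, each paired with a
    data-integrity value (a SHA-256 digest here)."""
    return [(data[i:i + chunk_size],
             hashlib.sha256(data[i:i + chunk_size]).hexdigest())
            for i in range(0, len(data), chunk_size)]

def verify_chunks(chunks):
    """Receiver side: recompute each digest to confirm the chunks
    arrived intact before storage."""
    return all(hashlib.sha256(c).hexdigest() == d for c, d in chunks)
```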
20120290543 | ACCOUNTING FOR PROCESS DATA QUALITY IN PROCESS ANALYSIS - A method and system comprising an analysis module to perform automated analysis of a process supported by a process system. The analysis is performed based on data quality information that indicates a quality of process data stored in datastores to facilitate performance of the process by consumption of the process data. The system may further include process model information defining a plurality of process activities, the automated analysis being based at least in part on the process model information. The data quality information may indicate the age, validity, completeness, integrity, consistency, and/or accuracy of the process data. The analysis may be to calculate a risk of failure of the process, or may be to diagnose a cause of process failure. | 11-15-2012 |
20120296876 | EVENT AUDITING FRAMEWORK - Various embodiments of systems and methods for an event auditing framework are described herein. The auditing framework includes one or more auditees, an auditor, and a memory associated with the auditor. Each auditee is associated with a digitally signed file including metadata of one or more events authorized for the auditee. The auditor validates the digital signature of the file when the auditee is registered with the auditor. After validation of the digital signature, the metadata of the authorized events is stored with respect to the auditee to enable the auditee to perform the authorized events. The auditing framework is expandable in that new event types can be added or updated dynamically. The auditing framework also ensures consistency of events. | 11-22-2012 |
20120296877 | FACILITATING DATA COHERENCY USING IN-MEMORY TAG BITS AND TAG TEST INSTRUCTIONS - Fine-grained detection of data modification of original data is provided by associating separate guard bits with granules of memory storing original data from which translated data has been obtained. The guard bits indicate whether the original data stored in the associated granule is protected for data coherency. The guard bits are set and cleared by special-purpose instructions. Responsive to attempting access to translated data obtained from the original data, the guard bit(s) associated with the original data is checked to determine whether the guard bit(s) fail to indicate coherency of the original data, and if so, discarding of the translated data is initiated to facilitate maintaining data coherency between the original data and the translated data. | 11-22-2012 |
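The guard-bit bookkeeping above can be modeled in a few lines: one bit per granule, set when a translation is made, cleared on any write into the granule, and tested before the translated data is used. The 64-byte granule size and the class shape are assumptions; the patent implements this with hardware tag bits and special-purpose instructions.

```python
GRANULE = 64  # bytes of original data covered by one guard bit (assumed size)

class GuardBits:
    """Per-granule guard bits over a region of original data. A set bit
    marks translations of that granule as coherent; a write into the
    granule clears the bit, so dependent translated data must be
    discarded."""
    def __init__(self, region_size):
        self.bits = [False] * ((region_size + GRANULE - 1) // GRANULE)

    def set_guard(self, offset):
        self.bits[offset // GRANULE] = True   # translation made: protect granule

    def on_write(self, offset):
        self.bits[offset // GRANULE] = False  # original data modified

    def is_coherent(self, offset):
        return self.bits[offset // GRANULE]   # test before using translated data
```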
20120310897 | ELECTRONIC DEVICE AND INFORMATION PROCESSING METHOD - According to one embodiment, an electronic device includes: a content processor configured to process content recorded in a recording medium; a first controller configured to, before the content processor terminates processing of the content, perform control such that status information indicating a processing status of the content recorded in the recording medium is changed from first information indicating a processing status of the content before start of the processing performed by the content processor to second information indicating a processing status of the content after termination of the processing performed by the content processor; and a second controller configured to, when the processing of the content performed by the content processor is not normally terminated, perform control such that the status information is changed from the second information to the first information. | 12-06-2012 |
20120310898 | SERVER AND METHOD FOR MANAGING MONITORED DATA - A method executed by a processor of a server sets device parameters and system parameters in relation to the server and one or more monitoring devices, and collects data from each of the one or more monitoring devices according to the set device parameters and system parameters. The collected data is stored into a first queue, and read at a specified time interval, and then stored into a database. Any abnormality in the operation of the monitoring devices is stored into a second queue and processed in real-time. | 12-06-2012 |
20120310899 | SYSTEM AND METHOD FOR EFFICIENT DATA EXCHANGE IN A MULTI-PLATFORM NETWORK OF HETEROGENEOUS DEVICES - A normalization engine, system and method provide normalization of and access to data between heterogeneous data sources and heterogeneous computing devices. The engine includes connectors for heterogeneous data sources, and conduits for gathering a customized subset of data from the data sources, as required by a software application with which the conduit is compatible. Working together, the connector and conduit may gather large amounts of data from multiple data sources and prepare a subset of the data that includes only that data required by the application, which is particularly advantageous for mobile computing devices. Further, the conduit may process the subset data in various formats to provide normalized data in a single format, such as a JSON-formatted REST web service communication compatible with heterogeneous devices. As an intermediary, the normalization engine may further provide caching, authentication, discovery and targeted advertising to mobile computing and other computing devices. | 12-06-2012 |
20120323858 | LIGHT-WEIGHT VALIDATION OF NATIVE IMAGES - One or more identifiers that facilitate efficient native image validation can be generated and stored in an auxiliary file upon pre-compiling of an assembly. The native image can be validated against an assembly from which the native image is generated, among other files that influence the generated contents of the native image, based upon the auxiliary file and included identifiers. Additionally, native image validation can be performed in an increasing cost sequence associated with each identifier included within the auxiliary file. | 12-20-2012 |
20130006947 | CONFLICT RESOLUTION VIA METADATA EXAMINATION - A provided computing device detects a synchronization conflict between two versions of a file and may examine corresponding metadata fields. The computing device may characterize a nature of a difference between metadata fields as immutable, mergeable, or subsumable. Core metadata fields may be defined such that a nature of a difference, or conflict, is categorized as immutable. Non-core metadata fields may be defined such that a nature of a difference, or conflict, is characterized as either mergeable or subsumable. A conflict between corresponding mergeable non-core metadata fields may be resolved by merging values of the corresponding non-core metadata fields. A conflict between corresponding subsumable non-core metadata fields may be resolved by replacing a value of a non-core metadata field of an older of the two versions of the file with a value of a corresponding non-core metadata field of a younger of the two versions of the file. | 01-03-2013 |
20130013571 | MANAGEMENT OF OBJECT MAPPING INFORMATION CORRESPONDING TO A DISTRIBUTED STORAGE SYSTEM - Systems and methods for managing mapping information for objects maintained in a distributed storage system are provided. The distributed storage system can include a keymap subsystem that manages the mapping information according to object keys. Requests for specific object mapping information are directed to specific keymap coordinators within the keymap subsystem. Each keymap coordinator can maintain a cache for caching mapping information maintained at various information sources. To manage the cache, the keymap system can utilize information placeholders that replace previously cached keymap information while a request to modify keymap information is being processed by the information sources. Each keymap coordinator can process subsequently received keymap information read requests in the event an information placeholder is cached as the current cached keymap information. | 01-10-2013 |
20130018848 | DETERMINING AND PRESENTING PROVENANCE AND LINEAGE FOR CONTENT IN A CONTENT MANAGEMENT SYSTEM (Velasco, Marc B.; Orange, CA, US) - Methods and apparatus, including computer program products, implementing and using techniques for determining provenance and lineage for content elements in a content management system. An option to track provenance and lineage data for the content element is provided in response to a content element being entered into a content management system. A provenance metadata attribute and a lineage metadata attribute are associated with the content element in response to selecting the option to track provenance and lineage data. An extent of difference is determined between the original content element and the changed content element in response to a change of content being made to the content element. The provenance metadata attribute is updated to reflect the determined extent of difference. It is determined what user changed the content element, and the lineage metadata attribute is updated to reflect the user's involvement in changing the content element. | 01-17-2013 |
20130018849 | MANAGEMENT OF TEMPORAL DATA BY MEANS OF A CANONICAL SCHEMA - Computer programs embodied in computer-readable media that can use canonical schemas to persist data from non-temporal tables, effective-time tables, assertion-time tables, and bitemporal tables, and that can enforce temporal integrity constraints on those tables, are provided. In one embodiment, the canonical schemas are used by database tables. In another embodiment, they are used by the physical files which persist data from those tables. Temporal metadata is used to express temporal requirements. Thus, uni-temporal, bitemporal, and temporally-enabled non-temporal tables can be generated without altering existing data models or designing temporal features into new data models. Support is also provided for managing temporal data that exists in future assertion time, and for using episodes to enforce temporal referential integrity. | 01-17-2013 |
20130024428 | METHOD AND SYSTEM FOR A FAST FULL STYLE SYSTEM CHECK USING MULTITHREADED READ AHEAD - A method for file system checking in a storage device. The method includes executing a computer system having a plurality of microprocessor cores, initiating a file system check operation by using a file system check agent that executes on the computer system and accesses a storage device, and validating a plurality of meta-data structures of the file system. The method further includes dividing and allocating the metadata structures among a plurality of worker threads. For each worker thread, data corresponding to the metadata structures is processed using a read ahead operation. The file system check is processed to completion, wherein the read ahead operation feeds data corresponding to the metadata structures to each of the plurality of worker threads in parallel. | 01-24-2013 |
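The divide-and-feed arrangement above can be sketched with a dedicated read-ahead thread pushing fetched metadata through a bounded queue to a pool of validating workers. The queue depth, worker count, and the `fetch`/`validate` callables are assumptions standing in for real I/O and metadata validation.

```python
import queue, threading

def parallel_check(metadata_ids, fetch, validate, workers=4):
    """Divide metadata structures among worker threads; a read-ahead
    thread fetches each structure's data and feeds workers through a
    bounded queue, so validation never blocks directly on I/O."""
    q, bad = queue.Queue(maxsize=8), []

    def read_ahead():
        for mid in metadata_ids:
            q.put((mid, fetch(mid)))  # prefetch data for the workers
        for _ in range(workers):
            q.put(None)               # one stop sentinel per worker

    def worker():
        while (item := q.get()) is not None:
            mid, data = item
            if not validate(data):
                bad.append(mid)       # list.append is atomic under CPython

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    reader = threading.Thread(target=read_ahead)
    reader.start()
    reader.join()
    for t in threads:
        t.join()
    return sorted(bad)
```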
20130046737 | SURVEY SYSTEM AND METHOD - A survey system of the present invention includes a mobile-computing device having a user interface for receiving survey data from respondents, a management server including a survey data validation module, and a terminal having a user interface. The mobile-computing device includes a survey application which encodes the survey data with at least one identifier and transmits the encoded survey data to the management server. The survey data validation feature of the management server analyzes the encoded survey data for validity and determines a probability of authenticity of the survey data. Statistical survey results, as well as the probability of authenticity of the survey data is presented to an end user through the user interface of the terminal. | 02-21-2013 |
20130054538 | INVALIDATING STREAMS IN AN OPERATOR GRAPH - Techniques are disclosed for invalidating, at one or more processing elements, data streams containing data tuples. A plurality of tuples is received via a data stream, whereupon the data stream is determined to be invalid based on at least one tuple in the plurality of tuples. The data stream is then invalidated, and a message is issued that causes one or more data streams included in the stream-based computing system and related to the invalidated data stream to also be invalidated. | 02-28-2013 |
20130073522 | METHOD AND DEVICE FOR PROCESSING FILES OF DISTRIBUTED FILE SYSTEM - A method and device for processing files of a distributed file system are disclosed, in which the method involves dividing a file into at least one data group according to the size of the file and determining first mapping information from the file to the at least one data group, in which each of the at least one data group includes content blocks and a verification block of the file, and determining second mapping information from each of the at least one data group to the data storage servers storing that data group. | 03-21-2013
20130080398 | METHOD AND SYSTEM FOR DE-IDENTIFICATION OF DATA WITHIN A DATABASE - A method and system for de-identification of one or more data elements inside one or more tables of one or more databases is disclosed. The method includes generating one or more de-identified data elements inside the one or more databases. Upon generating the one or more de-identified data elements, the one or more data elements are updated with the one or more de-identified data elements. The updating of the one or more data elements is directly performed inside the one or more tables of the one or more databases. | 03-28-2013 |
20130080399 | DYNAMICALLY REDIRECTING A FILE DESCRIPTOR - An apparatus for dynamically redirecting a file descriptor includes an identification module, a disassociation module, and an association module. The identification module identifies a first executing process using a second executing process. The first executing process may include a file descriptor and the first executing process may be independent of the second executing process. The disassociation module disassociates the file descriptor from a first data stream using the second executing process without involvement of the first executing process. The association module associates the file descriptor with a second data stream using the second executing process without involvement of the first executing process in response to the disassociation module disassociating the file descriptor from the first data stream. | 03-28-2013 |
20130080400 | SPECULATIVE EXECUTION IN A REAL-TIME DATA ENVIRONMENT - Techniques are described for speculatively executing operations on data in a data stream in parallel in a manner that increases the efficiency of the stream-based application. In addition to executing operations in parallel, embodiments of the invention may determine whether certain results produced by the parallel operations are valid results and discard any results determined to be invalid. | 03-28-2013 |
20130103651 | TELEMETRY FILE HASH AND CONFLICT DETECTION - In one embodiment, a server may identify an executable file using a hash identifier. | 04-25-2013
20130103652 | METHOD, PROGRAM, AND SYSTEM FOR SPECIFICATION VERIFICATION - A method, program, and system for specification verification. The method includes the steps of: (a) retaining a plurality of documents as groups of abstract documents that display values capable of indicating each metadata; (b) separating the group of abstract documents based on an input condition of an operation; (c) adding a new abstract document by using, based on an output condition, at least one operation within a group of operations; (d) separating the abstract documents according to overlapping ranges designated by the metadata; (e) unifying the abstract documents according to overlapping ranges designated by the metadata; (f) repeating steps (b) to (e) until a termination condition is satisfied; and (g) verifying whether an incomplete abstract document exists when the termination condition is satisfied. | 04-25-2013
20130117239 | Generating Information with Plurality of Files Enumerated Therein - A mechanism is provided for generating enumerated information in which a plurality of files is enumerated except entirely-invalidated files on a sequential medium. Management information for managing locations where the plurality of files on the sequential medium are recorded is acquired from the sequential medium. The enumerated information in which the plurality of files are enumerated is generated in an order according to the locations where the plurality of files are recorded on the basis of the acquired management information. | 05-09-2013 |
20130144843 | Online Data Fusion - An online data fusion system receives a query, probes a first source for an answer to the query, returns the answer from the first source, refreshes the answer while probing an additional source, and applies fusion techniques on data associated with an answer that is retrieved from the additional source. For each retrieved answer, the online data fusion system computes the probability that the answer is correct and stops retrieving data for the answer after gaining enough confidence that data retrieved from the unprocessed sources are unlikely to change the answer. The online data fusion system returns correct answers and terminates probing additional sources in an expeditious manner without sacrificing the quality of the answers. | 06-06-2013 |
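The early-termination idea in this abstract (stop probing once unprocessed sources cannot change the answer) can be illustrated with a simple vote-based stand-in for the probabilistic model. `fuse_online` and the majority rule are assumptions for illustration only.

```python
# Sketch: probe sources one at a time and stop once the leading answer
# cannot be overturned by the sources not yet probed.
from collections import Counter

def fuse_online(sources):
    votes = Counter()
    remaining = len(sources)
    for probe in sources:
        votes[probe()] += 1
        remaining -= 1
        (top, top_count), *rest = votes.most_common()
        runner_up = rest[0][1] if rest else 0
        if top_count - runner_up > remaining:
            # No unprocessed source can change the answer: stop early.
            return top
    return votes.most_common(1)[0][0]
```

With three sources agreeing on "A" out of four total, the fourth source is never probed, since its single vote cannot overturn a three-vote lead.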
20130166513 | METHODS FOR ANALYZING A DATABASE AND DEVICES THEREOF - A method, non-transitory computer readable medium, and apparatus for analyzing a database include obtaining SQL code defining one or more databases, each including a plurality of objects, wherein the SQL code is stored on one or more database servers. Defects in the SQL code are identified by applying a plurality of rules to the SQL code. Information regarding each identified defect is stored. The information regarding each identified defect is selectively provided to one or more defect closing interface modules. | 06-27-2013
20130198145 | TRACKING CHANGES RELATED TO A COLLECTION OF DOCUMENTS - Changes to a collection of documents are tracked by generating content information for the collection of documents identifying initial content within the collection of documents and assigning an indicator a value indicating absence of changes to the collection of documents. A change to the collection of documents is detected and the value of the indicator is adjusted in accordance with the detected change to indicate an amount of the initial content within the modified collection of documents. | 08-01-2013 |
20130198146 | MANAGING LARGE DATASETS OBTAINED THROUGH A SURVEY-DATA-ACQUISITION PROCESS - The invention generally relates to enabling the management of survey data. One embodiment includes providing an upload description that describes characteristics of survey data to be uploaded, assigning a thread to process a group of files that store aspects of the survey data, dividing the file into data chunks, deriving from a given data chunk a corresponding data-integrity value and respectively associating the same with the given data chunk, communicating the data chunks to a remote storage device, utilizing the corresponding data-integrity values to ensure successful communication of the data chunk, and spatially storing the survey data such that it is retrievable upon a request that describes a geographic area of interest. | 08-01-2013 |
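The chunk-plus-integrity-value step in this abstract can be sketched as below. The chunk size and the choice of SHA-256 as the data-integrity value are assumptions; the application does not specify either.

```python
# Sketch: divide a file's bytes into chunks and attach a
# data-integrity value to each, then re-derive the values to
# confirm successful communication.
import hashlib

def make_chunks(data: bytes, chunk_size: int = 4):
    chunks = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        chunks.append((chunk, hashlib.sha256(chunk).hexdigest()))
    return chunks

def verify_chunks(chunks):
    # Recompute each integrity value and compare against the stored one.
    return all(hashlib.sha256(c).hexdigest() == h for c, h in chunks)
```

A receiver that recomputes a mismatched value for any chunk can request retransmission of just that chunk rather than the whole file.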
20130204845 | SYSTEMS AND METHODS OF STORING AND MANAGING CONFIGURATION DATA IN TELECOMMUNICATIONS SYSTEMS AND DEVICES - Systems and methods of storing and managing data, such as configuration data, in telecommunications systems and devices. The data are stored as objects, each data object having an associated type, and each object type having at least one instance of the data object. Each instance of each data object has a primary key field, which identifies that instance of the data object. Each instance of each data object can have zero or more foreign key fields, each of which can be used to make reference to the primary key of at least one other data object. By employing at least the foreign key fields and the primary keys of the respective data objects, various referential relationships, branching referential relationships, and many-to-many relationships among one or more groups of the object types can be defined and maintained, for use in storing and/or managing the data with increased flexibility and efficiency. | 08-08-2013 |
20130212072 | Generating and Utilizing a Data Fingerprint to Enable Analysis of Previously Available Data - According to one embodiment of the present invention, a system analyzes data in response to detecting occurrence of an event, and includes a computer system including at least one processor. The system maps fields between the data and a fingerprint definition identifying relevant fields of the data to produce a fingerprint for the data. The data is deleted after occurrence of the event. The produced fingerprint is stored in a data repository, and retrieved in response to detection of the event occurrence after the data has been deleted. The system analyzes the retrieved fingerprint to evaluate an impact of the event on corresponding deleted data. Embodiments of the present invention further include a method and computer program product for analyzing data in response to detecting occurrence of an event in substantially the same manner described above. | 08-15-2013 |
20130212073 | Generating and Utilizing a Data Fingerprint to Enable Analysis of Previously Available Data - According to one embodiment of the present invention, a system analyzes data in response to detecting occurrence of an event, and includes a computer system including at least one processor. The system maps fields between the data and a fingerprint definition identifying relevant fields of the data to produce a fingerprint for the data. The data is deleted after occurrence of the event. The produced fingerprint is stored in a data repository, and retrieved in response to detection of the event occurrence after the data has been deleted. The system analyzes the retrieved fingerprint to evaluate an impact of the event on corresponding deleted data. Embodiments of the present invention further include a method and computer program product for analyzing data in response to detecting occurrence of an event in substantially the same manner described above. | 08-15-2013 |
20130218845 | WEB-BASED COLLABORATION FOR EDITING ELECTRONIC DOCUMENTS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, include sending a first rich internet application over a data network from a web server to a first client device and to a second client device. The web server is adapted to allow multiple client devices to collaboratively access one or more electronic documents formatted for any one of multiple different applications, including the first rich internet application. A first electronic document, which includes multiple document elements, is sent from the web server to the first client device and to the second client device. A document update received from the first client device includes identifications of one or more of the document elements and a requested action with respect to the one or more identified document elements. The received document update is verified to determine whether the requested action complies with the document schema and whether the first client device is authorized to initiate the requested action. One or more updated document elements for the first electronic document is generated based at least in part on the verified document update and automatically sent to the second client device over the data network. | 08-22-2013 |
20130218846 | MANAGING ENTERPRISE DATA QUALITY USING COLLECTIVE INTELLIGENCE - An embodiment of the invention is directed to a method associated with a data processing system disposed to receive and process enterprise data. Responsive to receiving a specified data element, the method determines a data type to be used for the specified data element. The method selectively determines a confidence level of the specified data element, and selects a plurality of subject matter experts (SMEs), wherein the data type of the specified data element is used in selecting each SME. A request is dispatched to each of the SMEs to selectively revise and validate the specified data element. The specified data element is then updated in accordance with each revision provided by an SME in response to one of the requests. | 08-22-2013 |
20130226877 | COMPUTER PROGRAM AND MANAGEMENT COMPUTER - To analyze an event of high importance as quickly as possible with as small a memory size as possible, a management server (A) detects an event related to a problem that has occurred in a predetermined management object, (B) determines, when a plurality of the events are detected, an event importance of each of the plurality of events, (C) executes an on-demand expansion for generating, in the causality information, a predetermined causality, based on a topology and an event propagation model, in descending order from the event determined in (B) as having the highest event importance, (D) records that the detected event has occurred relative to the predetermined causality, and (E) analyzes the detected event by using the predetermined causality. | 08-29-2013
20130254168 | Data Integrity Validation - Computer-implemented systems are disclosed for searching within a database, providing searching and scoring of exact and non-exact matches of data from a plurality of databases to validate data integrity. Embodiments are described relating to novel systems and methods for validating data. The embodiments create a "consensus value" for various items of data based on information shared by different entities, whose separate data can be used for this purpose whilst maintaining its confidentiality from other entities, who may be business competitors and/or who for various reasons should preferably not be given access to the data. Use of consensus value validation provides significant advantages over today's methodology of reliance on outside data vendors to provide purportedly fact-checked clean data. | 09-26-2013
20130254169 | Fast Component Enumeration in Graphs with Implicit Edges - A method and system for graphical enumeration. The method includes creating an ordered set of vertices for a graph such that each vertex is associated with a corresponding index, and wherein each vertex in the ordered set of vertices includes information. A plurality of keys is created for defining the information. A plurality of lists of vertices is created, each of which is associated with a corresponding key such that vertices in a corresponding list include information associated with the corresponding key. For a first list of vertices, a least valued index is determined from a group of associated vertices based on vertices in the first list and vertices pointed to by the vertices in the first list. Also, all associated vertices are pointed to a root vertex associated with the least valued index. | 09-26-2013 |
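The "point all associated vertices at the root with the least-valued index" step in this abstract is essentially union-find with a least-index root rule. The sketch below is a hypothetical illustration of that step, not the application's method for implicit edges.

```python
# Sketch: union-find where the merged set's root is always the
# vertex with the least-valued index.
def components(n, edges):
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            # Keep the least-valued index as the root of the merged set.
            lo, hi = min(ra, rb), max(ra, rb)
            parent[hi] = lo

    # Each vertex's component label is its least-valued reachable index.
    return [find(v) for v in range(n)]
```

Labeling each component by its minimum index makes the enumeration deterministic regardless of the order in which edges are processed.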
20130262397 | SECURE AND RELIABLE REMOTE DATA PROTECTION - A virtual file system may be used to determine a data file, and a splitter may then split the data file into at least a first portion and a second portion, and may provide a parity file using the first portion and the second portion. Any two of the first portion, the second portion, and the parity file include sufficient information to reconstruct the data file. A dispatcher may then distribute the first portion, the second portion, and the parity file for individual storage thereof using at least three separate storage locations. | 10-03-2013 |
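The "any two of the first portion, the second portion, and the parity file" property described here is exactly what a simple XOR parity provides. The sketch below is a minimal illustration under the assumption of equal-length halves with zero padding; the application does not specify the parity scheme.

```python
# Sketch: split a file into two portions plus an XOR parity file,
# so that any two of the three reconstruct the third.
def split_with_parity(data: bytes):
    if len(data) % 2:
        data += b"\x00"  # pad so both portions have equal length
    half = len(data) // 2
    p1, p2 = data[:half], data[half:]
    parity = bytes(a ^ b for a, b in zip(p1, p2))
    return p1, p2, parity

def recover_portion(known: bytes, parity: bytes):
    # XOR-ing a surviving portion with the parity yields the lost one.
    return bytes(a ^ b for a, b in zip(known, parity))
```

Dispatching the three pieces to three separate storage locations then tolerates the loss of any single location.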
20130262398 | SCATTER GATHER LIST FOR DATA INTEGRITY - A system and method for improving message passing between a computer and peripheral devices is disclosed. The system and method for improving message passing between a computer and peripheral devices incorporate data checking on the command/message data and each scatter gather list element. The method in accordance with the present disclosure enables a peripheral device to check the integrity of the message and ownership of the scatter gather list element before the data is processed. | 10-03-2013 |
20130262399 | MANAGING TEST DATA IN LARGE SCALE PERFORMANCE ENVIRONMENT - A method of processing a database can include comparing, using a processor, a delta file with a risk assessment criterion, wherein the delta file is generated from a first schema and a second and different schema, assigning a risk level to a change specified within the delta file according to the comparing, and applying the change of the delta file to a test database conforming to the first schema according to the assigned risk level. | 10-03-2013 |
20130262400 | DATA INDEX QUERY METHOD, APPARATUS AND SYSTEM - Embodiments of the present invention disclose a data index query method including: after performing Gray encoding on an index attribute, shuffling and encoding, by a server side, a Gray code corresponding to the index attribute to generate at least one index key value and storing the index key value; generating, by the server side according to query condition information carried in a query request, an index key value set or interval corresponding to the query condition information; obtaining an indicator set or interval used for indicating data and corresponding to the index key value set or interval according to the index key value set or interval; generating an intermediate data set corresponding to the indicator set or interval; and finally obtaining, from the intermediate data set, a target data set corresponding to the query condition information according to the query condition information carried in the query request. | 10-03-2013 |
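The Gray encoding referenced in this abstract is, in its standard binary-reflected form, a one-line transform; the subsequent shuffling step of the application is omitted in this sketch.

```python
# Sketch: standard binary-reflected Gray encoding of an integer
# index attribute, and its inverse.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Gray encoding is attractive for index key values because adjacent attribute values differ in exactly one bit, which keeps nearby values close in the encoded key space.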
20130275389 | Verification of Status Schemas based on Business Goal Definitions - Methods, systems, and computer-readable storage media for evaluating a validity of a status and action management (SAM) schema. In some implementations, actions include receiving the SAM schema, the SAM schema being stored as a computer-readable document in memory, providing one or more goals, each goal representing an intention of the SAM schema, the one or more goals being provided in a computer-readable document stored in memory and including one or more primary goals and one or more recovery goals that each express an intention of a process underlying the SAM schema, and processing the one or more goals using a computer-executable model checking tool for evaluating the validity of the SAM schema. | 10-17-2013 |
20130275390 | ERASURE CODED STORAGE AGGREGATION IN DATA CENTERS - Embodiments of erasure coded storage aggregation are disclosed. The erasure coded storage aggregation includes storing a data file as erasure coded fragments in a plurality of nodes of one or more data centers. The erasure coded storage aggregation further includes monitoring an access frequency of the data file. Based on the comparison between the access frequency and a predetermined threshold, the data file is either reconstructed from the erasure coded fragments and stored in a storage node or retained as erasure coded fragments in the plurality of nodes of the one or more data centers. | 10-17-2013 |
20130290270 | METHOD AND SYSTEM OF DATA EXTRACTION FROM A PORTABLE DOCUMENT FORMAT FILE - In one exemplary embodiment, a computer-implemented method includes receiving a portable document format (PDF) file. A text element file is generated. The text element file includes a text element of the PDF file and a coordinate location of the text element. A document type of the PDF file is determined. A property file is selected according to the document type of the PDF file. The property file includes at least one property. The property includes a definition of a data element to be extracted from the PDF file. The property includes a definition of a data element value as well. The property includes a rule for locating the data element value relative to the data element. The data element and the data element value are extracted from the text element file according to the property. | 10-31-2013
20130290271 | ASYNCHRONOUS SERIALIZATION FOR AGGREGATING PROCESS RESULTS - In one embodiment, a system includes logic adapted for receiving a first request to change a state of a first group of catalogs, determining which of a plurality of catalogs belong in the first group, adding a change request for each of the first group of catalogs to a queue for processing, causing processing of each change request in the queue to change the state of each of the first group of catalogs according to the first request, creating a first group result indicating successful or failed state change upon a catalog in the first group of catalogs finishing processing, passing the first group result to an adjacent catalog in the first group of catalogs, removing each catalog that has finished processing from the first group of catalogs, and outputting the group result when there are no adjacent catalogs available to pass the group result. | 10-31-2013 |
20130290272 | DETERMINING AND STORING AT LEAST ONE RESULTS SET IN A GLOBAL ONTOLOGY DATABASE FOR FUTURE USE BY AN ENTITY THAT SUBSCRIBES TO THE GLOBAL ONTOLOGY DATABASE - Determining and storing at least one validated results set in a global ontology database for future use by an entity that subscribes to the global ontology database. If global ontology data is stored in a global ontology database, attempt to determine a mapping between first and second ontologies. If a mapping between the first and second ontologies can be determined from the global ontology data, the mapping is validated and the validated mapping is defined as a validated results set. If global ontology data is not stored in a global ontology database, or a mapping between the first and second ontologies cannot be determined from global ontology data stored in the global ontology database, the first and second ontologies are unified by determining a mapping between the first and second ontologies, the mapping is validated, and the validated mapping is defined as a validated results set. The validated results set is stored in the global ontology database for future use by an entity that subscribes to the global ontology database. | 10-31-2013
20130290273 | METHOD FOR UPDATING AN ENCODED FILE - The invention relates to a method for updating data of an encoded file from a remote server, said encoded file being stored in a secure device, characterized in that it comprises step a): sending a message to said secure device; step b): decoding the encoded file to update; and step c): locating a target data and performing an operation upon said target data, said message comprising configuration data and a data block. | 10-31-2013
20130297567 | DATA STREAM QUALITY MANAGEMENT FOR ANALYTIC ENVIRONMENTS - According to one aspect of the present disclosure, a system and technique for data quality management is disclosed. The system includes a processor and an ingress quality specification (IQS) module executable by the processor in a runtime environment with a data stream analytic module. The IQS module is configured to: receive the data stream; analyze a subset of data of the data stream to determine if the subset of data meets a quality expectation of the analytic module; annotate the subset of data to indicate a quality status based on whether the subset of data meets the quality expectation of the analytic module; and output the data stream to the analytic module. | 11-07-2013 |
20130304708 | Method and System for Distributed Data Verification - A method and system of verifying data stored in a database, by polling one or more computing devices. A server generates a poll object for a data item and a poll notification is transmitted to the one or more computing devices, whereupon users of the computing devices may respond to the poll notification and transmit responses. A set of response notifications is received and the server determines if the set of response notifications satisfies a quorum criterion. If the quorum criterion is satisfied, the server determines a data verification result, based on a tally criterion. | 11-14-2013 |
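The quorum-then-tally flow of this abstract can be sketched as below. The specific quorum and tally criteria (a minimum response count and a strict majority) are assumptions; the application leaves both criteria open.

```python
# Sketch: collect poll responses, require a quorum, then tally
# to produce a data verification result (or None if inconclusive).
from collections import Counter

def verify_by_poll(responses, quorum: int):
    if len(responses) < quorum:
        return None  # quorum criterion not satisfied
    tally = Counter(responses)
    value, count = tally.most_common(1)[0]
    # Tally criterion here: a strict majority of received responses.
    return value if count * 2 > len(responses) else None
```

Returning `None` for both a missed quorum and a split tally lets the server distinguish "unverified" from a positive verification result.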
20130304709 | META-CONFIGURATION OF PROFILES - Disclosed are methods for creating, applying, using and retrieving profile information that includes attributes that may be stored separately from, or with, the content to which the profiles are being applied. In this manner, profiles can be shared in various environments and across various applications. Attributes that have corresponding attributes in other content can be applied to the other content, as long as each of the attributes is valid. In computer aided design applications, the profile can be stored in a profile repository embedded within the CAD model. In addition, profile controllers are disclosed which control the attributes of a profile that can be used with selected content and other content and send a notification that a profile is available for use by other content. | 11-14-2013 |
20130318048 | TECHNIQUES TO MODIFY FILE DESCRIPTORS FOR CONTENT FILES - Techniques to modify file descriptors for content files are described. An apparatus may comprise a processor circuit and a file descriptor application operative on the processor circuit to manage file descriptors for content files, the file descriptor application arranged to generate a file descriptor for a content file in accordance with a universal file descriptor model, the universal file descriptor model to comprise a file descriptor surface with multiple file descriptor tiles to present corresponding content parts from the content file, with at least one of the file descriptor tiles defining a content part class representing homogeneous content parts from heterogeneous content file types. The file descriptor application may also comprise a file descriptor editor component arranged to allow modifications to the file descriptor. Other embodiments are described and claimed. | 11-28-2013 |
20130318049 | PARTIAL SOURCE VERIFICATION OF EDC DATA - Systems, methods, and other embodiments associated with partial source verification are described. In one embodiment, a method includes selecting, from a corpus of records, a set of records that includes fewer records than the corpus, where each record corresponds to an instance of an electronic form that records information about a given subject. The set of records is provided for source verification. | 11-28-2013 |
20130325815 | METHOD AND APPARATUS FOR MANAGING AND VERIFYING CAR TRAVELING INFORMATION, AND SYSTEM USING THE SAME - The present disclosure provides a method and apparatus for managing and verifying car traveling information, and a system using the same. The method for managing car traveling information includes receiving traveling image data and traveling record data; extracting computation data for integrity computation from at least one of the traveling image data and the traveling record data; generating integrity verification data by computing predetermined identification number data and the computation data; and generating integrity traveling data by combining the traveling image data, the traveling record data and the integrity verification data. In this way, integrity of an image from a black box for cars can be easily verified while maintaining an original copy of the image and related traveling record data. | 12-05-2013 |
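The "computing predetermined identification number data and the computation data" step of this abstract resembles a keyed integrity tag. The sketch below uses HMAC-SHA256 as an assumed instantiation; the application does not name the computation.

```python
# Sketch: derive integrity verification data by combining an
# identification number with computation data extracted from the
# traveling image/record data, then verify it later.
import hashlib
import hmac

def integrity_tag(identification_number: bytes, computation_data: bytes) -> str:
    return hmac.new(identification_number, computation_data,
                    hashlib.sha256).hexdigest()

def verify(identification_number: bytes, computation_data: bytes, tag: str) -> bool:
    expected = integrity_tag(identification_number, computation_data)
    # Constant-time comparison avoids leaking tag prefixes.
    return hmac.compare_digest(expected, tag)
```

Any alteration of the traveling data after the tag is generated makes verification fail, while the original image and record data remain untouched.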
20130332423 | DATA LINEAGE TRACKING - A data lineage tracking system may include a memory storing a module comprising machine readable instructions to obtain trace log entries representing an interaction with, a manipulation of, and/or a creation of a data value. The data lineage tracking system may further include machine readable instructions to select the trace log entries that are associated with commands performed by an application, cluster similar trace log entries from the selected trace log entries, and analyze mappings between the clustered trace log entries to determine data lineage flow associated with the data value. | 12-12-2013 |
20130332424 | CENTRALIZED READ ACCESS LOGGING - Systems and methods are disclosed for creating a read-access log. A business application may send a request for data to a backend system using a communication protocol. At the backend system, the request may be observed and a determination made as to whether the request for data is log-relevant. The determination may be based on a log configuration record associated with the business application making the request. A record may be written in a read-access log when it is determined that the request for data is log-relevant. The log record may include information used to map entity information from the retrieved data to a semantic entity. | 12-12-2013 |
20130332425 | ENHANCING CONTENT MEDIATED ENGAGEMENT - According to an aspect of the present invention, a content server enhances content mediated engagements, by first enabling a user to specify a content collection containing a set of contents according to a specific/desired sequence, and then storing a data indicating the collection. The set of contents are selected from contents (or portions thereof) maintained in a repository. In response to receiving during a content mediated engagement, a request of the stored content collection, the content server then provides the set of contents according to the specific sequence. The content server also facilitates the same content (maintained in repository) to be included and accordingly provided as part of different content collections. | 12-12-2013 |
20130339311 | INFORMATION RETRIEVAL AND NAVIGATION USING A SEMANTIC LAYER - Systems and methods for information retrieval are provided that permit users and/or processing entities to access and define synthetic data, synthetic objects, and/or synthetic groupings of data in one or more collections of information. In one embodiment, data access on an information retrieval system can occur through an interpretation layer which interprets any synthetic data against data physically stored in the collection. Synthetic data can define virtual data objects, virtual data elements, virtual data attributes, virtual data groupings, and/or data entities that can be interpreted against data that may be stored physically in the collection of information. The system and methods for information retrieval can return results from the one or more collections of information based not only on the data stored, but also on the virtual data generated from interpretation of the stored data. | 12-19-2013 |
20130339312 | Inter-Query Parallelization of Constraint Checking - A plurality of operations are executed on tables of a database, with at least a portion of the operations being executed in parallel. A constraint check is performed for each operation subsequent to its execution to determine whether data stored in the database affected by the operation is valid; during this constraint checking, additional operations and/or constraint checks on the same table are allowed to run in parallel. Based on this constraint checking, operations for which the constraint check determines that the data is not valid are invalidated. Related apparatus, systems, techniques and articles are also described. | 12-19-2013
20140006358 | CREATION AND REPLAY OF A SIMULATION WORKLOAD USING CAPTURED WORKLOADS | 01-02-2014 |
20140006359 | LOCATING AMBIGUITIES IN DATA | 01-02-2014 |
20140019421 | Shared Architecture for Database Systems - Systems, methods and computer-readable mediums are disclosed for a shared hardware and architecture for database systems. In some implementations, one or more source databases in a data warehouse can be backed up to one or more backup databases on network storage. During normal operating conditions, the backup databases are continuously updated with changes made to their corresponding source databases and metadata information for the database backup copies and database backup information are stored in a centralized repository of the system. When a source database fails (failover), the source database is replaced by its corresponding backup database on the network storage and the source database node (e.g., a server computer) is replaced by a standby node coupled to the network storage. | 01-16-2014 |
20140019422 | ENCODED DATA PROCESSING - Techniques are provided for encoded data processing which allows for continuous data processing as encoded data changes. Data is decomposed into one or more blocks with each block containing at least one data record. At least one data record within a given block is encoded with a first encoding process selected from one or more encoding processes. The first encoding process is associated with the given data block. Techniques evaluate whether or not to implement an encoding change for a given block when updating a given data record in the given block. Responsive to the evaluation, the given block is re-encoded with a second encoding process. Responsive to the re-encoding, the association of the given block is updated. A map is formed to convert the given data record encoded with the first encoding process to the second encoding process so as to preserve comparative relationships of the given data record. | 01-16-2014 |
20140032503 | SYSTEM AND METHOD FOR SENDING AND/OR RECEIVING DIGITAL CONTENT BASED ON A DELIVERY SPECIFICATION - A plurality of users may interact with a content distribution system in order to share digital media content. The system may receive, store, and/or publish a delivery specification that includes requirements relating to digital content that a first user wishes to receive. The delivery specification for the digital content may include one or more requirements of the digital content to be received. A second user who wishes to provide the digital content may access the delivery specification. The system provides for flexible validation of the media content from the second user. For example, validation may occur at a device of the first user, at a device of the second user, and/or at a device of the content distribution system. Upon validation of the media content from the second user, the system may facilitate transfer of the media content from the second user to the first user. | 01-30-2014 |
20140040212 | STORAGE CONTROL GRID AND METHOD OF OPERATING THEREOF - There is provided a storage control grid capable of controlling at least one service provided in the storage system, and a method of operating the same. The storage control grid comprises at least one service dispatcher operatively coupled to at least one service requestor and to a plurality of service providers. The method comprises requesting, by the service requestor, a service, thus giving rise to at least one service request; and enabling, using said at least one service dispatcher, delivery of the service request to at least one service provider among said plurality of service providers, said service provider configured to provide said at least one service, wherein the delivery is enabled in accordance with data comprised in a service data structure handled by said at least one service dispatcher and indicative, at least, of an association between said at least one service and service providers among said plurality of service providers. | 02-06-2014 |
20140046908 | ARCHIVAL DATA STORAGE SYSTEM - A cost-effective, durable and scalable archival data storage system is provided herein that allows customers to store, retrieve and delete archival data objects, among other operations. For data storage, in an embodiment, the system stores data in a transient data store and provides a data object identifier that may be used by subsequent requests. For data retrieval, in an embodiment, the system creates a job corresponding to the data retrieval and provides a job identifier associated with the created job. Once the job is executed, the retrieved data is provided in a transient data store to enable customer download. In various embodiments, jobs associated with storage, retrieval and deletion are scheduled and executed using various optimization techniques such as load balancing, batch processing and partitioning. Data is redundantly encoded and stored in self-describing storage entities, increasing reliability while reducing storage costs. Data integrity is ensured by integrity checks along data paths. | 02-13-2014 |
20140046909 | DATA STORAGE INTEGRITY VALIDATION - Embodiments of the present disclosure are directed to, among other things, validating the integrity of received and/or stored data payloads. In some examples, a storage service may perform a first partitioning of a data object into first partitions based at least in part on a first operation. The storage service may also verify the data object, by utilizing a verification algorithm, to generate a first verification value. In some cases, the storage service may additionally perform a second partitioning of the data object into second partitions based at least in part on a second operation. The second partitions may be different from the first partitions. Additionally, the archival data storage service may verify the data object using the verification algorithm to generate a second verification value. Further, the storage service may determine whether the second verification value equals the first verification value. | 02-13-2014 |
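The two-pass verification in the entry above can be sketched in Python. SHA-256 stands in for the unspecified verification algorithm, and the partition sizes are illustrative:

```python
import hashlib

def digest_by_partition(data: bytes, part_size: int) -> str:
    """Verify a data object by hashing it one partition at a time.

    Because the hash runs over the full byte stream, the resulting
    verification value is independent of how the object was partitioned.
    """
    h = hashlib.sha256()
    for start in range(0, len(data), part_size):
        h.update(data[start:start + part_size])
    return h.hexdigest()

obj = b"archival payload " * 1000

# First operation: partition into 4 KiB parts (e.g. on upload).
v1 = digest_by_partition(obj, 4096)
# Second operation: a different partitioning (e.g. on later re-validation).
v2 = digest_by_partition(obj, 1 << 20)
```

If `v1 != v2`, the stored copy no longer matches the originally received payload, regardless of the partitioning used in each pass.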
20140059011 | AUTOMATED DATA CURATION FOR LISTS - A processor-implemented method, system, and/or computer program product identifies errant data in an initial data list. An initial data list is composed of multiple data entries, where each of the data entries is associated with a parent hypernym from a group of multiple parent hypernyms. The parent hypernym describes a common attribute of data entries in the initial data list that have a same parent hypernym. A plurality parent hypernym is identified as a parent hypernym that is common to more data entries in the initial data list than any other parent hypernym. Any datum entry in the initial data list that is not associated with the plurality parent hypernym is then flagged for eviction from the initial data list. | 02-27-2014 |
20140067769 | STRING SUBSTITUTION APPARATUS, STRING SUBSTITUTION METHOD AND STORAGE MEDIUM - A method includes: unifying plural types of substitution tables, in each of which a substitution source string and a substitution destination string are mapped to each other, into a single substitution table; constructing a prefix tree incorporating the substitution source strings registered in the single substitution table, such that the string represented by the characters of the labels assigned to the branches on the route from the root node to a certain node is identical to a registered substitution source string; adding failure links, each directing from a first node to a second node, for all nodes included in the prefix tree under a certain condition; and searching for the substitution source strings included in the target string by repeating migration between nodes in the prefix tree based on a certain condition, recording the identification information assigned to the node before each migration. | 03-06-2014 |
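The automaton described above is essentially Aho-Corasick matching. A minimal Python sketch, assuming the unified table maps source strings to destination strings and that substitution replaces the shortest leftmost match (a real implementation might prefer longest-match):

```python
from collections import deque

def build_automaton(table):
    """Prefix tree over the substitution source strings, plus failure
    links added in BFS order so the search can migrate between nodes
    on a mismatch instead of restarting at the root."""
    goto, fail, out = [{}], [0], [None]
    for src, dst in table.items():
        node = 0
        for ch in src:
            if ch not in goto[node]:
                goto[node][ch] = len(goto)
                goto.append({})
                fail.append(0)
                out.append(None)
            node = goto[node][ch]
        out[node] = (len(src), dst)          # terminal node: record match
    queue = deque(goto[0].values())
    while queue:
        node = queue.popleft()
        for ch, nxt in goto[node].items():
            queue.append(nxt)
            f = fail[node]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            if out[nxt] is None:             # inherit shorter suffix matches
                out[nxt] = out[fail[nxt]]
    return goto, fail, out

def substitute(text, table):
    goto, fail, out = build_automaton(table)
    result, node = [], 0
    for ch in text:
        while node and ch not in goto[node]:
            node = fail[node]                # migrate via failure link
        node = goto[node].get(ch, 0)
        if out[node]:
            length, dst = out[node]
            del result[len(result) - (length - 1):]  # drop matched chars
            result.append(dst)
            node = 0                         # restart after a substitution
        else:
            result.append(ch)
    return "".join(result)
```

Because all source strings live in one automaton, the target string is scanned once regardless of how many substitution tables were unified.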
20140067770 | METHOD AND APPARATUS FOR CONTENT MANAGEMENT - A method and apparatus for content management are provided. The method and apparatus efficiently manage content so as to provide a convenient user interface in an electronic device supporting content playback, browsing and storage. The method includes obtaining attribute information of a content item from a storage device, registering the attribute information in a content database, determining content items to be played back by a content player using the attribute information registered in the content database, creating a content list on the basis of the determined content items, displaying the content list, and playing back a content item selected by a user from the content list. | 03-06-2014 |
20140074796 | DYNAMIC ANOMALY, ASSOCIATION AND CLUSTERING DETECTION - Techniques are provided for dynamic anomaly, association and clustering detection. At least one code table is built for each attribute in a set of data containing one or more attributes. One or more clusters associated with one or more of the code tables are established. One or more new data points are received. A determination is made if a given one of the new data points is an anomaly. At least one of the one or more code tables is updated responsive to the determination. When a compression cost of a given one of the new data points is greater than a threshold compression cost for each of the one or more clusters, the given one of the new data points is an anomaly. | 03-13-2014 |
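A toy version of the compression-cost test from the entry above, assuming one code table per attribute within each cluster, code lengths of -log2(frequency), and a fixed cost for unseen values (all illustrative choices not fixed by the abstract):

```python
import math
from collections import Counter

def build_code_table(values):
    """One code table per attribute: frequent values get short codes,
    so points made of common values have a low compression cost."""
    counts = Counter(values)
    total = sum(counts.values())
    return {v: -math.log2(c / total) for v, c in counts.items()}

def compression_cost(point, tables, unseen_cost=20.0):
    """Sum of per-attribute code lengths; values absent from a table
    pay a fixed high cost (an assumption made here)."""
    return sum(t.get(v, unseen_cost) for v, t in zip(point, tables))

def is_anomaly(point, clusters, threshold):
    """A new data point is an anomaly iff its compression cost exceeds
    the threshold for every cluster."""
    return all(compression_cost(point, tables) > threshold
               for tables in clusters)

# One cluster built from data where attribute 1 is mostly "a"
# and attribute 2 is mostly "x".
cluster = [build_code_table(["a"] * 9 + ["b"]),
           build_code_table(["x"] * 8 + ["y"] * 2)]
```

Updating a code table after each decision (the dynamic part of the technique) would amount to recomputing the counts with the new point included.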
20140081923 | DUAL-PHASE FILE SYSTEM CHECKER - Methods and a processing system directed to a file system checker are described. A file system checker performs file system validation by validating a file system's nodes. Each node is associated with two kinds of data: metadata and referenced data. A file system checker may validate one node at a time or a group of nodes contemporaneously (e.g., in parallel). The file system checker uses a dual phase procedure. The first phase includes validating metadata. The second phase includes validating, as appropriate, node type or link count. Dual phase file system checking allows validation of a node without validating referenced data associated with downstream nodes. Where validation of a given node requires validating a downstream node, performing a first phase test on the downstream node is sufficient to validate the given node. Upon completion, the given node may be unlocked for access by external devices and users. | 03-20-2014 |
20140081924 | IDENTIFICATION OF DATA OBJECTS STORED ON CLUSTERED LOGICAL DATA CONTAINERS - Exemplary embodiments provide various techniques and systems for identifying data objects stored on clustered logical data containers. In one embodiment, a method is provided for creating a backward data object handle. In this method, a request to create a file is received, and a redirector file is created on a first logical data container based on receipt of the request. A redirector handle resulting from the creation of the redirector file is received. A data object of the file is then created on a second logical data container using the redirector handle as an identifier of the data object. This redirector handle included in the identifier then becomes a backward data object handle that points from the data object to the redirector file. As such, the redirector file can be identified by referencing the identifier of the data object. | 03-20-2014 |
20140089268 | Methods for Resolving A Hang In A Database System - A method for resolving a hang in a database system includes receiving a symbolic graph having a plurality of nodes, where each node represents a database session involved in the hang during a specified time interval. The blocking time associated with each node in the symbolic graph is recursively determined. The node that has the longest blocking time is output to a display for review by the database administrator. Alternatively, the database session represented by the node having the longest blocking time may be automatically eliminated. | 03-27-2014 |
20140095453 | REAL-TIME AUTOMATIC DATABASE DIAGNOSTIC MONITOR - A method for obtaining data items from an unresponsive database host. The method includes receiving an indication that the database host is unresponsive, receiving, from a management server via a diagnostic connection, a first request for a first organized data item, and sending a first query, using a first interface, to a memory for the first organized data item. The method further includes receiving, from the management server via a normal connection, a second request for a second organized data item, retrieving, from memory on the database host, a first data item in response to the first query, converting the first data item into the first organized data item, and sending the first organized data item to the management server, wherein the first organized data item is analyzed to determine a source causing the database host to be unresponsive. | 04-03-2014 |
20140095454 | Message Validation in a Service-Oriented Architecture - Message validation in a service-oriented architecture defines a message structure using XML data types. Context-independent validity constraints are specified using an XML schema. Context-specific validity constraints are specified in an intermediary data structure for a specific service operation. A service interface including the XML schema and the intermediary data structure is published. | 04-03-2014 |
20140108356 | INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a file saving unit for saving a first file; a state saving unit for saving, as saved state information, the state of a second file that is open at the time that the first file is saved; and a state reproducing unit for reproducing, based on the saved state information, the second file's state as it was at the time the first file was saved, when the first file is opened again after being saved. The information processing apparatus allows users to re-edit files and webpages later without laboriously looking up those data again. | 04-17-2014 |
20140114925 | PICTORIAL SYMBOL REGISTRATION APPARATUS AND PICTORIAL SYMBOL REGISTRATION METHOD - A pictorial symbol registration apparatus determines whether reading data for a pictorial symbol used in a character conversion process is embedded in an image file containing image data depicting the pictorial symbol. When it determines that the reading data is embedded, it stores the reading data, associated with the image data, in a dictionary file such as the one used in the character conversion process, thereby easily realizing registration of the pictorial symbol. | 04-24-2014 |
20140114926 | PROFILING DATA WITH SOURCE TRACKING - Profiling data includes accessing multiple collections of records to store quantitative information for each particular collection including, for at least one selected field of the records in the particular collection, a corresponding list of value count entries, each including a value appearing in the selected field and a count of the number of records in which the value appears. Processing the quantitative information of two or more collections includes: merging the value count entries of corresponding lists for at least one field from each of a first collection and a second collection to generate a combined list of value count entries, and aggregating value count entries of the combined list of value count entries to generate a list of distinct field value entries identifying a distinct value and including information quantifying a number of records in which the distinct value appears for each of the two or more collections. | 04-24-2014 |
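The merge-and-aggregate step in the entry above can be sketched with `collections.Counter`; the field and collection names here are hypothetical:

```python
from collections import Counter

def value_counts(records, field):
    """Per-collection value count entries for one selected field:
    each distinct value paired with the number of records carrying it."""
    return Counter(r[field] for r in records)

def merge_profiles(field, *collections):
    """Merge the per-collection value counts, then aggregate them into
    distinct-value entries quantifying, for each of the collections,
    the number of records in which each distinct value appears."""
    per_collection = [value_counts(recs, field) for recs in collections]
    distinct = set().union(*per_collection)
    return {v: [c[v] for c in per_collection] for v in sorted(distinct)}

us_orders = [{"state": "CA"}, {"state": "NY"}, {"state": "CA"}]
eu_orders = [{"state": "CA"}, {"state": "BE"}]
profile = merge_profiles("state", us_orders, eu_orders)
```

Here `profile["CA"]` is `[2, 1]`: two records in the first collection and one in the second carry the value `CA`, which is exactly the source-tracked count the abstract describes.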
20140114927 | PROFILING DATA WITH LOCATION INFORMATION - Profiling data includes processing an accessed collection of records, including: generating, for a first set of distinct values appearing in a first set of one or more fields, corresponding location information; generating, for the first set of fields, a corresponding list of entries identifying a distinct value from the first set of distinct values and the location information for the distinct value; generating, for a second set of one or more fields, a corresponding list of entries, with each entry identifying a distinct value from a second set of distinct values appearing in the second set of fields; and generating result information, based at least in part on: locating at least one record of the collection using the location information for at least one value appearing in the first set of fields, and determining at least one value appearing in the second set of fields of the located record. | 04-24-2014 |
20140114928 | COHERENCE PROTOCOL TABLES - An agent is provided to include state table storage to hold a set of state tables to represent a plurality of coherence protocol actions, where the set of state tables is to include at least one nested state table. The agent further includes protocol logic associated with the state table storage, the protocol logic to receive a coherence protocol message, and determine a coherence protocol action of the plurality of coherence protocol actions from the set of state tables based at least in part on the coherence protocol message. | 04-24-2014 |
20140114929 | Method and Apparatus for Accelerated Format Translation of Data in a Delimited Data Format - Various methods and apparatuses are described for performing high speed format translations of incoming data, where the incoming data is arranged in a delimited data format. As an example, the data in the delimited data format can be translated to a mapped variable field format using pipelined operations. A reconfigurable logic device can be used in exemplary embodiments as a platform for the format translation. | 04-24-2014 |
20140114930 | RELIABILITY CALCULATION APPARATUS, RELIABILITY CALCULATION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - In order to calculate a reliability that serves as an index of the reliableness of an evaluator who evaluated a document, a reliability calculation apparatus is provided. | 04-24-2014 |
20140122440 | RECOVERING DATA FROM CORRUPTED ENCODED DATA SLICES - A method begins by a dispersed storage (DS) processing module receiving a set of encoded data slices, where some of the encoded data slices have an integrity issue such that less than a decode threshold number of encoded data slices have valid integrity. The method continues with the DS processing module creating partial coded matrices from the set of encoded data slices and generating partial decoding matrices. The method continues with the DS processing module generating a test data matrix based on the partial coded matrices and the partial decoding matrices, encoding the test data matrix into a set of test encoded data slices, and generating integrity information for the set of test encoded data slices. When the integrity information is valid, the method continues with the DS processing module utilizing the test data matrix as a data matrix and converting the data matrix into a recovered data segment. | 05-01-2014 |
20140122441 | Distributed Object Storage System Comprising Performance Optimizations - A distributed object storage system comprises an encoding module configured to calculate for a plurality of predetermined values of the spreading requirement the cumulative size of the sub fragment files when stored on the file system with the predetermined block size; and select as a spreading requirement from said plurality of predetermined values a calculated value that is equal to one of said predetermined values for which the cumulative size is minimal. | 05-01-2014 |
20140122442 | DISTRIBUTED ANONYMIZATION SYSTEM, DISTRIBUTED ANONYMIZATION DEVICE, AND DISTRIBUTED ANONYMIZATION METHOD - The present invention provides a distributed anonymization device capable of executing a distributed anonymization process without the risk of leaking users' data to other parties. This distributed anonymization device is provided with: a storing means for storing a user identifier and personal information in association with one another; a setting means for setting, as a dummy identifier, each identifier that does not correspond to a user identifier from among all of the externally-notified identifiers; a separating means for separating all the identifiers, including the dummy identifiers, into groups; a transmitting means for transmitting, to another device, separation information indicating the identifiers in each group; and a determining means for determining, for each of the groups, whether the proportion of identifiers in the distributed anonymization device and the other device satisfies a predetermined anonymity index. | 05-01-2014 |
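A toy version of the per-group determination above, assuming a k-anonymity-style index: each party must contribute either zero or at least k real (non-dummy) identifiers to every group, so neither party can single out the other's users. The concrete index is not specified by the abstract; this is one plausible reading.

```python
def groups_satisfy_index(groups, party_a_ids, party_b_ids, k=2):
    """Check every group of identifiers (real ones mixed with dummies)
    against the assumed anonymity index."""
    for group in groups:
        real_a = sum(1 for ident in group if ident in party_a_ids)
        real_b = sum(1 for ident in group if ident in party_b_ids)
        if 0 < real_a < k or 0 < real_b < k:
            return False       # one party is under-represented in this group
    return True

party_a = {101, 102, 103, 104}   # this device's real user identifiers
party_b = {201, 202}             # the other device's real user identifiers
# Identifiers in the 900 range are dummies set for identifiers that
# correspond to no user on either device.
ok_groups = [[101, 102, 900], [103, 104, 201, 202]]
```

Because only group membership (the separation information) crosses the wire, neither device learns which of the other party's identifiers are real.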
20140129525 | NORMALIZING DATA FOR FAST SUPERSCALAR PROCESSING - A data normalization system is described herein that represents multiple data types that are common within database systems in a normalized form that can be processed uniformly to achieve faster processing of data on superscalar CPU architectures. The data normalization system includes changes to internal data representations of a database system as well as functional processing changes that leverage normalized internal data representations for a high density of independently executable CPU instructions. Because most data in a database is small, a majority of data can be represented by the normalized format. Thus, the data normalization system allows for fast superscalar processing in a database system in a variety of common cases, while maintaining compatibility with existing data sets. | 05-08-2014 |
20140143210 | SYSTEM AND METHOD FOR MANIPULATING DATA RECORDS - Systems and methods for manipulating data records in a byte accessible format are provided. Records may be loaded into one of two byte accessible files or arrays, with a determined record length added to the beginning and/or end of each record. Records may then be slewed between the two files and records added, removed, or modified at the split between the two files. | 05-22-2014 |
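The two-file layout above can be sketched as follows, assuming a 4-byte little-endian length is prepended to each record; "slewing" moves records across the split so insertions land at the split point (much like a gap buffer over records):

```python
import struct

LEN = struct.Struct("<I")   # determined record length stored before each record

def append_record(buf: bytearray, rec: bytes):
    """Load a record into a byte-accessible array, length first."""
    buf += LEN.pack(len(rec)) + rec

def slew_forward(front: bytearray, back: bytearray):
    """Move the first record of `back` onto the end of `front`,
    advancing the split between the two files by one record."""
    (n,) = LEN.unpack_from(back, 0)
    front += back[:LEN.size + n]
    del back[:LEN.size + n]

def records(buf):
    """Walk the length-prefixed records stored in one array."""
    i = 0
    while i < len(buf):
        (n,) = LEN.unpack_from(buf, i)
        yield bytes(buf[i + LEN.size:i + LEN.size + n])
        i += LEN.size + n

front, back = bytearray(), bytearray()
for rec in (b"one", b"two", b"three"):
    append_record(back, rec)
slew_forward(front, back)          # split now sits after "one"
append_record(front, b"inserted")  # new record added at the split
```

Removal or modification at the split works the same way: slew until the target record is first in `back`, then delete or rewrite it in place.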
20140149359 | SYSTEM FOR VERIFYING CORRECTNESS OF DATA WHEN DATA ARE REQUESTED AND METHOD THEREOF - A system for verifying the correctness of data when the data are requested, and a method thereof, read target paragraphs from a target file to verify whether each of the target paragraphs is correct. When the target paragraphs are correct, they are sent to the client requesting the target file. The system and the method can provide verified, correct data to the user, verifying only part of the target file and reducing the verification time. | 05-29-2014 |
20140164337 | DATA SET CONNECTION MANAGER HAVING A PLURALITY OF DATA SETS TO REPRESENT ONE DATA SET - Provided are a computer program product, system, and method for a data set connection manager having a plurality of data sets to represent one data set. A request is processed to open a connection to a data set having members, wherein the connection is used to perform read and write requests to the members in the data set. In response to establishing the connection, the following are established for the connection: a primary data set having all the members; a secondary data set to which updated members in the primary data set are written; and a pending delete data set holding pending delete members, comprising members that have been updated. | 06-12-2014 |
20140181052 | TECHNIQUES FOR ALIGNED RUN-LENGTH ENCODING - Techniques for Aligned Run-Length Encoding (ARLE) are described. ARLE is an encoding scheme that transforms sets of same-valued consecutive rows into one or more runs, while enforcing boundaries between the runs at set intervals (e.g. every predetermined number of rows). Consecutive rows that contain the same value, but which cross one or more interval boundaries, are encoded as multiple runs that are divided along those interval boundaries. According to one technique, a database server accelerates query processing by setting the interval size to the word size of the processor performing the predicate comparisons. According to another technique, a database server accelerates row lookup by maintaining an offset array that stores the run offsets into the ARLE data of the run that begins each interval. | 06-26-2014 |
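A small Python model of the ARLE scheme above, assuming 8-row intervals (standing in for the processor word size) and an offset array for fast row lookup:

```python
INTERVAL = 8   # rows per interval; the abstract ties this to the CPU word size

def arle_encode(rows, interval=INTERVAL):
    """Run-length encode, forcing a run boundary at every interval so
    that no run crosses an interval boundary."""
    runs = []      # [value, length] pairs
    offsets = []   # index into `runs` of the run beginning each interval
    for i, value in enumerate(rows):
        boundary = i % interval == 0
        if boundary:
            offsets.append(len(runs))
        if runs and not boundary and runs[-1][0] == value:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([value, 1])     # start a new (possibly split) run
    return runs, offsets

def arle_lookup(runs, offsets, i, interval=INTERVAL):
    """Row lookup: jump straight to the run that begins row i's interval,
    then scan at most one interval's worth of runs."""
    r = offsets[i // interval]
    pos = (i // interval) * interval
    while pos + runs[r][1] <= i:
        pos += runs[r][1]
        r += 1
    return runs[r][0]

rows = ["a"] * 10 + ["b"] * 3 + ["a"] * 7
runs, offsets = arle_encode(rows)
```

The aligned boundaries cost a little compression (the run of ten "a" rows splits in two) but bound the lookup scan and keep runs word-aligned for predicate evaluation.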
20140181053 | CONDENSING EVENT MARKERS - Systems, methods, and computer-readable storage media for analyzing the recorded interactions of users within a shared dataspace, where the shared dataspace is provided by a synced online content management system. As each user adds and deletes files in the shared dataspace, the content management system can record each interaction. The content management system can then analyze the recorded interactions, creating collapsed summaries of the interactions, and generate notifications that can be presented to users. Various thresholds can be used to determine when the recorded interactions are condensed, and when notifications associated with those condensed interactions are presented to users. | 06-26-2014 |
20140188813 | Fast Object Fingerprints - An embodiment computing device operating in a data storage system includes an object storage controller operable to divide an object into blocks and to create an object hash from hash values, and a network interface in communication with the object storage controller, the network interface operable to transmit the blocks to a storage subsystem that generates one of the hash values from each of the blocks, to receive the hash values from the storage subsystem, and to provide the hash values to the object storage controller for creation of the object hash from the hash values. In an embodiment, the object storage controller is operably coupled to a processor and a memory or stored on a computer readable medium. | 07-03-2014 |
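The division of labor in the entry above can be sketched as follows, with SHA-256 standing in for the unspecified hash: the storage subsystem hashes the blocks, and the controller builds the object fingerprint from those hash values alone.

```python
import hashlib

BLOCK_SIZE = 4   # toy block size; real systems would use far larger blocks

def subsystem_block_hashes(obj: bytes, block_size=BLOCK_SIZE):
    """What the storage subsystem returns: one hash value per block."""
    return [hashlib.sha256(obj[i:i + block_size]).digest()
            for i in range(0, len(obj), block_size)]

def controller_object_hash(block_hashes) -> str:
    """The object storage controller creates the object hash from the
    block hash values, never touching the object's bytes again."""
    h = hashlib.sha256()
    for bh in block_hashes:
        h.update(bh)
    return h.hexdigest()

fingerprint = controller_object_hash(
    subsystem_block_hashes(b"some object payload"))
```

Since the controller consumes only fixed-size digests, the fingerprint can be computed without shipping the object's data back over the network interface.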
20140201164 | VALIDATION AND DELIVERY OF DIGITAL ASSETS - Systems and methods for determining ownership of an asset and providing access to alternate versions of the asset are provided. A system and method can include associating a unique identifier with an asset stored in one or more locations, receiving a request for an asset interaction, validating the request using the unique identifier, determining an asset storage location, identifying whether an enhanced version of the asset is available, and granting the request for an asset interaction when the unique identifier is validated and when the asset storage location is a local storage location. In one aspect, the asset interaction can be with an enhanced version of the asset when an enhanced version of the asset is available. | 07-17-2014 |
20140222766 | SYSTEM AND METHOD FOR DATABASE MIGRATION AND VALIDATION - A system and method for database migration and validation is provided. In an embodiment, the database migration and validation system may include a migration framework which analyzes a relational database and its associated access coding and preprocessing/post-processing coding, and based on these analyses generates an in-memory database, access coding, and database coding in a computer system. The database migration and validation system may also include a validation framework which presents validation queries to the relational database and the in-memory database, compares the results of the queries, and reports the outcome of the comparison. | 08-07-2014 |
20140222767 | Page Substitution Verification Preparation - A system and method are disclosed for rendering published documents tamper evident. Embodiments render classes of documents tamper evident with cryptographic level security or detect tampering, where such security was previously unavailable, for example, documents printed using common printers without special paper or ink. Embodiments enable proving the date of document content without the need for expensive third party archival, including documents held, since their creation, entirely in secrecy or in untrustworthy environments, such as on easily-altered, publicly-accessible internet sites. Embodiments can extend, by many years, the useful life of currently-trusted integrity verification algorithms, such as hash functions, even when applied to binary executable files. Embodiments can efficiently identify whether multiple document versions are substantially similar, even if they are not identical, thus potentially reducing storage space requirements. | 08-07-2014 |
20140244594 | COMPUTER-IMPLEMENTED METHOD OF DETERMINING VALIDITY OF A COMMAND LINE - Provided is a method of determining command line validity, including: a step of maintaining a block network address database including block network address information; a step of receiving a command line from a terminal of a user; a step of extracting network address information included in the command line; a step of determining whether the network address information is the block network address information, with reference to the block network address database; a step of generating log information associated with the command line when, as a result of the determination, the network address information is not the block network address information, in which the log information comprises at least one of: the network address information included in the command line, input time point information with respect to the input time point of the command line, and request content information; a step of recording the log information in a log database; and a step of determining the validity of the command line by using the log information. | 08-28-2014 |
20140279933 | Hashing Schemes for Managing Digital Print Media - A method for managing digital files, including the steps of: generating a main hash for a new file; searching for a matching main hash of any existing file in storage; if a matching main hash is found, stopping further processing of the new file; if no match is found, generating a sub-hash for a sub-part of the new file and searching for a matching sub-hash of any existing file in storage; if no match of the sub-hash is found, processing the entire new file and saving the processed new file in the storage; if a matching sub-hash for a sub-part of an existing file is found, processing only the remaining part of the new file that is not the sub-part for which the sub-hash is generated, and retrieving the matching sub-part of the existing file; and saving the processed remaining part of the new file and the retrieved sub-part of the existing file in storage as a combined digital file. An alternative process uses component and composite hashes generated for the component parts of digital files to detect duplicates. | 09-18-2014 |
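The main-hash/sub-hash flow above can be sketched as below; SHA-256, the in-memory store, and the choice of the file's leading bytes as the sub-part are all assumptions for illustration:

```python
import hashlib

store = {}   # main hash -> stored file bytes (stands in for real storage)

def main_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def ingest(new_file: bytes, sub_len: int = 1024) -> str:
    """Two-level duplicate detection for a new digital file."""
    main = main_hash(new_file)
    if main in store:                    # matching main hash found:
        return "duplicate"               # stop further processing
    sub = main_hash(new_file[:sub_len])  # sub-hash over the leading sub-part
    for existing in store.values():
        if main_hash(existing[:sub_len]) == sub:
            # reuse the matching sub-part, process only the remainder,
            # and save the combined file
            store[main] = existing[:sub_len] + new_file[sub_len:]
            return "partial-duplicate"
    store[main] = new_file               # no match anywhere: process it all
    return "new"
```

The win is that a file sharing only its sub-part with an existing file skips reprocessing that sub-part, at the cost of one extra hash comparison per stored file in this naive sketch (a real system would index the sub-hashes).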
20140279934 | SELF-ANALYZING DATA PROCESSING JOB TO DETERMINE DATA QUALITY ISSUES - Techniques are disclosed to determine data quality issues in data processing jobs. The data processing job is received, specifying one or more processing steps designed based on one or more data schemas and further specifying one or more desired quality metrics to measure at the one or more processing steps. One or more state machines are provided that are generated based on the quality metrics and on the data schemas. Input data to the data processing job are processed using the one or more state machines in order to generate output data and a set of data quality records characterizing the data quality issues identified during execution of the data processing job. | 09-18-2014 |
20140279935 | Computer-implemented method of assessing the quality of a database mapping - A computer-implemented method is provided for assessing the quality of a database mapping. Fields of a source file are mapped to fields of a target database using a database mapping. A sampled subset of the records in the source file is converted to records in the target database using the field mappings, wherein the quality of the records in the source file is presumed to be high. A data validator is selected from a plurality of different data validators, wherein the selection is made based at least in part on the purpose of the target database. A sampled subset of the converted records is tested with the selected data validator to determine the quality of the database mapping. | 09-18-2014 |
20140279936 | METHOD FOR DATA RETRIEVAL FROM A DISTRIBUTED DATA STORAGE SYSTEM - There is provided a method and server for retrieving data from a data storage system including a plurality of storage nodes. The method may include sending a multicast message to at least a subset of the storage nodes. The multicast message may include a request for the subset of storage nodes to send the data. The multicast message may further include a data identifier, indicating the data to be retrieved. Moreover, the method may include receiving data from a first storage node of the subset of storage nodes. The data received from the first storage node may correspond to the requested data. At least the act of sending a multicast message or the act of receiving data from the first storage node may be performed on a condition that an estimated size of the data is less than a predetermined value. | 09-18-2014 |
20140289207 | QUALITY ASSURANCE CHECKS OF ACCESS RIGHTS IN A COMPUTING SYSTEM - Systems and methods for ensuring the quality of identity and access management information at a computing system are described. Access right information that respectively corresponds to one or more access rights may be stored at a data store. The access right information may be stored in accordance with a data model that defines respective relationships between the access rights and both the users having access to the computing system and the computing resources of the computing system. At least a portion of the access right information may be retrieved, and quality assurance tasks may be performed using the portion of the access right information retrieved. | 09-25-2014 |
20140304236 | HASH VALUE GENERATION APPARATUS, SYSTEM, DETERMINATION METHOD, PROGRAM, AND STORAGE MEDIUM - A hash value generation apparatus that generates a hash value for identifying unknown data as belonging to a specified class or an unspecified class, includes a generation unit configured to generate hash function information including a hash function based on a specified feature amount of data belonging to the specified class, a conversion unit configured to convert the specified feature amount into a hash value based on the generated hash function information, and a storage unit configured to store the hash value obtained by the conversion as a normal hash value in association with the hash function information. | 10-09-2014 |
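One plausible concrete form of the generate/convert/store pipeline this entry describes is random-projection hashing; the patent does not specify the hash family, so the projection scheme and all names below are assumptions:

```python
import random

def generate_hash_function(class_features, n_bits=8, seed=42):
    """Generate hash function information (here: random projection
    planes) sized to the feature dimension of the specified class."""
    rng = random.Random(seed)
    dim = len(class_features[0])
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def to_hash(planes, feature):
    """Convert a feature amount into an n_bits-long bit-string hash."""
    return "".join(
        "1" if sum(p * f for p, f in zip(plane, feature)) >= 0 else "0"
        for plane in planes)

def build_normal_hashes(class_features, planes):
    """Store the hash of each specified-class feature as a normal hash."""
    return {to_hash(planes, f) for f in class_features}
```

Unknown data whose hash is absent from the stored normal-hash set would then be treated as belonging to the unspecified class.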
20140310248 | VERIFICATION SUPPORT PROGRAM, VERIFICATION SUPPORT APPARATUS, AND VERIFICATION SUPPORT METHOD - A verification support method includes: referring to a storage to select, from a use case group, a second use case to be verified next to a first use case selected from the use case group, on the basis of a postcondition of the first use case and a precondition of a use case different from the first use case, the storage storing, for each use case representing a function of a verification target, the precondition that is met by an input value to be input into the verification target and an output value to be output from the verification target before the function represented by the use case is executed, and the postcondition that is met by the input value and the output value after the function represented by the use case is executed. | 10-16-2014 |
20140317064 | ELECTRONIC DEVICE AND METHOD FOR CHANGING FILE NAME BACKGROUND - An electronic device and system for changing file names include a processing unit and a storage unit. The storage unit stores a plurality of files in a number of folders. The processing unit detects the user-selection of a file, acquires from the containing folder any other files with the same attributes, detects whether or not there is an operation for changing the file name, and controls the file name of the selected file and of any acquired files to be editable when there is such an operation. The file names of the selected file and of any acquired files are changed to new ordered file names when the selected file is renamed. | 10-23-2014 |
20140330791 | Performance and Quality Optimized Architecture for Cloud Applications - A data validation procedure may be propagated to a server machine and to a client machine to perform the same data checking in the respective machines. The data validation procedure may be converted and expressed in a specification language that is suitable for the server machine. Likewise, the data validation procedure may be converted and expressed in a specification language that is suitable for the client machine. | 11-06-2014 |
20140337297 | SYSTEM AND METHOD FOR MARINE DEBRIS MANAGEMENT - Authenticating marine debris prior to recovery by: generating SONAR signal data related to the bottom of a body of water; processing the data to identify the presence or absence of debris targets; generating an image of the objects detected; storing the image in an assessment database; associating each image stored in the assessment database with position information and dimensions of that debris target; generating a side-scan sonar report including an image of each debris target detected on the ocean or lake floor together with object position and dimension data; transmitting the sonar report to an offsite or on-site debris image processor for identification of the target; comparing the target reported in the sonar report with a database to determine whether the target object is storm debris; and transmitting a signal to a ship for target pickup if it is determined that the target object is a storm debris object. The system and method further include verification: upon recovering and imaging the target object or the target site post-recovery, the image is processed and compared with the authentication database. | 11-13-2014 |
20140344225 | EVENT-DRIVEN INVALIDATION OF PAGES FOR WEB-BASED APPLICATIONS - Systems and methods for invalidating and regenerating pages. In one embodiment, a method can include detecting content changes in a content database including various objects. The method can include causing an invalidation generator to generate an invalidation based on the modification and communicating the invalidation to a dependency manager. A cache manager can be notified that pages in a cache might be invalidated based on the modification via a page invalidation notice. In one embodiment, a method can include receiving a page invalidation notice and sending a page regeneration request to a page generator. The method can include regenerating the cached page. The method can include forwarding the regenerated page to the cache manager replacing the cached page with the regenerated page. In one embodiment, a method can include invalidating a cached page based on a content modification and regenerating pages which might depend on the modified content. | 11-20-2014 |
20140358865 | PROCESSING A TECHNICAL SYSTEM - Rules of a rule base are transformed in an automated fashion in order to be able to conduct consistency checks and generate explanations and thus classify and correct existing rules. This is beneficial in particular in large systems with existing rule bases, e.g., wherein each rule is associated with at least a diagnostic task of a component of a technical system, e.g., a power system. The task can be subject to fault detection, fault isolation, predictive diagnosis or reporting. The solution presented provides an overview of large sets of rules and thus allows determining which rules are suitable and which are not. The invention is applicable for all kinds of technical systems, e.g., industry and automation systems, in particular power systems. | 12-04-2014 |
20140365445 | SERVER WITH FILE MANAGING FUNCTION AND FILE MANAGING METHOD - A server communicates with a number of terminal devices. Each terminal device stores a file having a same file name. The server generates a trace log. The trace log records modification of the file in each of the terminal devices. The server further determines whether or not one of the terminal devices opens the file, searches in the trace log according to the file name of the file to find all the modifications corresponding to the file, determines the latest modification among all the modifications in the terminal devices according to the modification time corresponding to each of the found modifications, and displays at least a part of content of the found latest modification in the terminal device which currently runs the file. | 12-11-2014 |
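The search-by-name and latest-modification steps this entry describes can be sketched as a minimal function; the trace-log record fields (`file`, `time`) are illustrative assumptions:

```python
def latest_modification(trace_log, file_name):
    """Search the trace log by file name and return the modification
    with the most recent modification time, or None if the file was
    never modified."""
    matches = [entry for entry in trace_log if entry["file"] == file_name]
    return max(matches, key=lambda e: e["time"]) if matches else None
```

The server would then display at least part of the returned modification's content in the terminal device that currently has the file open.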
20140365446 | VERIFICATION SYSTEM, VERIFICATION METHOD, AND MEDIUM STORING VERIFICATION PROGRAM - A verification system includes: a server that receives each of a first data group and second data group, and transmits a third data group and a fourth data group to respond to each of the first data group and second data group received; a database server that receives the third data group and transmits the second data group; and a verification device that performs operation verification of the server or database server, the verification device including a processor configured to transmit, to the database server, a partial data group in the third data group received by the database server, and transmit, to the server, the first data group corresponding to another data group in the third data group, thereby supplying the other data group to the database server and using the first data group, the partial data group, and the fourth data group, to perform the operation verification. | 12-11-2014 |
20140365447 | MAKING ADDRESS BOOK A SOURCE OF LATITUDE AND LONGITUDE COORDINATES - A method for determining latitude and longitude coordinates for geographic addresses input into an address book on a mobile device is provided. For each geographic address received for storing in a contact record, latitude and longitude coordinates are automatically determined and associated with the geographic address in a database of contact records. In some embodiments, for each geographic address to be input, the method first searches existing contact records for the geographic address and if the latitude and longitude coordinates for the geographic address are in an existing contact record, the contact record for the contact is cross-referenced to the existing record for accessing the latitude and longitude coordinates. | 12-11-2014 |
20140379664 | SYSTEM AND METHOD FOR AUTOMATIC CORRECTION OF A DATABASE CONFIGURATION IN CASE OF QUALITY DEFECTS - The present invention relates to a system, a method and a product for automatically identifying quality defects in configuration parameters of a database system and for automatically correcting them according to predefined quality procedures. The method is executed on a central server. | 12-25-2014 |
20140379665 | DATA LINEAGE MANAGEMENT OPERATION PROCEDURES - Apparatus and methods for data lineage management operation procedures are provided. The apparatus may include a relational database. The relational database may store a plurality of Key Business Elements (“KBEs”). The apparatus may retrieve a selected KBE. The selected KBE may include one or more KBE parameters. The parameters may be associated with the selected KBE. The KBE may be used in a business process. The apparatus may include a processor. The processor may identify a KBE system of origination. The system of origination may create the KBE. The system of origination may modify the KBE. The processor may identify a KBE system of record. The system of record may determine an authoritative source. The authoritative source may be the authoritative source of the KBE. The processor may develop a data lineage. The data lineage may be the lineage of the KBE from the system of origination to the system of record. | 12-25-2014 |
20140379666 | Error Correction in Tables Using Discovered Functional Dependencies - Mechanisms are provided for performing tabular data correction in a document. Tabular data is received and analyzed to identify at least one portion of the tabular data having an erroneous/missing data value. A functional dependency of the at least one portion of the tabular data on one or more other portions of the tabular data is determined. A correct data value for the erroneous or missing data value of the at least one portion of the tabular data is determined based on the functional dependency of the at least one portion. In addition, the tabular data is modified to replace the erroneous or missing data value with the correct data value and thereby generate a modified table data. A processing operation is then performed on the modified table data to generate a resulting output. | 12-25-2014 |
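The discover-then-correct mechanism this entry describes can be sketched for the missing-value case; this sketch assumes the functional dependency holds over all present values (correcting *erroneous* values, as the patent also covers, would need a more tolerant discovery step such as majority voting), and all names are illustrative:

```python
def discover_dependency(rows, det, dep):
    """Return the mapping det -> dep if it holds functionally over all
    rows where both values are present, else None."""
    mapping = {}
    for row in rows:
        k, v = row.get(det), row.get(dep)
        if k is None or v is None:
            continue
        if mapping.setdefault(k, v) != v:
            return None  # dependency violated by the present values
    return mapping

def fill_missing(rows, det, dep):
    """Use the discovered functional dependency to fill missing dep
    values in place, then return the modified table data."""
    mapping = discover_dependency(rows, det, dep) or {}
    for row in rows:
        if row.get(dep) is None and row.get(det) in mapping:
            row[dep] = mapping[row[det]]
    return rows
```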
20140379667 | DATA QUALITY ASSESSMENT - According to one embodiment of the present invention, a system assesses the quality of column data. The system assigns a pre-defined domain to one or more columns of the data based on a validity condition for the domain, applies the validity condition for the domain assigned to a column to data values in the column to compute a data quality metric for the column, and computes and displays a metric for a group of columns based on the computed data quality metric of at least one column in the group. Embodiments of the present invention further include a method and computer program product for assessing the quality of column data in substantially the same manners described above. | 12-25-2014 |
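The per-column and per-group metrics this entry describes can be sketched as follows; the patent leaves the group aggregation unspecified, so the mean used here, like the validator shapes, is an assumption:

```python
def column_quality(values, is_valid):
    """Data quality metric for a column: the fraction of its values
    satisfying the assigned domain's validity condition."""
    return sum(1 for v in values if is_valid(v)) / len(values)

def group_quality(columns, domains):
    """Metric for a group of columns: here, the mean of the
    per-column metrics."""
    scores = [column_quality(vals, domains[name])
              for name, vals in columns.items()]
    return sum(scores) / len(scores)
```

A dashboard layer could then display both the per-column metrics and the group metric, as the abstract describes.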
20150019497 | DATABASE DIAGNOSTICS INTERFACE SYSTEM - A database diagnostics system with an interface system that may be used to define, monitor, and deploy database diagnostics tools is presented. The interface system presents a user with a user interface for defining the parameters, behaviors, and schedules of database diagnostic tools. The diagnostic tools execute on a target database recording database parameters and state information. The interface system may present the user with a graphical user interface for assembling diagnostic tools at least partially from a predefined set of reusable modules and scripts. | 01-15-2015 |
20150019498 | Systems and Methods for Generating a Document with Internally Consistent Data - Systems and methods for generating a document include creating, by a computing device, a link between a first data element in a document and a second data element in a document. A relationship function between the first data element and the second data element is created to define a data dependency between the first and second data elements. One or more verification functions are associated with the link to verify that the data dependency has been met. Each data dependency in the document is verified to determine whether the data dependency is met. One or more data dependencies are corrected by computing corrected data via the relationship function. Either the first or second data element is replaced with the corrected data. A final document that includes the corrected data is generated. | 01-15-2015 |
20150032700 | ELECTRONIC INTERACTIVE PERSONAL PROFILE - The invention involves an interactive live verified profile. Any type of information can be stored in the profile, such as text, audio, video, graphics, etc. The owner of the profile then designates which information can be seen by which people. Most importantly, before being made available, the information is verified. The people who are designated to see the information also get the benefit of automatically receiving updates when information is changed, modified, or updated. | 01-29-2015 |
20150066866 | DATA HEALTH MANAGEMENT - A data health management apparatus may include a non-transitory memory and a processor communicatively coupled to the memory. In some cases, the processor may be configured to process instructions read from the memory. For example, the instructions may cause the processor to identify data associated with an application, where the data is stored in at least one data repository. The processor may then analyze the data stored in the at least one data repository, such as via a network, to determine a data health metric. The instructions may then cause the processor to determine an action to be performed on the data repository based on the determined data health metric. | 03-05-2015 |
20150066867 | SYSTEMS AND METHODS FOR ZERO-KNOWLEDGE ATTESTATION VALIDATION - A method for a zero-knowledge attestation validation process includes receiving a statement from a primary account in a primary electronic database over a communication network for validation with an authority account in an authority electronic database, creating a set of keys permitting validation of the statement without the primary electronic database identifying the authority account and without the authority electronic database identifying the primary account, associating a first key with the statement, correlating the associated first key and statement with a second key identifying the authority account, validating the veracity of the statement as an attestation with the authority account over the communication network, relating the first key to the attestation, linking the related first key and attestation with a third key identifying the primary account, and transmitting the attestation to the primary electronic database over the communication network for storage in the primary account with the statement. | 03-05-2015 |
20150066868 | NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM THAT STORES DOCUMENT EVALUATION PROGRAM THAT EVALUATES CONSISTENCY IN DOCUMENT - A non-transitory computer-readable recording medium that stores a document evaluation program executable by a computer in a document evaluation apparatus includes first program code that causes the computer to determine that a plurality of pages from each of which the same type of object has been detected are in the same group and to detect the plurality of pages in the same group from the document, second program code that causes the computer to evaluate consistency in the plurality of pages, in the same group, that have been detected by the first program code, and third program code that causes the computer to display an evaluation result obtained by the second program code. | 03-05-2015 |
20150081647 | SERVER AND METHOD FOR UPDATING DATA OF SERVER - In a method for updating data of a server, the server receives a modification operation from a client device communicating with the server. Data corresponding to the modification operation is obtained, and a database of the server for storing the obtained data is determined. The server further sends a prompt to the client device for confirming a successful modification of the obtained data, and updates the obtained data to the determined database and other databases that share the obtained data with the determined database. | 03-19-2015 |
20150088833 | METHOD FOR PROVIDING RELATED INFORMATION REGARDING RETRIEVAL PLACE AND ELECTRONIC DEVICE THEREOF - An electronic device and a method for providing related information regarding a retrieval place are provided. The method includes determining a retrieval place using at least one contents information, extracting information related to the determined retrieval place, and providing the information related to the determined retrieval place by determining a validity of the extracted information related to the determined retrieval place. | 03-26-2015 |
20150106340 | SYSTEM FOR AUTOMATICALLY DETECTING ABNORMALITIES STATISTICAL DATA ON USAGE, METHOD THEREFOR, AND APPARATUS APPLIED TO SAME - The present invention discloses a system for automatically detecting abnormalities in statistical data on usage, to a method for same, and to an apparatus applied to same. Namely, the present invention can increase the reliability and accuracy of statistical data on usage by: collecting statistical data on usage, the data relating to the usage of electronic information from a plurality of information-providing platform apparatuses each issuing separate electronic information; and, from among the collected statistical data on usage, determining, as data to be subjected to abnormality detection, only the statistical data on usage that corresponds to a reference data format, and detecting abnormalities for each type from the statistical data on usage determined as the data to be subjected to abnormality detection. | 04-16-2015 |
20150106341 | DATA PROFILING - Processing data includes profiling data from a data source, including reading the data from the data source, computing summary data characterizing the data while reading the data, and storing profile information that is based on the summary data. The data is then processed from the data source. This processing includes accessing the stored profile information and processing the data according to the accessed profile information. | 04-16-2015 |
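The read-while-summarizing step this entry describes can be sketched as a single pass over the data source; the summary statistics chosen (counts, nulls, cardinality) and the field names are illustrative assumptions:

```python
def profile(records, fields):
    """Single pass over the data source: compute summary data while
    reading, then return profile information based on it."""
    summary = {f: {"count": 0, "nulls": 0, "distinct": set()} for f in fields}
    for rec in records:
        for f in fields:
            s = summary[f]
            s["count"] += 1
            v = rec.get(f)
            if v is None:
                s["nulls"] += 1
            else:
                s["distinct"].add(v)
    return {f: {"count": s["count"], "nulls": s["nulls"],
                "cardinality": len(s["distinct"])}
            for f, s in summary.items()}
```

Later processing can then consult the stored profile information, for example skipping null-heavy fields or sizing hash tables from the cardinality figures.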
20150112948 | DYNAMICALLY SCALABLE DISTRIBUTED HETEROGENOUS PLATFORM RELATIONAL DATABASE - Disclosed embodiments provide a dynamically scalable distributed heterogeneous platform relational database system architecture for collection, management and dissemination of data, wherein the architecture is scalable both in terms of the number of servers making up the distributed database and the topology of the DDB, and wherein database servers may be added or removed without system interruption, and the topology of the DDB can be dynamically morphed. | 04-23-2015 |
20150120676 | AUTOMATICALLY PUBLISHING COURSE OFFERINGS FOR DIFFERENT TYPES OF COURSES ACCORDING TO A PLURALITY OF POLICIES AND EDUCATIONAL INSTITUTIONS - A method and apparatus for automatically publishing course offerings for different types of courses according to a plurality of policies and templates is presented herein. Instructors and/or administrators for a course create course records for each course that will be offered at a particular educational institution. The course record includes, and/or is associated with, data that indicates when the course will start, what assets should be published in the course offering, what template should be used, when a course is eligible to be automatically published, and/or when a course should be published by. When courses are eligible to be published, a controller determines what priority to assign each course. The controller manages a pool of course publishing processes to publish each course according to the policies and templates defined by each course's educational institution. The controller also notifies administrators when a course publishing process fails to publish a course. | 04-30-2015 |
20150120677 | VALIDATION OF LOG FORMATS - Systems and methods for validation of log formats are described herein. Log data is stored via a logging service in a data store or other storage system. An example log or proposed log format is received by the logging service. The proposed log format is validated against validation rules provided by log consumers. | 04-30-2015 |
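The consumer-rule check this entry describes can be sketched minimally; the rule shape (each consumer requiring a set of fields) is an assumption, since the patent leaves the rule language open:

```python
def validate_log_format(proposed_fields, consumer_rules):
    """Validate a proposed log format against rules provided by log
    consumers; returns (consumer, missing fields) pairs for every
    violation, so an empty list means the format is accepted."""
    violations = []
    for consumer, required in consumer_rules.items():
        missing = [f for f in required if f not in proposed_fields]
        if missing:
            violations.append((consumer, missing))
    return violations
```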
20150134620 | SYSTEM AND METHOD FOR ANALYZING AND VALIDATING OIL AND GAS WELL PRODUCTION DATA - A system and method for analyzing and validating oil and gas well production data is disclosed. The system includes a network, a server connected to the network, and a set of wells connected to the network. In a preferred embodiment, the server is programmed to store and execute the method. The method includes the steps of collecting a set of data from the set of wells, performing a first RPI® evaluation on the set of data, creating a matched data set from the set of data, segregating the matched data set into a set of comparison groups, normalizing each comparison group of the set of comparison groups, calculating a set of performance metrics between a subset of the set of comparison groups, and calculating a probability for each performance metric of the set of performance metrics. | 05-14-2015 |
20150149416 | SYSTEM AND METHOD FOR NEGOTIATED TAKEOVER OF STORAGE OBJECTS - A system and method of negotiated takeover of storage objects includes one or more processors, a storage controller, and memory coupled to the one or more processors. The memory stores a data structure that includes information about a plurality of storage objects manageable by the storage controller. The storage controller is configured to assume, one by one, current ownership of a first subset of the storage objects and assume, concurrently, current ownership of a second subset of the storage objects. The first subset of storage objects and the second subset of storage objects are currently owned by a second storage server coupled to the storage server. In some embodiments, current ownership of the first subset of storage objects is transferred by iteratively detecting a particular storage object from the first subset of the storage objects whose current ownership can be assumed and bringing the particular storage object online. | 05-28-2015 |
20150149417 | WEB-BASED DEBUGGING OF DATABASE SESSIONS - A system includes reception, from a first user, of a first web-protocol request to establish a first database server session, establishment of the first database server session in response to the first request, reception, from a second user, of a second web-protocol request to establish a second database server session and to communicate with the second database server session via a non-transient connection, establishment of the second database server session in response to the second request, reception, from the second user, of a third web-protocol request to attach the second database server session to the first database server session, attachment of the second database server session to the first database server session, and transmission of debugging information of the first database server session to the second user via the non-transient connection. | 05-28-2015 |
20150310055 | USING LINEAGE TO INFER DATA QUALITY ISSUES - Identifying data quality along a data flow. A method includes identifying quality metadata for two or more datasets. The quality metadata defines one or more of quality of a data source, accuracy of a dataset, completeness of a dataset, freshness of a dataset, or relevance of a dataset. At least some of the metadata is based on results of operations along a data flow. Based on the metadata, the method includes creating one or more quality indexes for the datasets. The one or more quality indexes include a characterization of quality of two or more datasets. | 10-29-2015 |
20150310166 | METHOD AND SYSTEM FOR PROCESSING DATA FOR EVALUATING A QUALITY LEVEL OF A DATASET - A method processes data for evaluating the quality level of an original dataset. The original dataset is obtained from an automated sequencing of a chain of nucleotides and represents a plurality of total mapped reads. The method includes sampling the plurality of total mapped reads of the original dataset to produce a subset of mapped reads. The method also includes computing a dispersion indicator for the subset. The dispersion indicator represents divergence between an actual read count intensity and a theoretical read count intensity. The actual read count corresponds to the number of sampled mapped reads. The theoretical read count corresponds to a theoretical number of sampled mapped reads, which does not depend on the current sampling. | 10-29-2015 |
20150331941 | Audio File Quality and Accuracy Assessment - Disclosed computer-based systems and methods for analyzing a plurality of audio files corresponding to text-based news stories and received from a plurality of audio file creators are configured to (i) compare quality and/or accuracy metrics of individual audio files against corresponding quality and/or accuracy thresholds, and (ii) based on the comparison: (a) accept audio files meeting the quality and/or accuracy thresholds for distribution to a plurality of subscribers for playback, (b) reject audio files failing to meet one or more certain quality and/or accuracy thresholds, (c) remediate audio files failing to meet certain quality thresholds, and (d) designate for human review, audio files failing to meet one or more certain quality and/or accuracy thresholds by a predetermined margin. | 11-19-2015 |
20150379064 | DEPENDENCY MANAGEMENT DURING MODEL COMPILATION OF STATISTICAL MODELS - The disclosed embodiments provide a method and system for processing data. During operation, the system obtains a dependency graph associated with feature selection in a statistical model, wherein nodes in the dependency graph include one or more feature sources, one or more transformers, and an assembler. Next, the system uses the dependency graph to derive an evaluation order associated with the nodes. The system then compiles a set of configurations for the statistical model according to the evaluation order. | 12-31-2015 |
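Deriving an evaluation order from a dependency graph of feature sources, transformers, and an assembler, as this entry describes, is a topological sort; a minimal sketch using Kahn's algorithm (the graph encoding and node names are assumptions):

```python
from collections import deque

def evaluation_order(graph):
    """Topological sort (Kahn's algorithm) of a dependency graph
    mapping each node to the nodes it depends on."""
    indegree = {n: len(deps) for n, deps in graph.items()}
    dependents = {n: [] for n in graph}
    for node, deps in graph.items():
        for d in deps:
            dependents[d].append(node)
    ready = deque(sorted(n for n, k in indegree.items() if k == 0))
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for m in sorted(dependents[node]):
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(graph):
        raise ValueError("cycle in dependency graph")
    return order
```

Configurations for the statistical model would then be compiled by visiting nodes in the returned order, so every transformer sees its feature sources first and the assembler comes last.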
20160070771 | READ DESCRIPTORS AT HETEROGENEOUS STORAGE SYSTEMS - In response to a read request directed to a first data store of a storage group, a state transition indicator is identified, corresponding to a modification that has been applied at the data store before a response to the read is prepared. A read descriptor that includes the state transition indicator and read repeatability verification metadata is prepared. The metadata can be used to check whether the read request is a repeatable read. The read descriptor is transmitted to a client-side component of the storage group. | 03-10-2016 |
20160078061 | Long-Term Data Storage Service for Wearable Device Data - Methods and apparatus for storing data about biological entities are provided. A computing device can receive a plurality of data items about a biological entity from a plurality of sources. The computing device can verify each data item of the plurality of data items using the computing device by at least: determining a source of the data item from among the plurality of sources, determining a provenance for the data item associated with the source of the data item, and verifying that the data item is associated with the biological entity based at least on the provenance for the data item associated with the source of the data item. After verifying that a particular data item is associated with the biological entity, the computing device can store the particular data item in a data log associated with the biological entity. | 03-17-2016 |
20160078078 | AUDITING OF WEB-BASED VIDEO - A method for auditing a web-based video can comprise receiving validation information associated with one or more video files that are accessible on a webpage. The validation information can comprise one or more time intervals associated with at least one video file. Additionally, the validation information can comprise tag data relating to a tag that is associated with the at least one video file. The method can also request, through a network connection, the at least one video file. The method can then execute the at least one video file. Executing the at least one video file can cause a tag to fire. Additionally, the method can validate the tag by determining whether the tag conforms to the received tag data. | 03-17-2016 |
20160154841 | METHODS AND APPARATUS FOR ADMINISTERING A MUTUALLY OWNED SECURE EMAIL AND FILE REPOSITORY FOR FAMILY MEMBERS | 06-02-2016 |
20160154866 | EFFICIENT DATA MANIPULATION SUPPORT | 06-02-2016 |
20160188631 | LOCATING CALL MEASUREMENT DATA - A server includes a time-series generator to receive a sequence of unlabeled data records for a first user equipment. The unlabeled data records include values of measurements performed by the first user equipment on signals received from at least one base station. The server also includes a localization engine to estimate locations of the unlabeled data records based on the values of the measurements, a labeled dataset representing a channel model of a geographic area, and a map representative of the geographic area. | 06-30-2016 |
20160203338 | METHODS AND SYSTEMS FOR DETECTING DEVICE OR CARRIER CHANGE CONVERSIONS | 07-14-2016 |