Patent application number | Description | Published |
20090007094 | Loosely coupled product install and configuration - A method, system and program are provided for managing the installation and configuration of a software product by using a proxy service to loosely couple the installation and/or configuration of constituent modules within the installation/configuration flow of the software product. The proxy service invokes the installation/configuration processing of an existing software component, thereby reducing the complexity associated with installing new component installation processes every time a component is to be supported, especially where the software products and new component(s) do not share the same installation/configuration platforms. | 01-01-2009 |
20090007097 | Product install and configuration providing choice of new installation and re-use of existing installation - A method, system and program are provided for managing the installation and configuration of a software product by using a proxy service to loosely couple the installation and/or configuration of constituent modules within the installation/configuration flow of the software product. The proxy service invokes the installation/configuration processing of an existing software component, thereby reducing the complexity associated with installing new component installation processes every time a component is to be supported, especially where the software products and new component(s) do not share the same installation/configuration platforms. | 01-01-2009 |
20090138843 | SYSTEM AND METHOD FOR EVALUATING SOFTWARE SUSTAINABILITY - A system and method for evaluating software sustainability are provided. The illustrative embodiments provide code scanning tools for identifying authors of portions of a software product and various attributes about the development, maintenance, and improvement of portions of the software product over time. This information may be correlated with organizational information to identify portions of the software product that may be lacking in sustainability by the persons currently associated with the software organization. Moreover, this information may be used to obtain information regarding the relative quality of the composition or conception of portions of the software product, portions of the software product that have required a relatively larger amount of resources to develop over time, a relative indication of which portions of the software product are “harder” or “easier” to sustain and who is associated with those portions of the software product, and the like. | 05-28-2009 |
20100083359 | TRUSTED DATABASE AUTHENTICATION THROUGH AN UNTRUSTED INTERMEDIARY - A method, system and computer-usable medium are disclosed for validating user credentials submitted to a data source by an untrusted intermediary. An untrusted intermediary attempts to access a data source on behalf of a user. The untrusted intermediary challenges the user to provide credentials of the type and format required to access the data provided by the data source. The user's trust client connects to an authentication service and identification credentials of the required type and format are generated. The identification credentials are conveyed to the user's trust client, which then provides them to the user's client, which in turn conveys them to the untrusted intermediary. The untrusted intermediary then presents the identification credentials to an authentication plug-in of the data source. The authentication plug-in validates the authenticity of the provided credentials with their associated authentication service. Once the credentials are successfully validated, the requested data is provided to the user's client by the untrusted intermediary. | 04-01-2010 |
20110161332 | Method and System for Policy Driven Data Distribution - A method, system and computer-usable medium are disclosed for controlling the distribution of data. Data stored in a datastore is filtered according to a data release policy to generate filtered data. A data release policy agreement, corresponding to the data release policy, is generated. The filtered data and the data release policy agreement are then provided to an information consumer. The data release policy agreement is then used to enforce the data release policy. | 06-30-2011 |
20110295866 | ONTOLOGY GUIDED REFERENCE DATA DISCOVERY - Mapping and translating reference data from multiple databases using an enterprise ontology. This is achieved by various means, including mapping values of a first database to corresponding fields within the ontology, mapping values of a second database to corresponding fields within the ontology, and determining relationships between the values of the first database and the values of the second database based on their respective mappings to common fields within the ontology. | 12-01-2011 |
20110321136 | GENERALIZED IDENTITY MEDIATION AND PROPAGATION - Provided are techniques for providing security in a computing system with identity mediation policies that are enterprise service bus (ESB) independent. A mediator component performs service-level operations such as message brokering, identity mediation, and transformation to enhance interoperability among service consumers and service providers. A mediator component may also delegate identity-related operations to a token service or handler. Identity mediation may include such operations as identity determination, or “identification,” authentication, authorization, identity transformation, and security audit. | 12-29-2011 |
20120174185 | GENERALIZED IDENTITY MEDIATION AND PROPAGATION - Provided are techniques for providing security in a computing system with identity mediation policies that are enterprise service bus (ESB) independent. A mediator component performs service-level operations such as message brokering, identity mediation, and transformation to enhance interoperability among service consumers and service providers. A mediator component may also delegate identity-related operations to a token service or handler. Identity mediation may include such operations as identity determination, or “identification,” authentication, authorization, identity transformation, and security audit. | 07-05-2012 |
20120191731 | Method and System for Policy Driven Data Distribution - A method, system and computer-usable medium are disclosed for controlling the distribution of data. Data stored in a datastore is filtered according to a data release policy to generate filtered data. A data release policy agreement, corresponding to the data release policy, is generated. The filtered data and the data release policy agreement are then provided to an information consumer. The data release policy agreement is then used to enforce the data release policy. | 07-26-2012 |
20120254205 | ONTOLOGY GUIDED REFERENCE DATA DISCOVERY - Mapping and translating reference data from multiple databases using an enterprise ontology. This is achieved by various means, including mapping values of a first database to corresponding fields within the ontology, mapping values of a second database to corresponding fields within the ontology, and determining relationships between the values of the first database and the values of the second database based on their respective mappings to common fields within the ontology. | 10-04-2012 |
20130031117 | Auto-Mapping Between Source and Target Models Using Statistical and Ontology Techniques - A system maps data within a data source to a target data model, and comprises a computer system including at least one processor. The system determines an identifier for each data object of the data source based on the data within that data object, wherein the identifier indicates for that data object a corresponding concept within a domain ontological representation of a data model of the data source. The determined identifiers for the data objects of the data source are compared to the target data model to determine mappings between the data objects of the data source and the target data model. Data objects from the data source are extracted for the target data model in accordance with the mappings. Present invention embodiments further include a method and computer program product for mapping data within a data source to a target data model. | 01-31-2013 |
20130205252 | CONVEYING HIERARCHICAL ELEMENTS OF A USER INTERFACE - Techniques are disclosed for generating a view of a data flow model. One or more groupings of data flow objects in the data flow model is determined, based on an ontology. At least a first one of the groupings is collapsed in the view. The view is output for display in a user interface configured to selectively expand and collapse the first group based on user input. | 08-08-2013 |
20130212072 | Generating and Utilizing a Data Fingerprint to Enable Analysis of Previously Available Data - According to one embodiment of the present invention, a system analyzes data in response to detecting occurrence of an event, and includes a computer system including at least one processor. The system maps fields between the data and a fingerprint definition identifying relevant fields of the data to produce a fingerprint for the data. The data is deleted after occurrence of the event. The produced fingerprint is stored in a data repository, and retrieved in response to detection of the event occurrence after the data has been deleted. The system analyzes the retrieved fingerprint to evaluate an impact of the event on corresponding deleted data. Embodiments of the present invention further include a method and computer program product for analyzing data in response to detecting occurrence of an event in substantially the same manner described above. | 08-15-2013 |
20130212073 | Generating and Utilizing a Data Fingerprint to Enable Analysis of Previously Available Data - According to one embodiment of the present invention, a system analyzes data in response to detecting occurrence of an event, and includes a computer system including at least one processor. The system maps fields between the data and a fingerprint definition identifying relevant fields of the data to produce a fingerprint for the data. The data is deleted after occurrence of the event. The produced fingerprint is stored in a data repository, and retrieved in response to detection of the event occurrence after the data has been deleted. The system analyzes the retrieved fingerprint to evaluate an impact of the event on corresponding deleted data. Embodiments of the present invention further include a method and computer program product for analyzing data in response to detecting occurrence of an event in substantially the same manner described above. | 08-15-2013 |
20130238550 | METHOD TO DETECT TRANSCODING TABLES IN ETL PROCESSES - Techniques are disclosed for identifying transcoding tables in an Extract-Transform-Load (ETL) process, by identifying, by operation of one or more computer processors, records passing through an operator configured to replace values in the records with values from at least one table linked to the operator before being sent to an output table, wherein the operator specifies an operation for extracting, transforming, or loading data stored in one or more source systems into storage by a target system, and evaluating at least a first table linked to the operator to determine whether the first table is a transcoding table by assigning a score to the first table, wherein the score is indicative of the likelihood that the first table is a transcoding table, wherein a transcoding table is used to harmonize values from a plurality of tables in the one or more source systems to a table in the target. | 09-12-2013 |
20130238557 | MANAGING TENANT-SPECIFIC DATA SETS IN A MULTI-TENANT ENVIRONMENT - A method, computer program product and system for managing tenant-specific data sets in a multi-tenant system, by receiving a request to convert a data set in a physical data store from a first type of multi-tenant deployment to a second type of multi-tenant deployment, retrieving tenant identification metadata identifying a tenant making the request, modifying the data set in the physical data store based on the second type of multi-tenant deployment, and modifying metadata associated with an abstraction layer to allow the modified data set to be accessed. | 09-12-2013 |
20130238596 | METHOD TO DETECT REFERENCE DATA TABLES IN ETL PROCESSES - A method, system and computer program product for identifying reference data tables in an Extract-Transform-Load (ETL) process, by identifying, by operation of one or more computer processors, at least a first reference data operator in the process, wherein the first reference data operator references one or more tables and evaluating at least a first table referenced by the reference data operator to determine whether the first table is a reference data table by assigning a score to the first table, wherein the score is indicative of the likelihood that the first table is a reference data table and wherein a reference data table contains a set of values that describes other data. | 09-12-2013 |
20130238641 | MANAGING TENANT-SPECIFIC DATA SETS IN A MULTI-TENANT ENVIRONMENT - A method, computer program product and system for managing tenant-specific data sets in a multi-tenant system, by receiving a request to convert a data set in a physical data store from a first type of multi-tenant deployment to a second type of multi-tenant deployment, retrieving tenant identification metadata identifying a tenant making the request, modifying the data set in the physical data store based on the second type of multi-tenant deployment, and modifying metadata associated with an abstraction layer to allow the modified data set to be accessed. | 09-12-2013 |
20130275170 | INFORMATION GOVERNANCE CROWD SOURCING - A method, computer program product, and system for information governance crowd sourcing by, responsive to receiving a data quality exception identifying one or more data quality errors in a data store, identifying a performance level required to correct the data quality errors, selecting, from a crowd hierarchy, a first one or more crowds meeting the defined performance level, wherein the crowd hierarchy ranks the performance of one or more crowds, and routing, by operation of one or more computer processors, the one or more data quality errors to the selected crowds for correction. | 10-17-2013 |
20130275803 | INFORMATION GOVERNANCE CROWD SOURCING - A method, computer program product, and system for information governance crowd sourcing by, responsive to receiving a data quality exception identifying one or more data quality errors in a data store, identifying a performance level required to correct the data quality errors, selecting, from a crowd hierarchy, a first one or more crowds meeting the defined performance level, wherein the crowd hierarchy ranks the performance of one or more crowds, and routing, by operation of one or more computer processors, the one or more data quality errors to the selected crowds for correction. | 10-17-2013 |
20140006339 | DETECTING REFERENCE DATA TABLES IN EXTRACT-TRANSFORM-LOAD PROCESSES | 01-02-2014 |
20140136576 | DESTRUCTION OF SENSITIVE INFORMATION - Provided are techniques for deleting sensitive information in a database. One or more objects in a database that are accessed by a statement are identified. It is determined that at least one object among the identified one or more objects contains sensitive information by checking an indicator for the at least one object. One or more security policies associated with the at least one object are identified. The identified one or more security policies are implemented for the at least one object to delete sensitive information. | 05-15-2014 |
20140136577 | DESTRUCTION OF SENSITIVE INFORMATION - Provided are techniques for deleting sensitive information in a database. One or more objects in a database that are accessed by a statement are identified. It is determined that at least one object among the identified one or more objects contains sensitive information by checking an indicator for the at least one object. One or more security policies associated with the at least one object are identified. The identified one or more security policies are implemented for the at least one object to delete sensitive information. | 05-15-2014 |
20140164399 | INFERRING VALID VALUES FOR OBJECTS IN A GLOSSARY USING REFERENCE DATA - Method, system, and computer program product to improve a coverage of a plurality of classifications between a plurality of terms in a glossary and a set of values in a reference data management system, by identifying a first classification, of the plurality of classifications in the glossary, between a first term in the glossary and a first set of values in the reference data management system, detecting a relationship between the first set of values and a second set of values in the reference data management system, and upon determining that a relevance score for a relevant value from the second set of values exceeds a predefined threshold, identifying the relevant value to be classified with the term in the glossary, wherein the glossary is configured to create a second classification between the first term and the relevant value. | 06-12-2014 |
20140222991 | SENTRY FOR INFORMATION TECHNOLOGY SYSTEM BLUEPRINTS - Lifecycle management for blueprints of information technology systems includes determining, using a processor, a component referenced by a blueprint defining an information technology system and determining a component tool used to manage the component. The component is registered with a sensor within the component tool. Responsive to detecting a change in status of the component within the component tool, the sensor sends a notification. | 08-07-2014 |
20140223001 | SENTRY FOR INFORMATION TECHNOLOGY SYSTEM BLUEPRINTS - Lifecycle management for blueprints of information technology systems includes determining, using a processor, a component referenced by a blueprint defining an information technology system and determining a component tool used to manage the component. The component is registered with a sensor within the component tool. Responsive to detecting a change in status of the component within the component tool, the sensor sends a notification. | 08-07-2014 |
20140280342 | SECURE MATCHING SUPPORTING FUZZY DATA - Provided are techniques for secure matching supporting fuzzy data. A first bloom filter for a first data element is retrieved, wherein each of the characters in the data element has been encrypted with a beginning offset position of the character and encrypted with an end offset position of the character to produce two encrypted values that are added to the first bloom filter. A second bloom filter for a second data element is retrieved. The first bloom filter and the second bloom filter are compared to determine whether there is a match between the first data element and the second data element. | 09-18-2014 |
20140281856 | DETERMINING LINKAGE METADATA OF CONTENT OF A TARGET DOCUMENT TO SOURCE DOCUMENTS - Provided are a computer program product, system, and method for determining linkage metadata of content of a target document to source documents. In response to a determination that a target fragment in a target document matches a source fragment in a source document, linkage metadata is generated for the target fragment. | 09-18-2014 |
20140365106 | Method For Collecting And Processing Relative Spatial Data - A method and system are disclosed for determining relative connectedness from temporal data. A user iterates through a list of items implemented on a mobile device. As each item on the list is located it is marked to generate a corresponding timestamp. The timestamps are then used to generate a timestamped list, which in turn is processed to determine the amount of elapsed time between each item on the list being located. The timestamped list data is then processed to generate relative connectedness data, which in turn is processed to generate a relative connectedness graph. The relative connectedness graph is then processed to assign coordinates to each item on the list. In turn, the coordinates are used to generate a map of the items. | 12-11-2014 |
20140365423 | Method For Collecting And Processing Relative Spatial Data - A method and system are disclosed for determining relative connectedness from temporal data. A user iterates through a list of items implemented on a mobile device. As each item on the list is located it is marked to generate a corresponding timestamp. The timestamps are then used to generate a timestamped list, which in turn is processed to determine the amount of elapsed time between each item on the list being located. The timestamped list data is then processed to generate relative connectedness data, which in turn is processed to generate a relative connectedness graph. The relative connectedness graph is then processed to assign coordinates to each item on the list. In turn, the coordinates are used to generate a map of the items. | 12-11-2014 |
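Application 20140280342 above describes matching encrypted values by building a bloom filter from each data element, where every character is encrypted together with its begin offset and its end offset before being added to the filter. The sketch below is a minimal illustration of that idea, assuming a keyed HMAC as the encryption primitive and Jaccard similarity as the match score; the constants, function names, and scoring choice are illustrative assumptions, not details taken from the application.

```python
import hashlib
import hmac

BITS = 1024         # bloom filter size (illustrative assumption)
NUM_HASHES = 3      # bit positions derived per token (illustrative assumption)

def _positions(key: bytes, token: str) -> list[int]:
    """Derive NUM_HASHES bit positions for a token via keyed hashing."""
    positions = []
    for i in range(NUM_HASHES):
        digest = hmac.new(key, f"{i}:{token}".encode(), hashlib.sha256).digest()
        positions.append(int.from_bytes(digest[:4], "big") % BITS)
    return positions

def make_filter(key: bytes, value: str) -> set[int]:
    """Encode each character with its begin offset and its end offset,
    as the abstract describes, and add both encrypted values to the filter."""
    bloom: set[int] = set()
    n = len(value)
    for idx, ch in enumerate(value):
        begin_token = f"b:{idx}:{ch}"        # character tied to start offset
        end_token = f"e:{n - 1 - idx}:{ch}"  # character tied to end offset
        for token in (begin_token, end_token):
            bloom.update(_positions(key, token))
    return bloom

def similarity(f1: set[int], f2: set[int]) -> float:
    """Jaccard similarity between two filters, used here as the fuzzy score."""
    return len(f1 & f2) / len(f1 | f2) if f1 | f2 else 1.0

key = b"shared-secret"  # both parties must derive filters with the same key
a = make_filter(key, "JOHNSON")
b = make_filter(key, "JOHNSTON")
c = make_filter(key, "SMITH")
```

Because near-matches like "JOHNSON" and "JOHNSTON" share many character/offset tokens, their filters overlap far more than those of unrelated strings, allowing fuzzy comparison without exposing plaintext.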
Patent application number | Description | Published |
20090259753 | Specializing Support For A Federation Relationship - The invention provides federated functionality within a data processing system by means of a set of specialized runtimes, which are instances of an application for providing federation services to requestors. Each of the plurality of specialized runtimes provides requested federation services for selected ones of the requestors according to configuration data of respective federation relationships of the requestors with the identity provider. The configuration data is dynamically retrieved during initialization of the runtimes, which allows the respective runtime to be specialized for a given federation relationship. Requests are routed to the appropriate specialized runtime using the first requestor identity and the given federation relationship. The data, which describes each federation relationship between the identity provider and each of the plurality of requestors, is configured prior to initialization of the runtimes. | 10-15-2009 |
20100106558 | Trust Index Framework for Providing Data and Associated Trust Metadata - An approach is provided in which facts are received and then one or more atomic fact trust analyses are performed on the facts. The atomic fact trust analyses result in various atomic trust factor scores. Composite trust analysis is performed using the atomic trust factor scores. The composite trust analyses result in composite trust factor scores. The atomic trust factor scores and the composite trust factor scores are stored in a trust index repository that is managed by a trust index framework. A request is then received for trusted data, the request being from an information consumer. The trust index framework then retrieves one of the composite trust factor scores from the trust index repository, with the retrieved composite trust factor score corresponding to the trusted data request, and the retrieved composite trust factor score is provided to the information consumer. | 04-29-2010 |
20100106559 | Configurable Trust Context Assignable to Facts and Associated Trust Metadata - An approach is provided for selecting a trust factor from trust factors that are included in a trust index repository. A trust metaphor is associated with the selected trust factor. The trust metaphor includes various context values. Range values are received and the trust metaphor, context values, and range values are associated with the selected trust factor. A request is received from a data consumer, the request corresponding to a trust factor metadata score that is associated with the selected trust factor. The trust factor metadata score is retrieved and matched with the range values. The matching results in one of the context values being selected based on the retrieved trust factor metadata score. The selected context value is then provided to the data consumer. | 04-29-2010 |
20100106560 | Generating Composite Trust Value Scores Based on Assignable Priorities, Atomic Metadata Values and Associated Composite Trust Value Scores - An approach is provided in which atomic trust scores are computed using atomic trust factors that are applied to a plurality of metadata. A first set of composite trust scores is computed using some of the atomic trust scores. The composite trust scores are computed using a first set of algorithms. Some of the algorithms use a factor weighting value as input to the algorithm. A second set of composite trust scores is computed using some of the composite trust scores that were included in the first set of scores as input. A fact and one of the second set of composite trust scores are presented to a user. The presented composite trust score provides a trustworthiness value that corresponds to the presented fact. | 04-29-2010 |
20100107244 | Trust Event Notification and Actions Based on Thresholds and Associated Trust Metadata Scores - An approach is provided for selecting one or more trust factors from trust factors included in a trust index repository. Thresholds are identified corresponding to one or more of the selected trust factors. Actions are identified to perform when the selected trust factors reach the corresponding threshold values. The identified thresholds, identified actions, and selected trust factors are stored in a data store. The selected trust factors are monitored by comparing one or more trust metadata scores with the stored identified thresholds. The stored identified actions that correspond to the selected trust factors are performed when one or more of the trust metadata scores reach the identified thresholds. At least one of the actions includes an event notification that is provided to a trust data consumer. | 04-29-2010 |
20100268934 | METHOD AND SYSTEM FOR SECURE DOCUMENT EXCHANGE - A document management (DM), data leak prevention (DLP) or similar application in a data processing system is instrumented with a document protection service provider interface (SPI). The service provider interface is used to call an external function, such as an encryption utility, that is used to facilitate secure document exchange between a sending entity and a receiving entity. The encryption utility may be configured for local download to and installation in the machine on which the SPI is invoked, but a preferred approach is to use the SPI to invoke an external encryption utility as a “service.” In such case, the external encryption utility is implemented by a service provider. When the calling program invokes the SPI, preferably the user is provided with a display panel. Using that panel, the end user provides a password that is used for encryption key generation, together with an indication of the desired encryption strength. The service provider uses the password to generate the encryption key. In one embodiment, the service provider provides the key to the service provider interface, which then uses the key to encrypt the document and to complete the file transfer operation. In the alternative, the service provider itself performs the document or file encryption. The service provider interface also preferably generates and sends an email or other message to the receiving entity that includes the key or a link to enable the receiving entity to retrieve the key. This approach obviates the need for the sending and receiving entities to install and manage matched or other special-purpose encryption utilities. | 10-21-2010 |
20110112974 | Distributed Policy Distribution For Compliance Functionality - A multi-component auditing environment uses a set of log-enabled components that are capable of being triggered during an information flow in a data processing system. A “master” compliance component receives data from each log-enabled component in the set of log-enabled components, the data indicating a set of logging properties that are associated with or provided by that log-enabled component. The master compliance component determines, for a given compliance policy, which of a set of one or more events are required from one or more of the individual log-enabled components in the set of log-enabled components. As a result of the determining step, the master compliance component then configures one or more of the individual log-enabled components, e.g., by generating one or more configuration events that are then sent to the one or more individual components. This configuration may take place remotely, i.e., over a network connection. As a result of the information flow, audit or other logs are then collected from the log-enabled components. The master compliance component evaluates the collected logs to determine compliance with the compliance policy. As necessary, the master compliance component re-configures one or more log-enabled components in the set of log-enabled components to address any compliance issues arising from the evaluation. Thus, once a given compliance policy is specified, typically the individual log-enabled components in the multiple-component environment are not responsible for their own configuration, as that task is undertaken by the master compliance component. | 05-12-2011 |
20120239703 | PROVIDING HOMOGENEOUS VIEWS OF INFORMATION COLLECTIONS IN HETEROGENEOUS INFORMATION STORAGE SOURCES - A method, apparatus and computer program product, for generating a framework for supporting a homogeneous view of an information collection managed in a heterogeneous system of information storage sources. The framework includes an information collection data model mapped to an information source data model, and an information storage services data model mapped to the information source data model. The information collection data model defines information to be collected and stored as an information collection in one or more information storage sources. The information source data model references data sets containing the information defined in the information collection data model. The information storage services data model defines information storage services for accessing and performing operations on the one or more information storage sources storing the information collection. The framework allows a user to view and perform operations on the information collection without knowing how the information collection is stored. | 09-20-2012 |
20140101299 | TECHNIQUES FOR IMPLEMENTING INFORMATION SERVICES WITH TENANT-SPECIFIC SERVICE LEVEL AGREEMENTS - A technique for selecting an information service implementation includes receiving a service request that includes a tenant identifier that uniquely identifies a calling tenant. Transformation logic to service the service request is selected based on the received tenant identifier. One or more data sources and one or more data targets are selected for the service request based on the received tenant identifier. Data from the selected data sources is processed using the selected transformation logic and the processed data is stored at the selected data targets. | 04-10-2014 |
20140143878 | Security Capability Reference Model for Goal-based Gap Analysis - Gap analysis is performed on security capabilities of a computer system compared to a desired or targeted security model according to one or more security requirements by providing a data structure of security capabilities of a computer system under analysis, wherein each capability is classified in a formal security capability reference model with a means having a set of attributes and a goal; determining the security capabilities of the deployed system-under-analysis; matching the security capabilities of the deployed system-under-analysis with the security capabilities defined in the data structure; determining one or more gaps in security capabilities between the deployed system and a security reference model goal; and displaying the gaps to a user in a report. | 05-22-2014 |
20140143879 | Security Capability Reference Model for Goal-based Gap Analysis - Gap analysis is performed on security capabilities of a computer system compared to a desired or targeted security model according to one or more security requirements by providing a data structure of security capabilities of a computer system under analysis, wherein each capability is classified in a formal security capability reference model with a means having a set of attributes and a goal; determining the security capabilities of the deployed system-under-analysis; matching the security capabilities of the deployed system-under-analysis with the security capabilities defined in the data structure; determining one or more gaps in security capabilities between the deployed system and a security reference model goal; and displaying the gaps to a user in a report. | 05-22-2014 |
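The trust index applications above (20100106558 through 20100107244) describe rolling weighted atomic trust scores up into first-level composite scores, and feeding those composites into second-level composites. The sketch below is one minimal way such a roll-up could look, assuming scores normalized to the 0.0-1.0 range and a simple weighted average; the factor names, weights, and averaging formula are illustrative assumptions, since the abstracts do not specify the algorithms.

```python
from dataclasses import dataclass

@dataclass
class AtomicScore:
    name: str
    value: float  # assumed normalized to the range 0.0 - 1.0

def composite_score(scores: list[AtomicScore], weights: dict[str, float]) -> float:
    """Weighted average of trust scores; the weights play the role of the
    assignable priorities mentioned in the abstracts (illustrative choice)."""
    total_weight = sum(weights.get(s.name, 1.0) for s in scores)
    if total_weight == 0:
        return 0.0
    weighted_sum = sum(s.value * weights.get(s.name, 1.0) for s in scores)
    return weighted_sum / total_weight

# First-level composite built from atomic metadata scores (names are made up).
atomic = [AtomicScore("completeness", 0.9),
          AtomicScore("freshness", 0.6),
          AtomicScore("lineage", 0.8)]
quality = composite_score(atomic, {"completeness": 2.0,
                                   "freshness": 1.0,
                                   "lineage": 1.0})

# Second-level composite that consumes the first-level result, mirroring the
# two-stage composition the abstracts describe.
overall = composite_score([AtomicScore("quality", quality),
                           AtomicScore("security", 0.7)],
                          {"quality": 3.0, "security": 1.0})
```

The two-stage structure keeps each composite explainable: a consumer shown `overall` can drill down into `quality` and from there into the atomic factors that produced it.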