BMC SOFTWARE, INC. Patent applications |
Patent application number | Title | Published |
20150188768 | SERVER PROVISIONING BASED ON JOB HISTORY ANALYSIS - A system includes a runbook manager configured to generate a runbook governing future server provisioning jobs, based on analyzed job history. The runbook manager includes a history analyzer configured to analyze a job history for a plurality of provisioning jobs performed to provision a plurality of servers, to thereby obtain the analyzed job history. | 07-02-2015 |
20150186447 | LIFECYCLE REFERENCE PARTITIONING FOR DATABASE OBJECTS - In one general aspect, a computer-implemented system for reference partitioning database objects by lifecycle state includes at least one hardware processor, at least one database environment, the database environment supporting triggers and partitioning, at least one application program, and memory storing a lifecycle metadata framework. The lifecycle metadata framework identifies classes in a ragged hierarchy of database objects, identifies at least one class as a root of the hierarchy, identifies, for each non-root class, a lifecycle inheritance function for the class, and identifies, for each parent class-child class pair in the hierarchy, a relation-join query, the relation-join query being a join between tables in the database environment onto which the parent class and child class are persisted. The memory also stores triggers that use the framework to maintain lifecycle states for non-root database objects. | 07-02-2015 |
20150095089 | WORKLOAD MANAGEMENT FOR LICENSE COST OPTIMIZATION - A workload change evaluator may receive workload metrics characterizing a plurality of workloads executed within a license environment during a license period, and cost metrics characterizing license costs incurred by the license environment during the license period. A baseline model generator may generate a baseline model providing a time-based contribution of each of the plurality of workloads to the license cost during the license period. A cost estimator may receive a potential workload change, and may estimate a license cost change caused by the potential workload change, based on the baseline model. | 04-02-2015 |
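A minimal Python sketch of the baseline-model idea in application 20150095089: attribute the period's license cost to workloads in proportion to measured usage, then estimate the cost delta of a workload change. All names (`baseline_model`, `estimate_cost_change`, the linear attribution) are illustrative assumptions, not the patented method.

```python
def baseline_model(workload_metrics, total_license_cost):
    """Attribute the period's license cost to each workload in
    proportion to its measured resource usage (assumed linear model)."""
    total_usage = sum(workload_metrics.values())
    return {name: total_license_cost * usage / total_usage
            for name, usage in workload_metrics.items()}

def estimate_cost_change(baseline, workload_metrics, name, new_usage):
    """Estimate the license-cost delta if one workload's usage changes,
    assuming cost scales linearly with attributed usage."""
    old_usage = workload_metrics[name]
    per_unit = baseline[name] / old_usage
    return (new_usage - old_usage) * per_unit

metrics = {"batch": 40.0, "online": 60.0}
baseline = baseline_model(metrics, 1000.0)
delta = estimate_cost_change(baseline, metrics, "batch", 50.0)
```

A real baseline would be time-based, as the abstract states; the proportional split here stands in for that contribution model.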
20140282519 | MANAGING A SERVER TEMPLATE - A non-transitory computer-readable storage medium may comprise instructions for managing a server template stored thereon. When executed by at least one processor, the instructions may be configured to cause at least one computing system to at least convert the server template to a corresponding virtual machine, manage the corresponding virtual machine, and convert the corresponding virtual machine back into a template format. | 09-18-2014 |
20140282225 | OFF-SCREEN WINDOW CONTROLS - A window detector may detect that an off-screen portion of a window is not visible within a display that is providing the window, the off-screen portion including at least one window control element. A control identifier may determine at least one supplemental control element corresponding to, and providing analogous functionality of, the at least one window control element. A control view generator may provide the at least one supplemental control element visibly within the display. A supplemental window controller may execute the analogous functionality with respect to the window, based on receipt of user input by way of the at least one supplemental control element. | 09-18-2014 |
20140282010 | STORY-MODE USER INTERFACE - A method includes displaying, in a single story-mode presentation on a user interface, information on events occurring in and/or related to a business process managed by a business process management application. The single story-mode presentation includes a time map navigation section that displays a time map of events relevant to a first business task or object of the business process along a first time line, and an event details section that contains information corresponding to the events displayed in the time map navigation section. | 09-18-2014 |
20140281891 | CONFIGURABLE USER INTERFACE INTERACTIONS USING A STATE MACHINE COMBINED WITH EVENT ORCHESTRATION - Disclosed is a method of displaying a user interface. The method includes defining a template for the user interface, the template including a plurality of display areas; defining a plurality of components, each configured to perform an associated user interface function and each associated with one of the display areas; defining a plurality of states, each including one or more of the plurality of components and each defining a configuration of the user interface; defining a table that defines a plurality of events associated with transitioning between states; and triggering a transition between states based on a look-up of the table with a received event. | 09-18-2014 |
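The state-machine-plus-event-table pattern in application 20140281891 can be sketched in a few lines of Python. The table keys, state names, and `handle_event` helper below are hypothetical illustrations of the look-up mechanism, not BMC's implementation.

```python
# transition table: (current_state, event) -> next_state
TRANSITIONS = {
    ("list", "select_item"): "detail",
    ("detail", "back"): "list",
    ("detail", "edit"): "editor",
    ("editor", "save"): "detail",
}

# each state names the components shown in the template's display areas
STATE_COMPONENTS = {
    "list": ["search_bar", "results_grid"],
    "detail": ["record_view", "action_panel"],
    "editor": ["record_form", "save_button"],
}

def handle_event(state, event):
    """Look up (state, event) in the table; stay put if nothing matches."""
    return TRANSITIONS.get((state, event), state)

state = handle_event("list", "select_item")
components = STATE_COMPONENTS[state]
```

Because every transition lives in one table, the UI's behavior can be reconfigured by editing data rather than control flow, which is the appeal of this design.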
20140280130 | MULTI-ENTITY NORMALIZATION - In accordance with aspects of the disclosure, systems and methods are provided for normalizing data representing entities and relationships linking the entities including defining one or more graph rules describing searchable characteristics for the data representing the entities and relationships linking the entities, applying the one or more graph rules to the data representing the entities and the relationships linking the entities, identifying one or more matching instances between the one or more graph rules and the data representing the entities and the relationships linking the entities, and performing one or more actions to update the one or more matching instances between the one or more graph rules and the data representing the entities and the relationships linking the entities. | 09-18-2014 |
20140280068 | ADAPTIVE LEARNING OF EFFECTIVE TROUBLESHOOTING PATTERNS - The system may include a troubleshooting activity recorder configured to record troubleshooting sessions. Each troubleshooting session may include a sequence of queries and query results. The troubleshooting activity recorder may include a query transformer configured to transform the queries and the query results into transformed queries and transformed query results before recording the troubleshooting sessions. The troubleshooting activity recorder may be configured to record the transformed queries and the transformed query results as troubleshooting session information in a troubleshooting activity database. The system may include a troubleshooting pattern learning unit including a graph builder configured to generate a troubleshooting pattern graph having query nodes and links between the query nodes based on the troubleshooting session information. | 09-18-2014 |
20140279992 | STORING AND RETRIEVING CONTEXT SENSITIVE DATA IN A MANAGEMENT SYSTEM - A management system may include a reconciliation engine configured to reconcile a first instance of a resource object from a first data provider and a second instance of the resource object from a second data provider to obtain a reconciled resource object, and store the first instance, the second instance, and the reconciled resource object in datasets. The management system may include a context sensitive query engine configured to receive a context-sensitive query including context information identifying a source originally providing context sensitive data associated with a context-sensitive attribute, and retrieve the context sensitive data from one or more of the datasets based on the context information. | 09-18-2014 |
20140279977 | DATA ACCESS OF SLOWLY CHANGING DIMENSIONS - Disclosed is a method including storing selected historical persisted dimension attribute data utilizing a row insertion without updating all previous versions of the selected persisted dimension attribute, and generating a view of persisted dimension attribute data as dual values utilizing a star join. | 09-18-2014 |
20140279797 | BEHAVIORAL RULES DISCOVERY FOR INTELLIGENT COMPUTING ENVIRONMENT ADMINISTRATION - A management system for determining causal relationships among system entities may include a causal relationship detector configured to receive events from a computing environment having a plurality of entities, and detect causal relationships among the plurality of entities, during runtime of the computing environment, based on the events, and a rules converter configured to convert one or more of the causal relationships into at least one behavioral rule. The at least one behavioral rule may indicate a causal relationship between at least two entities of the plurality of entities. | 09-18-2014 |
20140278824 | SERVICE CONTEXT - According to one general aspect, a method may include displaying a user interface associated with the application. The user interface may provide a selection of a business service that is implemented within an Information Technology (IT) environment by at least one server and at least one business application executing on the at least one server. The method may include requesting a service status for the business service based on the selection, and receiving a database result regarding the business service from a database server. The database result may include performance information associated with the business service. The method may include displaying the service status as a user interface element viewable within the user interface of the application. The service status may provide the performance information that has been received within the database result. | 09-18-2014 |
20140278818 | BUSINESS DEVELOPMENT CONFIGURATION - In accordance with aspects of the disclosure, systems and methods are provided for configuring business development software for a modeled business environment. The systems and methods include simulating one or more business-related scenarios for managing situational events encountered within the modeled business environment, using scenario input data to generate data related to simulation results, and applying the simulation-result data to the modeled business environment to refine it, reconfiguring the business development software for the refined modeled business environment based on the simulation results. | 09-18-2014 |
20140278326 | SERVICE PLACEMENT TECHNIQUES FOR A CLOUD DATACENTER - A container set manager may determine a plurality of container sets, each container set specifying a non-functional architectural concern associated with deployment of a service within at least one data center. A decision table manager may determine a decision table specifying relative priority levels of the container sets relative to one another with respect to the deployment. A placement engine may determine an instance of an application placement model (APM), based on the plurality of container sets and the decision table, determine an instance of a data center placement model (DPM) representing the at least one data center, and generate a placement plan for the deployment, based on the APM instance and the DPM instance. | 09-18-2014 |
20140258988 | SELF-EVOLVING COMPUTING SERVICE TEMPLATE TRANSLATION - Methods and apparatus for automatically generating translation programs for translating computing services templates to service blueprints are disclosed. An example method includes generating a population of translation logic elements from a plurality of verified computing services template translation programs, where each of the verified programs is configured to correctly translate at least one computing services template of a plurality of known templates to a respective service blueprint. The example method further includes identifying a new computing services template and programmatically augmenting the population of translation logic elements. The example method also includes generating one or more additional translation programs based on the augmented population of translation logic elements and validating each of the one or more additional computing services template translation programs. Based on the validating, each of the one or more additional computing services template translation programs is added to the verified translation programs or is discarded. | 09-11-2014 |
20140258507 | SYSTEM AND METHODS FOR REMOTE ACCESS TO IMS DATABASES - Systems and methods are provided that allow client programs using IMS database access interfaces to access IMS database data available from IMS systems on remote logical partitions and remote zSeries mainframes rather than from a local IMS system. For example, a method may include intercepting an IMS request having a documented IMS request format from a client program executing on a source mainframe system. The method may also include selecting a destination mainframe system and sending a buffer including information from the request from the source mainframe system to the destination mainframe system and establishing, at the destination mainframe system, an IMS DRA connection with the IMS system from the request. The method may further include receiving a response from the IMS system, sending a buffer having information from the response from the destination mainframe system to the source mainframe system, and providing the information to the client program. | 09-11-2014 |
20140258335 | IMS DL/I Application Accelerator - A method includes providing an application accelerator to an IMS region controller, which is coupled to an IMS data language interpreter (DL/I) interface. The IMS DL/I interface provides standard data access paths to a user application launched in an IMS environment. The application accelerator is configured to make alternate data access paths available to the user application in addition to the standard data access paths provided by the IMS environment. The method further includes intercepting an IMS DL/I call made by the user application and determining whether the intercepted call should be processed by the IMS DL/I interface or by an I/O engine of the application accelerator. | 09-11-2014 |
20140258256 | SYSTEMS AND METHODS FOR REMOTE ACCESS TO DB2 DATABASES - Systems and methods are provided that allow client programs using APIs for accessing local DB2 databases to access DB2 systems on remote logical partitions and remote zSeries mainframes rather than from a local DB2 system. For example, a method may include intercepting a DB2 request using a documented API for accessing local DB2 databases from a client program executing on a source mainframe system. The method may also include selecting a destination mainframe system and sending a buffer including information from the request from the source mainframe system to the destination mainframe system and establishing, at the destination mainframe system, a DB2 connection with the DB2 system from the request. The method may further include receiving a response from the DB2 system, sending a buffer having information from the response from the destination mainframe system to the source mainframe system, and providing the information to the client program. | 09-11-2014 |
20140244230 | COMPUTING INFRASTRUCTURE PLANNING - In accordance with aspects of the disclosure, systems and methods are provided for generating one or more potential configurations corresponding to one or more parameters used for computing infrastructure planning by determining a sizing grammar for each of the one or more potential configurations corresponding to the one or more parameters, interpreting the sizing grammar based on one or more grammar rules to output configuration information for each of the one or more potential configurations, and translating the configuration information for each of the one or more potential configurations based on one or more motif descriptions to output resource information for each of the one or more potential configurations. | 08-28-2014 |
20140237453 | EXCEPTION BASED QUALITY ASSESSMENT - The embodiments may include an apparatus for measuring code quality using exceptions. The apparatus may include a runtime collector configured to intercept exceptions generated by an application, and collect exception information for each exception, during runtime of the application, based on instrumentation code included within the application. The apparatus may include a collection module configured to store the intercepted exceptions and corresponding exception information in a memory unit, an exception analyzer configured to analyze the intercepted exceptions based on the collected exception information stored in the memory unit, and a report generator configured to generate at least one report based on the analysis. The at least one report may provide an indication of code quality of the application. | 08-21-2014 |
20140195504 | STATISTICAL IDENTIFICATION OF INSTANCES DURING RECONCILIATION PROCESS - A system for reconciling object for a configuration management databases employs statistical rules to reduce the amount of manual identification required by conventional reconciliation techniques. As users manually identify matches between source and target datasets, statistical rules are developed based on the criteria used for matching. Those statistical rules are then used for future matching. A threshold value is adjusted as the statistical rules are used, incrementing the threshold value when the rule successfully matches source and target objects. If the threshold value exceeds a predetermined acceptance value, the system may automatically accept a match made by a statistical rule. Otherwise, suggestions of possibly applicable rules may be presented to a user, who may use the suggested rules to match objects, causing adjustment of the threshold value associated with the suggested rules used. | 07-10-2014 |
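The threshold mechanism in application 20140195504 — score a rule up on each confirmed match and auto-accept once the score clears a bar — can be sketched as below. The class name, the `ACCEPT_AT` value, and the attribute-equality matching are all assumptions made for illustration.

```python
ACCEPT_AT = 5  # assumed acceptance threshold

class StatisticalRule:
    def __init__(self, criteria):
        self.criteria = criteria  # attribute names used for matching
        self.score = 0            # incremented on user-confirmed matches

    def matches(self, source, target):
        """True when source and target agree on every criterion attribute."""
        return all(source.get(a) == target.get(a) for a in self.criteria)

    def confirm(self):
        """Called when a user accepts a match this rule proposed."""
        self.score += 1

    @property
    def auto_accept(self):
        """Once enough confirmations accrue, matches need no manual review."""
        return self.score >= ACCEPT_AT

rule = StatisticalRule(["hostname", "serial"])
src = {"hostname": "db01", "serial": "X1"}
tgt = {"hostname": "db01", "serial": "X1"}
matched = rule.matches(src, tgt)
for _ in range(5):
    rule.confirm()
```

Until `auto_accept` turns true, the rule would only be surfaced as a suggestion, mirroring the abstract's two-phase behavior.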
20140189644 | ADDITIVE INDEPENDENT OBJECT MODIFICATION - Disclosed is a method, a system and a computer readable medium for additive independent object modification. The method includes determining an association between an independent object modification and a base object of a software application, modifying at least one element of the base object based on the associated independent object modification, and configuring the software application to execute in a computer system using the modified base object. | 07-03-2014 |
20140189438 | MEMORY LEAK DETECTION - In accordance with aspects of the disclosure, systems and methods are provided for monitoring one or more classes for detecting suspected memory leaks in a production environment. The systems and methods may include identifying which of the one or more classes hold at least one static or non-static field of collection or array type, accessing the one or more classes that hold the at least one static or non-static field of collection or array type, tracking a size for each field of each class by periodically sampling the size of each field over an interval, processing the size data for each field of each class, and detecting suspected memory leaks of each class by identifying which of the one or more fields of each class exhibits suspect behavior in the size over the interval. | 07-03-2014 |
20140180661 | AUTOMATIC CREATION OF GRAPH TIME LAYER OF MODEL OF COMPUTER NETWORK OBJECTS AND RELATIONSHIPS - A method and system create a model of a set of relationships between a set of parent computer network objects and a set of corresponding child computer network objects, over a period of time, and output a user interface graphing the model in a single view to illustrate the set of relationships over the period of time. The parent computer network objects include virtual machines and the child computer network objects include hosts. The user interface includes a search option to provide for a search of problems with the child computer network objects over the period of time. | 06-26-2014 |
20140173245 | RELATIVE ADDRESSING USAGE FOR CPU PERFORMANCE - The embodiments provide a computing device for incorporating data into code such that the data is relative to the code and, thereby, available for relative addressing. The computing device may include a code generator configured to receive source code from a source code database, and generate executable object code from the source code. The executable object code may include at least one instruction referencing data having an absolute address from a data source. Also, the computing device may include a data incorporator configured to transfer the data from the data source into the executable object code, where the transferred data is relative to the at least one instruction. Further, the computing device may include a relative addresser configured to adjust the at least one instruction to include a relative address for the transferred data including converting the absolute address to the relative address. | 06-19-2014 |
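The core arithmetic behind application 20140173245 — replacing an absolute data address with an offset from the referencing instruction, so the reference survives relocation — can be shown in a few lines. The addresses and helper name below are made-up illustrations.

```python
def to_relative(instruction_addr, data_addr):
    """Convert an absolute data address into an offset relative to the
    referencing instruction (position-independent reference)."""
    return data_addr - instruction_addr

# data incorporated into the object code near the instruction stream:
code_base = 0x1000
instr_addr = code_base + 0x24   # the instruction referencing the data
data_addr = code_base + 0x80    # where the data was placed in the code
offset = to_relative(instr_addr, data_addr)

# if a loader rebases the whole module, both addresses shift together,
# so instruction + offset still lands on the data
rebased = 0x9000
assert (rebased + 0x24) + offset == rebased + 0x80
```

Because the offset is invariant under rebasing, the CPU can resolve the reference without load-time fixups, which is the performance point of the title.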
20140172786 | OFFLINE RESTRUCTURING OF DEDB DATABASES - An IMS DEDB database restructure operation creates an empty offline DEDB having the desired structure. The offline database is populated with data from a source (online) database while the source database remains online (i.e., available for access and update operations). Updates made to the source database during this process are selectively applied to the offline DEDB in parallel with the offline load operation. When the contents of the offline database are substantially the same as those of the online database, the source database is taken offline, final updates are applied to the offline database, and the offline database is brought online, replacing the source database. | 06-19-2014 |
20140143416 | GENERIC DISCOVERY FOR COMPUTER NETWORKS - A generic discovery methodology collects data pertaining to components of a computer network using various discovery technologies. From the collected data, the methodology identifies, filters and analyzes information related to inter-component communications. Using the communication and application information, the methodology determines reliable relationships for those components having sufficient information available. To qualify more components, the methodology implements a decision service to generate hypothetical relationships between components that are known and components that are unqualified or unknown. The hypothetical relationships are presented to a user for selection, and each hypothetical relationship is preferably associated with an indication of its reliability. | 05-22-2014 |
20140114931 | MANAGEMENT OF ANNOTATED LOCATION AWARE ASSETS - According to one general aspect, a method may include storing, in a memory device, a plurality of floor maps, each floor map indicating the structural layout of a respective predefined physical location. The method may include storing, in a memory device, a plurality of point-of-interest (POI) data structures. Each POI data structure may include a physical location of an associated POI. The method may include receiving a floor map request from a client computing device, wherein the floor map request includes a requested location. The method may include based upon the location included by the floor map request, selecting a selected floor map and a selected subset of the plurality of POI data structures. The method may include transmitting, to the client computing device, a response to the floor map request based upon the selected floor map and the selected POI data structures. | 04-24-2014 |
20140113559 | PROACTIVE ROLE AWARE ASSET MONITORING - According to one general aspect, a method may include establishing a short-range wireless communication between a user device and a point-of-interest (POI) device, wherein the POI device is associated with a POI data structure that represents a physical POI. The method may include receiving a request to perform a POI action in regards to the physical POI. The method may include causing the POI action to be performed. | 04-24-2014 |
20140111520 | USER-CENTRIC ANNOTATED LOCATION AWARE ASSET MAPPING - According to one general aspect, a method may include receiving a floor map indicating the structural layout of a predefined physical location. The method may also include receiving a point-of-interest (POI) data structure representing a POI and POI metadata associated with the POI. The method may include generating an annotated floor map, based upon the floor map and including a POI indicator, wherein the POI indicator is placed on the floor map at the location of an associated POI and indicates both the type of the associated POI and at least part of the status of the associated POI. The method may include displaying, via a display interface, at least a portion of the annotated floor map. | 04-24-2014 |
20140101178 | PROGRESSIVE ANALYSIS FOR BIG DATA - According to one general aspect, a method may include receiving a data query request that includes one or more search parameters to be searched for within a plurality of files stored according to a hierarchical organizational structure, wherein each file includes at least one data record. The method may include scanning the plurality of files to identify candidate files that match a subset of the search parameters. The method may further include parsing the candidate files to determine which, if any, of the records included by the respective candidate files meet the search parameters. The method may include generating, by one or more result analyzers, query results from the resultant data. The method may also include streaming the query results to the user device as each query result becomes available, starting the stream before the query results have been fully generated. | 04-10-2014 |
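The scan-then-parse-then-stream pipeline of application 20140101178 maps naturally onto a Python generator: a cheap filter prunes files, only candidates are parsed, and each hit is yielded before the rest are examined. The in-memory `files` dict and both predicates are stand-ins for the hierarchical file store and search parameters.

```python
def progressive_query(files, cheap_match, full_match):
    """Scan cheaply first, parse only candidate files, and yield each
    matching record immediately so results stream before the scan ends."""
    for path, records in files.items():
        if not cheap_match(path):
            continue          # pruned without parsing the file
        for rec in records:   # "parsing" the candidate file's records
            if full_match(rec):
                yield rec

files = {
    "2014/01/app.log": [{"level": "ERROR", "msg": "disk"},
                        {"level": "INFO", "msg": "ok"}],
    "2013/12/app.log": [{"level": "ERROR", "msg": "net"}],
}
hits = progressive_query(files,
                         cheap_match=lambda p: p.startswith("2014/"),
                         full_match=lambda r: r["level"] == "ERROR")
first = next(hits)  # delivered before remaining files are parsed
```

Here the directory-name filter exploits the hierarchical layout (e.g. date-partitioned paths) to avoid parsing files that cannot match.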
20140096109 | APPLICATION OF BUSINESS PROCESS MANAGEMENT STANDARDS FOR DYNAMIC INFORMATION TECHNOLOGY MANAGEMENT PROCESS AND INTEGRATIONS - Processes and integrations include a method for managing a business process application development lifecycle. The method includes initiating, in a planning stage, requirements for an application based on adding new features to the application or a new application, implementing, in a development stage, a service process node (SPN) as a business process, and managing, in an operations stage, software code representing the application in a production environment. The SPN is configured to encapsulate at least one business service object and generate an interface configured to expose internal processes of the at least one business service object. | 04-03-2014 |
20140095676 | Elastic Packaging of Application Configuration - Elastic packaging of application configuration may include selecting at least one configurable attribute from an application model hierarchy, generating at least one formula for the selected at least one configurable attribute, the at least one formula including interface parameters, and tagging the generated at least one formula with the selected at least one configurable attribute in an application deployment package, the application deployment package including an application to be deployed on a cloud computer. | 04-03-2014 |
20140089253 | ZERO-OUTAGE DATABASE REORGANIZATION - Methods and systems enable a database reorganization to occur without a database outage. In one aspect, the method includes pausing transactions directed to the database, keeping a logical view of the database online. The method may also include taking individual partitions offline, changing the names of datasets associated with the individual partitions in a database schema, and bringing the partitions online, all while the logical view of the database remains online. The database schema may be changed to reflect the name of datasets associated with a shadow copy of the database that has been reorganized. | 03-27-2014 |
20140032768 | AUTOMATED CAPACITY PROVISIONING METHOD USING HISTORICAL PERFORMANCE DATA - The method may include collecting performance data relating to processing nodes of a computer system which provide services via one or more applications, analyzing the performance data to generate an operational profile characterizing resource usage of the processing nodes, receiving a set of attributes characterizing expected performance goals in which the services are expected to be provided, and generating at least one provisioning policy based on an analysis of the operational profile in conjunction with the set of attributes. The at least one provisioning policy may specify a condition for re-allocating resources associated with at least one processing node in a manner that satisfies the performance goals of the set of attributes. The method may further include re-allocating, during runtime, the resources associated with the at least one processing node when the condition of the at least one provisioning policy is determined as satisfied. | 01-30-2014 |
20140025647 | Normalization Engine to Manage Configuration Management Database Integrity - Data is often populated into Configuration Management Databases (CMDBs) from different sources. Because the data can come from a variety of sources, it may have inconsistencies—and may even be incomplete. A Normalization Engine (NE) may be able to automatically clean up the incoming data based on certain rules and knowledge. In one embodiment, the NE takes each Configuration Item (CI) or group of CIs that are to be normalized and applies a rule or a set of rules to see if the data may be cleaned up, and, if so, updates the CI or group of CIs accordingly. In particular, one embodiment may allow for the CI's data to be normalized by doing a look up against a Product Catalog and/or an Alias Catalog. In another embodiment, the NE architecture could be fully extensible, allowing for the creation of custom, rules-based plug-ins by users and/or third parties. | 01-23-2014 |
20140019597 | SEMI-AUTOMATIC DISCOVERY AND GENERATION OF USEFUL SERVICE BLUEPRINTS - According to one general aspect, a method of semi-automatically discovering and generating useful service blueprints may include collecting, by an apparatus, a plurality of configuration information sets regarding a plurality of network service applications. The method may also include converting, by the apparatus, the plurality of configuration information sets into one or more normalized application instance graphs. The method may further include generating, by the apparatus, one or more application blueprint files based, at least in part, upon the one or more normalized application instance graphs. | 01-16-2014 |
20140007079 | HYBRID-CLOUD INFRASTRUCTURES | 01-02-2014 |
20130263140 | WINDOW-BASED SCHEDULING USING A KEY-VALUE DATA STORE - A scheduling system for scheduling executions of tasks within a distributed computing system may include an entry generator configured to store, using at least one key-value data store, time windows for scheduled executions of tasks therein using a plurality of nodes of the distributed computing system. The entry generator may be further configured to generate scheduler entries for inclusion within a time window of the time windows, each scheduler entry identifying a task of the tasks and an associated schedule for execution thereof. The system may further include an execution engine configured to select the time window and execute corresponding tasks of the included scheduler entries in order. | 10-03-2013 |
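The windowing idea in application 20130263140 — bucket scheduler entries under time-window keys in a key-value store, then execute a window's entries in order — can be sketched as below. The dict standing in for the key-value store, the 5-minute window width, and all function names are illustrative assumptions.

```python
from datetime import datetime, timedelta

store = {}  # stand-in for the key-value data store

def window_key(when, width_minutes=5):
    """Bucket a timestamp into a fixed-width time-window key."""
    bucket = when.replace(second=0, microsecond=0)
    bucket -= timedelta(minutes=bucket.minute % width_minutes)
    return bucket.isoformat()

def schedule(task, when):
    """Append a scheduler entry under its window's key."""
    store.setdefault(window_key(when), []).append((when, task))

def run_window(key):
    """Execute a window's entries in schedule order (returned here)."""
    return [task for _, task in sorted(store.get(key, []))]

t0 = datetime(2013, 10, 3, 9, 2)
schedule("backup", t0)
schedule("report", datetime(2013, 10, 3, 9, 4))
executed = run_window(window_key(t0))
```

Keying by window lets any node of the distributed system claim and drain a whole window with a single key-value read, rather than scanning all entries.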
20130263096 | APPLICATION INSTRUMENTATION CODE EXTENSION - The embodiments provide an application diagnostics apparatus including an instrumentation engine configured to monitor one or more methods of a call chain of the application in response to a server request according to an instrumentation file specifying which methods are monitored and which methods are associated with a code extension, an extension determining unit configured to determine that at least one monitored method is associated with the code extension based on code extension identification information, a class loading unit configured to load the code extension from a resource file when the at least one monitored method associated with the code extension is called within the call chain, a code extension execution unit configured to execute one or more data collection processes, and a report generator configured to generate at least one report for display based on collected parameters. | 10-03-2013 |
20130263091 | SELF-EVOLVING COMPUTING SERVICE TEMPLATE TRANSLATION - Methods and apparatus for automatically generating translation programs for translating computing services templates to service blueprints are disclosed. An example method includes generating a population of translation logic elements from a plurality of verified computing services template translation programs, where each of the verified programs is configured to correctly translate at least one computing services template of a plurality of known templates to a respective service blueprint. The example method further includes identifying a new computing services template and programmatically augmenting the population of translation logic elements. The example method also includes generating one or more additional translation programs based on the augmented population of translation logic elements and validating each of the one or more additional computing services template translation programs. Based on the validating, each of the one or more additional computing services template translation programs is added to the verified translation programs or is discarded. | 10-03-2013 |
20130263080 | AUTOMATED BLUEPRINT ASSEMBLY FOR ASSEMBLING AN APPLICATION - The embodiments provide a data processing apparatus for automated blueprint assembly. The data processing apparatus includes a micro-blueprint assembler configured to receive a request for automated blueprint assembly for assembling an application, where the request specifies at least one feature, and a model database configured to store model data. The model data includes a plurality of classes and class properties. The data processing apparatus further includes a micro-blueprint database configured to store a plurality of micro-blueprints. Each micro-blueprint corresponds to a functional component of a stack element or service tier, and the functional component is annotated with one or more classes of the plurality of classes and at least one required capability and available capability. The micro-blueprint assembler is configured to generate at least one application blueprint based on the model data and the plurality of micro-blueprints according to the request. | 10-03-2013 |
20130262680 | DYNAMIC SERVICE RESOURCE CONTROL - The embodiments may provide a data processing apparatus for controlling service resource allocation. The data processing apparatus includes a resource hints controller configured to obtain a resource control request before a task is to be executed on a virtual machine having resources allocated to a processing unit, a memory unit and a storage unit. The resource hints controller is configured to obtain a usage of the resources allocated to at least one of the processing unit, the memory unit and the storage unit of the virtual machine, and increase the resources allocated to the at least one of the processing unit, the memory unit and the storage unit in response to the resource control request based on the usage being equal to or above a threshold level. | 10-03-2013 |
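The threshold-driven hint logic above reduces to a small rule. The sketch below is a minimal hypothetical illustration; the function name, the 0.8 threshold, and the 1.5× growth factor are assumptions for demonstration only.

```python
def apply_resource_hint(allocated, usage, threshold=0.8, growth=1.5):
    """Before a task runs, grow any resource (cpu, memory, storage)
    whose observed usage is at or above the threshold."""
    return {name: amount * growth if usage[name] >= threshold else amount
            for name, amount in allocated.items()}

# cpu is at 90% usage, so it is grown; memory at 30% is left alone.
print(apply_resource_hint({"cpu": 2, "memory": 4},
                          {"cpu": 0.9, "memory": 0.3}))
# → {'cpu': 3.0, 'memory': 4}
```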
20130262660 | OPTIMIZATION OF PATH SELECTION FOR TRANSFERS OF FILES - A system may include a file transfer manager that determines a file for transfer from a source location to a target location, the file being associated with file metadata characterizing the file, and with an organization. The file transfer manager may include an orchestrator that determines at least two transfer paths for the transfer, including at least a first transfer path utilizing a private wide area network (WAN) of the organization and a second transfer path utilizing a publicly available data hosting service, access transfer metadata characterizing the at least two transfer paths, and access organizational metadata characterizing organizational transfer path usage factors. The file transfer manager also may include a heuristics engine configured to execute path decision logic using the file metadata, the transfer metadata, and the organizational metadata, to select a selected transfer path from the at least two transfer paths. | 10-03-2013 |
20130262655 | MONITORING NETWORK PERFORMANCE OF ENCRYPTED COMMUNICATIONS - According to one general aspect, a method of using a first probing device may include monitoring one or more encrypted communications sessions between a first computing device and a second computing device. In some implementations of the method, each encrypted communications session includes transmitting a plurality of encrypted data objects between the first and second computing devices. The method may include deriving, by the first probing device, timing information regarding an encrypted communications session. The method may also include transmitting, from the first probing device to a second probing device, the derived timing information. | 10-03-2013 |
20130262403 | UNIQUE ATTRIBUTE CONSTRAINTS FOR VERSIONED DATABASE OBJECTS - Methods and apparatus for ensuring uniqueness of database object attributes are disclosed. An example computer-implemented method includes receiving a request to insert, update or delete a versioned database object having a first identifier (ID) in a main database table. The method further includes determining, based on the request, whether to fire an insert trigger, a delete trigger or an update trigger for the main database table. In the event an insert trigger is fired, the method includes performing, in a secondary database table, a record insertion process. In the event a delete trigger is fired, the method includes performing, in the secondary database table, a record deletion process. In the event an update trigger is fired, the method includes performing, in the secondary database table, at least one of the record insertion process for a post-update versioned database object and the record deletion process for a pre-update versioned database object. | 10-03-2013 |
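The trigger logic above — insert fires a record insertion in a secondary table, delete fires a deletion, and update fires both against the pre- and post-update versions — can be sketched in miniature. This is a hypothetical Python analogue, not SQL trigger code; the class and method names are invented, and a dict stands in for the secondary table's unique index.

```python
class UniqueConstraint:
    """Secondary-table stand-in enforcing uniqueness of an attribute
    across current versions of database objects."""

    def __init__(self):
        self.secondary = {}  # attribute value -> object id

    def on_insert(self, obj_id, attr):
        # Record insertion process: fails on a duplicate attribute.
        if attr in self.secondary:
            raise ValueError("duplicate attribute: %s" % attr)
        self.secondary[attr] = obj_id

    def on_delete(self, obj_id, attr):
        # Record deletion process for the versioned object.
        if self.secondary.get(attr) == obj_id:
            del self.secondary[attr]

    def on_update(self, obj_id, old_attr, new_attr):
        # Update = deletion for the pre-update version plus insertion
        # for the post-update version, as in the abstract.
        self.on_delete(obj_id, old_attr)
        self.on_insert(obj_id, new_attr)
```

After `on_update(1, "alice", "bob")`, a later `on_insert(2, "bob")` would raise, since "bob" is already claimed by object 1.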
20130173778 | MONITORING NETWORK PERFORMANCE REMOTELY - According to one general aspect, a method may include establishing at least a first and a second network tap point near, in a network topology sense, an intranet/internet access point device and a server computing device, respectively. The method may include monitoring, via the first and second network tap points, at least partially encrypted network communication between a client computing device and the server computing device. A second network tap point analyzer device may decrypt at least a portion of the encrypted network communication that is viewed by the second tap point analyzer device. The method may include analyzing the monitored encrypted network communication to generate a set of metrics regarding the performance of the network communication between the client computing device and server computing device. In some embodiments a plurality of tap points and tap point analyzer devices corresponding to a multitude of network segments may be employed. | 07-04-2013 |
20130173770 | REGISTRY SYNCHRONIZER AND INTEGRITY MONITOR - According to one general aspect, a method may include maintaining a primary registry of registry entries. Each registry entry may include a description and a network address of a network service. The method may also include periodically determining the validity of a registry entry, wherein the registry entry is included in the primary registry. The method may further include, if the registry entry is not valid, moving the registry entry to a deleted items registry of registry entries. The method may also include periodically determining the validity of a registry entry that is included in the deleted items registry; and, if that registry entry is valid, moving that registry entry back to the primary registry. | 07-04-2013 |
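The two-way movement between the primary registry and the deleted items registry can be sketched as a single periodic pass. A hypothetical Python illustration; the function name and the dict-based registries are assumptions, and a real validity check would probe the service's network address.

```python
def synchronize(primary, deleted, is_valid):
    """One periodic pass: move invalid entries out of the primary
    registry, and restore deleted entries that are valid again."""
    for addr in list(primary):
        if not is_valid(addr):
            deleted[addr] = primary.pop(addr)
    for addr in list(deleted):
        if is_valid(addr):
            primary[addr] = deleted.pop(addr)

primary = {"svc-a": "search service", "svc-b": "old billing service"}
deleted = {"svc-c": "restored auth service"}
reachable = {"svc-a", "svc-c"}          # assumed validity oracle
synchronize(primary, deleted, lambda a: a in reachable)
print(sorted(primary))  # → ['svc-a', 'svc-c']
print(sorted(deleted))  # → ['svc-b']
```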
20130173558 | DATABASE RECOVERY PROGRESS REPORT - The present description refers to a computer implemented method, computer program product, and computer system for receiving a start time, selecting one or more database objects for which a database recovery progress report is to be provided, determining, based on an object recovery table generated by the database recovery utility, which of the selected database objects have been recovered since the start time, and outputting a database recovery progress report that identifies at least a number or percentage of the selected database objects that have been recovered by the database recovery utility since the start time. | 07-04-2013 |
20130173547 | SYSTEMS AND METHODS FOR MIGRATING DATABASE DATA - In one aspect, a computer-implemented method for ensuring a source database (e.g., table space or index space) has correct version information before a migration includes executing, using at least one processor, instructions recorded on a computer-readable storage medium. The instructions include determining whether a table has been changed since a most recent alter of the table, performing an update on the table when it is determined that the table has not been changed since the most recent alter, and performing a rollback on the table after the update. The method may also include creating an image copy of the data in the source database and refreshing data in a target database with the image copy of the data in the source database. The method may also include automatically repairing the target database when the version information of the target does not correspond with the version information for the source. | 07-04-2013 |
20130173546 | SYSTEMS AND METHODS FOR MIGRATING DATABASE DATA - In one general aspect, a computer-implemented method for migrating data from a source database to a target database includes executing, using at least one processor, instructions recorded on a non-transitory computer-readable storage medium. The method includes ensuring that the source database has correct version information, creating an image copy of the data in the source database, and collecting metadata describing the source database. The metadata may include information used to verify that the target database is compatible with the source database, to automatically translate object identifiers, and to avert the migration if no data has changed in the source and the target databases (e.g., table and index spaces) since a previous migration. The method may further include refreshing the data in the target database using the image copy after determining that the source database and the target database are compatible. | 07-04-2013 |
20130086587 | DYNAMIC EVOCATIONS FOR COMPUTER EVENT MANAGEMENT - According to an example implementation, a computer-readable storage medium, computer-implemented method and a system are provided to detect a plurality of computer events, determine an event severity for each event, select a set of the events having a highest severity of the plurality of events, determine an event category for each event in the set of events, display an event management console including an entry for each event of the set of events, each entry in the event management console including at least an event description and an event severity indicator that indicates event severity, and wherein the displayed event management console also includes one or more evocations for each event category of the set of events, each evocation providing a suggested course of action to address events of the event category. | 04-04-2013 |
20130086552 | SYSTEMS AND METHODS FOR APPLYING DYNAMIC RELATIONAL TYPING TO A STRONGLY-TYPED OBJECT-ORIENTED API - A computer-implemented method includes executing instructions stored on a computer-readable medium. The computer-implemented method includes receiving, at a server hosting a strongly-typed object-oriented application programming interface (API), a single API call to request data from the strongly-typed object-oriented API, where the single API call includes a tuple having multiple object types, obtaining the requested data and returning the requested data. | 04-04-2013 |
20130086507 | DISPLAY WINDOW WITH MULTI-LAYER, PARALLEL TAB DISPLAY - A layer manager provides at least two content layers within a user interface window of a software application. A tab manager provides at least two content tabs within at least one of the content layers. A transfer manager is configured to transfer at least one content tab between the at least two content layers. | 04-04-2013 |
20130086092 | SYSTEMS AND METHODS RELATED TO A TEMPORAL LOG STRUCTURE DATABASE - In one general aspect, a computer-implemented method includes executing, using at least one processor, instructions recorded on a non-transitory computer-readable storage medium. The method includes receiving a request to insert a data record within a database of a data collection system. The data record can be placed within a buffer in a main memory of the data collection system. A record data structure and a record index structure associated with the data record are defined. The record data structure and the record index structure are stored within a storage chunk in a storage medium of the database, and the storage medium is different than the main memory. The storage chunk has an associated index that can be used to retrieve the data record and the storage chunk can include other data records different than the data record associated with the received request. | 04-04-2013 |
20130086038 | PROVISION OF INDEX RECOMMENDATIONS FOR DATABASE ACCESS - A cost estimator may estimate execution costs for execution of at least one query against a database, using at least one existing index, if any, and based on estimation criteria determined from analyzing the query execution. A candidate index provider may provide candidate indexes, based on the estimation criteria, and re-estimate the execution costs to obtain updated execution costs, using the candidate indexes. An index recommender may recommend a recommended index, based on the updated execution costs. | 04-04-2013 |
20130085985 | METHODS AND APPARATUS FOR PERFORMING DATABASE MANAGEMENT UTILITY PROCESSES - In one general aspect, a computer-readable storage medium can be configured to store instructions that when executed cause a processor to perform a process. The instructions can include instructions to identify, at a mainframe computing environment during an initiation phase associated with a management utility process, a set of tasks for implementing the management utility process, and instructions to send, to a non-mainframe computing environment, a description identifying at least a portion of the set of tasks. The instructions can also include instructions to receive an indicator, at the mainframe computing environment, that processing based on the at least the portion of the set of tasks associated with the management utility process has been completed, and instructions to execute a termination phase of the management utility process at the mainframe computing environment in response to the indicator. | 04-04-2013 |
20130080462 | METHODS AND APPARATUS FOR MONITORING EXECUTION OF A DATABASE QUERY PROGRAM - In one general aspect, a computer-readable storage medium can be configured to store instructions that when executed cause a processor to perform a process. The instructions can include instructions to receive, during a first portion of an execution of a main program including a database query program and based on a first configuration for monitoring the database query program, a parameter value representing performance of execution of the database query program. The instructions can include instructions to produce an indicator that a performance condition has been satisfied based, at least in part, on the parameter value, and instructions to trigger execution of a second configuration for monitoring the database query program during a second portion of the execution of the main program in response to the performance condition being satisfied. | 03-28-2013 |
20130006940 | METHODS AND APPARATUS RELATED TO COMPLETION OF LARGE OBJECTS WITHIN A DB2 DATABASE ENVIRONMENT - In one general aspect, an apparatus can include a completion identifier configured to identify, for completion processing, a large object (LOB) deleted from an auxiliary table within a DB2 database environment based on a space map record associated with the large object where the auxiliary table functions as an auxiliary space to a base table. The apparatus can also include a completion analyzer configured to identify a resource where an image of the large object is stored at a time before the deletion of the large object from the auxiliary table. | 01-03-2013 |
20130006935 | METHODS AND APPARATUS RELATED TO GRAPH TRANSFORMATION AND SYNCHRONIZATION - In one general aspect, a computer system can include instructions stored on a non-transitory computer-readable storage medium. The computer system can include a subgraph transformer configured to transform a plurality of subgraphs of a source graph into a plurality of transformed subgraphs, and configured to define a target graph that is a transformed version of the source graph based on the plurality of transformed subgraphs. The computer system can include a change detector configured to receive an indicator that a portion of the source graph has been changed, and a synchronization module configured to synchronize a portion of the target graph with the changed portion of the source graph. | 01-03-2013 |
20130002723 | SYSTEMS AND METHODS FOR DISPLAYING AND VIEWING DATA MODELS - A computer program product is tangibly embodied on a computer-readable medium and includes executable code that, when executed, is configured to cause a data processing apparatus to display multiple objects in a single pane, where the multiple objects are visual representations of real objects and the multiple objects are dynamically sized and spaced relative to one another to fit all of the objects in the single pane. The computer program product includes executable code that, when executed, causes the data processing apparatus to display a subset of the objects and associated metadata in an examination frame. The examination frame is sized to fit within the single pane, where the subset of the objects displayed within the examination frame are sized larger than the objects outside of the examination frame. | 01-03-2013 |
20130002668 | SYSTEMS AND METHODS FOR DISPLAYING, VIEWING AND NAVIGATING THREE DIMENSIONAL REPRESENTATIONS - A computer program product is tangibly embodied on a computer-readable medium and includes executable code that, when executed, is configured to cause a data processing apparatus to display multiple objects in a three dimensional (3D) representation, where the multiple objects are visual representations of real objects, and display a subset of the objects and associated metadata in a shaped lens that is movable within the 3D representation in all three axes, where the subset of the objects displayed within the shaped lens are sized larger than the objects outside of the shaped lens. | 01-03-2013 |
20120330702 | MOBILE SERVICE CONTEXT - According to one general aspect, a method may include requesting, from a database and by a program executing on a mobile computing device, at least a portion of a business service context regarding a business service. The method may also include receiving, from the database, an aggregated database result regarding the business service. The aggregated database result may include the requested business service context, wherein the business service context includes information from a plurality of applications. The method may also include displaying, via the mobile computing device, at least a portion of the information included by the business service context. | 12-27-2012 |
20120259960 | Dynamic Self-Configuration of Heterogenous Monitoring Agent Networks - A centralized, policy-driven approach allows dynamic self-configuration and self-deployment of large scale, complex, heterogeneous monitoring agent networks. Such an approach resolves the scalability and manageability issues of manually configured conventional agents. Embodiments of the agents can be self-configuring using a dynamic, adaptive technique. An administrator can group hosts on which agents run into groups that have similarly configured agents. | 10-11-2012 |
20120259812 | Cooperative Naming for Configuration Items in a Distributed Configuration Management Database Environment - Disclosed are methods and systems to provide coordinated identification of data items across a plurality of distributed data storage repositories (datastores). In one disclosed embodiment, a single configuration management database (CMDB) controls identification rights for all CIs as they are first identified in a master/slave relationship with all other CMDBs in the distributed environment. In a second embodiment, a plurality of CMDBs divide identification rights based upon coordination identification rules where certain CMDBs are assigned authoritative identification rights for CIs matching the rules of a particular CMDB in the distributed environment. In a third embodiment, one or more of the plurality of CMDBs may also have advisory identification rights for CIs which do not already have an identifiable unique identity and can coordinate with an authoritative CMDB to establish an identity for CIs. | 10-11-2012 |
20120254433 | Pre-Bursting to External Clouds - In a cloud computing environment, customers of the cloud believe they have instantaneous access to unlimited resources; to satisfy this expectation with finite resources, however, there are times when resources may have to be acquired from an external cloud with potentially different security and performance capabilities. A method and system are therefore disclosed to reduce cost incurred while scaling to an external cloud to meet short term demand and to take into account security and performance requirements of customers. The proposed method and system provide automation and prediction capabilities to help with the decision of growing cloud resources or temporarily becoming a hybrid cloud. By "pre-bursting" the cloud in anticipation of a cloud burst, the growth in resources can be predicted and performed (with security and load balancing in mind) prior to actual cloud consumer requests. | 10-04-2012 |
20120254414 | USE OF METRICS SELECTED BASED ON LAG CORRELATION TO PROVIDE LEADING INDICATORS OF SERVICE PERFORMANCE DEGRADATION - The present description refers to a computer implemented method, computer program product, and computer system for identifying a service metric associated with a service, identifying one or more abnormalities of one or more infrastructure metrics that occur within a time window around an abnormality of the service metric, determining a set of candidate infrastructure metrics for the service metric based on how many times an abnormality of an infrastructure metric occurred within a time window around an abnormality of the service metric, determining a degree of lag correlation for each candidate infrastructure metric with respect to the service metric, selecting one or more candidate infrastructure metrics having a degree of lag correlation that exceeds a threshold to be a leading indicator infrastructure metric for the service metric, and providing a performance degradation warning for the service when an abnormality of one of the leading indicator infrastructure metrics is detected. | 10-04-2012 |
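The core computation above — scoring how strongly an infrastructure metric at time t correlates with a service metric at time t + lag — is ordinary Pearson correlation applied to shifted series. A minimal hypothetical sketch; function and variable names are assumptions, and a real implementation would work on detected abnormality windows rather than raw values.

```python
def lag_correlation(leading, trailing, lag):
    """Pearson correlation of leading[t] against trailing[t + lag].

    A high value at a positive lag suggests `leading` (an infrastructure
    metric) is a leading indicator for `trailing` (the service metric).
    """
    x = leading[:-lag] if lag else leading
    y = trailing[lag:]
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

infra = [1, 2, 3, 4, 5, 6]        # infrastructure metric samples
service = [0, 0, 1, 2, 3, 4]      # same shape, delayed by 2 samples
print(lag_correlation(infra, service, 2))  # → 1.0
```

A candidate metric whose best lag correlation exceeds the configured threshold would then be selected as a leading indicator.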
20120254278 | Dynamic Dispatch for Distributed Object-Oriented Software Systems - A provider definition represents software that implements the semantics of one or more operations on an object in an object-oriented system. A provider represents a specific instance of a provider definition. One or more providers implement operations for objects in the system. A component of the system called the provider registry maintains a mapping of providers and operations as defined by the provider definitions. When handling a request to invoke an operation on an object, the system dynamically dispatches to the correct provider based on this mapping. Where more than one provider is registered as implementing the desired operation on an object, techniques are disclosed for selecting a provider to perform the desired operation. | 10-04-2012 |
20120254254 | Directed Graph Transitive Closure - Disclosed are methods and systems to provide for using database triggers to maintain a relational persistence of the transitive closure and path structure of an object hierarchy in the form of an object hierarchy bridge table. In one embodiment, database triggers fire when objects or relationships are added or deleted from the hierarchy. Based on the additions and deletions, a delta can be calculated and applied to an object hierarchy bridge table and the graph transitive closure and path structure can be dynamically built and maintained as corresponding changes to the graph occur. Later, more efficient access and retrieval of a graph transitive closure and path structure can be retrieved without necessarily having to perform recursion to calculate the graph transitive closure and path at request time. | 10-04-2012 |
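The delta computation described above — when an edge is added, every ancestor of the parent gains a path to every descendant of the child — can be shown with a tiny in-memory bridge table. A hypothetical Python sketch of the incremental-maintenance idea only; in the patented approach this logic runs inside database triggers against a relational table, and edge deletion (not shown) requires a corresponding delta removal.

```python
class ClosureTable:
    """In-memory stand-in for an object hierarchy bridge table holding
    the transitive closure as (ancestor, descendant) pairs."""

    def __init__(self):
        self.paths = set()

    def add_node(self, n):
        # Every node reaches itself (self-path).
        self.paths.add((n, n))

    def add_edge(self, parent, child):
        # Delta on insertion: ancestors(parent) x descendants(child).
        delta = {(a, d)
                 for (a, p) in self.paths if p == parent
                 for (c, d) in self.paths if c == child}
        self.paths |= delta

t = ClosureTable()
for n in ("a", "b", "c"):
    t.add_node(n)
t.add_edge("a", "b")
t.add_edge("b", "c")
print(("a", "c") in t.paths)  # → True, with no recursion at query time
```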
20120246317 | Cloud-Based Resource Identification and Allocation - Systems, methods, and computer readable media for identifying resources to implement a service in a cloud computing environment are disclosed. In general, the disclosed methodologies analyze a cloud's ability to support a desired service while maintaining separation between the cloud's logical layers. For example, given a list of resources needed to implement a target service, a hierarchical plan may be generated. The plan may then be used by each layer to track and record the availability of various possible layer-specific resource selections. Since each layer may be permitted access only to that portion of the plan that is associated with, or applicable to, the specific layer, the logical separation between different layers may be enforced. As a consequence, each layer may implement its resource selection mechanisms in any desired manner. | 09-27-2012 |
20120246179 | Log-Based DDL Generation - Systems, methods, and computer readable media for automatically generating Data Definition Language (DDL) commands from database log information are described. In general, techniques are disclosed for analyzing database log entries to identify those associated with targeted DDL commands and associating those entries with a DDL command object. The DDL command object may be used (immediately or at some later time) to generate DDL commands corresponding to the (possibly many) aggregated log records associated with the command object. The use of multiple database log entries as described herein enables the generation of DDL commands that capture database activity occurring over a period of time (full time context auditing) and can, therefore, naturally account for database schema changes. | 09-27-2012 |
20120198373 | Focus-Driven User Interface - Systems, methods and computer readable media for implementing a Focus-Driven User Interface using a Focus-Driven MVC architecture are described. The Focus-Driven MVC architecture builds on the traditional MVC framework, adding a Focus component between the Controller and Model components. The Focus component implements Focus Logic to handle Focus-Driven features. The Focus component may receive access commands or requests from the Controller, relay those commands to the Model and, in response, obtain data from the Model. The Focus Logic applies rules to the data, determines relevancy rankings for the given property, and sends the processed data to the Controller which, in turn, may update the user interface with the processed data. | 08-02-2012 |
20120198049 | System and Method for Stateless, Fault Tolerance and Load Balanced Data Collection Using Overlay Namespaces - Systems, methods and computer readable media that provide stateless fault tolerance and load balanced data collection using overlay namespaces are described. A cluster is used. Each node of the cluster may be a monitoring system. A data provider process may run on each node in the cluster. Each node has an overlay namespace which comprises one or more links to namespaces on other nodes, and local viewpoints of those linked namespaces. When a node detects a resource waiting to be monitored, it queries other nodes to determine whether object creation for that resource is allowed. It creates an object only if no other node is creating or has created an object for that resource. A node may stop monitoring more resources if the load on the node reaches a specified threshold. The node may also stop monitoring a resource if it determines the load level on another node is at a predefined low level compared with its own load level. | 08-02-2012 |
20120185290 | Integrating Action Requests from a Plurality of Spoke Systems at a Hub System - Disclosed are methods and systems to automatically integrate work requests from multiple Spoke systems at a centralized Hub system. In one embodiment, a Hub system receives a portion of a work request from a problem tracking system executing in the region (e.g., geographic area or network subnet) of an associated Spoke system. The request comprises enough information for the Hub system to prioritize this work request against other work requests already received from this same Spoke system, other Spoke systems in the same region, or even other Spoke systems from other regions. A Hub user can then be presented with an integrated work queue of requests to service after they have been properly prioritized. The Hub user may be supporting multiple clients in an outsourcing style Information Technology (IT) support model or a call center model. Supported clients can execute on different data center platforms, at the same time. | 07-19-2012 |
20120166811 | System and Method for Efficiently Detecting Additions and Alterations to Individual Application Elements for Multiple Releases of Applications in the Environments where the Applications can be Altered - Systems, methods and computer readable media for detecting customization of an application running on a customer's environment are described. An application's original source can maintain a master hash registry for an application. The master hash registry includes valid and invalid hash codes for all objects in the application across all versions of the application. This master hash registry may be provided to the customer. A customization detection system loads a master hash registry to memory. The customization detection system may then retrieve an application object from the application, generate hash values for the object and compare these values with the object's master hash registry values to determine whether the application object is new or whether it has been customized in a supportable or unsupportable manner. The customization detection system may then set the object's customization status based on the results of the comparison. | 06-28-2012 |
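The comparison step above — hash an application object and check it against the master hash registry's valid and supported-customization hashes — can be sketched briefly. A hypothetical Python illustration: the registry layout, status strings, and use of SHA-256 over object source text are assumptions for demonstration.

```python
import hashlib

def object_hash(obj_source):
    """Hash an application object's content for registry comparison."""
    return hashlib.sha256(obj_source.encode()).hexdigest()

def customization_status(obj_name, obj_source, master_registry):
    """Classify an object against the master hash registry."""
    h = object_hash(obj_source)
    if obj_name not in master_registry:
        return "new"                        # object not in any release
    entry = master_registry[obj_name]
    if h in entry["valid"]:
        return "unmodified"                 # matches a shipped version
    if h in entry["supported"]:
        return "supported customization"
    return "unsupported customization"

src = "def f(): return 1"
registry = {"f": {"valid": {object_hash(src)}, "supported": set()}}
print(customization_status("f", src, registry))  # → unmodified
```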
20120151464 | Running Injected Code Prior to Execution of an Application - A technique provides a hook that executes prior to a software application that is invisible to the software application. In an object-oriented execution environment, an imposter main class is loaded instead of the application main class. The imposter main class then manipulates the object-oriented execution environment to load the application main class without requiring knowledge of the application main class other than its name, and without requiring a change to the command line for the application. | 06-14-2012 |
20120151284 | Recording Method Calls that Led to an Unforeseen Problem - A technique assists in resolving problems by aiding in the determination of the root cause of the problem. The technique allows recording of information about methods of executing applications that encounter problems, even if the method was not previously marked for recording. Upon detection of a problem, the method and all other methods on the current execution stack may be marked for retrospective recording. When each method exits, information about entry conditions and exit conditions of each method may be recorded for presentation to a user of the application for problem resolution. | 06-14-2012 |
20120151274 | Client-Side Application Script Error Processing - Systems, methods, and computer readable media for collecting run-time error information for an executing script through the use of a double code-injection technique are described. A first native code injection into a user's client-side application (e.g., a browser application) is made. The second injection is thereafter made by the user's client-side application itself (when the first injected program code is executed) into the application's associated scripting engine and only when a script error has been detected. The second injected program code or scripts collect detailed run-time script error information within the context of the application's scripting engine. The second injected program code can then return the collected error information to the user application's context where it may be provided to a debug tool or recorded for later review (by the first injected program code). | 06-14-2012 |
20120144055 | DETERMINATION OF QUALITY OF A CONSUMER'S EXPERIENCE OF STREAMING MEDIA - A bit stream analyzer may detect a bitstream representing a streamed content file that is being streamed from a streaming server to a client over a network connection. An encoding rate extractor may determine an encoding rate of the bitstream, and a bit rate extractor may determine a transfer bit rate at which the bitstream is being streamed. A pause calculator may determine a minimum wait time experienced at the client during which playback of the streamed content file is paused, based on the encoding rate and the transfer bit rate. | 06-07-2012 |
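The pause calculation described above can be sketched under a simple constant-rate model: playback stalls only when downloading the file takes longer than playing it back. The function name and the constant-rate assumption are illustrative, not taken from the patent.

```python
def minimum_wait_seconds(file_bits, encoding_rate_bps, transfer_rate_bps):
    """Minimum total pause time experienced at the client, assuming
    constant encoding and transfer rates: the stream stalls only if
    delivering the file takes longer than playing it back."""
    playback = file_bits / encoding_rate_bps   # seconds of content
    download = file_bits / transfer_rate_bps   # seconds to deliver it
    return max(0.0, download - playback)
```

For example, a file encoded at twice the rate the network can deliver it spends half the download time paused.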
20120131067 | Full-Function to HALDB Conversion - Systems, methods and computer readable media for migrating Information Management System (IMS) Full-Function databases to IMS High Availability Large Databases (HALDBs) are described. Full Function database conversion operations in accordance with this disclosure assign a unique identifier to each segment having a physically paired logically related child segment. These unique identifiers may then be used during HALDB load operations to identify a segment's physically paired logically related segment. Use of the disclosed unique identifiers permits Full Function database conversion operations to avoid the input-output (I/O) and compare operations needed by prior art unload techniques to completely identify physically paired logically related segments. | 05-24-2012 |
20120042164 | MONITORING BASED ON CLIENT PERSPECTIVE - According to one general aspect, a method may include establishing a network tap point near, in a network topology sense, an intranet/internet access point device. The network tap point may provide a substantially non-intrusive means of viewing network communication through the intranet/internet access point. The method may include monitoring, via the network tap point, at least partially encrypted network communication between a client computing device that is within the intranet and a server computing device that is within the internet. The method may also include analyzing the monitored at least partially encrypted network communication to generate at least one set of metrics regarding the performance of the network communication between the client computing device and the server computing device. | 02-16-2012 |
20120042064 | MONITORING BASED ON CLIENT PERSPECTIVE - According to one general aspect, a method may include receiving, via a first network tap point included by a first network segment, a first portion of network communication data between a client computing device and a server computing device. The method may include receiving, via a second network tap point included by a second network segment, a second portion of network communication data between the client computing device and the server computing device. The method may include attempting to correlate each sub-portion of the first portion of network communication data to a corresponding sub-portion of the second portion of network communication data. The method may also include analyzing the correlated network communication sub-portions to generate at least one set of metrics regarding the performance of the network communication between the client computing device and the server computing device. | 02-16-2012 |
20110321033 | Application Blueprint and Deployment Model for Dynamic Business Service Management (BSM) - Disclosed are systems and methods for model based provisioning of applications and servers (both physical and virtual) to execute provisioned applications in a reliable and repeatable manner. Several aspects of complex application management, including compliance, change tracking, monitoring, discovery, processing steps, and CMDB integration, are disclosed within a comprehensive hierarchy of definition templates forming a model. This model can then be used at provisioning time to instantiate a compliant instance of the provisioned application. This model can also be used at run-time for managing run-time aspects of the provisioned application. Additionally, the model based approach can help track applications even when or if applications drift from their intended design and policies for use. | 12-29-2011 |
20110320598 | System and Method for Offering Virtual Private Clouds within a Public Cloud Environment - Systems, methods and computer readable media for providing virtual private clouds within a public cloud are described. Examples include a method wherein a service provider deploys a primary instance of a cloud-in-a-box (CIAB) to his cloud computing system to create a public cloud. A CIAB includes adapters configured to manage virtual infrastructure of the cloud, an end-user portal and an administrative portal. A nested instance of CIAB may be deployed to one of the virtual machines, with one of the adapters of the nested instance of CIAB being connected to the end-user portal of the primary instance. An administrator of the nested instance may create his own library of virtual machine images and offer the library to the end-users of the nested CIAB instance. | 12-29-2011 |
20110320228 | Automated Generation of Markov Chains for Use in Information Technology - Disclosed are methods and systems to automatically generate a model for pro-active rather than reactive enterprise systems management. In one embodiment, a Markov Chain model is constructed from a Configuration Management Database (CMDB), Service Impact models, event logs and system logs. The model can then be maintained and automatically updated or regenerated based on changing conditions and attributes of configuration items (CIs) being modeled. As part of model generation, probabilities associated with potential state transitions of CIs can be calculated. The model can then be used to predict anticipated availability of a corporate enterprise or specific portions of a corporate information technology (IT) environment. In another embodiment, a model can be used to perform what-if scenarios to assist in planning or deferring change requests for the corporate IT environment. | 12-29-2011 |
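The core of the Markov Chain construction above, estimating transition probabilities from observed state changes, can be sketched from an event-log-style sequence of CI states. The state names and the maximum-likelihood counting approach are illustrative assumptions; the patent's full model also draws on CMDB relationships and Service Impact models.

```python
from collections import Counter, defaultdict

def transition_probabilities(state_log):
    """Estimate Markov Chain transition probabilities from an ordered
    log of observed CI states (e.g. "up", "degraded", "down"),
    using simple maximum-likelihood counts of adjacent pairs."""
    counts = defaultdict(Counter)
    for prev_state, next_state in zip(state_log, state_log[1:]):
        counts[prev_state][next_state] += 1
    return {state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for state, c in counts.items()}
```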
20110316856 | Spotlight Graphs - In a computer-displayed graph, indications of multiple attributes or states of an object represented by a node of the graph are displayed using a spotlight, in which attributes of the spotlight correspond to attributes of the object represented by the node. The attributes of the spotlight each correspond to an attribute of the object and may include the color, brightness, and size of the spotlight. The spotlight may be positioned with the node, including overlaying the spotlight on the node and positioning the spotlight relative to the node. | 12-29-2011 |
20110295788 | Method and System to Enable Inferencing for Natural Language Queries of Configuration Management Databases - Disclosed are embodiments of systems and methods to derive a semantic network from a CMDB relationship graph which can then be queried in a natural way from a linguistic standpoint (i.e., using natural language queries). Because disclosed embodiments combine natural language queries with an inferencing engine the disclosed systems and methods automatically “connect the dots” between disparate pieces of information and can allow for a richer user experience. In general, CMDB graph relationships can be converted into semantic networks. Once a semantic network is created, queries can be phrased to leverage the inferential relationships between objects in the semantic network. | 12-01-2011 |
20110271327 | Authorized Application Services Via an XML Message Protocol - Disclosed are systems and methods to provide a persistent authorized server address space (ASAS). The ASAS can host components from product suites that are not able to execute in an authorized state. To host other product's components, the ASAS receives “messages” from the unauthorized product components in the form of a generic eXtensible Markup Language (XML) protocol. These messages may request product initialization/administration or performance of a function by the ASAS on behalf of the requesting product. Security constraints are also provided to ensure system and data integrity. Further, the ASAS is not tightly coupled to any requesting product so that flexibility of product update or update to the ASAS itself may not be unnecessarily constrained. | 11-03-2011 |
20110246585 | Event Enrichment Using Data Correlation - Systems and methods for enriching events using data correlation are described herein. At least some embodiments include a method for enriching events reflecting the state of a plurality of computer systems, the method including storing a plurality of event messages and system metric data that includes service metric data, determining a degree of correlation between a system metric and at least one of a plurality of service metrics, and enriching an event message of the plurality of event messages based at least in part on the degree of correlation. At least one system metric data value triggers the event message. The degree of correlation is based at least in part on the system metric data and the service metric data. | 10-06-2011 |
20110239275 | Centrally Managed Impersonation - Systems, methods and computer readable media for centrally managed impersonation are described. Examples include a system having a central server and a remote shell daemon running on a remote machine, wherein a trust relationship is established between the central server and the remote shell daemon. Examples also include a method wherein a user sends the management system a request to act upon a remote machine. The management system determines whether the user is authenticated for the requested action. Upon authentication, the management system identifies an impersonation policy based on user profile and the remote machine. The management system connects to the remote machine, impersonates an elevated privilege account if required, and executes the user action on the remote machine. | 09-29-2011 |
20110239190 | Method for Customizing Software Applications - Techniques for overlaying objects of a software application with other objects allow modification and customization of the application by one or more users in different ways, without storing multiple modified copies of the application. The technique allows configuring the software application to execute using overlaid objects instead of the base objects contained in the software application. In some embodiments, the base objects for the software application and the overlaid objects are stored in a datastore, and a runtime embodiment causes execution of the overlaid objects instead of the base objects. | 09-29-2011 |
20110238691 | Mechanism to Display Graphical IT Infrastructure Using Configurable Smart Navigation - A system allows pre-defining CI scope definitions for use by users of a CMDB system. The pre-defined CI scope definitions may be used to expand a starting CI in a graph displaying a portion of the CMDB according to the types of CIs and relationships between CIs defined in the scope definition. The scope definition is converted into one or more CMDB queries that are restricted to a chain of CIs related to the starting CI. The system restricts the visibility of scope definitions to only those applicable to the starting CI. | 09-29-2011 |
20110238637 | Statistical Identification of Instances During Reconciliation Process - A system for reconciling objects for a configuration management database employs statistical rules to reduce the amount of manual identification required by conventional reconciliation techniques. As users manually identify matches between source and target datasets, statistical rules are developed based on the criteria used for matching. Those statistical rules are then used for future matching. A threshold value is adjusted as the statistical rules are used, incrementing the threshold value when the rule successfully matches source and target objects. If the threshold value exceeds a predetermined acceptance value, the system may automatically accept a match made by a statistical rule. Otherwise, suggestions of possibly applicable rules may be presented to a user, who may use the suggested rules to match objects, causing adjustment of the threshold value associated with the suggested rules used. | 09-29-2011 |
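The threshold-and-acceptance mechanism above can be sketched as a rule object whose confidence grows with successful use. The class name, the plain success counter, and the equality-based matching criteria are all illustrative assumptions; the patent does not publish its threshold-adjustment scheme.

```python
class StatisticalRule:
    """A reconciliation matching rule whose confidence grows with use
    (a minimal sketch; names and the counter scheme are assumptions)."""

    def __init__(self, match_fields, acceptance=5):
        self.match_fields = match_fields  # criteria learned from manual matches
        self.successes = 0                # threshold value, as a success count
        self.acceptance = acceptance      # auto-accept once this is reached

    def matches(self, source_obj, target_obj):
        # Two objects match when every criterion field agrees.
        return all(source_obj.get(f) == target_obj.get(f)
                   for f in self.match_fields)

    def record_success(self):
        self.successes += 1

    def auto_accept(self):
        # Below the acceptance value, the rule is only suggested to a user.
        return self.successes >= self.acceptance
```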
20110238377 | Auto Adjustment of Baseline on Configuration Change - A baseline adjusting technique automatically adjusts the baselines of metrics affected by a configuration change in a monitored system. If a configuration change is detected, a performance management system retrieves linkages between the changed configuration parameter and one or more metrics. The performance management system then adjusts the baselines of the metrics using the baseline adjusting algorithm retrieved from the linkage. | 09-29-2011 |
20110238376 | Automatic Determination of Dynamic Threshold for Accurate Detection of Abnormalities - An improved performance management technique allows automatic determination of dynamic thresholds for a metric based on the baseline of a matching pattern. A pattern matching process is conducted against a set of baseline patterns to find the matching pattern. If a matching pattern is found, the baseline of the matching pattern is used as the dynamic threshold. A series of sanity checks are performed to reduce any false alarms. If the metric does not follow any pattern, a composite of baselines is selected as the dynamic threshold. | 09-29-2011 |
20110234595 | Graph Expansion Mini-view - A graphical representation of a service model provides a full view of a portion of the graphical representation. A sub graph view may be displayed for nodes of the graphical representation of the service model that are associated with a selected node, including nodes that may not be visible in the full view. The sub graph view may be interactive, providing additional information regarding the nodes displayed in the sub graph view, and allowing nodes in the sub graph view to be made visible or invisible in the full view. Information may be displayed in the sub graph view about the status of the components being modeled by the service model corresponding to nodes displayed in the sub graph view. | 09-29-2011 |
20110214024 | Method of Collecting and Correlating Locking Data to Determine Ultimate Holders in Real Time - A technique for collecting and correlating locking data collects and correlates information on a plurality of programs waiting on or holding a plurality of resources in a multi-computer database system. The technique identifies a program executing on one computer of the multi-computer database system that is waiting on a resource. The technique also identifies a second program, executing on another computer, as the ultimate holder of the resource. An operator display screen displays information corresponding to the first program and the second program. The operator display screen may be switched between a multiline display format and a single line display format. The collection, identification, and display of the locking data is performed periodically, to allow the operator to discover locking problems and take a desired corrective action. | 09-01-2011 |
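The correlation step above, identifying the ultimate holder behind a waiting program, amounts to following a wait chain across computers until it reaches a program that is not itself waiting. The two-dictionary representation below is an illustrative assumption, not the patent's data model.

```python
def ultimate_holder(program, waiting_on, held_by):
    """Follow the wait chain from `program` until a program that is not
    itself waiting is found: the ultimate holder of the resource.

    waiting_on: {program: resource it waits on}
    held_by:    {resource: program currently holding it}
    (Both mappings are assumed for illustration.)"""
    seen = {program}
    while program in waiting_on:
        program = held_by[waiting_on[program]]
        if program in seen:          # a cycle means deadlock; stop here
            break
        seen.add(program)
    return program
```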
20110213886 | Intelligent and Elastic Resource Pools for Heterogeneous Datacenter Environments - Disclosed are methods and systems for intelligent resource pool management of heterogeneous datacenter resources. In one embodiment, intelligent resource pool management is utilized to assist in application provisioning performed based upon a blueprint and deployment model defining requirements of the provisioned application. In other embodiments, intelligent resource pool managers are configured to work in concert with other intelligent resource pool managers and/or a centralized provisioning engine. Resource pools may also be configured in a hierarchical manner whereby higher level resource pools may automatically draw resources from lower level resource pools as directed by one or more intelligent resource pool managers. | 09-01-2011 |
20110213885 | Automating Application Provisioning for Heterogeneous Datacenter Environments - Disclosed are methods and systems to automate the provisioning and deployment of application instances within a heterogeneous data center. In one embodiment, the application provisioning is performed based upon a blueprint and deployment model defining requirements of the provisioned application. In another embodiment, the totality of available resources for provisioning is divided into different segments. When resources are requested and assigned to an incoming provisioning request, the resource pool may be refreshed or augmented as defined by thresholds or forecasting of user needs. The resource pool may be refreshed by recapturing allocated resources that are no longer in use or by configuring resources taken from the reserve. Further, when reserve resources are not available or are below a minimum reserve threshold, capacity planning actions may be initiated or advised. | 09-01-2011 |
20110161964 | Utility-Optimized Scheduling of Time-Sensitive Tasks in a Resource-Constrained Environment - Systems and methods implementing utility-maximized scheduling of time-sensitive tasks in a resource constrained-environment are described herein. Some embodiments include a method for utility-optimized scheduling of computer system tasks performed by a processor of a first computer system that includes determining a time window including a candidate schedule of a new task to be executed on a second computer system, identifying other tasks scheduled to be executed on the second computer system within said time window, and identifying candidate schedules that each specifies the execution times for at least one of the tasks (which include the new task and the other tasks). The method further includes calculating an overall utility for each candidate schedule based upon a task utility calculated for each of the tasks when scheduled according to each corresponding candidate schedule and queuing the new task for execution according to a preferred schedule with the highest overall utility. | 06-30-2011 |
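The selection step described above, scoring each candidate schedule by summing per-task utilities and queuing against the best one, can be sketched directly. The schedule representation and the `task_utility` callback are assumptions for illustration; the patent does not fix a utility function.

```python
def best_schedule(candidate_schedules, task_utility):
    """Return the candidate schedule with the highest overall utility,
    where overall utility is the sum of the per-task utilities.

    candidate_schedules: list of {task: start_time} dicts (assumed shape);
    task_utility: function (task, start_time) -> float."""
    def overall(schedule):
        return sum(task_utility(task, start)
                   for task, start in schedule.items())
    return max(candidate_schedules, key=overall)
```

For instance, with a utility that simply penalizes later start times, the schedule whose starts sum lowest wins.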
20110161959 | Batch Job Flow Management - Systems and methods for improved batch flow management are described. At least some embodiments include a computer system for managing a job flow including a memory storing a plurality of batch queue jobs grouped into Services each including a job and a predecessor job. A time difference is the difference between a scheduled job start time and an estimated predecessor job end time. Jobs with a preceding time gap include jobs immediately preceded only by non-zero time differences. The job start depends upon the predecessor job completion. The computer system further includes a processing unit that identifies jobs preceded by a time gap, selects one of the Services, and traverses in reverse chronological order a critical path of dependent jobs within the Service until a latest job with a preceding time gap is identified or at least those jobs along the critical path preceded by another job are traversed. | 06-30-2011 |
20110161928 | Method to Provide Transparent Process I/O Context on a Remote OS While Retaining Local Processing - Systems and methods are disclosed that implement a data collection infrastructure that supports both agent-based and agentless data collection. Existing data collection scripts may be used, whether agent-based or agentless, and new scripts may be created that include commands that may execute either locally or remotely, as desired. These scripts, while executed locally, may interact with either the local machine or another remote machine for performing data collection, corrective actions, or other desired functionality. An execution context defines whether commands executed by the script are to execute locally or remotely, and a context handler allows processing those commands either locally or remotely depending on the execution context, transparently to the script. Data generated by remote execution may be transported back to the local machine for manipulation locally, transparently to the script. | 06-30-2011 |
20110161477 | Method and System to Automatically Adapt Web Services from One Protocol/Idiom to Another Protocol/Idiom - A method and system to convert an existing web service from a first web services implementation type to a second web services implementation type. Example implementation types include SOAP and Representational State Transfer (REST). This conversion is achieved by recognizing and classifying available information from each of the distinct implementation types. After proper recognition and classification as disclosed herein, a deterministic process may be utilized to assist in converting or translating the exposed interface, thereby helping provide an interface based on a different interface type than the one already exposed. | 06-30-2011 |
20110161465 | Method and System to Automatically Adapt Web Services from One Protocol/Idiom to Another Protocol/Idiom - Disclosed are embodiments of a method and system to convert an existing web services request from a first web services implementation type to a second web services implementation type. Example implementation types include SOAP-based and Representational State Transfer (RESTful). Conversion may be achieved through use of a generic web services adaptor. The generic web services adaptor can provide a plurality of interface types and convert requests to a request type supported by an existing web service provider endpoint. In some embodiments, requests not requiring a conversion may be forwarded directly to an existing web service provider endpoint. | 06-30-2011 |
20110161048 | Method to Optimize Prediction of Threshold Violations Using Baselines - A baseline technique allows reducing the number of threshold violation predictions that need to be generated in a performance monitoring system. One or more baselines may be calculated based on long-term trends in a monitored metric. If the metric is within the baseline, then predictions regarding short-term trends in the metric may be omitted. If the metric is outside the baseline, then short-term trends may be analyzed to predict possible threshold violations. | 06-30-2011 |
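The filtering idea above, skipping short-term prediction for any metric still inside its long-term baseline, reduces to a band check per metric. The dictionary shapes below are assumptions for illustration.

```python
def metrics_needing_prediction(latest, baselines):
    """Only metrics whose latest value falls outside their long-term
    baseline band warrant the cost of short-term violation prediction.

    latest:    {metric: latest observed value}   (assumed shape)
    baselines: {metric: (low, high) band}        (assumed shape)"""
    return [m for m, v in latest.items()
            if not (baselines[m][0] <= v <= baselines[m][1])]
```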
20110154362 | Automated Computer Systems Event Processing - Systems and methods for automated computer systems event processing are described herein. At least some example embodiments include a communication interface that receives an event message and a processing unit (coupled to the communication interface) that processes the event message and that further obtains, parses and tokenizes an character string that includes one or more delimited elements selected from the group consisting of a constant, a variable and a function, wherein each function accepts as input the one or more delimited elements. The processing unit further evaluates the parsed and tokenized character string in response to receiving the event message and initiates an action based upon the result of the evaluation. The processing unit also creates a common execution environment for performing the processing, obtaining, parsing, tokenizing and evaluation. | 06-23-2011 |
20110154353 | Demand-Driven Workload Scheduling Optimization on Shared Computing Resources - Systems and methods implementing a demand-driven workload scheduling optimization of shared resources used to execute tasks submitted to a computer system are disclosed. Some embodiments include a method for demand-driven computer system resource optimization that includes receiving a request to execute a task (said request including the task's required execution time and resource requirements), selecting a prospective execution schedule meeting the required execution time and a computer system resource meeting the resource requirement, determining (in response to the request) a task execution price for using the computer system resource according to the prospective execution schedule, and scheduling the task to execute using the computer system resource according to the prospective execution schedule if the price is accepted. The price varies as a function of availability of the computer system resource at times corresponding to the prospective execution schedule, said availability being measured at the time the price is determined. | 06-23-2011 |
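The pricing step above, a price that varies with the availability of the resource at the prospective execution times, can be sketched with a simple scarcity surcharge. The linear formula is purely an illustrative choice; the patent does not fix a pricing function.

```python
def task_execution_price(base_rate, requested_units, free_units):
    """Price a task against current availability: the scarcer the free
    capacity in the prospective window, the higher the price.
    (The linear surcharge is an illustrative assumption.)"""
    if requested_units > free_units:
        raise ValueError("insufficient capacity in the requested window")
    scarcity = requested_units / free_units  # 0 (plentiful) .. 1 (last units)
    return base_rate * requested_units * (1.0 + scarcity)
```

The task is then scheduled only if the caller accepts the returned price, mirroring the conditional scheduling step in the abstract.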
20110153580 | Index Page Split Avoidance With Mass Insert Processing - A technique is disclosed that avoids index page splits when inserting large numbers of rows into a table of a relational database. Keys in index pages are moved to successive index pages to make room to insert keys on the original index page. Where no room is available on successive pages, a new index page is created to hold moved keys. The result is typically a smaller chain of index pages with better locality than using the conventional insertion technique of splitting index pages. | 06-23-2011 |
20110153559 | Mechanism for Deprecating Object Oriented Data - Techniques are described to allow the deprecation of classes in an object-oriented data model, such as a CDM for a CMDB. When a class is deprecated and replaced by another existing or new class, data associated with instances of the deprecated class may be migrated to the replacement class. A mapping between the deprecated class and its replacement class may be provided to allow existing applications to continue to access data using the deprecated class without change until the deprecated class is finally deleted or the application is updated to use the replacement class. New applications written to use the object-oriented data model after the deprecation may use the replacement class to access data instances created using the original data model. | 06-23-2011 |
20110137887 | Constraint Processing - Constraint processing for a relational database generates primary (e.g., based on primary key values) and constraint index records (e.g., based on foreign key values) during table load operations that are then sorted in a manner that rapidly and unambiguously identifies rows that fail the specified constraint test. Rows so identified may be deleted to maintain the constraint (e.g., referential) integrity of a child table. In one case, child table row data may be processed in constraint key order, eliminating the need to first load the child table with row data and then delete those rows that subsequently fail the integrity test. | 06-09-2011 |
20110131186 | Extending a Database Recovery Point at a Disaster Recovery Site - A DBA may pre-generate database recovery jobs on a convenient schedule at a local site, then recover a database at a disaster recovery site. Archive log files for the database that are generated in the interim between recovery job generation and recovery job execution are automatically incorporated into the recovery job when it executes, extending the recovery point closer to the time of the disruption that triggered the need or desire for recovery. | 06-02-2011 |
20110125745 | Balancing Data Across Partitions of a Table Space During Load Processing - A balancing technique allows a database administrator to perform a mass data load into a relational database employing partitioned tablespaces. The technique automatically balances the usage of the partitions in a tablespace as the data is loaded. Previous definitions of the partitions are modified after the loading of the data into the tablespace to conform with the data loaded into the tablespace. | 05-26-2011 |
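The balancing idea above, choosing partition boundaries so each partition of the tablespace receives an even share of the mass-loaded rows, can be sketched by picking limit keys from the sorted load data. The function name and the even-split heuristic are illustrative assumptions, not the patented algorithm.

```python
def partition_limit_keys(sorted_keys, num_partitions):
    """Pick partition limit keys so that each partition receives a
    roughly equal share of the rows being loaded (a minimal sketch,
    assuming the load keys are already sorted)."""
    rows_per_part = -(-len(sorted_keys) // num_partitions)  # ceiling division
    return [sorted_keys[min((i + 1) * rows_per_part, len(sorted_keys)) - 1]
            for i in range(num_partitions)]
```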
20110113117 | Asynchronous Collection and Correlation of Trace and Communications Event Data - A transaction processing system that includes a communications bridge between clients and a transaction processing engine provides a way to correlate events associated with the communications bridge and events associated with the transaction processing engine. By passing unique identification information with the transaction requests and responses between the communications bridge and transaction processing engine and including the unique identification information in logging information created by each, a correlation utility may correlate logging information to create a more complete view of the events associated with a transaction, including end-to-end response times. | 05-12-2011 |
20110072433 | Method to Automatically ReDirect SRB Routines to a ZIIP Eligible Enclave - A method to redirect SRB routines from otherwise non-zIIP eligible processes on an IBM z/OS series mainframe to a zIIP eligible enclave is disclosed. This redirection is achieved by intercepting otherwise blocked operations and allowing them to complete processing without errors imposed by the zIIP processor configuration. After appropriately intercepting and redirecting these blocked operations, more processing may be performed on the more financially cost effective zIIP processor by users of mainframe computing environments. | 03-24-2011 |
20110072432 | METHOD TO AUTOMATICALLY REDIRECT SRB ROUTINES TO A zIIP ELIGIBLE ENCLAVE - A method to redirect SRB routines from otherwise non-zIIP eligible processes on an IBM z/OS series mainframe to a zIIP eligible enclave is disclosed. This redirection is achieved by intercepting otherwise blocked operations and allowing them to complete processing without errors imposed by the zIIP processor configuration. After appropriately intercepting and redirecting these blocked operations, more processing may be performed on the more financially cost effective zIIP processor by users of mainframe computing environments. | 03-24-2011 |
20110071984 | Area-Specific Reload of Database - A hierarchical database stores data for the database in a plurality of areas. A disclosed technique allows reorganization of one or more areas of the database without stopping the entire database. The areas to be reorganized are first stopped, then the areas are unloaded, reorganized, and reloaded, before restarting the reorganized areas. In-memory control blocks for the areas are updated to indicate to the database software that the areas have been reorganized, without stopping the entire database. | 03-24-2011 |
20110071982 | Offline Restructuring of DEDB Databases - An IMS DEDB database restructure operation creates an empty offline DEDB having the desired structure. The offline database is populated with data from a source (online) database while keeping the source database online (i.e., available for access and update operations). Updates to the source database made during this process are selectively processed in parallel with the offline DEDB load operation. When the contents of the offline database are substantially the same as those of the source or online database, the source database is taken offline and final updates to the offline database are applied, whereafter the offline database is brought online, thereby replacing the source database. It is significant to note that updates occurring to the source or online DEDB are applied to the offline DEDB. | 03-24-2011 |
20110055181 | Database Quiesce Operations - A technique quiesces a database without causing after-arriving access requests to abnormally terminate by interrogating database management system control structures associated with the database. Specified modifications to these control structures can be made so that subsequent access requests to the database (i.e., during quiesce operations) are not abnormally terminated. Once the database is quiesced, regular or special purpose maintenance or testing operations, the starting or stopping of log keeping operations, or similar operations may be performed on the database. Once these operations are complete, the database control structures may be updated again to permit pending/scheduled access requests to proceed. | 03-03-2011 |
20100318497 | Unobtrusive Copies of Actively Used Compressed Indices - Methods, devices and systems to make compressed backup copies of in-use compressed database indices are described. In general, an "oldest" time at which index pages in working memory had been updated is identified. Compressed index pages may be directly copied without the need to bring them into working memory or uncompress them. The identified "oldest" time is then associated with the compressed backup copy. In some embodiments, an entire compressed backup copy may be associated with a single point in time (e.g., the identified "oldest" time). In other embodiments, a compressed backup copy may be associated with multiple points in time (e.g., one time for each portion of the compressed index that is being backed-up). Compressed indices copied in accordance with the invention may be used during restore operations to reconstruct database indices using the identified "oldest" time and database log files. | 12-16-2010 |
20100287143 | Relational Database Page-Level Schema Transformations - Methods, devices and systems which facilitate the conversion of database objects from one schema version (e.g., an earlier version) to another schema version (e.g., a newer version) without requiring the objects be unloaded and reloaded are described. In general, data object conversion applies to both table space objects and index space objects. The described transformation techniques may be used to convert any object whose schema changes occur at the page-level. | 11-11-2010 |
20100268565 | SYSTEM AND METHOD OF ENTERPRISE SYSTEMS AND BUSINESS IMPACT MANAGEMENT - A system architecture and method that use a cellular architecture to allow multi-tier management of events, such as managing the actual or potential impact of IT infrastructure situations on business services. A preferred embodiment includes a high availability management backbone to frame monitoring operations using a cross-domain model where IT component events are abstracted into IT Aggregate events. By combining IT Aggregate events with transaction events, an operational representation of the business services is possible. Another feature is the ability to connect this information to dependent business user groups such as internal end-users or external customers for direct impact measurement. A web of peer-to-peer rule-based cellular event processors, preferably using Dynamic Data Association, constitutes a management backbone crossed by event flows, execution rules, and a distributed set of dynamic inter-related object data rooted in the top data instances featuring the business services. | 10-21-2010 |
20100251379 | Method and System for Configuration Management Database Software License Compliance - A software license engine allows an enterprise to model software license contracts and evaluate deployment of software for compliance with the software license contracts. Deployment of software products in the enterprise is modeled in a configuration management database. The software license engine maintains a license database for connecting software license contracts with software deployment modeled by the configuration management database. Users of the software license engine may use license types that are predefined in the software license engine or may define custom license types. The software license engine may indicate compliance or non-compliance with the software license contracts. | 09-30-2010 |
20100223166 | Unified Service Model for Business Service Management - A unified service model method is used for Business Service Management of a computing infrastructure. In the model, service offerings are defined for a business service, and one or more service level targets are associated with each of these offerings. The business service is associated with one or more technical services that support the business service. These technical services are delivered by actual components in a computing infrastructure. In the model, service offerings are associated with the technical services, and service level targets are associated with each of these offerings. A customer defined in the model subscribes to one of the service offerings of the business service. As business services are provided, the unified service model combines the service offerings, tying the business and technical services to the associated service level targets, and administrators can manage the services and IT components using the unified service model. | 09-02-2010 |
20100220625 | Heuristic Determination of Network Interface Transmission Mode - A method for measuring and determining the duplex mode of a network interface. The method assumes the network interface to be operating in a half-duplex mode until the bandwidth utilization reaches a threshold. When the threshold is reached, the method checks for traffic collisions on the interface. If there are no collisions, then the duplex mode is determined to be full-duplex. If there are collisions, then the duplex mode is determined to be half-duplex and an alarm is raised. In another embodiment, the interface type is determined through SNMP. If the interface is a WAN interface, then the interface is determined to be full-duplex. | 09-02-2010 |
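The duplex-mode heuristic in the abstract above might be sketched as follows. The function name, the 0.6 utilization threshold, and the (mode, alarm) return convention are illustrative assumptions, not details from the application:

```python
def infer_duplex_mode(utilization, collisions, threshold=0.6, is_wan=False):
    """Heuristically infer a network interface's duplex mode.

    Half-duplex is assumed until bandwidth utilization reaches the
    threshold; once it does, observed collisions confirm half-duplex
    (and raise an alarm), while their absence implies full-duplex.
    A WAN interface (e.g., as identified via SNMP) is treated as
    full-duplex outright. Returns a (mode, alarm) pair.
    """
    if is_wan:
        return "full-duplex", False
    if utilization < threshold:
        return "half-duplex", False   # assumed, not yet confirmed
    if collisions > 0:
        return "half-duplex", True    # confirmed half-duplex: raise alarm
    return "full-duplex", False
```

In practice the utilization and collision figures would come from interface counters polled over SNMP; here they are passed in directly to keep the sketch self-contained.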
20100205157 | Log Data Store and Assembler for Large Objects in Database System - A mechanism works in conjunction with a DB2® Log and an analysis tool, such as BMC's Log Master™, to handle logged data for Large Objects (LOBs) stored in tables of a DB2 database system. A plurality of controls track data logged for the LOBs. The mechanism reads log records from a DB2 Log and uses the controls to determine which of the tracked LOBs is associated with the log records and obtains data from those associated log records. The mechanism builds keys to index the data and stores the keys and the data in a Virtual Storage Access Method store having Key Sequenced Data Sets maintained separate from the log record store for the DB2 Log. When requested by the analysis tool, the data in the store can be reassembled using the keys and map records in the first store that map the logged data for the tracked LOBs. | 08-12-2010 |
20100199058 | Data Set Size Tracking and Management - Specified data sets may be tracked from creation to end-of-life (e.g., deletion). Between creation and end-of-life, data set storage changes may be recorded (i.e., when additional storage is allocated or when some storage is released). During a subsequent allocation cycle, this information may be used in conjunction with user-specified allocation rules to manage or control the data set's initial allocation. | 08-05-2010 |
20100198843 | Software Title Discovery - In a computer system that has no single place to discover all installed software applications, a software title discovery technique uses a combination of techniques to discover installed software. One of the combined techniques is an operating system predefined interface for obtaining information about installed software applications; other techniques that may be employed include searching a repository of uninstall information, searching for executable files in a portion of a filesystem for the computer, and searching for executable files pointed to by other files in the filesystem of the computer system. A client/server configuration may be employed to allow collection of the software application information across a network of computers in an enterprise by a server computer system, allowing the server system to provide reports regarding installed software applications. | 08-05-2010 |
20100191624 | SYSTEM AND METHOD FOR CLASSIFYING REQUESTS - Embodiments of the present invention generate identification rules to classify requests as corresponding to particular transactions. Embodiments of the present invention examine a set of sample requests, determine patterns in the sample requests and generate identification rules based on the request patterns to classify subsequent requests as corresponding to particular transactions. As more sample requests are processed, embodiments of the present invention can update the identification rules. Put another way, embodiments of the present invention can automatically learn how to classify requests better as more requests are processed. | 07-29-2010 |
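The learning loop described in the classification abstract above could be sketched roughly as follows. The path-segment generalization strategy, function names, and sample format are assumptions for illustration; the application does not specify this particular rule representation:

```python
from collections import defaultdict

def learn_rules(samples):
    """Derive identification rules from (request_path, transaction) samples.

    For each transaction, path segments shared by all of its samples stay
    literal; segments that differ are generalized to a "*" wildcard.
    Re-running with more samples refines the rules, mimicking the
    incremental learning the abstract describes.
    """
    by_txn = defaultdict(list)
    for path, txn in samples:
        by_txn[txn].append(path.strip("/").split("/"))
    rules = {}
    for txn, paths in by_txn.items():
        n = min(len(p) for p in paths)
        pattern = []
        for i in range(n):
            segments = {p[i] for p in paths}
            pattern.append(segments.pop() if len(segments) == 1 else "*")
        rules[txn] = pattern
    return rules

def classify(path, rules):
    """Classify a request path against the learned rules."""
    segments = path.strip("/").split("/")
    for txn, pattern in rules.items():
        if len(segments) >= len(pattern) and all(
                pat in ("*", seg) for pat, seg in zip(pattern, segments)):
            return txn
    return None
```

A usage example: training on `("/cart/add/42", "add-to-cart")` and `("/cart/add/7", "add-to-cart")` yields the pattern `["cart", "add", "*"]`, which then matches `/cart/add/99`.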
20100185658 | MDR FEDERATION FACILITY FOR CMDBf - This disclosure relates generally to the field of federated configuration management databases (CMDBs). To claim compliance with the CMDBf Standard (“the Standard”), a CMDB implementation must provide working and interoperable implementations of the interfaces defined in the Standard. To make a working implementation, certain non-obvious features are required that are not addressed by the Standard. Among these requirements are: registering management data repositories (MDRs) so that they can be federated; managing/maintaining the list of federated MDRs; querying an MDR for its Data Model; using such MDR Data Models to define mappings of one or more attributes from the MDR data model to one or more attributes of one or more of the CMDB's data models; identifying attributes and defining rules to be used when reconciliation is performed; and managing as well as storing data representative of those mappings. This disclosure addresses these and other deficiencies. | 07-22-2010 |
20100179945 | Normalization Engine to Manage Configuration Management Database Integrity - Data is often populated into Configuration Management Databases (CMDBs) from different sources. Because the data can come from a variety of sources, it may have inconsistencies—and may even be incomplete. A Normalization Engine (NE) may be able to automatically clean up the incoming data based on certain rules and knowledge. In one embodiment, the NE takes each Configuration Item (CI) or group of CIs that are to be normalized and applies a rule or a set of rules to see if the data may be cleaned up, and, if so, updates the CI or group of CIs accordingly. In particular, one embodiment may allow for the CI's data to be normalized by doing a look up against a Product Catalog and/or an Alias Catalog. In another embodiment, the NE architecture could be fully extensible, allowing for the creation of custom, rules-based plug-ins by users and/or third parties. | 07-15-2010 |
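The catalog lookup described in the Normalization Engine abstract above might look something like the sketch below. The function signature, the lower-cased alias keys, and the `manufacturer` attribute are hypothetical details chosen for illustration:

```python
def normalize_ci(ci, product_catalog, alias_catalog):
    """Normalize a Configuration Item's product data via catalog lookups.

    First resolves the CI's product name through an Alias Catalog to a
    canonical name, then enriches the CI from the Product Catalog entry
    for that canonical name. Unrecognized CIs are returned unchanged.
    """
    name = ci.get("product", "")
    canonical = alias_catalog.get(name.lower(), name)
    entry = product_catalog.get(canonical)
    if entry:
        # Return a cleaned-up copy rather than mutating the input CI
        return dict(ci, product=canonical, manufacturer=entry["manufacturer"])
    return ci
```

In the extensible architecture the abstract mentions, a lookup like this would be one rules-based plug-in among several, each applied to an incoming CI (or group of CIs) in turn.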
20100179939 | CMDB FEDERATION METHOD AND MANAGEMENT SYSTEM - This disclosure relates generally to the field of Configuration Management Databases (CMDBs). One embodiment of a user interface embodying the present invention is an extension of the process for creating CMDB classes and is therefore readily available for use by someone with knowledge of CMDB administration. The CMDB administrator is thus relieved from having to understand in detail the technologies and interfaces used by the Management Data Repository (MDR) sources. The result of setting up a relation from a CMDB data structure to an MDR data structure by a CMDB administrator may be represented by one or more new CMDB class(es) for the MDR data. The related MDR may then be accessed by an existing CMDB application using already existing CMDB interfaces. The instances of the new relationships and classes thus appear as if they were native instances stored in the CMDB. | 07-15-2010 |
20100162227 | Automation of Mainframe Software Deployment - Methods and systems to automate the deployment from one SMP/E installed run-time mainframe system logical partition (LPAR) to one or more different and distinct LPARs within a mainframe environment are described. Deployment may consist of distributing one or more installation items (e.g., complete products, product upgrades, patches and/or temporary fixes) from one installation environment to another target system. Also, the installed items may have optionally undergone further configuration after the initial installation and prior to actual automated deployment. Each of the target systems is communicatively coupled to the first (i.e., source) LPAR. | 06-24-2010 |
20100161577 | Method of Reconciling Resources in the Metadata Hierarchy - An enhanced resource reconciliation process is disclosed to examine the metadata hierarchy of unidentified instances of configuration objects within a particular "data partition" (sometimes called a dataset) of an enterprise configuration management database (CMDB) and perform reconciliation against a target dataset, such as a golden, i.e., production, dataset. The enhanced reconciliation process could identify the unidentified instance against instances in the production dataset that are of the same class, as well as against instances that come from any "candidate" classes. Candidate classes could consist of, e.g., classes upstream or downstream from the unidentified instance in the metadata hierarchy. By allowing the specification of one or more reconciliation properties, such as, "identify downstream," "identify upstream," "identify upstream and downstream," or "identify resources of the same class only," the enhanced resource reconciliation process could perform identification and resource reconciliation against instances of any class in the unidentified instance's metadata hierarchy. | 06-24-2010 |
20100146498 | METHOD TO MAKE SMP/E BASED PRODUCTS SELF DESCRIBING - Systems and methods of providing information from run-time installations of mainframe SMP/E based products. Information is embedded into a fingerprint library. The fingerprint library may then be associated with a product installed via SMP/E. The fingerprint library may then remain with the product when it is copied to its distributed location. A system administrator may later query the run-time installation and retrieve information previously only known to the SMP/E tool in an SMP/E controlled installation. In one embodiment, information may be embedded into a fingerprint library at product build time. | 06-10-2010 |
20100114948 | SYSTEM AND METHOD FOR SCHEDULED AND COLLABORATIVE DISTRIBUTION OF SOFTWARE AND DATA TO MANY THOUSANDS OF CLIENTS OVER A NETWORK USING DYNAMIC VIRTUAL PROXIES - A method and system for distributing content from a server computer to a number of client computers is disclosed. A file to be distributed to a requesting client computer is identified. If another client computer of the plurality of client computers can distribute the file to the requesting client computer, the requesting client computer requests the file from the other client computer. If no client computer can distribute the file to the requesting client computer, the requesting client computer requests the file from the server computer. The requesting client computer then receives the file from either the other client computer or the server computer. Each client computer can act both as a client and a server to the other client computers, providing content that would otherwise be provided by the server computer. | 05-06-2010 |
20100100533 | Cascade Delete Processing - A time-efficient means for identifying and processing cascading deletes due to referential constraint violations includes: logging, to an error file, all primary key (“PK”) errors detected during table load operations; building a foreign key (“FK”) index for each child table; recursively probing each relevant FK index to identify all loaded rows that violate a referential constraint due to a PK error; logging all identified FK errors to the error file; and using the (preferably sorted) error file contents to identify, mark and physically delete table rows that violate a referential constraint. The described cascade delete processing methods make only a single pass through the table data, using ordinary computer files to track and organize rows identified for deletion. Use of error files rather than tablescans (multiple passes through the loaded table data) can provide a significant reduction in table load times, especially for large or intricately “related” tables. | 04-22-2010 |
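The single-pass cascade-delete steps enumerated in the abstract above might be sketched in miniature as follows. The in-memory table representation, tuple-based relation list, and function name are simplifying assumptions; the actual method operates on loaded table data and on-disk error files:

```python
from collections import defaultdict

def cascade_delete(tables, relations, pk_errors):
    """Identify and delete rows violating referential constraints.

    tables: {table_name: {pk: row_dict}}.
    relations: [(parent_table, child_table, fk_column), ...].
    pk_errors: set of (table, pk) primary-key errors logged during load.
    Builds a foreign-key index per child table, recursively probes it to
    find every dependent row, logs those to an "error file" set, then
    deletes all identified rows in a single pass.
    """
    # Step 1: build a foreign-key index for each parent/child relation
    fk_index = defaultdict(lambda: defaultdict(list))
    for parent, child, fk_col in relations:
        for pk, row in tables[child].items():
            fk_index[(parent, child)][row[fk_col]].append(pk)

    # Step 2: recursively probe the FK indexes (iteratively, via a stack)
    error_file = set(pk_errors)
    stack = list(pk_errors)
    while stack:
        table, pk = stack.pop()
        for parent, child, _ in relations:
            if parent != table:
                continue
            for child_pk in fk_index[(parent, child)].get(pk, []):
                if (child, child_pk) not in error_file:
                    error_file.add((child, child_pk))
                    stack.append((child, child_pk))

    # Step 3: use the sorted error-file contents to delete violating rows
    for table, pk in sorted(error_file):
        tables[table].pop(pk, None)
    return sorted(error_file)
```

The point of the design is visible even at this scale: the table data is scanned once to build the indexes, and all further work probes the indexes rather than re-scanning the tables.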
20100050023 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR OPTIMIZED ROOT CAUSE ANALYSIS - Embodiments disclosed herein can significantly optimize a root cause analysis and substantially reduce the overall time needed to isolate the root cause or causes of service degradation in an IT environment. By building on the ability of an abnormality detection algorithm to correlate an alarm with one or more events, embodiments disclosed herein can apply data correlation to data points collected within a specified time window by data metrics involved in the generation of the alarm and the event(s). The level of correlation between the primary metric and the probable cause metrics may be adjusted using the ratio between theoretical data points and actual data points. The final Root Cause Analysis score may be modified depending upon the adjusted correlation value and presented for user review through a user interface. | 02-25-2010 |
20100023302 | System and Method for Assessing and Indicating the Health of Components - A system and method for visualization of the components of an enterprise system and the rendering of information about the health or status of the enterprise system, its components, and/or its subcomponents. The invention uses a combination of color codes or other indicators and a combination of algorithms and/or rules-based systems to control the computation of statuses/severities to associate with components and to set up the color codes and indicators. | 01-28-2010 |
20090307272 | IMS Change Mapper - A method, system and device for monitoring internal database log events in a computer database environment are described. As database updates are detected, they are analyzed and used to determine which of several kinds of database maintenance are required. The database administrator is therefore presented with information allowing for more accurate maintenance scheduling and is able to prevent unnecessary database maintenance outages. | 12-10-2009 |
20090292720 | Service Model Flight Recorder - A method, system and medium for recording events in a system management environment is described. As system events are detected in an enterprise computing environment, they are stored in a manner allowing them to be "replayed" either forward or in reverse to assist a system administrator or other user in determining the chain of events that affected the enterprise. The system engineer and business process owner are therefore presented with pertinent information for monitoring, administering and diagnosing system activities and their correlation to business services. | 11-26-2009 |
20090240765 | SYNTHETIC TRANSACTION MONITOR WITH REPLAY CAPABILITY - Systems and methods for recording and replaying client-server transactions on selected clients in order to gauge the performance of the client-server system from the perspective of the client. In one embodiment, a method comprises playing back a set of recorded transactions on a client, monitoring selected performance-related parameters at the client, and transmitting monitored data to the server for analysis or for viewing by a system administrator. The set of transactions is recorded on a first client by replacing a standard Internet transaction driver (e.g., WinInet.DLL) with a modified driver that is configured to intercept function calls. The function calls and corresponding parameters are recorded in a file which is later transmitted to a client, where the recorded transaction information is used to reproduce the transactions on the client. As the transactions are played back, performance data may be monitored and forwarded to a management server for analysis, display, etc. | 09-24-2009 |
20090158192 | Dynamic Folding of Listed Items for Display - A list folding process dynamically groups items of a list into logically related visual folds to reduce the number of items to be displayed in a window of a computer screen. The process determines attributes of the items to be displayed and dynamically groups items together into a special group called a visual fold based on the attributes. The rules for folding items based on attributes can be defined by a particular user so that each view of the items may be different among users. As the attributes of each item change, the display of the items and visual folds may be automatically adjusted to reflect the current proper grouping. The folding process therefore allows a user to view the maximum amount of information in the available display area of a computer screen window. | 06-18-2009 |
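The attribute-based folding described above can be illustrated with a small sketch. The dictionary item model, the single-attribute fold key, and the choice to fold only groups of more than one item are assumptions made for the example:

```python
from collections import defaultdict

def fold_items(items, key):
    """Group list items into visual folds by a chosen attribute.

    items: list of dicts; key: the attribute to fold on. Items sharing a
    value for `key` collapse into a single fold entry recording the value
    and member count; singletons are displayed as-is. Re-running after an
    attribute changes yields the adjusted grouping automatically.
    """
    folds = defaultdict(list)
    for item in items:
        folds[item[key]].append(item)
    display = []
    for value, members in folds.items():
        if len(members) > 1:
            display.append({"fold": value, "count": len(members)})
        else:
            display.append(members[0])   # nothing gained by folding one item
    return display
```

Per the abstract, the fold key (and more elaborate rules) would be user-configurable, so two users viewing the same list could see different groupings.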
20090157724 | Impact Propagation in a Directed Acyclic Graph Having Restricted Views - Service impact data is efficiently propagated in a directed acyclic graph with restricted views. One or more service components, impact rules and business rules are grouped together into a directed acyclic graph and a related metadata array. Impact propagation uses the related metadata array to minimize traversal of the graph. As nodes of the graph are updated to propagate impact data, a determination is made as to when no further impact propagation is required. Subsequently, calculations are terminated without having to traverse the entire graph. This method allows a system or business administrator to view and receive real-time notification of the impacted state of all nodes in the graph that are available to their permitted view. Restricted views ensure that available service impact data is only displayed to end users having the proper authorization to view the underlying impact model data. | 06-18-2009 |
20090157723 | Impact Propagation in a Directed Acyclic Graph - A method, system and medium for efficiently propagating service impact data in a directed acyclic graph. One or more service components, impact rules and business rules will be grouped together into a directed acyclic graph and a related metadata array. Impact propagation uses the related metadata array to minimize traversal of the graph. As nodes of the graph are updated to propagate impact data, a determination is made as to when no further impact propagation is required, and calculations are terminated without having to traverse the entire graph. This method will allow a system or business administrator to maintain real-time notification and visualization of the impacted state of all objects in the graph. | 06-18-2009 |
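The early-termination idea in the two impact-propagation abstracts above can be shown in miniature. This sketch assumes a simple worst-of-inputs (max severity) impact rule and an adjacency-dict graph; the actual patents describe richer impact and business rules plus a metadata array:

```python
def propagate_impact(graph, states, start, new_state):
    """Propagate a severity change through a DAG, stopping early.

    graph: {node: [downstream nodes]}; states: {node: int severity}.
    A downstream node takes the max of its current state and the
    propagated state. Whenever a node's state does not change, its
    branch is abandoned, so the whole graph need not be traversed.
    Returns the updated states and the number of edges examined.
    """
    states[start] = new_state
    frontier = [start]
    visited_edges = 0
    while frontier:
        node = frontier.pop()
        for child in graph.get(node, []):
            visited_edges += 1
            impacted = max(states[child], states[node])
            if impacted != states[child]:
                states[child] = impacted      # changed: keep propagating
                frontier.append(child)
            # unchanged: no further propagation needed on this branch
    return states, visited_edges
```

In the example below, lowering the root's severity while a downstream node is already worse stops propagation after a single edge, which is the efficiency claim of the abstracts in its simplest form.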
20090157712 | Dynamic Compression of Systems Management Data - A method, system, and medium for compressing systems management information in a historical data store. The appropriate compression algorithm to apply is determined dynamically based on the type of data being compressed and stored. As further input is received for any particular measurement, the appropriate compression algorithm is automatically selected from the set of available compression algorithms or defined by a user configuration parameter. The amount of historical data stored with the minimal amount of data loss is optimized by the system dynamically changing the compression algorithm used for the given input data over a particular time span. The system engineer is therefore presented with the pertinent information for monitoring, administering and diagnosing system activities. | 06-18-2009 |
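The selection step described above might be sketched as follows. The three scheme names, the 10%-of-span delta heuristic, and the `override` parameter standing in for the user configuration setting are all assumptions for illustration:

```python
def choose_compressor(samples, override=None):
    """Pick a compression scheme for a metric's recent samples.

    A user configuration parameter (`override`) wins outright. Otherwise
    the choice follows the shape of the data: a constant series suits
    run-length encoding, a slowly varying numeric series suits delta
    encoding, and anything else falls back to raw storage. Re-invoking
    as new samples arrive lets the scheme change over time.
    """
    if override is not None:
        return override
    if len(set(samples)) == 1:
        return "run-length"           # constant: one value plus a count
    deltas = [abs(b - a) for a, b in zip(samples, samples[1:])]
    span = max(samples) - min(samples)
    if span and max(deltas) <= 0.1 * span:
        return "delta"                # small steps: store differences
    return "raw"
```

Calling this per measurement, per time window, is what makes the compression "dynamic": a CPU counter that flatlines overnight can switch to run-length encoding and back again in the morning.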
20090094294 | Associating Database Log Records into Logical Groups - A method, system and medium for organizing and associating log records into logically related groups is described. One or more input sources from, possibly, different systems/subsystems are input to a log correlation method. As the log records are processed, the fields are interrogated to determine which log records are related to each other. As further log records are processed, more information about previously unidentifiable relationships is determined. Once this later information is known, log records that previously could not be associated with any other log records are added to the existing association. The system engineer is therefore presented with the pertinent information for monitoring, administering and diagnosing system activities. | 04-09-2009 |
20080281865 | Database Recovery Using Logs Applied to Consistent Copies - A copy utility creates a copy of source database objects that is transactionally consistent to a consistent point-in-time, and a recovery utility applies log records to the consistent copy to make a resulting image that is updated as of an identified point-in-time (i.e., the current time or a point-in-time after the copy was made). To effectively recover and apply the logs so that no previously in-flight transactions are lost, the copy utility registers a starting point indicating a point-in-time for logs to be applied to the copy and also registers a smallest lock size used to block access to target data when the copy was made. The recovery utility bases its recovery operations on the registered starting point and the smallest lock size when applying log records to the copy so as not to lose any previously in-flight transactions. | 11-13-2008 |
20080243945 | Log Data Store and Assembler for Large Objects in Database System - A mechanism works in conjunction with a DB2® Log and an analysis tool, such as BMC's Log Master™, to handle logged data for Large Objects (LOBs) stored in tables of a DB2 database system. A plurality of controls track data logged for the LOBs. The mechanism reads log records from a DB2 Log and uses the controls to determine which of the tracked LOBs is associated with the log records and obtains data from those associated log records. The mechanism builds keys to index the data and stores the keys and the data in a Virtual Storage Access Method store having Key Sequenced Data Sets maintained separate from the log record store for the DB2 Log. When requested by the analysis tool, the data in the store can be reassembled using the keys and map records in the first store that map the logged data for the tracked LOBs. | 10-02-2008 |