Entries |
Document | Title | Date |
20110066893 | SYSTEM AND METHOD TO MAP DEFECT REDUCTION DATA TO ORGANIZATIONAL MATURITY PROFILES FOR DEFECT PROJECTION MODELING - A method is implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions. The programming instructions are operable to receive a maturity level for an organization and select at least one defect analysis starter/defect reduction method (DAS/DRM) defect profile based on the maturity level. Additionally, the programming instructions are operable to determine a projection analysis for one or more stages of the life cycle of a software code project of the organization based on the at least one DAS/DRM defect profile. | 03-17-2011 |
20110066894 | DEBUGGING A MAP REDUCE APPLICATION ON A CLUSTER - A method, apparatus, system, article of manufacture, and data structure provide the ability to debug a map-reduce application on a cluster. A cluster of two or more computers is defined by installing a map-reduce framework (that includes an integrated development environment [IDE]) onto each computer. The cluster is formatted by identifying and establishing communication between each computer so that the cluster functions as a unit. Data is placed into the cluster. A function to be executed by the framework on the cluster is obtained, debugged, and executed directly on the cluster using the IDE and the data in the cluster. | 03-17-2011 |
20110072310 | Diagnostic Data Capture in a Computing Environment - A method in a multithreaded computing environment for capturing diagnostic data, the method comprising the steps of: in response to a determination that the computing environment is in a predetermined invalid state, a first thread recording diagnostic data for the computing environment, wherein the determination includes a verification that the invalid state corresponds to a state other than a transient state of the computing environment corresponding to a transition of the computing environment by a second thread from a first valid state to a second valid state. An apparatus and computer program element for providing such diagnostic data capture are also provided. | 03-24-2011 |
20110078509 | INFERENCE OF CONTRACT USING DECLARATIVE PROGRAM DEFINITION - A declarative program definition. The definition is analyzed to produce an application contract that describes semantics for sending and receiving application messages during the successful execution of operations by the program. In addition, this analysis may also generate local behaviors associated with the local execution of the program. Alternatively or in addition, the analysis may infer secondary contracts regarding the sending and receiving of application messages, even though the full details of the secondary contracts are not present in the declarative program definition. For instance, the secondary contracts might include error contracts or consistency contracts. | 03-31-2011 |
20110078510 | Computer Software and Hardware Evaluation System and Device - The present disclosure generally relates to systems and devices that enable computer software or hardware systems to be evaluated. In an embodiment, an evaluation processing device can receive a set of needs or requirements for a computer software or hardware system from a user or organization. The evaluation processing device may determine software or hardware configurations that may be suitable for the user based on the set of needs, and configure one or more evaluation devices with software or hardware based on the determination. The evaluation processing device can allow the user to evaluate the software or hardware configurations by utilizing the one or more configured evaluation devices. Additionally, the evaluation processing device can receive information related to the evaluation from the one or more evaluation devices and generate a report based on the information. | 03-31-2011 |
20110078511 | PRECISE THREAD-MODULAR SUMMARIZATION OF CONCURRENT PROGRAMS - Methods and systems for concurrent program verification. A concurrent program is summarized into a symbolic interference skeleton (IS) using data flow analysis. Sequential consistency constraints are enforced on read and write events in the IS. Error conditions are checked together with the IS using a processor. | 03-31-2011 |
20110083044 | AUTOMATIC CORRECTION OF APPLICATION BASED ON RUNTIME BEHAVIOR - A system and associated method for automatically correcting an application based on runtime behavior of the application. An incident indicates a performance of the application in which a problem object produces an outcome that had not been expected by a user or by a ticketing tool. An incident flow for the problem object is automatically analyzed. An actual run of the application renders a forward data flow, and at least one backward data flow is simulated from an expected outcome of the problem object. The forward data flow and the backward data flow(s) are compared to create a candidate fault list for the problem object. A technical specification to correct the candidate fault list and a solution to replace the application are subsequently devised. | 04-07-2011 |
20110087926 | HEAP ASSERTIONS - Programming language support for debugging heap-related errors includes one or more queries for determining one or more global properties associated with use of the heap area by the program. The one or more queries may be executed in parallel or concurrently and dynamically utilize the available number of cores. | 04-14-2011 |
20110087927 | DETECTING DEFECTS IN DEPLOYED SYSTEMS - Detecting defects in deployed systems, in one aspect, identifies one or more monitoring agents used in a computer program. The total execution metric of the computer program and the execution metric associated with the one or more monitoring agents are measured, and the measured execution metric is compared with specified overhead criteria. The execution of the one or more monitoring agents is adjusted based on the comparing step, while the computer program is executing, to meet the specified overhead criteria. | 04-14-2011 |
20110099429 | SYSTEMS AND METHODS FOR BACKWARD-COMPATIBLE CONSTANT-TIME EXCEPTION-PROTECTION MEMORY - Embodiments of the invention provide a table-free technique for detecting all temporal and spatial memory access errors in programs supporting general pointers. Embodiments of the invention provide such error checking using constant-time operations. Embodiments of the invention rely on fat pointers, whose size is contained within standard scalar sizes (up to two words) so that atomic hardware support for operations upon the pointers is obtained along with meaningful casts in-between pointers and other scalars. Optimized compilation of code becomes possible since the scalarized-for-free encoded pointers get register allocated and manipulated. Backward compatibility is enabled by the scalar pointer sizes, with automatic support provided for encoding and decoding of fat pointers in place for interaction with unprotected code. | 04-28-2011 |
20110107150 | GOVERNANCE IN WORK FLOW SOFTWARE - The disclosure presents categorization of users into groups comprising expert users and novice users. A system and method analyzes the users' inputted data in helpdesk troubleshooting software to determine the deviation of novice users from expert users, or the deviation of novice users from a preconfigured behavior as determined by management policy. Other embodiments are also disclosed. | 05-05-2011 |
20110107151 | Method and System of Deadlock Detection in a Parallel Program - A method and system of deadlock detection in a parallel program, the method comprising: recording lock events during the operation of the parallel program and a first order relation among the lock events; converting information relevant to the operation of the parallel program into gate lock events and recording the gate lock events; establishing a second order relation among the gate lock events and lock events associated with the gate lock events and adding the second order relation to the first order relation; constructing a lock graph corresponding to the operation procedure of the parallel program based on the added first order relation; and performing deadlock detection on the constructed lock graph. The deadlock detection method of the invention can improve the accuracy of deadlock detection without depending on the deadlock detection algorithm per se, and can be readily applied to various development environments, reducing development costs. | 05-05-2011 |
20110113288 | Generating random sequences based on stochastic generative model having multiple random variates - Random sequences are generated based on a stochastic generative model having multiple random variates. Inputs representative of the stochastic generative model are received. The inputs include a first random variate having a finite set of alphabets, a second random variate having a set of alphabets, and a third random variate having a finite set of alphabets. Outputs representative of the random sequences are generated based on the stochastic generative model. The outputs include a first random sequence that is a finite-length random sequence of alphabets randomly selected from the first random variate, a second random sequence having a set of alphabets selected from the second random variate, and a third random sequence having a set of alphabets randomly selected from the third random variate. | 05-12-2011 |
20110145649 | Method and a System for Dynamic Probe Authentication for Test and Monitoring of Software - The present invention is related to a method, a system and a computer readable device for authenticated configuration used for configuration of software probes in software modules to be tested in an electrical mobile device. The invention will create and make use of a configuration file by inserting an authentication signature, Probe Identifications (PID) and Probe Locations (PL). | 06-16-2011 |
20110145650 | Analyzing computer programs to identify errors - A method of analyzing a computer program under test (CPUT) using a system comprising a processor and a memory can include performing, by the processor, static analysis upon the CPUT and runtime analysis upon at least a portion of the CPUT. A static analysis result and a runtime analysis result can be stored within the memory. Portions of the CPUT analyzed by static analysis and not by runtime analysis can be determined as candidate portions of the CPUT. The candidate portions of the CPUT can be output. | 06-16-2011 |
20110145651 | SOFTWARE PERFORMANCE COUNTERS - A system for providing software performance counters includes an operating system that receives a first request of a first application to monitor performance of a second application, the first request identifying a type of event to monitor during the execution of the second application. The operating system determines that the event is a software event, monitors the performance of the second application with respect to the type of the software event, and updates a counter associated with the type of the software event based on the monitoring. Further, the operating system receives a second request of the first application for performance data associated with the type of the software event counter, and provides the value of the counter to the first application. | 06-16-2011 |
20110145652 | Computer-Implemented Systems And Methods For An Automated Application Interface - In accordance with the teachings described herein, systems and methods are provided for an automated application interface. One or more wizards may be used to receive user input in order to perform one or more software interface operations to manipulate a first set of data between data analysis software and database software. Information associated with the user input may be captured and used to generate one or more template data stores. A user interface may be used to modify at least one template data store to identify a subsequent set of data. The template data stores may be automatically executed in an identified sequence to perform software interface and data analysis operations for the subsequent set of data. | 06-16-2011 |
20110145653 | METHOD AND SYSTEM FOR TESTING COMPLEX MACHINE CONTROL SOFTWARE - A method of formally testing a complex machine control software program in order to determine defects within the software program is described. The software program to be tested (SUT) has a defined test boundary, encompassing the complete set of visible behaviour of the SUT, and at least one interface between the SUT and an external component, the at least one interface being defined in a formal, mathematically verified interface specification. The method comprises: obtaining a usage model for specifying the externally visible behaviour of the SUT as a plurality of usage scenarios, on the basis of the verified interface specification; verifying the usage model, using a usage model verifier, to generate a verified usage model of the total set of observable, expected behaviour of a compliant SUT with respect to its interfaces; extracting, using a sequence extractor, a plurality of test sequences from the verified usage model; executing, using a test execution means, a plurality of test cases corresponding to the plurality of test sequences; monitoring the externally visible behaviour of the SUT as the plurality of test sequences are executed; and comparing the monitored externally visible behaviour with an expected behaviour of the SUT. | 06-16-2011 |
20110154121 | CONCURRENCY TEST EFFECTIVENESS VIA MUTATION TESTING AND DYNAMIC LOCK ELISION - One embodiment described herein is directed to a method practiced in a computing environment. The method includes acts for determining test suite effectiveness for testing for concurrency problems and/or product faults. The method includes identifying a plurality of synchronization primitives in a section of implementation source code. One or more of the synchronization primitives are iteratively modified and a same test suite is run for each iteration. For each iteration, a determination is made whether or not the test suite returns an error as a result of modifying one or more synchronization primitives. When the test suite does not return an error, the method includes providing to a user an indication which indicates at least one of a test adequacy hole for the test suite; an implementation source code fault; or an equivalent mutant of the implementation source code. | 06-23-2011 |
20110154122 | SYSTEM AND METHOD FOR OVERFLOW DETECTION USING SYMBOLIC ANALYSIS - A method for demand-driven symbolic analysis involves obtaining a section of code comprising an instruction from a source code file and determining a critical variable in the section of code and data dependencies related to the critical variable. The method further involves iteratively computing a symbolic value representing a range of values of the critical variable according to the data dependencies, determining a set of control predicates relevant to the critical variable at the instruction, refining the range of values according to the set of control predicates to generate a second range of values for the symbolic value, and reporting an error when the second range of values exceeds a predetermined value. | 06-23-2011 |
20110167303 | GUI EVALUATION SYSTEM, GUI EVALUATION METHOD, AND GUI EVALUATION PROGRAM - The consistency of the heading expressions used in each of a plurality of evaluated screens is exhaustively and reliably evaluated. The GUI evaluation system comprises: GUI information storage means for storing GUI information that concerns headings included in an evaluation target screen and includes information indicative of the heading expression, which is the expression used for the heading; heading group specification means for grouping headings included in each evaluation target screen by the expression used for the headings, in accordance with the GUI information stored in the GUI information storage means; and heading expression evaluation means for evaluating the consistency of heading expressions between a plurality of evaluation target screens by comparing heading groups that are grouped by the heading group specification means and included in all possible combinations of two of the plurality of evaluation target screens. | 07-07-2011 |
20110173501 | MEMORY MANAGEMENT TECHNIQUES SELECTIVELY USING MITIGATIONS TO REDUCE ERRORS - A mitigation enablement module for a computer that improves application reliability. When performing memory management operations, the mitigation enablement module and associated memory manager selectively use mitigations that are intended to prevent an application bug from causing an application error. The memory manager may selectively apply mitigations for each of one or more applications based on the likelihood that such mitigations are successful at preventing bugs from causing application errors. The likelihood is determined from historical information on whether the mitigations, when applied, prevented bugs from causing memory operations that could cause application errors. This historical information can be gathered on a single computer over multiple invocations of the application or may be aggregated from multiple computers, each invoking the application. The determined likelihood may then be used to determine whether or for how long to apply the mitigation actions for memory operations requested by the application. | 07-14-2011 |
20110197098 | METHOD AND APPARATUS FOR TEST COVERAGE ANALYSIS - A method provides a way to collect test coverage data used in testing small computing platforms by assigning unique signatures to each node in the control flow graph and embedding control function calls. Signatures are embedded into the program at compilation time using a custom parser. When the program is executed, the “exercised” signature sequence is checked for correctness and used for deriving a test coverage metric. This metric is used for improving unit and black-box tests. Thus, a way to collect path-based test coverage with minimal memory and code/size impact on the target system is provided. | 08-11-2011 |
20110214021 | SYSTEMS AND METHODS FOR INITIATING SOFTWARE REPAIRS IN CONJUNCTION WITH SOFTWARE PACKAGE UPDATES - Embodiments relate to systems and methods for initiating software repairs in conjunction with software package updates. A physical or virtual client machine can host a set of installed software packages, including operating system, application, and/or other software. A package manager tracks the set of installed packages and updates available for the installed set. A notification tool, in conjunction with the package manager, can monitor the user's selection of package update options, and compare those updates to a diagnostic database, current state of the client machine, or other resources. Based on those determinations, the notification tool can generate one or more potential software repair actions to correct or avoid potential conflicts, faults, or other conditions that may arise due to, or may surround, the prospective package update. | 09-01-2011 |
20110225461 | APPARATUS AND METHOD TO DETECT AND TRACK SOFTWARE INSTALLATION ERRORS - A virtual installation map, and method involving installing a software functionality using the same, the virtual installation map including a first software installation map including a plurality of software elements representative of a related software file, the software elements also including at least one dependency to another software element. The virtual installation map further including a second software installation map also including a second plurality of software elements representative of a related software file, along with related dependencies. The first and second software installation maps may be hosted in separate databases and may relate to software products provided by different vendors. One or both software installation maps may include a pointer or other reference to the other installation map thereby providing a virtual installation map, in one example. | 09-15-2011 |
20110231708 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR AUTOMATED TEST CASE GENERATION AND SCHEDULING - In accordance with embodiments, there are provided mechanisms and methods for automated test case generation and scheduling. These mechanisms and methods for automated test case generation and scheduling can provide an automated manner of generating test cases and scheduling tests associated with such test cases. The ability to provide this automation can improve efficiency in a testing environment. | 09-22-2011 |
20110231709 | Method for checking data consistency in a system on chip - The invention aims to provide a method and a system on chip able to detect both hardware and software errors at once, to prevent manipulations for retrieving cryptographic keys, inserting or suppressing instructions to bypass security processes, modifying programs or memory content, etc. The system on chip comprises a core including at least two processors, registers, and a data consistency check module. The core is connected to at least one set of memories containing zones for instructions of a first program and of a second program, said instructions being to be executed respectively by the first and second processor, which respectively produce and store result data into the registers and the memories. The data consistency check module is configured to verify conformity of the produced result data by comparing a test result obtained by carrying out a predetermined function F over one of the first or second result data with the corresponding second or first result data and to continue execution of instructions of each program when the comparison is successful, or stop execution when the comparison shows an error. | 09-22-2011 |
20110246834 | TESTING SOFTWARE IN ELECTRONIC DEVICES - Software in an electronic device can be tested using a combination of random testing and deterministic testing. In various embodiments, deterministic tests can run for a prescribed duration and/or a prescribed number of iterations before and/or after random testing. Test results can be weighted using a metric representing an amount of code that was stressed during testing. This metric can be determined by tracking software code that is loaded into memory during testing. | 10-06-2011 |
20110252277 | ELECTRONIC DEVICE WITH DEBUGGING AND SOFTWARE UPDATING FUNCTION - An electronic device includes a registered jack 45 (RJ45) port, a network card, a universal asynchronous receiver/transmitter (UART), and a chip. The RJ45 port includes two receiving signal pins, two transmitting signal pins, a data transmitting pin, and a data receiving pin. The network card is connected to the two receiving signal pins and the two transmitting signal pins. The UART includes a data receiving pin and a data transmitting pin. The data transmitting pin is connected to the data receiving pin. The data receiving pin is connected to the data transmitting pin. The chip is connected to the UART. The UART is operable to debug or software-update the chip according to signals transmitted through the data transmitting pin and the data receiving pin. | 10-13-2011 |
20110252278 | Virtual computer system, test method, and recording medium - A method for testing an application in a virtual computer system includes transmitting a request to select one of first and second conditions of a test of the application from a first virtual machine to execute the test of the application to a second virtual machine to control the virtual computer system via a virtual machine monitor, generating, if the virtual machine monitor receives the request to select the one of the first and second conditions of the test of the application, a clone of the first virtual machine by the virtual machine monitor, and executing the test of the application based on the first condition in the first virtual machine while executing the test of the application based on the second condition in the generated clone of the first virtual machine. | 10-13-2011 |
20110252279 | PROCESSING EXECUTION REQUESTS WITHIN DIFFERENT COMPUTING ENVIRONMENTS - A computerized method, computer system, and a computer program product for processing an execution request within different computing environments. Execution requests and generated reference information are forwarded to the different computing environments, where the requests are processed using the reference information. Results of the processed execution requests are collected from the different computing environments. The results are compared to find any discrepancy, possibly giving indication of a software or hardware error. | 10-13-2011 |
20110264961 | SYSTEM AND METHOD TO TEST EXECUTABLE INSTRUCTIONS - This document discusses, among other things, a method of testing an Application Programming Interface (API) call that includes receiving data identifying a schema associated with web services together with an API call. Various example embodiments may relate to accessing a data repository associated with the schema to identify an API response corresponding to the API call. In some example embodiments, a message is returned that is based on a determination of whether the API call is valid. The example message may simulate an API response from web services. | 10-27-2011 |
20110276833 | STATISTICAL ANALYSIS OF HEAP DYNAMICS FOR MEMORY LEAK INVESTIGATIONS - Embodiments of the invention provide systems and methods for analyzing memory heap information for investigation into a memory leak caused by an application. According to one embodiment, a method of analyzing heap data can comprise obtaining the heap data from a memory. The heap data can represent a plurality of objects of one or more classes, each object identifying a referrer instance, a field in the referrer, and a referent instance. A statistical analysis can be performed on the heap data to identify objects within the heap that are contributing to a growth of the heap. The heap can be traversed based on the referrer instance of one or more objects identified as contributing to the growth of the heap to a root object identified as not contributing to the growth of the heap. | 11-10-2011 |
20110276834 | TECHNIQUES FOR TESTING COMPUTER READABLE CODE - The present invention is directed to methods and systems of testing computer-readable code. The method includes executing a first testing module in a computer browser; launching a second testing module in the computer browser under control of the first testing module; locating an executable portion of a web-based application with the first testing module and ascertaining operational characteristics of the executable portion with the second testing module; and producing test results from the operational characteristics. | 11-10-2011 |
20110276835 | APPARATUS AND METHOD FOR PREVENTING ABNORMAL ROM UPDATE IN PORTABLE TERMINAL - An apparatus and method for determining an abnormal ROM update in a portable terminal. The apparatus includes a ROM update unit for increasing a value of an update start counter when a ROM update process is performed, and increasing a value of an update finish counter when the ROM update process is finished. The ROM update unit loads the values of the update start counter and the update finish counter, and compares the values of the two counters to determine that the ROM update process has been normally performed before the portable terminal abnormally operates. | 11-10-2011 |
20110276836 | PERFORMANCE ANALYSIS OF APPLICATIONS - Embodiments of methods and systems for analyzing performance of an application are provided. In that regard, an embodiment of a method for analyzing performance, among others, comprises collecting performance metric data from the application over time; segmenting the performance metric data into time segments representing sets of contiguous time samples which exhibit similar performance metric behaviour; determining the presence of an anomaly in a time segment; and correlating the anomalous segment with other data available to the system to determine the cause of the anomaly. | 11-10-2011 |
20110283147 | Generating Software Application User-Input Data Through Analysis of Client-Tier Source Code - In one embodiment, analyze client-tier source code of a client-server software application to extract one or more software modules that handle user-input data of the software application. For each one of the software modules, extract from the software module one or more user-input constraints placed on the user-input data, comprising: analyze source code of the software module to determine one or more failure points in the source code; perform symbolic execution on the software module to extract one or more first expressions that cause the software module to reach the failure points, respectively; obtain a second expression as the disjunction of all the first expressions; obtain a third expression as the negation of the second expression; and extract the user-input constraints from the third expression. Determine one or more user-input data that satisfy all the user-input constraints. | 11-17-2011 |
20110283148 | GENERATING REUSABLE TEST COMPONENTS OUT OF REMOTE APPLICATION PROGRAMMING INTERFACE - In an aspect, the present application relates to a computer-implemented method, computer system, and computer program product for (automatically) generating reusable test components to test software applications. The computer-implemented method for generating reusable test components to test software applications may comprise: accessing an object model relating to at least part of a software application; and generating at least one test component applicable to test the software application, comprising: analyzing the object model, generating a meta-description from the object model and storing the meta information in at least one descriptor according to a meta model, and generating the test component and a corresponding component implementation based on the descriptor. | 11-17-2011 |
20110289356 | METHODS AND SYSTEMS FOR TESTING METHODS IN A MULTI-TENANT DATABASE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided systems, devices, and methods for testing methods in a multi-tenant database environment, including, for example, hosting a plurality of customer codebases within a host organization, where each of the plurality of customer codebases includes a plurality of operational statements and one or more test methods. Such a method further includes generating a first test result set by executing the one or more test methods associated with each of the plurality of customer codebases against a production release codebase of the host organization; generating a second test result set by executing the one or more test methods associated with each of the plurality of customer codebases against a pre-release codebase of the host organization; and identifying errors associated with the pre-release codebase based on a comparison of the first test result set and the second test result set. | 11-24-2011 |
20110289357 | INFORMATION PROCESSING DEVICE - The invented device includes one or more central processing units (CPUs), each CPU including an execution unit coupled to an operand bus and a control unit that controls operation of the execution unit based on fetched instructions, and a debugging circuit that obtains trace data about how a program is executed in each CPU. The control unit includes a debugging function unit that collects instruction execution analysis data in the CPU. The debugging circuit includes a trace acquisition circuit that imports instruction execution analysis data collected by the debugging function unit and data received from the operand bus via logic circuits used for separate purposes, and a trace output circuit for delivering the output of the trace acquisition circuit outside. In the trace acquisition circuit, a sorting logic unit is provided that sorts instruction execution analysis data collected by the debugging function unit and data received from the operand bus. | 11-24-2011 |
20110296245 | SYSTEM AND METHOD FOR A STAGGERED EXECUTION ENVIRONMENT - A staggered execution environment is provided to safely execute an application program in the presence of software failures. In an embodiment, the staggered execution environment includes one or more probe virtual machines that execute various portions of an application program and an execution virtual machine that executes the same application program within a time delay behind the probe virtual machines. A virtualization supervisor coordinates the execution of the application program on one or more probe virtual machines. The probe virtual machines are used to detect and correct software failures before the execution virtual machine encounters them. The virtualization supervisor embargos output data in order to ensure that erroneous data which may adversely affect external processes is not released. | 12-01-2011 |
20110296246 | TECHNIQUES FOR DEBUGGING AN APPLICATION - Techniques for debugging applications are provided. Access to an application is controlled by a wrapper. The wrapper intercepts calls to the application and records the calls. The calls are then passed to the application for processing. The recorded calls form a log which may be analyzed or mined to detect error conditions or undesirable performance characteristics associated with the application independent of source associated with the application. | 12-01-2011 |
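A minimal sketch of the call-intercepting wrapper idea above, using a decorator as the wrapper and a plain list as the recorded-call log; all names are hypothetical.

```python
import functools

CALL_LOG = []  # the recorded-call log that can later be analyzed or mined

def logged(fn):
    """Wrapper controlling access to the application: record each call,
    then pass it through to the application unchanged."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        CALL_LOG.append((fn.__name__, args, kwargs))
        return fn(*args, **kwargs)
    return wrapper

@logged
def handle_request(path, retries=0):   # stand-in for the application
    return f"served {path}"

handle_request("/home")
handle_request("/search", retries=2)
# CALL_LOG now holds both calls for offline analysis
```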
20110296247 | SYSTEM AND METHOD FOR MITIGATING REPEATED CRASHES OF AN APPLICATION RESULTING FROM SUPPLEMENTAL CODE - Provided is a method for mitigating the effects of an application which crashes as the result of supplemental code (e.g., plug-in), particularly a plug-in from a source other than the source of the operating system of the device or the source of the application that crashes. The method includes executing the application. As the application is running, it may be monitored to determine if normal execution of instructions ceases. When that occurs, the system will make a determination if code from a supplemental code module was the cause of the crash, and will make an evaluation if that supplemental code module is from a source other than the source(s) of the operating system and application in question. In some implementations, remedial steps may be provided, such as providing information on subsequent executions of the application. | 12-01-2011 |
20110302454 | PERFORMING ASYNCHRONOUS TESTING OF AN APPLICATION OCCASIONALLY CONNECTED TO AN ONLINE SERVICES SYSTEM - In a method, system, and computer-readable medium having instructions for performing asynchronous testing of an application that is occasionally connected to an online services system, metadata describing at least a portion of an online services database is retrieved and the at least a portion of the online services database is authorized for replication at a software application, information is determined for an entity for an application database from the metadata, a request is sent for a database using the software application interface and the request has an asynchronous operation call to the database for the entity, an execution of the asynchronous operation call is recorded within a callback function, a response is received for the asynchronous operation call, and a result is determined for the software application performance. | 12-08-2011 |
20110307741 | Non-intrusive debugging framework for parallel software based on super multi-core framework - A non-intrusive debugging framework for parallel software based on a super multi-core framework is composed of a plurality of core clusters. Each of the core clusters includes a plurality of core processors and a debug node. Each of the core processors includes a DCP. The DCPs and the debug node are interconnected via at least one channel to constitute a communication network inside each of the core clusters. The core clusters are interconnected via a ring network. In this way, the memory inside each of the debug nodes constitutes a non-uniform debug memory space for debugging without affecting execution of the parallel program, such that it is applicable to current diversified dynamic debugging methods under the super multi-core system. | 12-15-2011 |
20110314341 | METHOD AND SYSTEMS FOR A DASHBOARD TESTING FRAMEWORK IN AN ONLINE DEMAND SERVICE ENVIRONMENT - Testing a dashboard framework includes creating a model that captures the states of a GUI application and validates those states by comparing them with benchmarks. The testing can include user interaction between the captured states of the GUI application. The ability to provide testing based upon recorded states of a web application can enable the test system to adapt to changes to the GUI software during product development or modification. Testing a dashboard framework in this way provides more efficient and flexible testing methods for GUI software. | 12-22-2011 |
20110320876 | Systems and methods for processing source code during debugging operations - Systems and methods consistent with the invention may include displaying, during debugging of source code having corresponding executable code, a screen including a first section, wherein a variable name included in the source code is displayed in a first format in the first section, receiving a user selection of the variable name, converting, by using a processor, the first format of the variable name to a second format in response to the received selection, wherein the variable name includes a plurality of characters and converting the first format of the variable name to the second format includes converting the characters to uppercase, searching for a corresponding variable name in the executable code, and displaying, on the display device, a second section including the corresponding variable name, wherein the variable name is displayed in a third format in the second section. | 12-29-2011 |
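The uppercase-and-search step described in this abstract can be illustrated with a toy lookup; the variable and symbol names are invented for the example.

```python
def find_in_executable(source_variable, executable_symbols):
    """Convert a source-level variable name to uppercase (the second
    format) and look it up among the executable's symbol names."""
    candidate = source_variable.upper()
    return candidate if candidate in executable_symbols else None

symbols = {"CUSTOMER_ID", "ORDER_TOTAL"}
match = find_in_executable("customer_id", symbols)   # "CUSTOMER_ID"
missing = find_in_executable("tax_rate", symbols)    # None
```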
20110320877 | REPLAYING ARCHITECTURAL EXECUTION WITH A PROBELESS TRACE CAPTURE - A system and method provide for capturing architecture data for software executing on a system, wherein the architecture data can include state data and event data. The captured architecture data may be replayed in a simulator, wherein failure information corresponding to the software is obtained from the simulator. | 12-29-2011 |
20110320878 | Parametric Trace Slicing - A program trace is obtained and events of the program trace are traversed. For each event identified in traversing the program trace, a trace slice of which the identified event is a part is identified based on the parameter instance of the identified event. For each trace slice of which the identified event is a part, the identified event is added to an end of a record of the trace slice. These parametric trace slices can be used in a variety of different manners, such as for monitoring, mining, and predicting. | 12-29-2011 |
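A deliberately simplified sketch of grouping trace events into per-parameter-instance slices. It handles only events with a single fully concrete parameter instance; real parametric slicing also dispatches events with partial instances into every compatible slice. All names are illustrative.

```python
def slice_trace(trace):
    """Group trace events by parameter instance; each event is a
    (event_name, parameter_instance) pair, and each slice records
    its events in trace order."""
    slices = {}
    for event, param in trace:
        slices.setdefault(param, []).append(event)
    return slices

trace = [
    ("acquire", "lock1"),
    ("acquire", "lock2"),
    ("release", "lock1"),
    ("release", "lock2"),
]
slices = slice_trace(trace)
# slices["lock1"] == ["acquire", "release"]
```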
20110320879 | Methods and systems for a mobile device testing framework - A mobile device test framework is used in combination with client controllers and device controllers so that a single mobile device API test can be used with mobile devices having different operating system platforms. The client controllers can provide information specific to the client and the device controllers can provide information needed to apply the test to each of the mobile device platforms. The test framework can navigate through the controls of the mobile device GUIs and input information. The test framework can then check that the text and images displayed by the mobile devices match the expected information. | 12-29-2011 |
20120005537 | IDENTIFYING BUGS IN A DATABASE SYSTEM ENVIRONMENT - A system and method for identifying bugs in a database system. In one embodiment, a method includes running a plurality of tests on a software application, and rerunning one or more tests of the plurality of tests. The method also includes identifying one or more bugs in the one or more tests based on inconsistent test results. | 01-05-2012 |
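The rerun-and-compare idea above reduces to diffing two result sets; the test names and outcomes below are hypothetical.

```python
def inconsistent_tests(first_run, second_run):
    """Tests whose outcome changed between the run and the rerun --
    candidates for bugs that surface nondeterministically."""
    return sorted(
        name for name in first_run
        if name in second_run and first_run[name] != second_run[name]
    )

run1  = {"test_a": True, "test_b": False, "test_c": True}
rerun = {"test_a": True, "test_b": True,  "test_c": True}
suspects = inconsistent_tests(run1, rerun)  # ["test_b"]
```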
20120005538 | Dynamic Discovery Algorithm - A system and method for identifying an application exception generated in response to a software application operating on a system is provided, wherein the method includes identifying an occurrence of an application exception, examining the application exception to identify characteristics of the application exception and processing the application exception, prior to the application exception being logged, responsive to the characteristics of the application exception. The processing includes determining whether application exception environment data is to be collected and if the application exception environment data is to be collected, logging the application exception environment data. | 01-05-2012 |
20120017119 | Solving Hybrid Constraints to Generate Test Cases for Validating a Software Module - In one embodiment, a method includes analyzing one or more first numeric constraints and one or more first string constraints associated with a software module including one or more numeric variables and string variables; inferring one or more second numeric constraints applying to specific ones of the string variables; inferring one or more second string constraints applying to specific ones of the numeric variables; representing each one of the first and second numeric constraints with an equation; representing each one of the first and second string constraints with a finite state machine; and testing the software module for one or more possible errors by attempting to solve for a solution including one or more values for specific ones of the numeric and string variables that satisfies all the first and second numeric constraints and all the first and second string constraints. | 01-19-2012 |
20120030514 | MODULE TESTING ADJUSTMENT AND CONFIGURATION - In one embodiment, a method for testing adjustment and configuration is disclosed. The method can include accessing source code of a test framework that is configured for testing a module, creating a configuration folder having a property override for a test suite for the module testing, determining a source root folder for the test suite, starting the test framework by passing in an identifier for the test suite, and adding a custom test to the source root folder using the configuration folder to customize the test suite. The method can further include compiling the test framework with each of the plurality of test folders enabled. The method also may use a refactoring tool to make changes in a file within the test framework. | 02-02-2012 |
20120030515 | USE OF ATTRIBUTE SETS FOR TEST ENTITY IDENTIFICATION DURING SOFTWARE TESTING - An attribute collector may collect an attribute set for each test entity of a plurality of test entities associated with a software test executed in a software environment. An attribute analysis signal handler may receive an attribute analysis signal associated with a change in the software environment, and a view generator may provide an attribute-based view associated with an affected attribute set associated with the change, the attribute-based view identifying an affected test entity that is affected by the change. | 02-02-2012 |
20120030516 | METHOD AND SYSTEM FOR INFORMATION PROCESSING AND TEST CASE GENERATION - A method and system for information processing and test case generation. The system includes: a pattern storage module for storing at least one resource identifier pattern, where the resource identifier patterns are extracted from a server code of a web application by analyzing the server code; a client code analyzer module for analyzing a client code generated from the server code and finding at least one event sequence matching the resource identifier patterns; and a test case generator module for fetching a client state established from the client code, executing the event sequences on the client state, and generating a test case, where the test case includes a second resource identifier generated as an execution result of the event sequence. | 02-02-2012 |
20120030517 | EXPOSING APPLICATION PERFORMANCE COUNTERS FOR .NET APPLICATIONS THROUGH CODE INSTRUMENTATION - Disclosed is a method for adding performance counters to a .NET application after compilation of the .NET application to Common Intermediate Language code without a requirement for code changes to the original .NET application code or application recompilation from the development side. With regard to a further aspect of a particularly preferred embodiment, the invention may provide a method for adding the performance counters by declarative instrumentation of a .NET application at runtime or compile time, without the need for an application developer to hardcode instrumentation logic into the application. An instrumentation configuration file provides declarative definition for performance counters that are to be added to a particular application, and particularly includes a complete list of performance counters that need to be added and settings for each performance counter. | 02-02-2012 |
20120042210 | ON-DEMAND SERVICES ENVIRONMENT TESTING FRAMEWORK - In one embodiment, a method of providing a test framework in an on-demand services environment can include: accessing a plurality of tests via plug-ins to a core platform of the test framework; receiving, by a user interface, a selection of tests for execution from the plurality of tests, where the selected tests are configured to test a plurality of layers of a product; executing, by an execution engine coupled to the core platform, the selected tests; storing test results for the executed selected tests on a configurable repository; and reporting the stored test results in a summarized form on the user interface. | 02-16-2012 |
20120066550 | APPARATUS, SYSTEM AND METHOD FOR INTEGRATED TESTING OF SERVICE BASED APPLICATION - A service based application integrated testing apparatus, system and method is provided. The service based application integrated testing apparatus comprises an application integrated testing unit that performs an integrated test on the at least one component service and the service based application by use of a control flow and a data flow, which are generated from an interaction between the at least one component service and the service based application. | 03-15-2012 |
20120072776 | FAULT ISOLATION USING CODE PATHS - Techniques are provided for isolating faults in a software program by providing at least two code paths that are capable of performing the same operation. When a fault occurs while the one of the code paths is being used to perform an operation, data that indicates the circumstances under which the fault occurred is stored. For example, a fault-recording mechanism may store data that indicates the entities that were involved in the failed operation. Because they were involved in an operation that experienced a fault, one or more of those entities may be “quarantined”. When subsequent requests arrive to perform the operation, a check may be performed to determine whether the requested operation involves any of the quarantined entities. If the requested operation involves a quarantined entity, a different code path is used to perform the operation, rather than the code path from which the entity is quarantined. | 03-22-2012 |
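A toy sketch of the quarantine mechanism with two code paths; the entity names, the fault condition, and the path functions are all invented for illustration.

```python
QUARANTINED = set()  # entities barred from the primary code path

def primary_path(entity):
    if entity == "bad-row":            # simulated fault
        raise RuntimeError("fault in primary path")
    return f"primary:{entity}"

def fallback_path(entity):
    return f"fallback:{entity}"

def perform(entity):
    """Use the primary path unless the entity is quarantined; on a
    fault, quarantine the entity and retry on the fallback path."""
    if entity not in QUARANTINED:
        try:
            return primary_path(entity)
        except RuntimeError:
            QUARANTINED.add(entity)
    return fallback_path(entity)

result1 = perform("ok-row")    # "primary:ok-row"
result2 = perform("bad-row")   # fault -> entity quarantined, fallback answers
result3 = perform("bad-row")   # quarantined -> fallback used directly
```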
20120072777 | DEBUGGING DEVICE, DEBUGGING METHOD, AND COMPUTER PROGRAM FOR SEQUENCE PROGRAM - To provide a debugging device for a sequence program that provides a debugging environment in which debugging of a sequence program can be executed easily and efficiently, the device includes: a range setting unit that sets a skipping range to be skipped when the sequence program is executed; an extracting unit that extracts an output contact that is included in the skipping range and that outputs a value to another range; and a value setting unit that sets a value for the extracted output contact. | 03-22-2012 |
20120079325 | System Health Monitor - Described are computer-based methods and apparatuses, including computer program products, for system health monitoring. Backup set metadata is received, wherein the backup set metadata comprises information about backup data sets that are received by a backup storage system. One or more processes that process the backup set metadata through an emulated processing flow path are executed, wherein the one or more processes are also implemented in the backup storage system. Two or more potential processing states are determined within the emulated processing flow path. A reason code is determined for each backup set metadata entry of the backup set metadata indicative of a reason that the backup set metadata entry is in a processing state of the two or more potential processing states. A problem with the manner in which the backup set metadata is flowing through the emulated processing flow path is identified based on the reason codes. | 03-29-2012 |
20120079326 | System Health Monitor - Backup set metadata is received, wherein the backup set metadata comprises information about backup data sets that are received by a backup storage system that stores the backup data sets. The manner in which the backup data sets flow through a processing flow path of the backup storage system is emulated. One or more processes that process the backup set metadata through an emulated processing flow path are executed, wherein the emulated processing flow path is indicative of the manner in which the backup data sets flow through the processing flow path of the backup storage system when the backup storage system stores the backup data sets. One or more timing statistics are calculated based on the flow of the backup set metadata through the emulated processing flow path. | 03-29-2012 |
20120079327 | METHOD FOR DEBUGGING RECONFIGURABLE ARCHITECTURES - A method for debugging reconfigurable hardware is described. According to this method, all necessary debug information is written in each configuration cycle into a memory, which is then analyzed by the debugger. | 03-29-2012 |
20120084607 | FACILITATING LARGE-SCALE TESTING USING VIRTUALIZATION TECHNOLOGY IN A MULTI-TENANT DATABASE ENVIRONMENT - A system and method for testing in a database system. In one embodiment, a method includes receiving an indication of one or more changes to a software application, wherein each change corresponds to a different version of the software application. The method further includes generating one or more virtual machines for a version of the software application in response to the indication, wherein the one or more virtual machines test the version of the software application. | 04-05-2012 |
20120089875 | MULTI-USER TEST FRAMEWORK - Two user sessions can run concurrently on the same computer. A module executed by a first user instantiates a session manager in a first user session. The session manager receives input identifying a second user and providing credentials for the second user. A backup is made of auto-run and logon registry keys. A control file is created that directs actions in the second user session. The second user's credentials are registered in the registry file. The first session continues to execute while the second user is automatically logged on based on the registry auto login keys. The session manager is notified that login of the second user is complete. The session manager rewrites the auto login keys to the first user keys stored in the backup. The second user is logged off. The first user is automatically reconnected based on the rewritten registry keys. | 04-12-2012 |
20120096317 | METHOD AND SYSTEM FOR DETECTING PROGRAM DEADLOCK - A method and/or system for detecting deadlock, comprising: obtaining lock information related to locking operation in a program; generating a first lock graph based on the obtained lock information, wherein each node in the first lock graph comprises a set of locks comprising at least one lock and a set of program locations comprising at least one lock location; extracting a strongly connected sub graph in the first lock graph; unfolding the strongly connected sub graph in the first lock graph to generate a second lock graph, wherein each node in the second lock graph comprises a single lock; and extracting a strongly connected sub graph in the second lock graph, the strongly connected sub graph in the second lock graph indicating a deadlock in the program. | 04-19-2012 |
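The cycle-detection core of the lock-graph approach can be sketched with a depth-first search; a cycle found here corresponds to the strongly connected subgraph the abstract extracts. The graph representation and lock names are illustrative.

```python
def has_deadlock(lock_graph):
    """Detect a cycle in a lock graph given as {lock: set of locks
    acquired while this lock is held}; a cycle indicates a potential
    deadlock."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in lock_graph.get(node, ()):
            if nxt in on_stack:
                return True            # back edge: cycle found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in lock_graph if n not in visited)

# Thread 1 takes A then B; thread 2 takes B then A -> cycle A -> B -> A
deadlock = has_deadlock({"A": {"B"}, "B": {"A"}})      # True
no_deadlock = has_deadlock({"A": {"B"}, "B": set()})   # False
```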
20120096318 | Method for Computer-Aided Detection of Errors During the Execution of One or More Software-Based Programs in a System of Components - A method detects errors during execution of software based programs in a system of motor vehicle components. During execution a component executes its assigned program, and the components call each other interactively. When a component is called, a program identity and an error parameter are transmitted from the other component to the component. If a component identifies an error during execution, it stores an active error entry that contains the program identity, the component identification and an error status. If a component, which has called another component, receives the component identification, it stores a passive error entry that contains the program identity, its component identification and the identification of the other component. A component, which stores one or more active or passive error entries, returns the program identity and the component identification of the component, at least once during program execution, to the component that has called it. | 04-19-2012 |
20120102364 | SYSTEM AND METHOD FOR BUSINESS FUNCTION REVERSIBILITY - Embodiments of the present invention may provide “undo” (e.g., rollback) features, along with data management simplification features, to an update package model of software suite development/evolution. New functions, which may have disruption effects for customers, may be installed into the core configuration data with inactive switches. Upon activation, a switch status may change, and a query filter may use the activated function (e.g., as associated with the switch ID). Original functions may be maintained, giving the user the ability to deactivate an activated function, and thereby reverting the system back to the prior configuration status. | 04-26-2012 |
20120131387 | MANAGING AUTOMATED AND MANUAL APPLICATION TESTING - An application for which approval is requested is identified and multiple automated tests are applied to the application in groups of automated tests. Each of the groups of automated tests includes multiple ones of the multiple automated tests. If one or more automated tests in a group of automated tests returns an inconclusive result, then a manual check is initiated for the application based on the one or more automated tests that returned the inconclusive result. If one or more automated tests in a group, or a manual test applied in the manual check, returns a fail result then an indication that the application is rejected is returned, the indication that the application is rejected including an identification of why the application is rejected. If none of the multiple automated tests returns a fail result, then a manual testing phase is initiated. | 05-24-2012 |
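The grouped-tests flow above (fail rejects, inconclusive triggers a manual check, a clean run proceeds to the manual testing phase) can be sketched as follows; all names and the string outcomes are hypothetical.

```python
def review_application(groups, manual_check):
    """Apply groups of automated tests; a fail rejects immediately, an
    inconclusive result triggers a manual check, and a clean run moves
    the application into the manual testing phase."""
    for group in groups:
        results = [test() for test in group]
        if "fail" in results:
            return "rejected"
        if "inconclusive" in results and manual_check() == "fail":
            return "rejected"
    return "manual-testing-phase"

groups = [
    [lambda: "pass", lambda: "inconclusive"],  # inconclusive -> manual check
    [lambda: "pass"],
]
outcome = review_application(groups, manual_check=lambda: "pass")
rejected = review_application([[lambda: "fail"]], manual_check=lambda: "pass")
```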
20120144241 | DEVICE FOR DYNAMIC ANALYSIS OF EMBEDDED SOFTWARE OF VEHICLE - The present invention provides a device for conducting dynamic analysis of embedded software of a vehicle. More particularly, it relates to a device for dynamically analyzing embedded software of the vehicle to detect real-time errors of the embedded software based on the analysis. More specifically, a data communication unit communicates data in real time with an electronic unit of the vehicle; and a control unit monitors the condition of one or more hardware components used by the embedded software of the electronic unit, based on the data received through the data communication unit, and outputs the monitored result accordingly. | 06-07-2012 |
20120151267 | SYSTEM FOR EXTENDING USE OF A DATA ADDRESS BREAK POINT REGISTER TO IMPLEMENT MULTIPLE WATCH POINTS - A method is provided for implementing multiple watchpoints or a watchpoint that is greater than one word in length. The method comprises a debugger receiving a watchpoint from a user, wherein the watchpoint identifies a portion of memory to be watched. The debugger then sends a read trap or write trap flag, for example READ_TRAP or WRITE_TRAP, to a memory protection module of an operating system identifying the portion of memory to be watched. A read or write operation is allowed on the watched portion of memory, but, after completion of the read or write operation, an exception signal is sent that indicates that the read or write operation occurred on the watched portion of memory. The debugger then provides output to a user regarding the exception. | 06-14-2012 |
20120151268 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR ERROR CODE INJECTION - In one embodiment, a computer program product for injecting error code includes a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code includes computer readable program code configured to determine critical points in executing code of software under test, computer readable program code configured to determine an appropriate response action for each critical point based on an error encountered at each critical point, computer readable program code configured to inject a critical point segment into the executing code at a corresponding critical point, and computer readable program code configured to output a unique identifier of each critical point segment. In another embodiment, a system includes a processor, and a computer readable storage medium having computer readable program code embodied therewith having the above described functionality. Other systems and computer program products are described according to more embodiments. | 06-14-2012 |
20120151269 | MOBILE COMMUNICATION TERMINAL CAPABLE OF TESTING APPLICATION AND METHOD THEREOF - Disclosed is a method for testing an application in a testing agent which resides on an application layer of a mobile communication terminal mounted with a platform designed so that applications of the application layer operate independently from each other and a command is not directly transferred between the applications. The method includes: receiving a command for testing a test target application from a testing apparatus; generating an event corresponding to the transferred command; and registering the generated event in a window manager positioned on a framework layer in order to transfer the generated event to the test target application. | 06-14-2012 |
20120151270 | METHODS, MEDIA, AND SYSTEMS FOR DETECTING ANOMALOUS PROGRAM EXECUTIONS - Methods, media, and systems for detecting anomalous program executions are provided. In some embodiments, methods for detecting anomalous program executions are provided, comprising: executing at least a part of a program in an emulator; comparing a function call made in the emulator to a model of function calls for the at least a part of the program; and identifying the function call as anomalous based on the comparison. In some embodiments, methods for detecting anomalous program executions are provided, comprising: modifying a program to include indicators of program-level function calls being made during execution of the program; comparing at least one of the indicators of program-level function calls made in the emulator to a model of function calls for the at least a part of the program; and identifying a function call corresponding to the at least one of the indicators as anomalous based on the comparison. | 06-14-2012 |
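A deliberately simplified sketch of comparing observed calls against a model: here the "model" is just a set of call names seen in training runs, whereas the emulator-based approach described above builds a much richer model. All names are invented.

```python
# "Model" of normal behaviour: the set of function calls observed during
# clean training runs of (part of) the program.
MODEL = {"open_file", "read_config", "render_page"}

def is_anomalous(call_name, model=MODEL):
    """A call absent from the model is flagged as anomalous."""
    return call_name not in model

observed = ["read_config", "render_page", "spawn_shell"]
anomalies = [c for c in observed if is_anomalous(c)]  # ["spawn_shell"]
```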
20120151271 | MAT-REDUCED SYMBOLIC ANALYSIS - A computer implemented testing framework for symbolic trace analysis of observed concurrent traces that uses MAT-based reduction to obtain a succinct encoding of concurrency constraints, resulting in a quadratic formulation in terms of the number of transitions. We also present encodings of various violation conditions. In particular, for data races and deadlocks, we present techniques to infer and encode the respective conditions. Our experimental results show the efficacy of this encoding compared to a previous encoding using a cubic formulation. We provide a proof of correctness of our symbolic encoding. | 06-14-2012 |
20120159258 | DEBUGGING IN DATA PARALLEL COMPUTATIONS - The debugging of a program in a data parallel environment. A connection is established between a debugging module and a process of the data parallel environment. The connection causes the data parallel environment to notify the debugging module of certain events as they occur in the execution of the process. Upon notification of such an event, the process execution is paused, and the debugging module may query the data parallel environment for information regarding the process at the device independent virtual machine layer. Upon completion of this querying, the process may then resume execution. This may occur repeatedly if multiple events are encountered. | 06-21-2012 |
20120159259 | Optimizing Performance Of An Application - An indication of a start of an execution of a process can be received, and a time counter associated with measuring a time elapsed can be initiated by the execution of the process. The time elapsed by the execution of the process can be compared with a predetermined threshold timeout value, and a report indicating the time elapsed by the execution of the process and whether the elapsed time exceeded the predetermined threshold timeout value can be automatically generated. | 06-21-2012 |
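The elapsed-time report described above can be sketched with a monotonic clock standing in for the time counter; the process name, the report fields, and the threshold are illustrative.

```python
import time

def timed_run(process_name, process, threshold_seconds):
    """Time a process's execution and report the elapsed time together
    with whether it exceeded the threshold timeout value."""
    start = time.monotonic()          # the "time counter"
    process()
    elapsed = time.monotonic() - start
    return {
        "process": process_name,
        "elapsed_s": elapsed,
        "threshold_s": threshold_seconds,
        "exceeded": elapsed > threshold_seconds,
    }

report = timed_run("noop", lambda: None, threshold_seconds=5.0)
# an (effectively instant) no-op stays well under the threshold
```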
20120159260 | RESOURCE INDEX IDENTIFYING MULTIPLE RESOURCE INSTANCES - A resource index on a computing device identifies multiple resource instances (e.g., multiple user interface (UI) resource instances) of multiple resource items (e.g., of multiple UI resource items), each resource instance having one or more resource instance conditions. In response to a request for a resource item received from an application, a determination is made based on the resource index of one of the multiple resource instances that satisfy conditions associated with the request, and the one of the multiple resource instances is returned to the application. Additionally, the resource index can be used to identify potential errors in running an application in various potential contexts. | 06-21-2012 |
20120166884 | LEVERAGING THE RELATIONSHIP BETWEEN OBJECT IDs AND FUNCTIONS IN DIAGNOSING SOFTWARE DEFECTS DURING THE POST-DEPLOYMENT PHASE - A hashing tool can be used to generate Object UIDs from a software application. The software application can be tested. A change and release management system can receive Object UIDs involved in a defect uncovered during the testing. The change and release management system can receive names of functions involved in the defect uncovered during the testing and defect fixing. A graphical representation of function names versus Object UIDs for which the defect occurred can be created. | 06-28-2012 |
20120185731 | PRECISE FAULT LOCALIZATION - Systems and methods for identifying expressions that are potential causes of program bugs are disclosed. A program and at least one input resulting in at least one passing test of the program can be received. Further, at least one plausible repair candidate expression in the program can be identified. In addition, the methods and systems can determine whether replacement of the at least one identified expression with at least one value, which is different from a value provided by the at least one identified expression, maintains the passage of the at least one passing test. Moreover, the at least one identified expression can be output when the replacement maintains the passage of the at least one passing test to enable a determination of a modification of the program that repairs a bug in the program. | 07-19-2012 |
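The replace-and-retest check at the heart of this abstract can be sketched on a toy "program"; the expressions, alternative values, and pass condition are all invented for illustration.

```python
def repair_candidates(run_test, expressions, alternative_values):
    """Expressions whose value can be swapped for a different one while
    the passing test keeps passing -- plausible locations to repair a
    bug."""
    return [
        expr for expr in expressions
        if any(run_test({expr: value})
               for value in alternative_values.get(expr, []))
    ]

# Toy program under test: it "passes" when a + b >= 5 (defaults a=2, b=3).
def run_test(overrides):
    a = overrides.get("a", 2)
    b = overrides.get("b", 3)
    return a + b >= 5

candidates = repair_candidates(
    run_test,
    expressions=["a", "b"],
    alternative_values={"a": [0], "b": [9]},  # values differing from defaults
)
# only "b" survives: a=0 fails the test (0+3 < 5), b=9 still passes (2+9 >= 5)
```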
20120185732 | METHOD OF MEASURING AND DIAGNOSING MISBEHAVIORS OF SOFTWARE COMPONENTS AND RESOURCES - Systems and methods are described for diagnosing behavior of software components in an application server. The application server can comprise a plurality of components that process incoming requests. A diagnostics advisor can be deployed with the application server and can determine an efficiency and/or inefficiency of each of the components of the application server or other middleware system. The efficiency is determined by computing a ratio of the number of requests that completed execution in the component during a particular sampling time period to the number of requests that were received by the component during the sampling time period. The inefficiency is the inverse of the efficiency, i.e., it is a ratio of the number of requests that are still being executed by the one or more components at the end of the sampling time period to the number of requests that were received by the one or more components during the sampling time period. The diagnostics advisor employs the determined efficiency and/or inefficiency to diagnose a misbehavior or other problem of the components in the application server. | 07-19-2012 |
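The efficiency/inefficiency ratios described in this abstract can be sketched in a few lines; the function names below are illustrative and not taken from the patent:

```python
def efficiency(completed, received):
    """Ratio of requests that completed execution during the sampling
    period to requests received during that period."""
    if received == 0:
        return 1.0  # an idle component is trivially efficient
    return completed / received

def inefficiency(still_executing, received):
    """Ratio of requests still executing at the end of the sampling
    period to requests received during that period."""
    if received == 0:
        return 0.0
    return still_executing / received

# A component that received 200 requests, completed 180, with 20 in flight:
eff = efficiency(180, 200)     # 0.9
ineff = inefficiency(20, 200)  # 0.1
```

Note that when every received request either completes or is still in flight at the end of the period, the two ratios sum to 1, matching the abstract's statement that inefficiency is the inverse of efficiency.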
20120204065 | METHOD FOR GUARANTEEING PROGRAM CORRECTNESS USING FINE-GRAINED HARDWARE SPECULATIVE EXECUTION - A method for checking program correctness may include executing a program on a main hardware thread in speculative execution mode on a hardware execution context on a chip having a plurality of hardware execution contexts. In this mode, the main hardware thread's state is not committed to main memory. Correctness checks by a plurality of helper threads are executed in parallel to the main hardware thread. Each helper thread runs on a separate hardware execution context on the chip in parallel with the main hardware thread. The correctness checks determine a safe point in the program up to which the operations executed by said main hardware thread are correct. Once the main hardware thread reaches the safe point, the mode of execution of the main hardware thread is switched to non-speculative. The runtime then causes the main thread to re-enter speculative mode of execution. | 08-09-2012 |
20120216076 | METHOD AND SYSTEM FOR AUTOMATIC MEMORY LEAK DETECTION - A method and apparatus for automatic memory leak detection is described. The method may include collecting memory usage data for a software application running in a computer system. The method may also include automatically determining from the data that the software application has a memory leak. | 08-23-2012 |
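The kind of automatic determination this abstract describes can be approximated by fitting a trend to collected memory-usage samples and flagging sustained growth; this is a minimal sketch, not the patented detection logic:

```python
def detect_leak(samples, slope_threshold=0.0):
    """Flag a suspected leak when memory usage trends steadily upward.
    samples: list of (time, bytes_used) pairs; uses a least-squares slope."""
    n = len(samples)
    if n < 2:
        return False
    mean_t = sum(t for t, _ in samples) / n
    mean_m = sum(m for _, m in samples) / n
    num = sum((t - mean_t) * (m - mean_m) for t, m in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den if den else 0.0
    return slope > slope_threshold

# Synthetic usage data: one growing series, one flat series.
growing = [(t, 1000 + 50 * t) for t in range(10)]
steady = [(t, 1000) for t in range(10)]
```

A production detector would need to distinguish a leak from legitimate warm-up growth, e.g., by requiring the trend to persist across many sampling windows.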
20120216077 | DYNAMIC LAZY TYPE SYSTEM - A dynamic, lazy type system is provided for a dynamic, lazy programming language. Consequently, programs can benefit from runtime flexibility and lightweight notation in combination with benefits afforded by a substantial type system. | 08-23-2012 |
20120216078 | FRAMEWORK FOR CONDITIONALLY EXECUTING CODE IN AN APPLICATION USING CONDITIONS IN THE FRAMEWORK AND IN THE APPLICATION - A computer implemented method, apparatus, and computer usable program code for returning a return code to an error hook in an application using a framework. An identifier and a pass-through are received from the error hook. The error hook is software code in the application. The pass-through is a set of parameters. If the identifier has an active status, a set of framework conditions is retrieved using the identifier. If the set of framework conditions is met, an inject callback is retrieved using the error identifier. The inject callback is called with the error identifier and the pass-through. An inject callback return code is received. If the inject callback return code is an execute return code, the execute return code is returned to the error hook. | 08-23-2012 |
20120226946 | Assist Thread Analysis and Debug Mechanism - A processor recognizes a request from a program executing on a first hardware thread to initiate software code on a second hardware thread. In response, the second hardware thread initiates and commences executing the software code. During execution, the software code uses hardware registers of the second hardware thread to store data. Upon termination of the software code, the second hardware thread invokes a hypervisor program, which extracts data from the hardware registers and stores the extracted data in a shared memory area. In turn, a debug routine executes and retrieves the extracted data from the shared memory area. | 09-06-2012 |
20120226947 | SYSTEM AND METHOD FOR A STAGGERED EXECUTION ENVIRONMENT - A staggered execution environment is provided to safely execute an application program against software failures. In an embodiment, the staggered execution environment includes one or more probe virtual machines that execute various portions of an application program and an execution virtual machine that executes the same application program within a time delay behind the probe virtual machines. A virtualization supervisor coordinates the execution of the application program on one or more probe virtual machines. The probe virtual machines are used to detect and correct software failures prior to the execution virtual machine encountering them. The virtualization supervisor embargos output data in order to ensure that erroneous data is not released which may adversely affect external processes. | 09-06-2012 |
20120239981 | Method To Detect Firmware / Software Errors For Hardware Monitoring - A software-based error-reporting method in which an error list for the currently-running version of some target software (or firmware) is compared to an error list for a previous version. Helpful information can be gleaned from the comparison of error lists. For example, if it is known that the hardware configuration has not changed between the two lists, and there is an error on the current list that does not appear on the previous list, then this indicates that the error is in the software update and is not a hardware problem. | 09-20-2012 |
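The version-to-version comparison of error lists reduces to a set difference; a minimal sketch (the error codes are invented for illustration):

```python
def new_errors(current_errors, previous_errors):
    """Errors on the current version's list that were absent from the
    previous version's list. With an unchanged hardware configuration,
    these point at the software/firmware update rather than a hardware
    fault."""
    return sorted(set(current_errors) - set(previous_errors))

prev = ["E_TIMEOUT", "E_CRC"]
curr = ["E_TIMEOUT", "E_CRC", "E_NULLPTR"]
suspects = new_errors(curr, prev)  # ["E_NULLPTR"]
```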
20120239982 | METHODS FOR DIAGNOSING ENTITIES ASSOCIATED WITH SOFTWARE COMPONENTS - In one embodiment, a method includes recording event history information for one or more events associated with an entity; evaluating the event history information for each of the one or more events associated with the entity against a symptom rule, wherein the symptom rule defines a validity state of a diagnosis; issuing a subscription to one or more subscribers, wherein the subscription enables the one or more subscribers to receive diagnosis information; and indicating the validity state of the diagnosis to the subscriber, wherein the recording and the evaluating are performed independently such that the issuing and the event history information are substantially decoupled. Other methods are also described, according to various embodiments. | 09-20-2012 |
20120260132 | TEST SELECTION BASED ON AN N-WISE COMBINATIONS COVERAGE - Based on a functional coverage by a test suite, a functional coverage model of a System Under Test (SUT) may be defined to represent all covered combinations of functional attributes. Based on an n-wise combination criterion, a subset of the possible combinations of values may be determined. A subset of the test suite may be selected such that the selected subset is operative to cover the subset of the determined possible combinations of values. The disclosed subject matter may be used to reduce a size of the test suite while preserving the n-wise combinations coverage of the original test suite. | 10-11-2012 |
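One common way to realize n-wise test selection (here with n = 2, i.e., pairwise) is a greedy cover over attribute-value pairs; this sketch illustrates the idea only and is not the patented algorithm:

```python
from itertools import combinations

def pairwise_reduce(test_suite):
    """Greedily select a subset of tests that preserves 2-wise (pairwise)
    coverage. Each test is a tuple of attribute values."""
    def pairs(test):
        # All (attribute-index, value) pairs covered by one test.
        return {((i, test[i]), (j, test[j]))
                for i, j in combinations(range(len(test)), 2)}

    uncovered = set()
    for t in test_suite:
        uncovered |= pairs(t)

    selected = []
    while uncovered:
        best = max(test_suite, key=lambda t: len(pairs(t) & uncovered))
        gained = pairs(best) & uncovered
        if not gained:
            break
        selected.append(best)
        uncovered -= gained
    return selected

# Illustrative two-attribute suite; the duplicate adds no new pairs.
suite = [("linux", "chrome"), ("linux", "firefox"),
         ("windows", "chrome"), ("windows", "firefox"),
         ("linux", "chrome")]
reduced = pairwise_reduce(suite)
```

With only two attributes every value combination is itself a pair, so only the duplicate test is dropped; the reduction becomes substantial when tests have many attributes and pairs overlap heavily.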
20120260133 | Visualizing Transaction Traces As Flows Through A Map Of Logical Subsystems - A method for diagnosing problems in a computer system by visualizing flows through subsystems of the computer system. Diagnostic tools include a user interface which includes a triage map which graphically depicts subsystems, such as applications, through which a Business Transaction flows, and the calling relationship between the subsystems. The subsystems can be depicted by nodes which include alerts and performance information. The user can run a command to find transactions of a specific Business Transaction and/or front end subsystem which meet filter criterion such as response time and user identifier. Each captured transaction can be listed with information such as response time and reporting agent. Details of a particular transaction instance, such as its invoked components, can also be viewed in a transaction trace. | 10-11-2012 |
20120260134 | METHOD FOR DETERMINING AVAILABILITY OF A SOFTWARE APPLICATION USING COMPOSITE HIDDEN MARKOV MODEL - The embodiments herein provide a method and system for determining availability of a software application using Composite Hidden Markov Model (CHMM). The software application is divided into plurality of layers which are further divided into sub-components. The configurations and dependencies of the sub-components are identified and also the state of the sub-components is determined. The state of the sub-components is represented in CHMM using state space diagram. The failure rate and recovery time of the sub-components is computed using the state space diagram and the respective transition tables are derived from the CHMM to determine the availability of the layers. The availability of the layers is combined to determine the availability of the software application. | 10-11-2012 |
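The final combination step, in which per-layer availabilities are combined into an application availability, can be sketched with a simple series model built from each layer's failure rate and recovery time; the CHMM machinery itself is not modeled here, and the formula below is a standard steady-state approximation rather than the patented method:

```python
def layer_availability(failure_rate, recovery_time):
    """Steady-state availability from a failure rate (failures per hour)
    and a mean recovery time (hours): MTBF / (MTBF + MTTR)."""
    mtbf = 1.0 / failure_rate
    return mtbf / (mtbf + recovery_time)

def application_availability(layers):
    """Layers in series: the application is up only when every layer is."""
    avail = 1.0
    for failure_rate, recovery_time in layers:
        avail *= layer_availability(failure_rate, recovery_time)
    return avail

# Two illustrative layers: (failures/hour, mean recovery hours).
a = application_availability([(0.001, 2.0), (0.002, 1.0)])
```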
20120266026 | DETECTING AND DIAGNOSING MISBEHAVING APPLICATIONS IN VIRTUALIZED COMPUTING SYSTEMS - Misbehaving applications may be detected by monitoring system resource utilization in a virtualized computer system. Utilization may be forecasted based on historical utilization data for the system resources when the application is known to be behaving normally. When the monitored utilization of system resources deviates from the forecasted utilization, an alert may be generated. When the alert is generated, system resources allocated to the application may be increased or decreased to prevent abnormal behavior in the virtualized computer system executing to misbehaving application. | 10-18-2012 |
20120278658 | Analyzing Software Performance Issues - Execution traces are collected from multiple execution instances that exhibit performance issues such as slow execution. Call stacks are extracted from the execution traces, and the call stacks are mined to identify frequently occurring function call patterns. The call patterns are then clustered, and used to identify groups of execution instances whose performance issues may be caused by common problematic program execution patterns. | 11-01-2012 |
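Mining call stacks for frequently occurring function call patterns can be approximated by counting contiguous call n-grams across traces; an illustrative sketch with invented function names:

```python
from collections import Counter

def frequent_call_patterns(call_stacks, length=2, min_support=2):
    """Return contiguous call sequences (n-grams) of the given length
    that appear in at least `min_support` distinct traces."""
    counts = Counter()
    for stack in call_stacks:
        seen = set()
        for i in range(len(stack) - length + 1):
            seen.add(tuple(stack[i:i + length]))
        counts.update(seen)  # count each pattern once per trace
    return {p for p, c in counts.items() if c >= min_support}

stacks = [
    ["main", "parse", "lock", "wait"],
    ["main", "render", "lock", "wait"],
    ["main", "parse", "emit"],
]
patterns = frequent_call_patterns(stacks)
```

Here the recurring ("lock", "wait") sequence would be the kind of candidate a clustering step could then group across slow execution instances.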
20120278659 | Analyzing Program Execution - A call pattern database is mined to identify frequently occurring call patterns related to program execution instances. An SVM classifier is iteratively trained based at least in part on classifications provided by human analysts; at each iteration, the SVM classifier identifies boundary cases, and requests human analysis of these cases. The trained SVM classifier is then applied to call pattern pairs to produce similarity measures between respective call patterns of each pair, and the call patterns are clustered based on the similarity measures. | 11-01-2012 |
20120278660 | METHOD AND DEVICE FOR TESTING A SYSTEM COMPRISING AT LEAST A PLURALITY OF SOFTWARE UNITS THAT CAN BE EXECUTED SIMULTANEOUSLY - A programmable operating time period of at least one software unit is changed to a settable operating time period. Furthermore, a testing system for validating the system for setting the at least one settable operating time period is provided. Furthermore, the system is tested using the testing system, wherein the testing includes varying the at least one settable operating time period for detecting synchronization errors of the system. Thus, the targeted change of operating time periods enables testing a system comprising software units for synchronization errors. | 11-01-2012 |
20120304014 | PERFORMING ASYNCHRONOUS TESTING OF AN APPLICATION OCCASIONALLY CONNECTED TO AN ONLINE SERVICES SYSTEM - In a method, system, and computer-readable medium having instructions for performing asynchronous testing of an application that is occasionally connected to an online services system, metadata describing at least a portion of an online services database is retrieved, and that portion of the online services database is authorized for replication at a software application. Information for an entity of an application database is determined from the metadata. A request for a database is sent using the software application interface, the request having an asynchronous operation call to the database for the entity. An execution of the asynchronous operation call is recorded within a callback function, a response is received for the asynchronous operation call, and a result is determined for the software application performance. | 11-29-2012 |
20120317443 | VIRTUAL DEBUGGING SESSIONS - An approach to providing multiple concurrently executing debugging sessions for a currently executing operating system. The approach involves providing one first debugging session for debugging the currently executing operating system. The first debugging session has read access and write access to the data of the currently executing operating system. The approach also involves providing one or more second debugging sessions for the currently executing operating system. Each of the second debugging sessions has read-only access to the data of the currently executing operating system. The second debugging sessions run simultaneously with the first debugging session if the second debugging sessions are started while the first debugging session is active. As a result, multiple users can simultaneously debug the currently executing operating system. A lock may be used to ensure that only the first debugging session has write access to the data. The lock may be shared between the various debugging sessions for the operating system. | 12-13-2012 |
20120324292 | DYNAMIC COMPUTER PROCESS PROBE - An apparatus, system, and method are disclosed for probing a computer process. A probe parameter module determines a process identifier, a probe interval, and a probe action. The process identifier uniquely identifies a computer process. A start timer module starts a timer with a timer interval in response to the computer process entering an executing state on a processor core. The timer interval is based on the probe interval and on an amount of time elapsed between a probe start time and the computer process entering the executing state on the processor core. An action module executes the probe action in response to the timer satisfying the timer interval while the computer process is in the executing state on the processor core. | 12-20-2012 |
20120331350 | SYSTEM AND METHOD FOR DYNAMIC CODE ANALYSIS IN PRESENCE OF THE TABLE PROCESSING IDIOM - Systems and methods execute a computer program to produce a trace of the computer program and divide the trace into independent threads of execution. Each of the independent threads of execution comprises an execution sequence of the lines of programming code that ends with an identified write line of programming code that outputs an incorrect result. These systems and methods also identify key fields within each of the independent threads of execution. In programs that process the records of a table one by one, key fields are a subset of the fields of the table. The key fields impact the computation sequence leading up to the identified write line of the programming code. These systems and methods identify key-based dynamic slices from the independent threads of execution. Each of the key-based dynamic slices includes lines of programming code that are used in computations that process the table records corresponding to the key fields. | 12-27-2012 |
20120331351 | N-WAY RUNTIME INTEROPERATIVE DEBUGGING - Simultaneous debugging of code running in multiple types of runtime environment can be performed by an n-way interoperative debugging environment. Code running within a particular runtime can be debugged simultaneously with a code running within other runtimes within a single process. Out-of-process debugging support is provided for inspection and execution control. A compatible debugger or runtime communication protocol is used. Transitions from one runtime to another runtime can be detected. Exceptions thrown in one runtime can be caught by another runtime. Stepping operations can occur in multiple runtimes. A callstack including frames from multiple runtimes can be walked. | 12-27-2012 |
20120331352 | Troubleshooting System for Industrial Control Programs - A system for troubleshooting control programs employs an event log that captures the values of inputs to outputs from the control program only at event times determined by changes in input or output data. The program allows the event log to be reviewed in jumps to only events which cause a change in output value of an instruction or particular change in output value of a particular instruction, greatly simplifying the troubleshooting process. The event log records a particular instruction instance associated with the event permitting the operation of the program to be studied in reverse order. The event log may also record a timestamp of the event allowing time stamped data from different devices to be synchronized with the review of the events. | 12-27-2012 |
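The change-triggered event log described in this abstract can be sketched as follows; the instruction names and snapshot format are invented for illustration:

```python
class EventLog:
    """Record input/output snapshots only at event times, i.e., when a
    value actually changes, together with a timestamp and the instruction
    instance associated with the change."""
    def __init__(self):
        self.events = []
        self._last = None

    def sample(self, timestamp, instruction, values):
        # Only a change in the sampled values constitutes an event.
        if values != self._last:
            self.events.append((timestamp, instruction, dict(values)))
            self._last = dict(values)

log = EventLog()
log.sample(0.0, "XIC_motor", {"run": 0})
log.sample(0.1, "XIC_motor", {"run": 0})  # unchanged: not recorded
log.sample(0.2, "OTE_start", {"run": 1})  # change: recorded as an event
```

Storing the timestamp with each event is what allows time-stamped data from different devices to be synchronized during review, and storing the instruction instance allows the program's operation to be walked in reverse.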
20120331353 | TESTING A SOFTWARE APPLICATION INTERFACING WITH MULTIPLE EXTERNAL SOFTWARE APPLICATIONS IN A SIMULATED TEST ENVIRONMENT - A method and system for testing a software application. A description of a test suite for testing the software application being tested (ABT) is inserted into a test database. The ABT invokes multiple external software applications during execution of a test script of the test suite. Each external application invoked by the ABT is replaced by a corresponding simulator during execution of the test script. Output data to be returned to the ABT by each invoked simulator is inserted into the test database, after which each test script of the test suite is executed. The executing includes: sending a request, by the ABT to each simulator invoked in each test script, for requested data; and receiving, by the ABT, the requested data from each simulator invoked in each test script. The received requested data includes the output data that had been inserted into the test database. | 12-27-2012 |
20130007529 | STATIC ANALYSIS BASED ON OBSERVED STRING VALUES DURING EXECUTION OF A COMPUTER-BASED SOFTWARE APPLICATION - Improving static analysis precision by recording a value pointed to by a string variable within the computer-based software application during the execution of a computer-based software application, modeling an invariant based on the recorded value, where the invariant represents at least one possible value pointed to by the string variable, performing a first static analysis of the computer-based software application to determine whether the invariant is valid with respect to the computer-based software application, and seeding a second static analysis of the computer-based software application with the invariant if the invariant is valid with respect to the computer-based software application. | 01-03-2013 |
20130019126 | Method and System for Test Suite Control - Methods and systems are provided for computer software testing using test suite data. A method may include defining a plurality of testing goals and a testing strategy for the code of the software application, determining objects under test within said code of a software application, designing test cases and test suites for said defined testing strategy, defining test categories for said designed test suites, defining a test execution sequence for said designed test suites and said test categories, defining whether a test execution sequence shall continue or stop after an error in a test object or a fail event in the test system, based on the results of the previous steps, parametrizing a test automation framework with the test suites, running the test automation framework on said code of a software application, and analyzing the results obtained from running the test automation framework on said code of a software application. | 01-17-2013 |
20130024731 | REAL TIME MONITORING OF COMPUTER FOR DETERMINING SPEED AND ENERGY CONSUMPTION OF VARIOUS PROCESSES - The presently disclosed subject matter includes a system and method which make it possible to identify one or more causes of excessive energy consumption in a computer executing one or more processes. Information indicating that consumption of a computer resource by at least one of said processes is greater than a predefined threshold is obtained, and one or more threads of said at least one process which are in a running state are identified. Thread performance information of at least one thread in the running state is collected and used for identifying one or more functions that are the cause of said state of the respective thread. The identified functions are associated with their respective modules in order to identify one or more modules of said process which are the cause of said excessive energy consumption. | 01-24-2013 |
20130031415 | Entity Oriented Testing of Data Handling Systems - An apparatus and program product in which test components—here denominated entities—are handled by a test framework and wrapped in a common API (application programming interface) which provides command execution, file handling and inter-communication. The entities are interchangeable parameters to the test, hiding platform-specific code from the test developer and promoting code re-use. Retargettability is enabled by allowing specific systems—physical machines, for example—to be specified on a per test run basis, without changing generic test code. | 01-31-2013 |
20130031416 | Method for Entity Oriented Testing of Data Handling Systems - Test components—here denominated entities—are handled by a test framework and wrapped in a common API (application programming interface) which provides command execution, file handling and inter-communication. The entities are interchangeable parameters to the test, hiding platform-specific code from the test developer and promoting code re-use. Retargettability is enabled by allowing specific systems—physical machines, for example—to be specified on a per test run basis, without changing generic test code. | 01-31-2013 |
20130031417 | METHOD AND DEVICE FOR TESTING A PROGRAM STORED IN THE MEMORY OF AN ELECTRIC TOOL - A method for testing a program stored in the memory of an electric tool, from a plurality of modules, comprises the following steps: testing the program using at least one predefined safety test while the program is being executed, and testing at least one module from the plurality of modules using at least one predefined module test while the program is being executed. | 01-31-2013 |
20130036330 | EXECUTION DIFFERENCE IDENTIFICATION TOOL - Displaying instrument output is disclosed. Instrument output data is received. A difference between two or more corresponding portions of data included in the received instrument output data is determined. At least a selected part of the received instrument output data is displayed in a manner that highlights the difference. | 02-07-2013 |
20130042149 | ANALYZING A PROCESS OF SOFTWARE DEFECTS HANDLING USING PERCENTILE-BASED METRICS - A system for analyzing one or more processes of software defect handling using one or more percentile-based statistical metrics is provided herein. The system may include: a monitoring unit that is configured to monitor one or more processes of software defect handling, to yield monitored samples. The system further includes a percentile-based generator configured to generate one or more statistical metrics that are at least partially based on percentiles, further based on the monitored samples and further responsive to user selection; and a statistical calculation unit configured to apply the generated one or more statistical metrics to real-time handling time samples obtained from the one or more processes of software defect handling, to yield a percentile-based analysis of the processes of software defect handling. The system may further include a visual representation unit configured to visually present the percentile-based analysis responsive to preferences specified by the user. | 02-14-2013 |
20130042150 | Checkpoint Debugging Using Mirrored Virtual Machines - A system for debugging computer code includes a processor: obtaining state information corresponding to a first machine at a checkpoint initiated during execution of the computer code on the first machine; and configuring a second machine to the same operating state as the first machine at the checkpoint to create a mirrored version of the first machine. The system also includes receiving a notification that execution of the program on the first machine has failed, and in response to receiving the notification: triggering a processor of the second machine to initiate execution of a copy of the code from the specific code execution point at which the checkpoint was initiated; and activating a debugger module to run concurrently with the execution of the program on the second machine and to collect and store the debug data corresponding to the execution failure of the computer code at the first machine. | 02-14-2013 |
20130042151 | Integrated Testing Measurement and Management - Systems, methods, apparatuses, and computer readable media for testing information technology systems and/or applications are provided. In some examples, data may be categorized as frequently used and stored at an information technology system testing system. One or more portions of the frequently used data may then be identified for use in testing an information technology system. The systems, methods, and the like may further include building a testing environment and receiving a test script. In some examples, one or more data types may be identified for use in testing the information technology system based on various project criteria, the received test script, and the like. In some examples, additional data types and data associated therewith may be identified as associated with the one or more identified data types based on a predefined relationship. This additional data may then be automatically included in the testing of the information technology system. | 02-14-2013 |
20130042152 | DECLARATIVE TESTING USING DEPENDENCY INJECTION - Methods and systems for declarative testing using dependency injection are described. In one embodiment, a computing system inspects a first annotation that declares an injection point in source code of a test subject and a second annotation that declares a set of test values to be injected at the injection point. The first and second annotations are metadata in an input domain and are added in a designated place in the source code. The computing system executes a test runner that creates a set of one or more tests during a configuration phase based on the inspection of the source code, including the first and second annotations. Each of the set of tests includes one of the test values injected at the injected point as declaratively provided by the second annotation. The set of tests are to be executed during a run phase. | 02-14-2013 |
20130042153 | Checkpoint Debugging Using Mirrored Virtual Machines - A computer-implemented method of debugging computer code includes: obtaining state information corresponding to a first machine at a checkpoint initiated during execution of the computer code on the first machine; and configuring a second machine to the same operating state as the first machine at the checkpoint to create a mirrored version of the first machine. The method also includes receiving a notification that execution of the program on the first machine has failed, and in response to receiving the notification: triggering a processor of the second machine to initiate execution of a copy of the code from the specific code execution point at which the checkpoint was initiated; and activating a debugger module to run concurrently with the execution of the program on the second machine and to collect and store the debug data corresponding to the execution failure of the computer code at the first machine. | 02-14-2013 |
20130047036 | SELF VALIDATING APPLICATIONS - An application server operating in a production environment receives an application for deployment. A test deployer in the application server determines whether the application includes a validation test. If the application includes a validation test, the test deployer performs an auxiliary deployment of the application and runs the validation test. If the validation test succeeds, the test deployer performs a full deployment of the application on the application server. | 02-21-2013 |
20130047037 | METHOD AND DEVICE FOR CONTROLLING DEBUG EVENT RESOURCES - Software executed at a data processor unit includes a software debugger. The software debugger can be assigned responsibility for servicing a debug event, and be authorized to allow software control of debug event resources associated with the debug event. An indicator, when asserted, prevents an authorized request by software to control a debug event resource. | 02-21-2013 |
20130047038 | ENHANCED SYSTEM AND METHOD FOR IDENTIFYING SOFTWARE-CREATED PROBLEMS AND OPERATIONAL DISRUPTIONS IN MOBILE COMPUTING DEVICES WITH CELLULAR CONNECTIONS - A system and method for discovering fault conditions such as conflicts between applications and an operating system, driver, hardware, or a combination thereof, installed in mobile computing devices uses a mobile device running a diagnostic application. A list of applications that were launched or installed during a time period prior to an operational disruption is retrieved. A data table of combinations of incompatible programs and drivers is used to analyze the list of the applications that were launched or installed to create a list of potential fault-causing interactions due to software incompatibilities of software installed in the mobile computing device. A knowledge database is updated with data identifying at least one of the potential fault-causing interactions. Further disclosed is a computer program that identifies hardware-created or software-created problems and operational disruptions in mobile computing devices by collecting data on incompatibilities in particular mobile computing devices on the internet. | 02-21-2013 |
20130055028 | METHODS AND SYSTEMS FOR CREATING SOFTWARE TESTS AS EXECUTABLE RESOURCES - Described herein is a new approach for testing in which tests are instrumented and exposed as addressable resources using a REST-ful approach. With this new approach, instrumentation, provisioning and execution of tests are de-coupled, which is not the case with current, traditional testing approaches. | 02-28-2013 |
20130055029 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR AUTOMATED TEST CASE GENERATION AND SCHEDULING - In accordance with embodiments, there are provided mechanisms and methods for automated test case generation and scheduling. These mechanisms and methods for automated test case generation and scheduling can provide an automated manner of generating test cases and scheduling tests associated with such test cases. The ability to provide this automation can improve efficiency in a testing environment. | 02-28-2013 |
20130061095 | SOFTWARE FAILURE DETECTION - A method detects soft failures as follows. A set of artifacts being generated by at least one process in a system is monitored. In response to the monitoring, the number of artifacts being generated by the process is determined to be below a given threshold. The process is analyzed in response to that determination. A current state of the process is determined in response to the analyzing. A notification is generated when the current state of the process includes a set of abnormal behaviors. | 03-07-2013 |
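The threshold check that drives the notification can be sketched as below; the message format and threshold value are illustrative assumptions, not details from the patent:

```python
def check_artifact_rate(artifact_counts, threshold):
    """Return a notification string when the number of artifacts a
    process generated in the latest interval drops below the threshold,
    signalling a possible soft failure; otherwise return None."""
    latest = artifact_counts[-1]
    if latest < threshold:
        return f"soft-failure suspected: {latest} artifacts < {threshold}"
    return None

# Per-interval counts of artifacts (e.g., log records) from a process.
healthy = check_artifact_rate([120, 115, 118], threshold=100)  # None
sick = check_artifact_rate([120, 115, 3], threshold=100)
```

A soft failure is characteristic in that the process is still alive, so liveness checks alone miss it; the drop in generated artifacts is the trigger for the deeper state analysis the abstract describes.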
20130073909 | ASSERTIONS IN A BUSINESS RULE MANAGEMENT SYSTEM - Embodiments of the present invention provide a method, system and computer program product for assertion management in a dynamically assembled programmatic environment. In an embodiment of the invention, a method for assertion management in a dynamically assembled programmatic environment can include dynamically assembling different execution units into a dynamically assembled computer program, applying an assertion to at least one of the different execution units through an introspection of the one of the different execution units, and generating an assertion result reporting a failure of the assertion responsive to the failure of the assertion. | 03-21-2013 |
20130080837 | FAULT LOCALIZATION FOR DATA-CENTRIC PROGRAMS - Methods and arrangements for localizing faults in programs. A program is assimilated, the program comprising statements. Output behavior of the statements is modeled, and statement occurrences are annotated. Passing and failing spectra are differenced to yield a difference, and a fault is located via employing the difference. | 03-28-2013 |
20130080838 | Programming in a Simultaneous Multi-Threaded Processor Environment - A system, method, and product are disclosed for testing multiple threads simultaneously. The threads share a real memory space. A first portion of the real memory space is designated as exclusive memory such that the first portion appears to be reserved for use by only one of the threads. The threads are simultaneously executed. The threads access the first portion during execution. Apparent exclusive use of the first portion of the real memory space is permitted by a first one of the threads. Simultaneously with permitting apparent exclusive use of the first portion by the first one of the threads, apparent exclusive use of the first portion of the real memory space is also permitted by a second one of the threads. The threads simultaneously appear to have exclusive use of the first portion and may simultaneously access the first portion. | 03-28-2013 |
20130086429 | SYSTEM AND METHOD FOR SELF-DIAGNOSIS AND ERROR REPORTING - A system for self-diagnosing and error reporting of a software application in a computer system having a plurality of software applications and background processes, the system comprising a diagnosis module configured to collect and monitor usage data of resources of the computer system, execution status of the software applications and background processes of the computer system, and software application error conditions, adjust logging level of log files according to the execution status of the software applications and background processes of the computer system and the software application error conditions, and generate diagnosis advisory based on the usage data of the resources of the computer system and the software error conditions, and a reporting module configured to collect and report the usage data of the resources of the computer system, the log files and the generated diagnosis advisory automatically to a user. | 04-04-2013 |
20130091387 | Method for Automatically Generating a Trace Data Set for a Software System, a Computer System, and a Computer Program Product - The invention relates to a method, a computer system, and a computer program product for automatically generating a trace data set for a software system on a computer system. The method includes the step of providing a software system comprising a source code. Binary code is provided by compiling the source code by inserting a plurality of tracing instructions into the binary code. The tracing instructions initiate trace data generation during runtime of the software system. The method also includes modifying the binary code by replacing at least one tracing instruction of the plurality of tracing instructions with a neutral instruction. The modified binary code is run by activating trace data generation by re-replacing the neutral instruction with the at least one tracing instruction. The method further includes recording the trace data set. The recording step is initiated by the at least one tracing instruction. | 04-11-2013 |
20130097461 | METHOD OF FAST REINITIALIZATION FOR INSTRUMENT PANEL VIEWING DEVICE - The general field of the invention is that of the management of the faults of the viewing devices used on aircraft. An aircraft instrument panel viewing device comprises an electronic assembly, embedded software and a viewing screen. When the viewing device detects a fault arising after a predetermined time of proper operation, the reinitialization method according to the invention executes just the software verification tests without executing the electronic assembly verification tests termed “safety tests”, with no specific fault indication being displayed on the viewing screen. The duration for which the pilot is deprived of information is thus considerably reduced. | 04-18-2013 |
20130111271 | USING A MANIFEST TO RECORD PRESENCE OF VALID SOFTWARE AND CALIBRATION | 05-02-2013 |
20130124924 | PROGRAM ANALYZING SYSTEM AND METHOD - A program analyzing system that analyzes a program while adjusting the time passage speed of the program performance circumstance includes four main functional units: an analysis management unit, a sample performing unit, an activity recording unit, and an activity analyzing unit. The analysis management unit sets analysis conditions such as a time passage speed, a program performance starting time, and a performance ending time. The sample performing unit adjusts the time passage speed and the program performance starting time in accordance with the determination of the analysis management unit and performs the program until the performance ending time. The activity recording unit monitors the performance circumstance and obtains an activity record of the program. The activity analyzing unit analyzes the activity record and clarifies a behavior of the program. Further, the analysis management unit resets the analysis condition based on an analysis result to perform a reanalysis. | 05-16-2013 |
20130145215 | ELIMINATING FALSE-POSITIVE REPORTS RESULTING FROM STATIC ANALYSIS OF COMPUTER SOFTWARE - A system for eliminating false-positive reports resulting from static analysis of computer software is provided herein. The system includes the following components executed by a processor: a modeler configured to model a computer code into a model that defines sources, sinks, and flows; a static analyzer configured to apply static analysis to the code or the model, to yield reports indicative of at least one issue relating to one or more of the flows; a preconditions generator configured to generate preconditions for eliminating false-positive issues in the reports, based on the model and user-provided input; and a preconditions checker configured to apply the generated preconditions to the reports for eliminating false-positive issues in the reports. | 06-06-2013 |
20130145216 | SYSTEMS AND METHODS FOR HARDWARE-ASSISTED TYPE CHECKING - Devices and methods of providing hardware support for dynamic type checking are provided. In some embodiments, a processor includes a type check register and support for one or more checked load instructions. In some embodiments, normal load instructions are replaced by a compiler with the checked load instructions. In some embodiments, to perform a checked load, an error handler instruction location is stored in the type check register, and a type tag operand is compared to a type tag stored in the loaded memory location. If the comparison succeeds, execution may proceed normally. If the comparison fails, execution may be transferred to the error handler instruction. In some embodiments, type prediction is performed to determine whether a checked load instruction is likely to fail. | 06-06-2013 |
20130145217 | TESTING METHOD AND TESTING APPARATUS FOR TESTING FUNCTION OF ELECTRONIC APPARATUS - A method for testing a function of an electronic apparatus is provided. The method includes steps of: searching for a location corresponding to the function to be tested, sending a command according to the location to perform the function to be tested, and determining whether an error occurs in the function according to a response from the function in response to the command. | 06-06-2013 |
20130151906 | Analysis of Tests of Software Programs Based on Classification of Failed Test Cases - A solution is proposed for analyzing a test of a software program comprising a plurality of software components, the test comprising a plurality of test cases each one for exercising a set of corresponding exercised software components. A corresponding method comprises the steps of receiving an indication of each failed test case whose current execution has failed, retrieving a suspicion attribute of each failed test case indicative of a change to the corresponding exercised software components since a previous execution of the failed test case, retrieving a change attribute of each failed test case indicative of a change to the failed test case since the previous execution thereof, retrieving a regression attribute of each failed test case indicative of a regression of the failed test case since the previous execution thereof, and classifying each failed test case into a plurality of disjoint classes according to the corresponding suspicion attribute, change attribute and regression attribute. | 06-13-2013 |
20130185594 | AUTOMATED TESTING OF MECHATRONIC SYSTEMS - An arrangement for providing integrated, model-based testing of industrial systems in the form of a model-based test design module, a test execution engine and an automated test infrastructure (ATI) component. The ATI component includes a keyword processor that interfaces with test commands created by the design module to implement the testing of a specific industrial system. Configuration and deployment information is also automatically created by the design module and used by the ATI component to set up and control the specific industrial system being tested. | 07-18-2013 |
20130185595 | Analysis of Tests of Software Programs Based on Classification of Failed Test Cases - A solution is proposed for analyzing a test of a software program comprising a plurality of software components, the test comprising a plurality of test cases each one for exercising a set of corresponding exercised software components. A corresponding method comprises the steps of receiving an indication of each failed test case whose current execution has failed, retrieving a suspicion attribute of each failed test case indicative of a change to the corresponding exercised software components since a previous execution of the failed test case, retrieving a change attribute of each failed test case indicative of a change to the failed test case since the previous execution thereof, retrieving a regression attribute of each failed test case indicative of a regression of the failed test case since the previous execution thereof, and classifying each failed test case into a plurality of disjoint classes according to the corresponding suspicion attribute, change attribute and regression attribute. | 07-18-2013 |
20130185596 | Serialized Error Injection Into a Function Under Test - System and computer program product embodiments for triggering error injection into a function under test using a serialization resource are provided. A test process invokes the function under test immediately after relinquishing exclusive control of the serialization resource. An error-injection process injects the error into the running function after gaining exclusive control of the serialization resource from the test process. The error-injection process may add a delay to inject the error. If the processes are repeated, the error-injection process may vary the delay, perhaps randomly, over a specified time window to thoroughly exercise the function's error recovery routine. | 07-18-2013 |
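The hand-off protocol in the entry above can be sketched with a lock as the serialization resource: the test thread releases the lock and immediately invokes the function, while the injector thread blocks on the lock and, once it acquires it, injects the fault after a randomized delay. All names and timings here are illustrative.

```python
import random
import threading
import time

lock = threading.Lock()       # the serialization resource
injected = threading.Event()  # stands in for the injected error state

def function_under_test():
    # Hypothetical function with an error-recovery routine.
    try:
        time.sleep(0.2)       # simulate work in the running function
        if injected.is_set():
            raise RuntimeError("injected fault")
        return "ok"
    except RuntimeError:
        return "recovered"    # the recovery path the test wants to exercise

def error_injector(max_delay=0.05):
    with lock:                # gain the resource once the test releases it
        time.sleep(random.uniform(0, max_delay))  # varied, randomized delay
        injected.set()        # inject the error into the running function

def run_once():
    injected.clear()
    lock.acquire()            # test process holds the resource
    injector = threading.Thread(target=error_injector)
    injector.start()          # injector blocks on the lock
    time.sleep(0.05)          # let the injector reach the lock
    lock.release()            # relinquish, then immediately...
    result = function_under_test()  # ...invoke the function under test
    injector.join()
    return result
```

Repeating `run_once()` while varying `max_delay` corresponds to the abstract's sweep of the injection point across a time window.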
20130185597 | SERVER THROTTLED CLIENT DEBUGGING - Systems and methods of debugging client applications may provide for detecting a runtime error in a first version of a client application, and obtaining a second version of the client application from a server in response to the runtime error. The second version of the client application may be used to conduct a diagnosis of the runtime error. | 07-18-2013 |
20130191691 | IMPORTANCE-BASED CALL GRAPH CONSTRUCTION - Call graph construction systems that utilize computer hardware are presented including: a processor; a candidate pool configured for representing a number of calls originating from a root node of a computer software application; an importance value assigner configured for assigning an importance value for any of the number of calls represented in the candidate pool; a candidate selector configured for selecting from the number of calls represented in the candidate pool for inclusion in a call graph based on a sufficient importance value; and an importance value adjuster configured for adjusting the importance value of any call represented in the call graph. | 07-25-2013 |
20130198572 | MANAGING CODE-TRACING DATA - A method of managing code-tracing data in a target program is described. The method comprises the steps of: identifying when an exception occurs in the target program; accessing a stack trace of a call stack to identify a module in the target program that threw the exception; and activating code-tracing at a high detail level in that module. | 08-01-2013 |
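The mechanism in the entry above — walk the stack trace of a thrown exception to find the offending module, then turn on high-detail tracing there — can be sketched with Python's logging levels standing in for code-tracing detail. The function name is hypothetical.

```python
import logging

def activate_detailed_tracing(exc):
    """On an exception, walk the traceback to the frame that raised it,
    identify that frame's module, and raise the module's logging level
    to DEBUG (a stand-in for 'high detail' code tracing)."""
    tb = exc.__traceback__
    while tb.tb_next is not None:   # descend to the frame that threw
        tb = tb.tb_next
    module = tb.tb_frame.f_globals.get("__name__", "<unknown>")
    logging.getLogger(module).setLevel(logging.DEBUG)
    return module
```

A top-level exception handler would call this before re-raising or logging, so that a retry of the failing operation emits detailed traces only from the module that misbehaved.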
20130198573 | EVENT LOGGING AND PERFORMANCE ANALYSIS SYSTEM FOR APPLICATIONS - An event logging and analysis mechanism which creates an event object for each event of an application to be logged. The event logging mechanism logs into the event object the start time, end time and other information regarding the event. The analysis of the collected event objects may include hierarchical and contextual grouping as well as aggregation of events considered to be identical. The mechanism operates independent of the application whose events it logs and can be turned on and off independently. A user may define the levels of hierarchy and contexts upon which to analyze the event objects. | 08-01-2013 |
20130205172 | Integrated System and Method for Validating the Functionality and Performance of Software Applications - The system and method presented provides a multi-phase, end-to-end integrated process for testing application software using a standard software testing tool. The system and method involve integrating the functional, automated regression and performance phases of software application testing by leveraging deliverables at each phase so that the deliverables may be efficiently reused in subsequent test phases. Deliverables such as functional and technical test conditions and manual test scripts are used as inputs for each phase of the integrated tests. The use of leveraged requirements-based deliverables between test phases significantly reduces much of the repetitive testing typically associated with functionality and performance testing and minimizes repetition of testing errors discovered in earlier test phases. This integrated system and method for validating the functionality and performance of software applications by leveraging deliverables provides enhanced efficiencies, test procedure consistency throughout multiple test phases, consistent test results and high quality software applications. | 08-08-2013 |
20130212438 | STACK-BASED TRACE MESSAGE GENERATION FOR DEBUG AND DEVICE THEREOF - During a debug mode of operation of a data processor it is determined whether a data access request is to a stack of the data processor. If not, a data trace message based on the data access request is generated for transmission to a debugger so long as an address being accessed by data access request meets a predefined address range criteria. Otherwise, if the data access request is to the stack of the data processor, a data trace message based on the data access request is prevented from being generated for transmission to the debugger regardless the predefined address range criteria. | 08-15-2013 |
20130219226 | DISTRIBUTED TESTING WITHIN A SERIAL TESTING INFRASTRUCTURE - A serial testing infrastructure includes the capability to execute a distributed test on multiple virtual processors. A test executable may be stored in a library and the test description, including the name of the test, the test library, and other test characteristics, may be stored in a separate test data file. The serial testing infrastructure initiates multiple distributed test executors that each launch an instance of the distributed test as a process that runs concurrently with other instances of the distributed test. Each distributed test executor monitors execution of its corresponding process until completion or timeout. | 08-22-2013 |
20130219227 | Multi-Entity Test Case Execution Workflow - The present subject matter relates to a method for managing a testing workflow, based on execution of at least one Multi Entity Test Case (METC) of the testing workflow. The method includes assigning at least one role to each of a plurality of test steps of the METC, where the at least one role is indicative of a privilege level to execute each of the plurality of test steps. The method also includes defining a failure condition for each of the plurality of test steps, where the failure condition is indicative of an expected result of execution of each of the plurality of test steps. The method further includes specifying a failure action associated with the failure condition for execution of each of the plurality of test steps, executing one of the plurality of test steps, and applying the failure action to proceed with the testing workflow. | 08-22-2013 |
20130238938 | METHODS AND APPARATUS FOR INTERACTIVE DEBUGGING ON A NON-PRE-EMPTIBLE GRAPHICS PROCESSING UNIT - Systems and methods are disclosed for performing interactive debugging of shader programs using a non-preemptible graphics processing unit (GPU). An iterative process is employed to repeatedly re-launch a workload for processing by the shader program on the GPU. When the GPU encounters a hardware stop event, such as by reaching a breakpoint in any thread of the shader program, encountering a hardware exception, or failing a software assertion in the shader program, the state of any executing threads is saved, graphics memory is copied to system memory, and any currently executing threads are killed to enable the GPU to process graphics data for updating a display device. Each pass of the workload may result in incrementally more data being processed. In effect, the changing state and variable data resulting from each pass of the workload has the effect that the debugger is incrementally stepping through the shader program. | 09-12-2013 |
20130238939 | METHOD FOR RANKING ANALYSIS TOOLS - Analysis tools are used for resolving a service request for software performance problems. Ranking of the analysis tools includes measuring a plurality of times to resolution of a plurality of service requests for software performance problems after runs of a plurality of analysis tools are initiated; capturing sets of errors in the plurality of service requests; storing identities of the plurality of analysis tools with the times to resolution of the service requests and the sets of errors; determining an average time to resolution of each of the plurality of analysis tools for each set of errors; organizing the plurality of analysis tools into one or more categories using the sets of errors; and ranking the analysis tools within each category using the average times to resolution of the analysis tools within the category. | 09-12-2013 |
20130246856 | VERIFICATION SUPPORTING APPARATUS AND VERIFICATION SUPPORTING METHOD OF RECONFIGURABLE PROCESSOR - A verification supporting apparatus and a verification supporting method of a reconfigurable processor is provided. The verification supporting apparatus includes an invalid operation determiner configured to detect an invalid operation from a result of scheduling on a source code, and a masking hint generator configured to generate a masking hint for the detected invalid operation. | 09-19-2013 |
20130262933 | MANAGING CODE-TRACING DATA - A method of managing code-tracing data is described. The method comprises the steps of: analyzing a log of code-tracing data to identify a module in which an error occurred; activating code-tracing at a high detail level in that module; identifying modules associated with that module; and activating code-tracing at a high detail level in those identified modules. | 10-03-2013 |
20130262934 | METHOD AND APPARATUS FOR AUTOMATICALLY GENERATING A TEST SCRIPT FOR A GRAPHICAL USER INTERFACE - Embodiments of the present invention relate to the technical field of automatic testing of a graphical user interface in the technical field of software testing. Embodiments of the invention provide a method for automatically generating a test script for a graphical user interface. This method comprises defining information of each component in a tested graphical user interface, writing a test case file, and generating a file of combined values of components, and adding an operation type to each component in each file of combined values. The method further comprises determining a sequence of operations in each of the files of combined values, and generating a test script. This method reduces the tester's workload when manually programming test scripts and facilitates maintenance of test scripts. | 10-03-2013 |
20130283102 | Deployment of Profile Models with a Monitoring Agent - A distributed tracing system may use independent trace objectives for which a profile model may be created. The profile model may be deployed as a monitoring agent on non-instrumented devices to evaluate the profile models. As the profile models operate with statistically significant results, the sampling frequencies may be adjusted. The profile models may be deployed as a verification mechanism for testing models created in a more highly instrumented environment, and may gather performance-related results that may not have been as accurate using the instrumented environment. In some cases, the profile models may be distributed over large numbers of devices to verify models based on data collected from a single or small number of instrumented devices. | 10-24-2013 |
20130283103 | FACILITATING LARGE-SCALE TESTING USING VIRTUALIZATION TECHNOLOGY IN A MULTI-TENANT DATABASE ENVIRONMENT - A system and method for testing in a database system. In one embodiment, a method includes receiving an indication of one or more changes to a software application, wherein each change corresponds to a different version of the software application. The method further includes generating one or more virtual machines for a version of the software application in response to the indication, wherein the one or more virtual machines test the version of the software application. | 10-24-2013 |
20130305094 | OBSERVABILITY CONTROL WITH OBSERVABILITY INFORMATION FILE - Methods of managing observability code in an application program include generating an application program including an observability point, the observability point including a location in the application at which observability code, or a call to observability code, can be inserted, loading the application program into a memory of a target system, retrieving observability information from an observability point information file, and inserting the observability code, or the call to the observability code, at the observability point in the memory of the target system using the observability information retrieved from the observability point information file. | 11-14-2013 |
20130305095 | METHOD FOR GENERATING TEST DATA FOR EVALUATING PROGRAM EXECUTION PERFORMANCE - Test data used in evaluating the performance of a program is generated. First, a source program targeted for performance evaluation, sample data, and a generation parameter used for determining the size of the test data to be generated are received from an input device. A processor then executes the source program using the sample data and obtains the number of executions for each of a plurality of statements in the source program. In addition, on the basis of the obtained number of executions, the processor generates test data having a size that is a multiple of the generation parameter of the sample data size, the test data being such that the frequency of executions for each of the plurality of statements in the source program is the same as the frequency of executions for each of the plurality of statements when executing the source program using the sample data. | 11-14-2013 |
20130305096 | SYSTEM AND METHOD FOR MONITORING WEB SERVICE - Provided are a system and a method for monitoring a web service. The web service monitoring system includes a management module configured to provide an interface for receiving a test scenario and a policy for a simulation test of a target system from an administrator and outputting the simulation test result of the target system to the administrator, a database configured to store the received policy and test scenario, and an agent configured to access the target system according to the test scenario and the policy stored in the database and carry out the simulation test of the target system. | 11-14-2013 |
20130305097 | COMPUTER PROGRAM TESTING - To centrally manage execution of tests of software in an event oriented manner, a test execution engine reads a first test case from a test case component, where the test case represents tasks that have to be run to test a first procedure of a software program under evaluation. Further, the test execution engine identifies a participant node configured for sending events to an event queue and obtains events from the event queue. With those obtained events, the test execution engine evaluates whether the first procedure of the software program executed successfully and indicates whether the first procedure executed properly. The participant node has a node agent that transmits events about the procedure and the first test case to the event queue. | 11-14-2013 |
20130305098 | METHODS, MEDIA, AND SYSTEMS FOR DETECTING AN ANOMALOUS SEQUENCE OF FUNCTION CALLS - Methods, media, and systems for detecting an anomalous sequence of function calls are provided. The methods can include compressing a sequence of function calls made by the execution of a program using a compression model; and determining the presence of an anomalous sequence of function calls in the sequence of function calls based on the extent to which the sequence of function calls is compressed. The methods can further include executing at least one known program; observing at least one sequence of function calls made by the execution of the at least one known program; assigning each type of function call in the at least one sequence of function calls made by the at least one known program a unique identifier; and creating at least part of the compression model by recording at least one sequence of unique identifiers. | 11-14-2013 |
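The compression-based detection in the entry above can be sketched with `zlib`: the "compression model" becomes a preset dictionary built from call sequences observed in known-good runs, and a sequence that compresses poorly against that dictionary is flagged as anomalous. Function names, the token encoding, and the threshold are illustrative assumptions.

```python
import zlib

def build_model(known_traces):
    # The 'compression model' here is simply a zlib preset dictionary
    # made from call sequences observed in known-good program runs.
    return " ".join(" ".join(t) for t in known_traces).encode()

def compression_ratio(calls, model):
    """Compress a call sequence against the known-good dictionary;
    unfamiliar sequences compress worse (higher ratio)."""
    data = " ".join(calls).encode()
    comp = zlib.compressobj(zdict=model)
    out = comp.compress(data) + comp.flush()
    return len(out) / len(data)

def is_anomalous(calls, model, threshold=0.5):
    return compression_ratio(calls, model) > threshold
```

The threshold would in practice be calibrated from the ratio distribution of held-out known-good traces rather than fixed at 0.5.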
20130318402 | Software Systems Testing Interface - A system includes a manager module that oversees execution of a business process by a test module. The business process includes a plurality of process steps, and the test module comprises a plurality of test cases, a plurality of software test tools, and a plurality of parameters. The test module is configured to permit a user to select a particular process step of the business process, to select a particular test case for the particular process step, to select a particular software test tool for the particular test case, and to select a particular parameter flow for the particular software test tool. The test module is also configured to execute the selected process step using the selected test case, the selected software test tool, and the selected parameter flow. | 11-28-2013 |
20130318403 | Integrated Circuit Including Clock Controlled Debugging Circuit and System-on-Chip Including the Same - An integrated circuit includes a processor core, a clock control circuit and a debugging circuit. The processor core processes target software. The clock control circuit determines whether an electrical connection exists between the processor core and an external debugger and generates a determination result. The clock control circuit generates an output clock signal based on the determination result. The external debugger performs a debugging operation for the target software. The output clock signal is selectively activated based on the determination result and an input clock signal. The debugging circuit provides information with respect to the debugging operation for the target software to the external debugger based on the output clock signal. | 11-28-2013 |
20130326278 | SERVER AND METHOD OF MANIPULATION IN RELATION TO SERVER SERIAL PORTS - A server in communication with a remote control device and a display device includes a super input/output (SIO) microchip, a basic input/output system (BIOS), and a baseboard management controller (BMC). The SIO microchip outputs debugging commands and IPMI commands. The BMC includes a setting module, receiving module, and a transmitting module. The setting module sets the BIOS to establish communication between the BMC and the SIO microchip. The receiving module receives the IPMI commands or the debugging commands to debug errors of firmware pre-stored in the BMC. The transmitting module outputs the errors of the firmware to the remote control device or the display device via the SIO microchip. | 12-05-2013 |
20130332777 | System And Method For Automatic Test Level Generation - A system and method for generating a specific level of software testing of algorithms and applications. A test plan, including input parameter values, expected output parameter values, and dataset size, is entered. The test plan is then executed, and results of the test are scored in accordance with predetermined software testing level definitions, yielding one of the predetermined possible testing levels achieved by the tested software. | 12-12-2013 |
20130339798 | METHODS FOR AUTOMATED SOFTWARE TESTING AND DEVICES THEREOF - Methods and devices for automated software testing. This includes identifying objects present in an application under test and identifying actions supported by the objects present in the application under test. Based on objects selected for testing, actions are also selected and some actions require input data to be received. Verification points, which are conditions for testing objects, are set. A test script is generated based on selected objects, actions and verification points. | 12-19-2013 |
20130346804 | METHODS FOR SIMULATING MESSAGE-ORIENTED SERVICES AND DEVICES THEREOF - A method, non-transitory computer readable medium, and apparatus that obtains a request message in a hierarchical format. A set of flat request records is generated based on the request message wherein each flat request record includes at least a key and a value. Each flat request record is compared to a set of criteria records to generate one or more response sets wherein each criteria record includes at least a key, a value, and a response identifier and each response set includes one or more response identifiers. One or more rules are applied to the one or more response sets to identify one or more response identifiers. One or more responses corresponding to the one or more identified response identifiers are optionally assembled and form at least part of an output. | 12-26-2013 |
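The flattening-and-matching scheme in the entry above — turn a hierarchical request into flat key/value records, then match those records against criteria records carrying response identifiers — can be sketched as follows. The dotted-path key convention and the function names are assumptions for illustration.

```python
def flatten(message, prefix=""):
    """Flatten a hierarchical request (here, nested dicts) into
    flat (key, value) records using dotted-path keys."""
    records = []
    for key, value in message.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            records.extend(flatten(value, path))
        else:
            records.append((path, value))
    return records

def match_responses(records, criteria):
    """criteria: iterable of (key, value, response_id) records.
    Return the response identifiers whose (key, value) pair appears
    among the flattened request records."""
    record_set = set(records)
    return {rid for key, value, rid in criteria if (key, value) in record_set}
```

The rule-application step of the abstract would then reduce the returned identifier set (for example, by priority) before assembling the simulated responses.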
20140013164 | FAULT-BASED SOFTWARE TESTING METHOD AND SYSTEM - A fault-based software testing method and system are provided. The fault-based software testing method includes: generating a plurality of error programs by injecting faults into a testing target program; grouping the generated error programs into a plurality of groups with respect to respective test data, and selecting representative error programs with respect to the respective groups; and when an error is detected in the execution result of the representative error programs with respect to the corresponding test data, determining that errors are detected in all the error programs of the corresponding group. | 01-09-2014 |
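The grouping step in the entry above can be sketched by treating each "error program" (mutant) as a callable and grouping mutants that produce identical outputs on the test data; only one representative per group is then executed in full, and its verdict is propagated to the whole group. This is a minimal sketch with hypothetical names, not the application's procedure.

```python
from collections import defaultdict

def group_mutants(mutants, test_data):
    """Group error programs that behave identically on the test data;
    return (representative, members) pairs, one per group."""
    groups = defaultdict(list)
    for mutant in mutants:
        # Mutants agreeing on every test input land in the same group.
        signature = tuple(mutant(x) for x in test_data)
        groups[signature].append(mutant)
    return [(members[0], members) for members in groups.values()]
```

If the representative's output differs from the original program's, every mutant in its group is marked as detected without re-running them, which is the cost saving the abstract claims.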
20140013165 | Method for System for Testing Websites - Methods and systems to test web browser enabled applications are disclosed. In one embodiment, a browser application can allow a user to perform test and analysis processes on a candidate web browser enabled application. The test enabled browser can use special functions and facilities that are built into the test enabled browser. One implementation of the invention pertains to functional testing, and another implementation of the invention pertains to site analysis. | 01-09-2014 |
20140019809 | REPRODUCTION SUPPORT APPARATUS, REPRODUCTION SUPPORT METHOD, AND COMPUTER PRODUCT - A reproduction support apparatus supports reproduction of an OS that is to be reproduced by a reproducing apparatus, and includes a processor configured to input a first identification data group that includes identification data identifying files of a file group constituting the OS to be reproduced; and a storage device storing a second identification data group that includes identification data identifying files of a file group constituting OSs of an OS group including the OS to be reproduced, and the files of the file group constituting the OSs of the OS group. The processor is further configured to retrieve from the second identification data group, identification data matching the first identification data group; extract from the file group stored in the storage device, a file identified by the retrieved identification data; and transmit to the reproducing apparatus, information concerning files among the extracted files, for reproducing the OS to be reproduced. | 01-16-2014 |
20140025997 | Test Selection - Computer-implemented method, computerized apparatus and a computer program product for test selection. The computer-implemented method comprising: obtaining a test suite comprising a plurality of tests for a Software Under Test (SUT); and selecting a subset of the test suite, wherein the subset provides coverage of the SUT that correlates to a coverage by a workload of the SUT, wherein the workload defines a set of input events to the SUT thereby defining portions of the SUT that are to be invoked during execution. | 01-23-2014 |
20140047278 | AUTOMATIC TESTING OF A COMPUTER SOFTWARE SYSTEM - The invention relates to a method of automatic testing of a software system through test driver code that classifies test data into equivalence classes and updates the available test data after using it against the software system. One embodiment of the invention is a Test Runner that monitors the effect of calling the software system on the available test data and uses this information to automatically determine the execution order of test cases to meet a number of objectives, including to: reuse data between calls, ensure all test cases are executed, perform parallelized testing, perform time dependent testing, perform continuous testing according to a probability distribution on test cases, perform automated management of complex test data and, finally, provide an easy and concise way for a user to define large sets of test cases. | 02-13-2014 |
20140068339 | Systems and Methods for State Based Test Case Generation for Software Validation - Systems and methods for state based test case generation for software validation are disclosed. One embodiment includes determining a first input and a first input type for a program block of vehicle software for creating a test case, wherein the first input type includes a state based input, determining permutations of values for the first input, based on the first input type, and running the test case with the state based input, wherein running the test case comprises applying the permutations of values for the first input to the program block. Some embodiments include determining, by a test computing device, whether the test case meets a predetermined level of modified condition/decision coverage (MC/DC) and providing an indication of whether the test case meets the predetermined level of MC/DC. | 03-06-2014 |
20140068340 | Method and System for Compliance Testing in a Cloud Storage Environment - The invention provides automated test suite for compliance testing of cloud storage server to a Cloud Data Management Interface (CDMI) by performing functional testing of CRUD (Create, Read, Update, and Delete) operations. It offers a solution containing test scripts for validating the response from CRUD operations performed on CDMI objects and checks for the cloud storage to be CDMI compliant. | 03-06-2014 |
20140075244 | APPLICATION MANAGEMENT SYSTEM, MANAGEMENT APPARATUS, APPLICATION EXECUTION TERMINAL, APPLICATION MANAGEMENT METHOD, APPLICATION EXECUTION TERMINAL CONTROL METHOD, AND STORAGE MEDIUM - A management apparatus, based on received error information of applications and information of applications installed in a terminal being managed by the management apparatus, determines a condition under which an application causes the error, and sends information indicating that the application satisfies the condition for causing the error to a terminal satisfying the error condition out of a plurality of terminals. An application execution terminal receives from the management apparatus the information indicating that the condition under which the application causes an error is satisfied, and inhibits the execution of the application causing the error by changing a display form of the corresponding application or by displaying a message at the time of the activation of the application. | 03-13-2014 |
20140075245 | APPARATUS AND METHOD FOR DETECTING LOCATION OF SOURCE CODE ERROR IN MIXED-MODE PROGRAM - An apparatus for detecting a source code error location in a mixed-mode program is disclosed. The apparatus may include a compiler, a mapping table generator, a simulator, a comparison data generator and an error location detector. The apparatus extracts low-level data while simulating a verification program and while simulating a reference program. The low-level data is mapped to mapping tables for a verification program and a reference program, and by comparing the tables it is determined if there is an error in the mixed-mode program and if so, where. | 03-13-2014 |
20140075246 | Methods and Articles of Manufacture for Hosting a Safety Critical Application on an Uncontrolled Data Processing Device - Methods and articles of manufacture for hosting a safety critical application on an uncontrolled data processing device are provided. Various combinations of installation, functional, host integrity, coexistence, interoperability, power management, and environment checks are performed at various times to determine if the safety critical application operates properly on the device. The operation of the SCA on the UDPD may be controlled accordingly. | 03-13-2014 |
20140082424 | ETL DEBUGGER - A computer-implemented ETL debugger for a data flow associated with an extract, transform and load (ETL) process that provides a user with a graphical representation of an ETL job. The graphical representation includes individualized representations of one or more data sources, one or more data destinations, and one or more transform operations for data flowing from a data source to a data destination. The user selects a subset of the transform operations. In response, the ETL debugger generates an execution script based on the received subset, and may initiate a debug process by executing the generated execution script. | 03-20-2014 |
20140082425 | Methods and Articles of Manufacture for Hosting a Safety Critical Application on an Uncontrolled Data Processing Device - Methods and articles of manufacture for hosting a safety critical application on an uncontrolled data processing device are provided. Various combinations of installation, functional, host integrity, coexistence, interoperability, power management, and environment checks are performed at various times to determine if the safety critical application operates properly on the device. The operation of the SCA on the UDPD may be controlled accordingly. | 03-20-2014 |
20140089738 | SYSTEM AND METHOD FOR IDENTIFYING SOURCE OF RUN-TIME EXECUTION FAILURE - The present disclosure relates to identifying the source of run-time execution failure and performing static analysis on the computer program without changing actual computer program code. In one embodiment, a method for performing static analysis on run-time execution failure is disclosed, comprising: identifying a point of interest in a computer program by statically analyzing the computer program, wherein the point of interest comprises one of: a variable or an expression; identifying previous assignments of the variable or the expression by performing static analysis depending on a value associated with the variable or the expression; modifying the value to a new value or modifying the expression to a new expression; modifying the computer program based upon the new value or the new expression to generate a modified computer program; and performing incremental static analysis on the modified computer program in order to identify a change in the computer program. | 03-27-2014 |
20140095936 | System and Method for Correct Execution of Software - In an embodiment of the invention an application provider may include “tracing elements” in a target software application. While working with the application the trace elements are detected and provide a “baseline trace” indicating proper application execution. The provider then supplies the application, which still includes the trace elements, and the baseline trace to a user. The user operates the application to produce a “real-time trace” based on the application still having trace elements that produce trace events. A comparator then compares the baseline and real-time traces. If the traces are within a pre-determined range of each other the user has a level of assurance the software is operating correctly. If the level of assurance is low, an embodiment may trigger a hardware interrupt or similar event to prevent further execution of software. Other embodiments are described herein. | 04-03-2014 |
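The comparison step in the abstract above, checking a real-time trace against a supplier-provided baseline trace within a pre-determined range, can be sketched minimally. Modeling a trace as a list of event names, and the tolerance as a mismatch count, are simplifying assumptions; event names are invented.

```python
def traces_match(baseline_trace, realtime_trace, tolerance=0):
    # A trace is modeled as a list of event names emitted by tracing
    # elements; execution is accepted when the number of positions that
    # differ is within the pre-determined tolerance.
    if len(baseline_trace) != len(realtime_trace):
        return False
    mismatches = sum(1 for b, r in zip(baseline_trace, realtime_trace)
                     if b != r)
    return mismatches <= tolerance

baseline = ["enter_main", "load_config", "exit_main"]
ok = traces_match(baseline, ["enter_main", "load_config", "exit_main"])
tampered = traces_match(baseline, ["enter_main", "patched_call", "exit_main"])
```

In the patent's terms, a `False` result would lower the assurance level and could trigger a hardware interrupt to halt further execution.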
20140095937 | LATENT DEFECT INDICATION - A method of determining test data for use in testing a software. The method includes determining that at least part of a software structure of the software to be tested is similar to, or the same as, a software structure associated with a defect. The method also includes retrieving information regarding operational circumstances for causing the defect in the software associated with the defect. The method further includes creating, based upon the retrieved information, test data for testing the software to be tested. | 04-03-2014 |
20140095938 | LATENT DEFECT IDENTIFICATION - A method of determining test data for use in testing software involves identifying software that is known to have one or more bugs and which has a similar structure to software under test before using knowledge of those one or more bugs to create test data for the software under test. | 04-03-2014 |
20140101488 | SYSTEM AND METHOD FOR APPLICATION DEBUGGING - A system includes a client system comprising a memory and a processor configured to execute a debugging tool. The debugging tool is communicatively coupled to an OPC Unified Architecture (UA) server. Furthermore, the debugging tool is configured to monitor and control, from the client system, debugging of an application executing on the OPC UA server. | 04-10-2014 |
20140108867 | Dynamic Taint Analysis of Multi-Threaded Programs - Disclosed is a dynamic taint analysis framework for multithreaded programs (DTAM) that identifies a subset of program inputs and shared memory accesses that are relevant for issues related to concurrency. Computer implemented methods according to the framework generally involve the computer implemented steps of: applying independently a dynamic taint analysis to each of the multiple threads comprising a multi-threaded computer program; aggregating each independent result from the analysis for each of the multiple threads by consolidating effect of taint analysis in one or more possible re-orderings of observed shared memory accesses among threads; and outputting an indicia of the aggregated result as a set of relevant program inputs or a set of relevant shared memory accesses. | 04-17-2014 |
20140115402 | METHOD AND SYSTEM FOR POSTPONED ERROR CODE CHECKS - According to some embodiments, a system and method include: determining a value for an error code for a program operation; determining whether the operation supports postponing a determination of an occurrence of an error for the operation; proceeding to evaluate a next operation in an instance in which the operation does support postponing the determination of an occurrence of an error for the operation; and checking the error code for the operation in an instance in which the operation does not support postponing the determination of an occurrence of an error for the operation. | 04-24-2014 |
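The control flow in this abstract can be sketched as a loop over operations: check the error code immediately for operations that do not support postponement, and skip ahead otherwise. The final deferred pass over postponed error codes is an assumption of this sketch, not spelled out in the abstract, and the operation triples are invented.

```python
def run_operations(operations):
    # operations: (name, error_code, supports_postpone) triples.  Error
    # checks for postponable operations are deferred to a final pass;
    # non-postponable operations are checked immediately.
    postponed = []
    for name, error_code, supports_postpone in operations:
        if supports_postpone:
            postponed.append((name, error_code))
        elif error_code != 0:
            return ("failed", name)
    for name, error_code in postponed:
        if error_code != 0:
            return ("failed", name)
    return ("ok", None)

# "read" reports error code 5 but supports postponement, so "close"
# still runs before the failure is surfaced.
result = run_operations([("open", 0, False), ("read", 5, True),
                         ("close", 0, False)])
```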
20140115403 | Method and System for Software System Performance Diagnosis with Kernel Event Feature Guidance - A method includes generating a normal trace in a training stage for the monitored software systems and a monitored trace in the deployment stage for anomaly detection; applying resource transfer functions to traces to convert them to resource features, and system call categorization to traces to convert them to program behavior features; performing anomaly detection in a global scope using the derived resource features and program behavior features; in case the system finds no anomaly, generating no anomaly report; in case an anomaly is found, including the result in an anomaly report; and performing conditional anomaly detection. | 04-24-2014 |
20140122935 | Diagnosing a Problem of a Software Product Running in a Cloud Environment - The present invention provides a method for diagnosing a problem of a software product running in a cloud environment and a corresponding apparatus, the method comprising: receiving a problem in the operation of the monitored software product from a diagnosis agent deployed on a node in the cloud environment; and capturing the cloud environment including the software product and deploying the captured cloud environment in a diagnosis cloud, wherein this step comprises: deploying the image of each node of the cloud environment in the diagnosis cloud; and applying corresponding configuration data for a cluster system deployed in the cloud environment to configure each node in the diagnosis cloud. The method and apparatus of the present invention can diagnose problems of a software product running in a cloud environment, and rebuild the cloud environment to facilitate the diagnosis of the problems. | 05-01-2014 |
20140122936 | AUTOMATED TOP DOWN PROCESS TO MINIMIZE TEST CONFIGURATIONS FOR MULTI-FEATURE PRODUCTS - Systems and methods of conducting interoperability assessments provide for generating a feature interoperability matrix based on feature data and interoperability data, wherein the feature data defines a plurality of features of a product and the interoperability data indicates levels of interoperability of the plurality of features. A validation set can be generated based on the feature interoperability matrix, wherein the validation set includes a plurality of feature combinations. A subfeature interoperability matrix can be used to convert the validation set into a test plan for the product, wherein the test plan minimizes test configurations for the product. | 05-01-2014 |
20140129878 | INDICATING COVERAGE OF WEB APPLICATION TESTING - Testing a system under test includes intercepting, within a proxy system, a request from a client system sent to the system under test. The request is analyzed within the proxy system and sent to the system under test. Within the proxy system, a response from the system under test sent to the client system is intercepted. The response is instrumented creating a modified response indicating test coverage according to the request. The modified response is sent to the client system. | 05-08-2014 |
20140129879 | SELECTION APPARATUS, METHOD OF SELECTING, AND COMPUTER-READABLE RECORDING MEDIUM - A selection apparatus selects the more advantageous software testing approach from automated testing and manual testing. The selection apparatus includes an estimator to estimate man-hours for writing and modifying test codes for the automated testing, to estimate man-hours for preparing and modifying written procedures for the manual testing and for performing the manual testing, and to select the advantageous software testing based on a comparison of the estimated man-hours for the automated testing with the estimated man-hours for the manual testing, and a presenter to present the advantageous software testing. | 05-08-2014 |
20140136901 | PROACTIVE RISK ANALYSIS AND GOVERNANCE OF UPGRADE PROCESS - An incompatible software level of an information technology infrastructure component is determined by comparing collected inventory information to a minimum recommended software level. If a knowledge base search finds that the incompatible software level is associated with a prior infrastructure outage event, an outage count score is determined for the incompatible software level by applying an outage rule to a historic count of outages caused by a similar incompatible software level, and combined with an average outage severity score assigned to the incompatible software level based on a level of severity of an actual historic failure of the component within a context of the infrastructure to generate a normalized historical affinity risk score. The normalized historical affinity risk score is provided for prioritizing the correction of the incompatible software level in the context of other normalized historical risk level scores of other determined incompatible software levels. | 05-15-2014 |
20140143603 | PROGRESSIVE VALIDATION CHECK DISABLING BASED UPON VALIDATION RESULTS - Execution statistics are gathered that represent results of execution of a validation check that evaluates code performance within an executing application. A determination is made as to whether the gathered execution statistics for the execution of the validation check match configured criteria to disable the validation check. The validation check is programmatically disabled in response to determining that the gathered execution statistics for the execution of the validation check match the configured criteria to disable the validation check. | 05-22-2014 |
20140143604 | MIXED NUMERIC AND STRING CONSTRAINT ANALYSIS - A method of determining whether a set of constraints is satisfiable may include identifying a set of constraints associated with a software module. The method may also include modeling a string associated with a string constraint of the set of constraints as a parameterized array. Further, the method may include determining the satisfiability of the set of constraints based on a representation of the string constraint as a quantified expression. The satisfiability of the set of constraints may also be based on elimination of a quantifier associated with the quantified expression such that the string constraint is represented as a numeric constraint. The representation of the string constraint as a quantified expression may be based on the parameterized array that is associated with the string. | 05-22-2014 |
20140143605 | SYSTEM AND METHOD FOR VALIDATING CONFIGURATION SETTINGS - The present disclosure relates to a system and method for providing a validation tool to automate validation of the configuration settings of computing devices, and their interaction, for an enterprise application over a network. Also, the present disclosure validates the configurations of the enterprise application which is deployed and executed over the computing devices. Further, the present disclosure provides a method for verifying the configuration settings and applying the required configuration settings across the computing devices, if the existing configuration settings of said computing devices are not verified. Upon verifying and/or applying the configuration settings, said validation tool is configured to generate a compliance report and further send said generated report to the intended users of a group. | 05-22-2014 |
20140143606 | Web Page Error Reporting - An error in a web page displayed on a device is detected. The error is assigned to a bucket to indicate a type of the error, and a record describing the current state of the device is generated. Both an indication of the bucket and the record describing the current state of the device are then sent to a server. At the server, error information including error records and bucket identifiers are received from multiple devices. Each error record describes a current state of one of the multiple devices at a time when an error in a web page displayed on the one device was detected. Each bucket identifier corresponds to one of the error records and describes a type of the error associated with that error record. The error records are grouped into multiple baskets based at least in part on the current state information in the error records. | 05-22-2014 |
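The two-level grouping this abstract describes, bucketing by error type on the device, then grouping records into baskets on the server by device state, can be sketched as follows. The state fields (`browser`, `os`) and bucket names are hypothetical, chosen only to make the example concrete.

```python
def group_into_baskets(reports):
    # reports: (bucket_id, state_record) pairs, where bucket_id names the
    # error type assigned on the device and state_record captures the
    # device state when the web-page error was detected.  Baskets are
    # keyed by the state information in the records.
    baskets = {}
    for bucket_id, record in reports:
        key = (record["browser"], record["os"])
        baskets.setdefault(key, []).append(bucket_id)
    return baskets

reports = [
    ("script_error", {"browser": "B1", "os": "O1"}),
    ("layout_error", {"browser": "B1", "os": "O1"}),
    ("script_error", {"browser": "B2", "os": "O1"}),
]
baskets = group_into_baskets(reports)
```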
20140149797 | DYNAMIC CONCOLIC EXECUTION OF AN APPLICATION - Dynamic concolic execution of an application. A first hypothesis pertaining to the nature of test payloads that satisfy a specified property, and that are expected to satisfy a condition tested by the application's program code, can be generated. A plurality of first test payloads to test the first hypothesis can be synthesized and submitted to the application during respective executions of the application. Whether each of the first test payloads actually satisfies the condition tested by the application's program code can be determined. When at least one of the first test payloads does not actually satisfy the condition tested by the application's program code, a second hypothesis that is expected to satisfy the condition tested by the application's program code can be generated. A plurality of second test payloads to test the second hypothesis can be synthesized and submitted to the application during respective executions of the application. | 05-29-2014 |
20140149798 | DYNAMIC CONCOLIC EXECUTION OF AN APPLICATION - Dynamic concolic execution of an application. A first hypothesis pertaining to the nature of test payloads that satisfy a specified property, and that are expected to satisfy a condition tested by the application's program code, can be generated. A plurality of first test payloads to test the first hypothesis can be synthesized and submitted to the application during respective executions of the application. Whether each of the first test payloads actually satisfies the condition tested by the application's program code can be determined. When at least one of the first test payloads does not actually satisfy the condition tested by the application's program code, a second hypothesis that is expected to satisfy the condition tested by the application's program code can be generated. A plurality of second test payloads to test the second hypothesis can be synthesized and submitted to the application during respective executions of the application. | 05-29-2014 |
20140157057 | CODE-FREE TESTING FRAMEWORK - A method, system and computer program product for testing testable code of an application comprises sending a request, from a computer-implemented client to a remote test orchestrator, for a list identifying any test agents registered with the remote test orchestrator; and receiving, by the computer-implemented client from the remote test orchestrator, a list identifying the test agent registered with the remote test orchestrator. | 06-05-2014 |
20140157058 | IDENTIFYING SOFTWARE RESPONSIBLE FOR A CHANGE IN SYSTEM STABILITY - A computer-implemented method detects a stability change in a first computer system, and compares a first set of software applications installed on the first computer system to each set of software applications installed on a plurality of other computer systems. The method then identifies a second computer system from among the plurality of other computer systems, wherein the set of software applications installed on the second computer system includes all of the first set of software applications except for a given software application. The given software application is then identified as the cause of the stability change in the first computer system. The computer systems are preferably virtual machines being managed by a management module, such as a provisioning manager. The method may be used to detect both increases in stability and instability. | 06-05-2014 |
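The identification step in this abstract reduces to a set comparison: find a peer system whose installed-application set equals the changed system's set minus exactly one application, and blame that application. A minimal sketch, with invented application names:

```python
def identify_culprit(changed_system, other_systems):
    # changed_system: apps installed on the system whose stability changed.
    # Look for a peer system whose application set equals the changed
    # system's set minus exactly one application; that application is
    # identified as the cause of the stability change.
    changed = set(changed_system)
    for peer_apps in other_systems:
        peer = set(peer_apps)
        missing = changed - peer
        if peer <= changed and len(missing) == 1:
            return missing.pop()
    return None

culprit = identify_culprit(
    ["hypervisor_tools", "agent", "new_driver"],
    [["hypervisor_tools", "agent", "new_driver"],   # identical peer: no clue
     ["hypervisor_tools", "agent"]],                # differs by one app
)
```

As the abstract notes, the same comparison works whether stability increased or decreased; only the interpretation of the culprit changes.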
20140164841 | ROLE-ORIENTED TESTBED ENVIRONMENTS FOR USE IN TEST AUTOMATION - In managing testing on a testbed environment a test automator executes an operation specified in a test script to be performed on a testbed environment, wherein the operation refers to a particular role identifier identifying one of a plurality of roles hosted within the testbed environment by at least one host in the testbed environment, wherein the operation does not refer to any of the at least one host. The test automator performs the operation on a particular host of the at least one host of the testbed environment using at least one value from a host description file for calling the particular host assigned to the particular role identifier in a configuration file. | 06-12-2014 |
20140164842 | ROLE-ORIENTED TESTBED ENVIRONMENTS FOR USE IN TEST AUTOMATION - In managing testing on a testbed environment a test automator executes an operation specified in a test script to be performed on a testbed environment, wherein the operation refers to a particular role identifier identifying one of a plurality of roles hosted within the testbed environment by at least one host in the testbed environment, wherein the operation does not refer to any of the at least one host. The test automator performs the operation on a particular host of the at least one host of the testbed environment using at least one value from a host description file for calling the particular host assigned to the particular role identifier in a configuration file. | 06-12-2014 |
20140164843 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DEBUGGING AN ASSERTION - In accordance with embodiments, there are provided mechanisms and methods for debugging an assertion. These mechanisms and methods for debugging an assertion can enable improved interpretation and analysis of data validation results, more efficient development associated with data validation, etc. | 06-12-2014 |
20140173354 | Software Installation Method, Apparatus and Program Product - A software preload arrangement uses a central server to store the software repository(ries) for various computer instruction files offered for preload into a system being manufactured. To execute the preload, a client workstation is used to execute the actual preload steps for a system under test (SUT). When the SUT needs a given piece of the software release, data is moved down to the client from the server and cached there for delivery to the system under test. In accordance with an important characteristic of this invention, the caching is predictive. That is, data is held in or moved to the client workstation based upon recent activity, so that the time needed to prepare a preload for a system under test is shortened. | 06-19-2014 |
20140181590 | AUTOMATED END-TO-END TESTING VIA MULTIPLE TEST TOOLS - The development of automated tests that span end-to-end business processes, such as may be executed in part by each of multiple Enterprise Resource Planning systems, is a very complex activity. Beside expert know-how of various tools, such end-to-end business process testing requires various test automation tools to cover complex business processes to provide automated tests. Various embodiments herein are built on an approach for building and connecting automated end-to-end tests that combines test scripts from multiple test tools. These embodiments include functionality to assemble test scripts from multiple test tools into a single, composite test script that allows passing of information between the test scripts during performance of an end-to-end automated process test. These and other embodiments are illustrated and described herein. | 06-26-2014 |
20140181591 | TEST STRATEGY FOR PROFILE-GUIDED CODE EXECUTION OPTIMIZERS - Systems, methods and computer program products are described herein for testing a system that is designed to optimize the execution of code within an application or other computer program based on profile data collected during the execution of such code. The embodiments described herein utilize what is referred to as a “profile data mutator” to mutate or modify the profile data between the point when it is collected and the point when it is used to apply an optimization. By mutating the profile data at this point, testing of a system for optimized code execution can be significantly more thorough. Furthermore, such profile data mutation leads to a more scalable and efficient testing technique for profile-guided systems for optimized code execution. | 06-26-2014 |
20140181592 | DIAGNOSTICS OF DECLARATIVE SOURCE ELEMENTS - A method for diagnosing declarative source elements in an application, such as in debugging markup source elements or visual elements in an application, is disclosed. Diagnosis information is associated with an object source of a visual element. The diagnosis information is provided for the visual element during the runtime of the application. | 06-26-2014 |
20140195857 | FRAMEWORK FOR A SOFTWARE ERROR INJECT TOOL - Provided are techniques for receiving an error inject script that describes one or more error inject scenarios that define under which conditions at least one error inject is to be executed and compiling the error inject script to output an error inject data structure. While executing code that includes the error inject, an indication that an event has been triggered is received, conditions defined in the one or more error inject scenarios are evaluated using the error inject data structure, and, for each of the conditions that evaluates to true, one or more actions defined in the error inject script for the condition are performed. | 07-10-2014 |
20140201573 | DEFECT ANALYSIS SYSTEM FOR ERROR IMPACT REDUCTION - An apparatus includes a network interface, memory, and a processor. The processor is coupled with the network interface and memory. The processor is configured to analyze a first set of data associated with a plurality of data sources. Analyzing the first set of data associated with the plurality of data sources determines a plurality of relationships among the first set of data. The processor is configured to store indications of the plurality of relationships among the first set of data. An indication of a relationship indicates a possible software defect. The processor is configured to generate rules based, at least in part, on the first set of data associated with a plurality of data sources. A rule indicates a possible software defect. | 07-17-2014 |
20140215275 | CONTROL SYSTEM TO IDENTIFY FAULTY CODE MODULES - The present disclosure is directed to a control system for a machine. The control system has an electronic module containing at least one programmable controller. The at least one programmable controller stores a plurality of code modules, and is configured to identify from the plurality of code modules a module that contains a code fault. The at least one programmable controller identifies the code fault by executing the code module, writing a code execution status to a designated memory location on the electronic module, and identifying, based on the code module execution status, the code module that contains the code fault. | 07-31-2014 |
20140215276 | METHODS, MEDIA, AND SYSTEMS FOR DETECTING ANOMALOUS PROGRAM EXECUTIONS - Methods, media, and systems for detecting anomalous program executions are provided. In some embodiments, methods for detecting anomalous program executions are provided, comprising: executing at least a part of a program in an emulator; comparing a function call made in the emulator to a model of function calls for the at least a part of the program; and identifying the function call as anomalous based on the comparison. In some embodiments, methods for detecting anomalous program executions are provided, comprising: modifying a program to include indicators of program-level function calls being made during execution of the program; comparing at least one of the indicators of program-level function calls made in the emulator to a model of function calls for the at least a part of the program; and identifying a function call corresponding to the at least one of the indicators as anomalous based on the comparison. | 07-31-2014 |
20140223238 | CODEPATH INTEGRITY CHECKING - A method and apparatus for testing code is provided. The method includes inserting at least one token in program code, wherein each token comprises a code element able to provide a value during runtime, establishing a baseline code version and an executing code version from the program code, and subjecting the executing code version to various testing conditions using a processing device. Subjecting the executing code version to various testing conditions comprises periodically evaluating at least one executing code token having one associated executing error detection value against the at least one baseline code token having one associated baseline error detection value and reporting an error when at least one executing code token and associated executing error detection pair fails to match at least one baseline code token and associated baseline error detection pair. | 08-07-2014 |
20140237294 | INFORMATION PROCESSING APPARATUS AND INSTALLATION METHOD - The installation of multiple applications by an installer is executed in a mode that does not display an error message in a display device. Upon an installation performed by the installer ending, the result of the installation performed by the installer is determined. As a result of the determination, an installer that failed at the installation is caused to re-execute the installation of the application whose installation failed in a mode that displays an error message in the display device. As a result of the re-execution, an error message is displayed in the display device by the installer that failed at the installation. | 08-21-2014 |
20140237295 | SYSTEM AND METHOD FOR AUTOMATING TESTING OF COMPUTERS - An application under test may be run in a test mode that receives a series of test scenarios and produces a set of test results under the control of a verification application. The verification application utilizes “typed-data” (i.e., data having known types that are associated with the data itself, e.g., XML-based data) such that a number of parameters can be set for each event and a number of result parameters can be checked for each result in at least one script. A series of scripts can be combined into an action file that may invoke scripts and override parameters within the invoked scripts. The events can be sent and received using a number of messaging protocols and communications adapters. | 08-21-2014 |
20140245067 | USING LINKED DATA TO DETERMINE PACKAGE QUALITY - Arrangements described herein relate to determining a quality of a software package. Via linked data, the software package can be linked to at least one test plan and a requirement collection. The software package can be executed in accordance with the test plan using at least one test case. At least one test result of the execution of the software package can be generated. A score can be assigned to the test result and a score can be assigned to the test case based at least on the test result. Based at least on the scores assigned to the test result and the test case, a package quality score can be assigned to the software package. | 08-28-2014 |
20140245068 | USING LINKED DATA TO DETERMINE PACKAGE QUALITY - Arrangements described herein relate to determining a quality of a software package. Via linked data, the software package can be linked to at least one test plan and a requirement collection. The software package can be executed in accordance with the test plan using at least one test case. At least one test result of the execution of the software package can be generated. A score can be assigned to the test result and a score can be assigned to the test case based at least on the test result. Based at least on the scores assigned to the test result and the test case, a package quality score can be assigned to the software package. | 08-28-2014 |
20140245069 | MANAGING SOFTWARE PERFORMANCE TESTS BASED ON A DISTRIBUTED VIRTUAL MACHINE SYSTEM - Managing software performance debugging based on a distributed VM system is provided. In response to determining a debugging state of a software system running on a VM, a timing of a system clock of the VM is controlled. A data packet sent to the VM from another VM is intercepted, and an added system time and reference time that indicate when the packet was sent by the other VM is extracted from the packet. Based on the system and reference times, as well as a reference time of when the packet is intercepted, a timing at which the packet is expected to be received by the VM is calculated. The packet is forwarded to the VM as a function of a comparison of the timing at which the packet is expected to be received and a system time of the VM when the packet is intercepted. | 08-28-2014 |
20140250336 | Machine and Methods for Evaluating Failing Software Programs - A machine for evaluating failing software programs, a non-transitory computer-readable storage medium with an error analysis program stored thereon and an error analysis program executed by a microprocessor are disclosed. In one embodiment a machine for investigating an error source in a software program includes a microprocessor coupled to a memory, wherein the microprocessor is programmed to determine whether a failure of an error-prone program step occurs reproducibly by providing the software program with the error-prone program step, executing program steps preceding the error-prone program step, executing the error-prone program step a number of times and calculating a failure probability for the error-prone program step. | 09-04-2014 |
20140258783 | SOFTWARE TESTING USING STATISTICAL ERROR INJECTION - Methods, apparatus and computer program products implement embodiments of the present invention that enable a device such as a disk drive, to receive a configuration message including an error in implementing an operation on the device and a statistical frequency of an occurrence of the error. Upon configuration, the device can receive multiple requests for the operation, and at the statistical frequency, respond to a given one of the requests with the error. In some embodiments the device may convey an error message indicating an occurrence of the error. Alternatively, the device may fail to complete the operation, delay in completing the operation or perform the operation incorrectly. | 09-11-2014 |
20140258784 | Machine and Methods for Reassign Positions of a Software Program Based on a Fail/Pass Performance - A machine and methods for reassigning the execution order of program steps of a multi-step test program are disclosed. In an embodiment a machine for evaluating an error in a software program includes a microprocessor coupled to a memory, wherein the microprocessor is programmed to evaluate the error by (a) providing program steps of the software program, (b) assigning a position number to each program step, (c) performing an evaluation run on the program steps, (d) evaluating a performance of each program step, (e) rearranging the position number of each program step based on the performance of each program step, and (f) repeating steps (c)-(e). | 09-11-2014 |
20140281730 | DEBUGGING SESSION HANDOVER - A method includes, during operation of a software debugging tool on a software program, and upon indication by a first user of the software debugging tool of a step of the operation as an event of interest, collecting data related to that event of interest. A unique identifier is assigned to the collected data. Access to the collected data is enabled for a second user of the software debugging tool. | 09-18-2014 |
20140281731 | MANAGED RUNTIME ENABLING CONDITION PERCOLATION - A method, apparatus, and/or computer program product protects a managed runtime from stack corruption due to native code condition handling. A native condition handler, which is associated with a managed runtime, percolates a condition. A condition handler of the managed runtime receives notification of the condition in a native code portion, and the condition handler of the managed runtime marks a thread associated with the condition. Responsive to a determination by the native code handler to resume execution of the marked thread by either a call back into or a return to the managed runtime, the managed runtime determines whether a request is associated with the marked thread. Responsive to a determination that the request is associated with the marked thread, the managed runtime performs diagnostics and the managed runtime is terminated. | 09-18-2014 |
20140281732 | AUTOMATED UPDATE TESTING AND DEPLOYMENT - Systems and methods for testing and deploying an update are provided. A first server can execute a current version of an application in a production environment. A client communication from a client to the first server can be identified. The client communication can be transmitted to a second server in the production environment. The second server can be executing an updated version of the application. A first response to the client communication from the first server and a second response to the client communication from the second server can be received. The first response from the first server can be compared with the second response from the second server to determine whether the updated version of the application is compatible with the production environment. | 09-18-2014 |
20140281733 | PARALLEL SOFTWARE TESTING - A system of testing software is provided. The system comprises a first hardware system having hardware components to execute a first version of the software, and additionally comprises a second hardware system having hardware components to execute a second version of the software. Here, the first version of the software and the second version are different. In addition, the system includes a device configured to test the first hardware system and the second hardware system by providing first input data traffic to the first hardware system, providing second input data traffic to the second hardware system, and accessing performance values from the first hardware system and the second hardware system to evaluate a performance comparison between the first hardware system executing the first version of the software and the second hardware system executing the second version of the software. | 09-18-2014 |
20140289563 | AUTOMATIC CORRECTION OF APPLICATION BASED ON RUNTIME BEHAVIOR - A method and associated system for automatically correcting an application based on runtime behavior of the application. An incident indicates a performance of the application in which a problem object produces an outcome that had not been expected by a user or by a ticketing tool. An incident flow for the problem object is automatically analyzed. Actual run of the application renders a forward data flow and at least one backward data flow is simulated from an expected outcome of the problem object. The forward data flow and the backward data flow(s) are compared to create a candidate fault list for the problem object. A technical specification to correct the candidate fault list and a solution to replace the application are subsequently devised. | 09-25-2014 |
20140289564 | SYSTEM AND METHOD FOR INJECTING FAULTS INTO CODE FOR TESTING THEREOF - Probes are employed to inject errors into code. In response to a function-entry trigger event, a probe writes a predefined test value to a return value register. The probe then causes function execution to be skipped such that the test value is returned in lieu of the value which would otherwise be returned by the function. Behavior after the error is injected may then be observed, data collected, etc. such that undesired behavior (e.g., crashes) can be identified and/or corrected. In an alternative embodiment, the probe which is triggered may write a test value to a given memory address. | 09-25-2014 |
20140289565 | Process and System for Verifying Computer Program on a Smart Card - According to an aspect of the invention, a process for verifying a computer program on a smart card is conceived, the process comprising: identifying, within said computer program, one or more instruction sequences that have a single start point and one or more end points in the program flow; identifying, in each instruction sequence, one or more basic blocks that have a single start point and a single end point in the program flow; and verifying the instruction sequences by verifying each basic block identified in said instruction sequences. | 09-25-2014 |
20140289566 | Item-Level Restoration and Verification of Image Level Backups - Systems and methods for item-level restoration from and verification of an image level backup without fully extracting it. The method receives backup parameters and selection of an image level backup to restore or verify and initializes virtual storage. The method attaches the virtual storage to a hypervisor to launch a virtual machine (VM) to test and restore data objects. The method stores VM virtual disk data changes resulting from restoration and verification in a changes storage. The method optionally reconfigures VMs to use an isolated network. The method optionally uses a routing appliance to provide access to VMs running in the isolated network from a production network. The method determines if the VM operating system (OS) is able to start using restored copies of selected data objects and tests applications associated with selected data objects. The method displays restoration and test results in an interface and automatically delivers the results. | 09-25-2014 |
20140289567 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR ERROR CODE INJECTION - In various embodiments, a method, system, and computer program product for injecting error code include logic and/or program instructions configured for determining critical points in executing code of software under test, building a testcase to invoke the software under test, determining an appropriate response action for each critical point based on an error encountered at each critical point, injecting a critical point segment into the executing code at a corresponding critical point, and outputting a unique identifier of each critical point segment, the testcase being configured to issue commands, with each command limiting which of one or more critical points remains active based on one of: a number of times the one or more critical points have been accessed in the executing code, a number of times a critical point has been skipped, and an amount of times a critical point has been accessed versus skipped. | 09-25-2014 |
20140298104 | Method for operating an IT system, and IT system - An IT system includes at least one first processing unit and one second processing unit. The first and second processing units jointly execute an application program and are each associated with an installation routine designed to control updating of a first or second program part of the application program. A first actual state is associated with the first processing unit and a second actual state is associated with the second processing unit. After system reboot, or as soon as the first and second program part have been successfully stored, or an error is detected when storing the first and/or second program part, predefined processing steps are respectively carried out in a predefined order by the first processing unit and the second processing unit depending on the actual state of the first processing unit and the actual state of the second processing unit. | 10-02-2014 |
20140304551 | PROGRAM ANALYSIS SUPPORTING DEVICE AND CONTROL DEVICE - A program analysis supporting device includes an analysis-condition-setting operation unit, a variable-dependency-relation extracting unit, and a variable-dependency-relation-display processing unit, in which the analysis-condition-setting operation unit sets a first condition related to a device for which a further forward or backward device dependency relation is not extracted or a second condition related to a device for which a further forward or backward device dependency relation is extracted, the variable-dependency-relation extracting unit extracts a forward or backward device dependency relation from the ladder program starting from the set start point so as not to extract a further forward or backward device dependency relation concerning a device matching the first condition and to extract a further forward or backward device dependency relation concerning a device matching the second condition and generates a first extraction result, and the variable-dependency-relation-display processing unit displays a device dependency relation according to the first extraction result. | 10-09-2014 |
20140304552 | DRIVE CONTROL DEVICE - A drive control device includes: an embedded microcontroller including a program for outputting a drive control signal to a driving unit; a first timer circuit for outputting a cyclic signal to the embedded microcontroller, wherein the embedded microcontroller reads the cyclic signal outputted from the first timer circuit and transmits the cyclic signal to output a transmission signal as part of operation of the program; and a second timer circuit provided externally to the embedded microcontroller, wherein the transmission signal is inputted to the second timer circuit, the second timer circuit obtains temporal change of the transmission signal for a time set in advance, and the second timer circuit outputs, based on the obtained result, a signal indicating one of different operation states of the embedded microcontroller depending on whether or not there is continuous temporal change of the transmission signal. | 10-09-2014 |
20140310560 | METHOD AND APPARATUS FOR MODULE REPAIR IN SOFTWARE - The present application relates to a method and apparatus for module repair in software. In the method, when a module in the software has an error, correct content corresponding to the erroneous content is obtained by way of accessing a web page address; then the correct content obtained is directly loaded into a system memory and the corresponding correct content is invoked directly from the memory when the module is used. The method of the present application results in the software possessing a self-repairing function and self-detection function, and can be applied in any software device. | 10-16-2014 |
20140310561 | DYNAMIC FUNCTION-LEVEL HARDWARE PERFORMANCE PROFILING FOR APPLICATION PERFORMANCE ANALYSIS - The invention is directed to a computer implemented method and a system that implements an application performance profiler with hardware performance event information. The profiler provides dynamic tracing of application programs, and offers fine-grained hardware performance event profiling at function levels. To control the perturbation on target applications, the profiler also includes a control mechanism to constrain the function profiling overhead within a budget configured by users. | 10-16-2014 |
20140310562 | Method and Device For Signing Program Crash - A method and a device for signing a program crash are disclosed, which are applied to the field of communication technology. The device for signing the program crash firstly acquires stack information invoked when the program crash occurs during executing a process of an application program by a computer system, then acquires first stack information corresponding to the process of the application program from the acquired stack information, and signs the occurred program crash based on the first stack information. | 10-16-2014 |
20140310563 | COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR TESTING ONLINE SYSTEMS AND CONTENT - Computer-implemented methods and systems are provided for scanning web sites and/or parsing web content, including for testing online opt-out systems and/or cookies used by online systems. In accordance with one implementation, a computer-implemented method is provided for testing an opt-out system associated with at least one advertising system that uses cookies. The method includes transmitting a first request to an opt-out system, wherein the first request corresponds to a first test for testing at least one of the opt-out system and an advertising system; receiving a first stream sent in response to the first request; determining a first outcome of the first test based on the first stream; and generating a report based on the first outcome. | 10-16-2014 |
20140317450 | PRETEST SETUP PLANNING - A computer-implemented method comprising: obtaining a description of a test suite which comprises a plurality of tests, wherein each test of the test suite is described by values of functional attributes, wherein at least a portion of the functional attributes are setup-related attributes, wherein a combination of values of the setup-related attributes potentially indicates a setup activity to be performed prior to executing the test to set up a test environment for the test. Identifying, based on the description of the test suite, a setup activity that is associated with two or more tests, wherein the setup activity is configured to set up a component of the test environment, wherein the identifying is performed by a processor. Providing a first instruction to perform the setup activity prior to executing a first test of the two or more tests. And, providing a second instruction to reuse the component of the test environment when executing additional tests of the two or more tests, thereby avoiding performing duplicate setup activities. | 10-23-2014 |
20140317451 | AUTOMATICALLY ALLOCATING CLIENTS FOR SOFTWARE PROGRAM TESTING - Techniques are described herein that are capable of automatically allocating clients for testing a software program. For instance, a number of the clients that are to be allocated for the testing may be determined based on a workload that is to be imposed by the clients during execution of the testing. For example, the number of the clients may be a minimum number of the clients that is capable of accommodating the workload. In accordance with this example, the minimum number of the clients may be allocated in a targeted environment so that the test may be performed on those clients. Additional clients may be allocated along with the minimum number of the clients in the targeted environment to accommodate excess workload. | 10-23-2014 |
20140317452 | ERROR DETECTING APPARATUS, PROGRAM AND METHOD - An error detecting apparatus includes: an execution history storing unit to store a first execution history which is an execution history of a first program, and a second execution history which is an execution history of a second program changed from said first program; a flow comparing means for extracting positions, which are determined on the basis of positions at which a difference between number of executions of a predetermined instruction to number of executions of said first program, and number of executions of an instruction corresponding to said predetermined instruction to number of executions of said second program is not smaller than a predetermined value, from said first execution history and said second execution history; and a cause position restricting means for outputting cause position information which indicates a preceding position from said position of said second program. | 10-23-2014 |
20140325281 | TESTING APPARATUS AND TESTING METHOD - A testing apparatus for evaluating operations by software installed in a mobile terminal includes a scenario setting unit for setting a scenario including operation information for executing functions that are performed by the mobile terminal, an operation determining unit for determining whether an operation indicated by the operation information included in the scenario is influenced by a device installed in the mobile terminal, an operation target converting unit for converting the operation determined to be influenced by the device installed in the mobile terminal according to the device installed in the mobile terminal, a scenario executing unit for executing the scenario set by the scenario setting unit according to the operation converted by the operation target converting unit, and a scenario execution result determining unit for determining whether an execution result of the scenario executed by the scenario execution unit is the same as a result assumed in advance. | 10-30-2014 |
20140325282 | ANDROID AUTOMATED CROSS-APPLICATION TESTING DEVICE AND METHOD - The present disclosure provides an Android automated cross-application testing device and method. The device comprises a primary application testing unit, a trigger monitoring unit, a secondary application testing control unit, a secondary application testing unit, a secondary application test result recording unit, a library file storage unit, a primary application test checking unit, a test result processing unit, and a test result output unit. The method comprises monitoring the starting of a secondary application in the primary application testing process; testing the secondary application and collecting and processing the test result of the secondary application; and continuing the test of the primary application. If the test of the secondary application is successful, the above steps are repeated until the test of the primary application is completed. The method further comprises terminating the test of the primary application if the test of the secondary application times out or fails. | 10-30-2014 |
20140337672 | REDUCING FALSE-POSITIVE ERRORS IN A SOFTWARE CHANGE-IMPACT ANALYSIS - A method and associated systems for reducing false-positive errors in a software change-impact analysis of a basepoint variable. A processor of a computer system identifies a first generation of change-affected parts of one or more computer programs, where each identified part is affected by a change to the basepoint variable. The processor confirms the identification of each identified part by analyzing one or more characteristics of the basepoint variable and of the identified part. If the processor confirms that an identification is the product of a false-positive error, the falsely identified part is discarded. The processor then identifies a second generation of confirmed change-affected parts by repeating the procedure performed on the basepoint variable on each confirmed part of the first generation of parts. The processor continues this iterative process through additional generations until it identifies a generation that contains no confirmed change-affected parts. | 11-13-2014 |
20140337673 | CREATING TEST TEMPLATES BASED ON STEPS IN EXISTING TESTS - Example embodiments relate to creating test templates based on steps in existing tests. In example embodiments, a testing computing device may select an existing test from multiple existing tests. Each existing test of the multiple existing tests may include a set of distinct steps. The testing computing device may determine a sub-sequence related to the selected existing test. The sub-sequence may be based on a subset of the distinct steps included in the selected existing test. The testing computing device may determine that the distinct steps in the subset occur in a number of existing tests from the multiple existing tests, and may generate a test template using the sub-sequence. The test template may include the distinct steps in the subset. | 11-13-2014 |
20140344625 | DEBUGGING FRAMEWORK FOR DISTRIBUTED ETL PROCESS WITH MULTI-LANGUAGE SUPPORT - In various embodiments, a data integration system is disclosed which enables users to debug distributed data integration scenarios which are platform and technology independent. A debugger client can connect to a plurality of local and/or remote hosts executing portions of a distributed data integration scenario. The debugger client can additionally enable line-by-line debugging of the portions of the distributed data integration scenario using a plurality of language-specific interfaces. The language-specific interfaces can further enable the user to dynamically update and debug changes to the code during debugging, reducing the time and resources required by multiple recompilations of the code. | 11-20-2014 |
20140351650 | SYSTEM AND METHOD FOR CLUSTER DEBUGGING - A system and method of cluster debugging includes detecting debug events occurring in one or more first virtual machines, storing debug records, each of the debug records including information associated with a respective debug event selected from the debug events and a timestamp associated with the respective debug event, merging the debug records based on information associated with each timestamp, starting one or more second virtual machines, each of the one or more second virtual machines emulating a selected one of the one or more first virtual machines, synchronizing the one or more second virtual machines, retrieving the merged debug records, and playing the merged debug records back in chronological order on the one or more second virtual machines. In some examples, the method further includes collecting clock synchronization records. In some examples, merging the debug records includes altering an order of one or more of the debug records based on the clock synchronization records. | 11-27-2014 |
20140351651 | PILOTING IN SERVICE DELIVERY - A method for determining piloting of a first system includes receiving a first hypothesis, receiving first test parameters of a decision state space defined on a sequential probability ratio test plot of a number of failures of the first system versus a number of failures of a reference system, identifying, for an arbitrary distribution of events, a first number of events to be processed by the first system and the reference system that will satisfy the first test parameters, determining a coefficient of variation of the arbitrary distribution of events, and determining whether to perform the sequential probability ratio test plot using the arbitrary distribution of events or historical data based on the coefficient of variation. | 11-27-2014 |
20140359370 | OPTIMIZING TEST DATA PAYLOAD SELECTION FOR TESTING COMPUTER SOFTWARE APPLICATIONS VIA COMPUTER NETWORKS - Testing a computer software application by configuring a first computer to execute a copy of data-checking software used by a computer software application at a second computer, processing a first copy of a test data payload using the data-checking software at the first computer, where the test data payload is configured to test for an associated security vulnerability, determining that the first copy of the test data payload is endorsed by the data-checking software at the first computer for further processing, and sending a second copy of the test data payload via a computer network to the computer software application at the second computer for processing thereat. | 12-04-2014 |
20140359371 | DETERMINING BEHAVIOR MODELS - Methods, systems, and computer-readable storage media for determining a behavior model of a computing system under test (SUT). In some implementations, actions include executing, using a user interface of the SUT, an initial test script on the SUT; recording, after executing the initial test script, a state of the SUT in the behavior model by observing one or more events that can be triggered using the user interface of the SUT; and iteratively refining the behavior model until an end condition is reached by generating one or more new test scripts, executing the new test scripts on the SUT to test unobserved behavior, and recording one or more new states reached by executing the new test scripts on the SUT in the behavior model. | 12-04-2014 |
20140359372 | METHOD OF DETECTING FAULTS OF OPERATION ALGORITHMS IN A WIRE BONDING MACHINE AND APPARATUS FOR PERFORMING THE SAME - In a method of detecting faults of operation algorithms in a wire bonding machine, individual bond parameters with respect to each of the operation algorithms of the wire bonding machine can be set based on design data including information with respect to conductive wires connected between semiconductor chips of a semiconductor package. Actual conductive wires of an actual semiconductor package can be formed using the wire bonding machine into which the design data can be inputted. Actual data with respect to actual operation algorithms of the wire bonding machine, which can form the actual conductive wires, can be obtained. The actual data can be compared with the individual bond parameters to detect the faults of the operation algorithms of the wire bonding machine. Thus, forming an abnormal conductive wire by the wire bonding machine can be prevented beforehand. | 12-04-2014 |
20140365830 | SYSTEM AND METHOD FOR TEST DATA GENERATION AND OPTIMIZATION FOR DATA DRIVEN TESTING - A system, medium and method for automatically generating test data to be applied to test a target software code is disclosed. Input parameter data is received from a user via a displayed user interface, wherein the input parameter data is directed to a user selected data type, the data type being a Boolean, string, or integer. One or more preestablished stored testing algorithms are automatically selected based on the user selected data type, and one or more values are applied to the selected one or more preestablished stored testing algorithms in accordance with the user selected data type. At least one set of test data from the one or more identified applicable testing algorithms is automatically generated, wherein the at least one set of test data generated from the identified testing algorithms can be used as inputs for testing the target software code. | 12-11-2014 |
20140380101 | APPARATUS AND METHOD FOR DETECTING CONCURRENCY ERROR OF PARALLEL PROGRAM FOR MULTICORE - The apparatus for detecting concurrency errors of a parallel program for a multicore includes a source code matching module that adds a trace code and a dynamic thread manager class to an input source code based on interleaving information detected from the source code, and splits a thread included in the source code to set an interleaving block and executes it. When an error occurs in the executed interleaving block, the source code matching module stores log information output from the trace code and information of the interleaving block, and stores error information based on that information. | 12-25-2014 |
20150019915 | SYSTEMS AND METHODS OF ANALYZING A SOFTWARE COMPONENT - A particular method includes initiating, at an analyzer, execution of a software component at a first computing device. The first computing device includes hardware components and sensors. The sensors are external to the hardware components. A first hardware component of the hardware components is coupled to a second hardware component of the hardware components. A first sensor of the sensors is configured to monitor communications between the first hardware component and the second hardware component. The method also includes receiving monitoring data, from the first sensor, regarding a communication between the first hardware component and the second hardware component. The method further includes analyzing first effects of executing the software component on the first computing device based at least partially on the monitoring data. | 01-15-2015 |
20150026522 | SYSTEMS AND METHODS FOR MOBILE APPLICATION A/B TESTING - Techniques for electing winning treatments in connection with A/B testing of mobile applications are described. According to various embodiments, the activation of a version of a mobile application installed on a mobile device may be detected. A database storing winning treatment information describing one or more winning treatments for one or more A/B tests is accessed. In some embodiments, each of the one or more A/B tests in the winning treatment information may be associated with a particular version of a particular mobile application. Thereafter, a specific winning treatment for a specific A/B test associated with the version of the mobile application installed on the mobile device may be determined, based on the winning treatment information. The specific winning treatment may then be implemented in the mobile application installed on the mobile device. | 01-22-2015 |
20150026523 | DEBUGGING METHOD AND COMPUTER PROGRAM PRODUCT - A method for debugging a computer program is proposed. The method comprises: running at least part of said computer program on a computer, thereby prompting said computer to execute a sequence of instructions and to generate a trace corresponding to said executed sequence of instructions; and, when said program has generated an exception, selecting a set of one or more exception strings on the basis of said trace, so that each of said exception strings is a unique substring of said trace; and indicating said exception strings to a user or to a debugging tool. The set of exception strings may notably include the ultimate shortest unique substring of said trace. A computer program product is also described. | 01-22-2015 |
20150026524 | TEST CASES GENERATION FOR DIFFERENT TEST TYPES - A method and system for generating test cases of different types for testing an application. A functional flow of the application is created. The test cases are generated, based on at least one test case generation rule and additional test information corresponding to different stages of the functional flow with respect to at least two test types. | 01-22-2015 |
20150033078 | DEBUGGING APPLICATIONS IN THE CLOUD - The present disclosure describes methods, systems, and computer program products for providing remote debugging of a cloud application across a wide area network. A method includes transmitting, from a remote communication device to a cloud computing device, instructions to adjust a running application to a debugging mode; receiving, at the remote communication device from a server coupled to the cloud, aggregated thread data in a data packet by using a second debugging data protocol different from the Java Debug Wire Protocol; receiving a debugging command and applying the debugging command to the cloud application running in the debugging mode. | 01-29-2015 |
20150033079 | Integrated Fuzzing - Integrated fuzzing techniques are described. A fuzzing system may employ a container configured as a separate component that can host different target pages to implement fuzzing for an application. A hosted target file is loaded as a subcomponent of the container and parsed to recognize functionality of the application invoked by the file. In at least some embodiments, this involves building a document object model (DOM) for a browser page and determining DOM interfaces of a browser to call based on the page DOM. The container then operates to systematically invoke the recognized functionality to cause and detect failures. Additionally, the container may operate to perform iterative fuzzing with multiple test files in an automation mode. Log files may be created to describe the testing and enable both self-contained replaying of failures and coverage analysis for multiple test runs. | 01-29-2015 |
20150039941 | Testing Coordinator - A system for testing two or more applications associated with a computerized process may include a central repository, a user interface and a testing coordinator. The central repository may be used to store at least one test case each including a test data set and two or more sets of test scripts. The user interface may facilitate a selection of one or more test cases for use by the testing coordinator. The testing coordinator may be configured to test the operation of the computerized process by initiating testing of a first application by a first test tool using the test data set and a first set of scripts and initiating testing of the second application by the second test tool using the test data set and the second set of scripts from the selected test case. In some cases, the first test tool is incompatible with the second test tool. | 02-05-2015 |
20150039942 | DASHBOARD PERFORMANCE ANALYZER - Described herein is a technology for a dashboard used for visualizing data. In some implementations, a dashboard with one or more dashboard items is provided. Performance of the dashboard is evaluated to determine a load time of the dashboard. Possible suggestions for improving performance of the dashboard are provided if performance issues are determined from evaluating performance of the dashboard. | 02-05-2015 |
20150039943 | SYSTEM, METHOD, AND COMPUTER READABLE MEDIUM FOR UNIVERSAL SOFTWARE TESTING - An automated software testing and validation system allows testing of a software application under test (SAUT) regardless of the dynamic nature of the SAUT. An abstracted set of hierarchical or linear objects models certain regions of the SAUT. Automated test scripts utilize these regions to intuitively navigate and identify portions of the SAUT to automate. The scripts can also access specific SAUT elements contained within each defined region. These elements can then be used to invoke actions or verify outputs therefrom. The system uses a set of rich identification rules embodied in the system which allow the user to configure the identification of any element within the abstracted region. The rules are customizable to allow the user to configure the desired level of loose coupling between the automated scripts and the target element to adapt the scripts to the nature of the SAUT. | 02-05-2015 |
20150046752 | Redundant Transactions for Detection of Timing Sensitive Errors - A method for detecting a software-race condition in a program includes copying a state of a transaction of the program from a first core of a multi-core processor to at least one additional core of the multi-core processor, running the transaction, redundantly, on the first core and the at least one additional core given the state, outputting a result of the first core and the at least one additional core, and detecting a difference in the results between the first core and the at least one additional core, wherein the difference indicates the software-race condition. | 02-12-2015 |
20150046753 | EMBEDDED SOFTWARE DEBUG SYSTEM WITH PARTIAL HARDWARE ACCELERATION - An embedded software debug system with partial hardware acceleration includes a computer that executes a debug software stack. The debug software stack includes high level operations. The system also includes a remote microcontroller electronically connected to the computer. The system further includes an embedded processor electronically connected to the remote microcontroller. The remote microcontroller receives an applet from the computer and executes the applet in conjunction with the computer executing the debug software stack to debug the embedded processor. The applet includes low level protocol operations including performance critical tight-loops precompiled into machine code. The debug software stack may include a stub that replaces the tight-loops of the applet. The computer may send the applet to the remote microcontroller in response to executing the stub. | 02-12-2015 |
20150052401 | SYSTEMS AND METHODS FOR INVASIVE DEBUG OF A PROCESSOR WITHOUT PROCESSOR EXECUTION OF INSTRUCTIONS - Methods for invasive debug of a processor without processor execution of instructions are disclosed. As a part of a method, a memory mapped I/O of the processor is accessed using a debug bus and an operation is initiated that causes a debug port to gain access to registers of the processor using the memory mapped I/O. The invasive debug of the processor is executed from the debug port via registers of the processor. | 02-19-2015 |
20150052402 | Cloud Deployment Infrastructure Validation Engine - Embodiments of the invention provide a set of validators that can be used to determine whether an installation is operating within desired parameters and is in compliance with any requirements. The validators may be provided with a software application or release, for example, and may be run during and/or after installation to test the application operation. A set of self-healing operations may be triggered when faults are detected by the validators. This allows a software application to auto-diagnose and auto-self-heal any detected faults. | 02-19-2015 |
20150058675 | SOFTWARE UNIT TEST IMMUNITY INDEX - The present disclosure describes methods, systems, and computer program products for measuring strength of a unit test. One computer-implemented method includes receiving software unit source code associated with a unit test, analyzing a line of the software unit source code for removability, initiating, by operation of a computer, modification of the software unit source code to remove the line of the software unit source code and create a modified software unit, initiating execution of the modified software unit using the unit test, determining success or failure of a unit test execution, and analyzing a next line of the software unit source code for removability. | 02-26-2015 |
20150067404 | FLEXIBLE AND MODULAR LOAD TESTING AND MONITORING OF WORKLOADS - Various embodiments monitor a distributed software system. In one embodiment, at least one monitoring policy associated with a distributed software system is selected. A policy type associated with the monitoring policy is identified. An installer is selected based on the policy type associated with the monitoring policy. Monitoring software is installed in a computing environment utilizing the installer. The monitoring software is configured to monitor the distributed software system based on the monitoring policy. | 03-05-2015 |
20150089296 | DERIVATION OF GENERALIZED TEST CASES - A first computer receives a first and a second test sample. The first computer executes the first and second test samples. The first computer determines that the value exposed by a first parameter in the second test sample is different from the value exposed by the first parameter in the first test sample. The first computer creates a first value driven equivalence class. The first computer determines that the value exposed by the second parameter in the second test sample is different from the value exposed by the second parameter in the first test sample and that the value exposed by the second parameter in the second test sample is equivalent to the value exposed by the first parameter in the second test sample. The first computer adds the second parameter to the first value driven equivalence class and creates a generalized test case, including at least the first value driven equivalence class. | 03-26-2015 |
20150089297 | Using Crowd Experiences for Software Problem Determination and Resolution - An approach is provided to utilize experiences of a user community to identify software problems and communicate resolutions to such problems. Error reports are received from installed software systems in the user community. From these reports, a set of problematic usage patterns are generated, with each of the usage patterns having a confidence factor that is increased based on the number of problem reports that match the usage pattern. The problematic usage patterns are matched to sections of code corresponding to the installed software system with sections of code being identified with problematic usage patterns having confidence factors greater than a given threshold. | 03-26-2015 |
20150089298 | METHODS, SYSTEMS, AND COMPUTER-READABLE MEDIA FOR TESTING APPLICATIONS ON A HANDHELD DEVICE - Techniques for testing one or more applications running on a handheld device include: receiving, by a tester system, an error state corresponding to the one or more applications; retrieving, by the tester system, one or more test script parameters corresponding to the error state from a database, wherein the one or more test script parameters corresponding to the error state are stored in the database; providing, by the tester system, the one or more test script parameters to an input subsystem, wherein the input subsystem is connected to the handheld device; notifying a user to provide one or more inputs corresponding to the error state when the one or more test script parameters are not stored in the database; receiving, by the tester system, the one or more inputs from the user; and providing, by the tester system, the one or more inputs to the input subsystem. | 03-26-2015 |
20150095708 | AUTOMATIC GENERATION OF ENTITY TYPES FILES - An Entity Types File (ETF) is automatically generated from a high-level software description, where the software description describes software to be managed by middleware to achieve high availability. An ETF generation method comprises receiving the software description that describes interfaces and dependency between components of the software; verifying the software description in accordance with constraints imposed by middleware specifications; based on the verified software description, automatically creating a hierarchy of entity types and associations among the entity types compliant with the middleware specifications; and outputting the entity types and the associations as the ETF for subsequent generation of a configuration of the middleware for availability management. | 04-02-2015 |
20150095709 | PATTERN ORIENTED DATA COLLECTION AND ANALYSIS - A process for determining a problematic condition while running software includes: loading a first pattern data set having a symptom code module, a problematic condition determination module, and a set of responsive action module(s), generating a runtime symptom code in response to a first problematic condition being caused by the running of the software on the computer, determining that the runtime symptom code matches a symptom code corresponding to the first pattern data set, determining that the first problematic condition caused the generation of the runtime symptom code, and taking a responsive action from a set of responsive action(s) that corresponds to the first problematic condition. | 04-02-2015 |
20150100829 | METHOD AND SYSTEM FOR SELECTING AND EXECUTING TEST SCRIPTS - Systems and methods are disclosed herein for reusing a test automation framework across multiple applications. The method comprises receiving a selection of one or more test scripts from a user to test an application; creating an execution list containing every selected test script; loading the instructions of the test script into the computer-readable memory when the test script is found in the test script repository; executing the test script testing the application according to the instructions defined in the test script and according to computer instructions defined by the utility functions or the common functions when the test script calls either the common functions or the utility functions; checking the application's status after the test terminates operation; and recovering and closing the application if the application failed, before executing a second test script testing the application. | 04-09-2015 |
20150100830 | METHOD AND SYSTEM FOR SELECTING AND EXECUTING TEST SCRIPTS - Systems and methods are disclosed herein for reusing a test automation framework across multiple applications. The method comprises receiving a selection of one or more test scripts from a user to test an application; creating an execution list containing every selected test script; loading the instructions of the test script into the computer-readable memory when the test script is found in the test script repository; executing the test script testing the application according to the instructions defined in the test script and according to computer instructions defined by the utility functions or the common functions when the test script calls either the common functions or the utility functions; checking the application's status after the test terminates operation; and recovering and closing the application if the application failed, before executing a second test script testing the application. | 04-09-2015 |
20150100831 | METHOD AND SYSTEM FOR SELECTING AND EXECUTING TEST SCRIPTS - Systems and methods are disclosed herein for reusing a test automation framework across multiple applications. The method comprises receiving a selection of one or more test scripts from a user to test an application; creating an execution list containing every selected test script; loading the instructions of the test script into the computer-readable memory when the test script is found in the test script repository; executing the test script testing the application according to the instructions defined in the test script and according to computer instructions defined by the utility functions or the common functions when the test script calls either the common functions or the utility functions; checking the application's status after the test terminates operation; and recovering and closing the application if the application failed, before executing a second test script testing the application. | 04-09-2015 |
20150113330 | DOMAIN CENTRIC TEST DATA GENERATION - A test data extraction and persistence technique that relies on a data domain based storage infrastructure is disclosed. In operation, a test data server receives a test data query that specifies selection parameters for selecting test data and any transformation operations to be performed on the test data. The test data server identifies domains associated with the selection parameters and traverses the tables in the database based on the identified domains to extract test data that satisfies the selection parameters. The test data server optionally performs transformation operations, such as masking operations, specified by the test data query on the extracted data. The identified domains are stored such that test data that satisfies the test data query may be extracted from the database repetitively without reevaluating the test data query each time. | 04-23-2015 |
20150113331 | SYSTEMS AND METHODS FOR IMPROVED SOFTWARE TESTING PROJECT EXECUTION - This disclosure relates generally to software development, and more particularly to systems and methods for improved software testing project execution. In one embodiment, a software testing system is disclosed, comprising: a processor; and a memory storing processor-executable instructions comprising instructions for: obtaining a software test execution request including one or more software test cases to execute; identifying one or more software test environmental parameters; determining one or more computing systems for performing software test execution, based on the one or more software test environmental parameters; generating one or more configuration settings associated with initiating or terminating software test execution on the one or more computing systems; and storing the one or more configuration settings. | 04-23-2015 |
20150113332 | CODE ANALYSIS METHOD, CODE ANALYSIS SYSTEM AND COMPUTER STORAGE MEDIUM - Provided is a code analysis method, a code analysis system and a computer storage medium. The method includes: obtaining a code change list; analyzing the code change list, obtaining a change list corresponding to each type of programming languages from the code change list, determining a mapping relationship between the change list and the type of programming languages; obtaining code analysis tool information and analysis rule information according to the mapping relationship, and generating an execution solution; and calling a code analysis tool and an analysis rule according to the execution solution to perform the code analysis and obtain a code analysis result. Examples of the present disclosure may integrate multiple code analysis tools and analysis rules, meet requirements of the code analysis on different types of programming languages, reduce workload of a developer and a tester, and increase efficiency of the code analysis. | 04-23-2015 |
20150121147 | METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR BULK ASSIGNING TESTS FOR EXECUTION OF APPLICATIONS - An apparatus is provided for facilitating bulk assignment of test cases to a test cycle. The apparatus may include at least one memory and at least one processor configured to enable selection, via a user interface, of test cases to assign the test cases to a designated test cycle. The test cases are designated for testing or execution of functions of at least one application. The processor is also configured to automatically calculate an estimated duration of time in which to complete the testing or execution of the functions in response to receipt of indications of selections of the test cases via the user interface. The processor is also configured to provide visible indicia in the user interface indicating the estimated duration of time in which to complete the testing or the execution of the functions of the application. Corresponding computer program products and methods are also provided. | 04-30-2015 |
20150121148 | MALFUNCTION INFLUENCE EVALUATION SYSTEM AND EVALUATION METHOD - Provided is a malfunction influence evaluation system comprising a controller simulator that simulates the operation of a controller, an input apparatus that provides input data to the controller simulator, a simulation manager that exercises integrated management of the operation of the input apparatus and the controller simulator, and a database wherein malfunction information and simulation conditions to be referred to by the simulation manager are stored. The controller simulator retains a control program for the controller and an analysis unit, and the analysis unit has a propagation flag tracking function wherein propagation flags are assigned to a variable within the control program, bits of the variable are set by inputting a prescribed value thereto as a malfunction input value, the bits are propagated each time the variable is involved in a calculation within the control program, the states of propagation of the bits are tracked, and the result thereof is output. | 04-30-2015 |
20150143179 | System and Method for Progressive Fault Injection Testing - A system and method for performing a progressive fault injection process to verify software is provided. In some embodiments, the method comprises loading a software product into the memory of a testbed computing system, wherein the software product includes a function and a statement that calls the function. A data structure is updated based on an error domain of the function. The calling statement is executed for each of one or more error return codes of the error domain. For each iteration of the execution, a call of the function by the calling statement is detected, and, in response, an error return code of the one or more error return codes is provided in lieu of executing the function. The software product is monitored to determine a response to the provided error return code. In some embodiments, the error return code to provide is determined by querying the data structure. | 05-21-2015 |
20150143180 | VALIDATING SOFTWARE CHARACTERISTICS - Aspects of the subject matter described herein relate to software validation. In aspects, code may be instrumented to generate certain records upon execution. The code may be further instrumented to generate start and stop records that correspond to the start and stop events of a scenario of a program. The start and stop event records allow correlation of the scenario with other records written to the log. With the correlation and appropriate instrumentation, a tool may determine performance, memory usage, functional correctness, and other characteristics of the program at the granularity of the scenario. | 05-21-2015 |
20150293803 | Methods and Articles of Manufacture for Hosting a Safety Critical Application on an Uncontrolled Data Processing Device - Methods and articles of manufacture for hosting a safety critical application (SCA) on an uncontrolled data processing device (UDPD) are provided. Various combinations of installation, functional, host integrity, coexistence, interoperability, power management, and environment checks are performed at various times to determine if the safety critical application operates properly on the device. The operation of the SCA on the UDPD may be controlled accordingly. | 10-15-2015 |
20150293804 | Methods and Articles of Manufacture for Hosting a Safety Critical Application on an Uncontrolled Data Processing Device - Methods and articles of manufacture for hosting a safety critical application (SCA) on an uncontrolled data processing device (UDPD) are provided. Various combinations of installation, functional, host integrity, coexistence, interoperability, power management, and environment checks are performed at various times to determine if the safety critical application operates properly on the device. The operation of the SCA on the UDPD may be controlled accordingly. | 10-15-2015 |
20150293805 | Methods and Articles of Manufacture for Hosting a Safety Critical Application on an Uncontrolled Data Processing Device - Methods and articles of manufacture for hosting a safety critical application (SCA) on an uncontrolled data processing device (UDPD) are provided. Various combinations of installation, functional, host integrity, coexistence, interoperability, power management, and environment checks are performed at various times to determine if the safety critical application operates properly on the device. The operation of the SCA on the UDPD may be controlled accordingly. | 10-15-2015 |
20150301921 | Computer Implemented System and Method of Instrumentation for Software Applications - Method(s) and system(s) for monitoring and logging various identified events of an operating system, or of a software application hosted on the operating system, are disclosed. The method includes configuring the events associated with at least one event handler for monitoring. The method further includes assigning the at least one event handler to active processes of an operating system for handling of the events. Further, the method includes capturing of events by different daemons and collecting the captured events. To this end, similar captured events are grouped into one or more groups. The method further includes filtering of the collected events based on a definable filter configuration and generating a dashboard representation of the filtered events. The dashboard representations of the filtered events are then reported to the user. | 10-22-2015 |
20150301923 | SEQUENCE-PROGRAM-DEBUGGING SUPPORTING APPARATUS - A sequence-program-debugging supporting apparatus includes a configuration editing unit that receives a disabling unit from a PLC, a variable retaining unit that retains variables used by units on a sequence program, a program editing unit that can edit the sequence program, a converting unit that converts the sequence program into an execution code, a searching unit that acquires variables used by the disabling unit from the variable retaining unit and searches for places where the acquired variables are used in the sequence program, and a disabling setting unit that writes a section of the execution code corresponding to the places in a disabling section setting file as a disabling section not to be executed, and an execution control unit that controls, based on the disabling section setting file, an executing unit not to execute the disabling section. | 10-22-2015 |
20150301927 | METHODS FOR GENERATING A NEGATIVE TEST INPUT DATA AND DEVICES THEREOF - The present invention provides a method and system for generating negative test input data. A set of attributes and a set of attribute properties can be extracted from a requirement specification. A constraint representation syntax can be framed from the extracted set of attribute properties. A structured diagram is modeled from the framed constraint representation syntax and a set of use cases, and a set of path predicates can be constructed from the structured diagram. One or more attribute classes can be determined from the set of path predicates based on an attribute constraint and an attribute dependency. The negative test input data is then generated from the one or more attribute classes using a genetic algorithm. | 10-22-2015 |
20150301929 | SYSTEM AND METHOD FOR COORDINATING FIELD USER TESTING RESULTS FOR A MOBILE APPLICATION ACROSS VARIOUS MOBILE DEVICES - Systems and methods for facilitating field testing of a test application are provided. In certain implementations, one or more metrics related to execution, at a mobile device, of one or more operations of the test application may be obtained. A determination of whether an error occurred with an operation of the one or more operations may be effectuated based on the one or more metrics. Error information relating to the error may be caused to be transmitted to one or more other mobile devices, wherein the error information includes information for replicating the error. Replication information relating to an attempt by the first other mobile device to replicate the error may be received back from at least a first other mobile device of the one or more other mobile devices. A determination of whether the first other mobile device replicated the error may be effectuated based on the replication information. | 10-22-2015 |
20150309917 | Automation Framework Interface - An automation framework interface and a method for managing and operating automation software suites using the automation framework interface. The automation framework interface provides a user access to a plurality of automation suites that can be operated on an automation framework. Each of the plurality of automation suites includes a plurality of automation cases, each being an individual test case to be run on the automation framework. The automation framework interface provides a main page for selecting a specific automation suite from the plurality of automation suites, which then directs the user to an automation suite page for the specific automation suite. The automation suite page similarly allows the user to select a specific automation case from the plurality of automation cases for the specific automation suite. Through the automation framework interface the user can view operation metrics and automation logs for each of the plurality of automation cases. | 10-29-2015 |
20150309918 | METHOD OF OPTIMIZING EXECUTION OF TEST CASES AND A SYSTEM THEREOF - The present subject matter relates to a computer implemented method and a computer system for optimizing execution of test cases. The method comprises calculating a failure probability level for a plurality of test cases based on a plurality of test results associated with each of the plurality of test cases, and determining a dynamic risk profile level based on weights assigned to the failure probability level and a risk impact parameter of the plurality of test cases. The method further comprises determining one or more sets of optimal test cases to be executed based on the dynamic risk profile level of the plurality of test cases satisfying one or more test rule parameters. Upon determining, the method comprises identifying a sequence for executing the one or more sets of optimal test cases based on one or more test sequence parameters, and executing the one or more sets of optimal test cases in the identified sequence. | 10-29-2015 |
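The entry above describes scoring test cases by a weighted combination of historical failure probability and risk impact, then executing the riskiest first. A minimal sketch of that idea follows; the specific weights, data shapes, and function names are illustrative assumptions, not the patented method.

```python
# Hypothetical dynamic risk profiling: each test case gets a failure
# probability from its execution history, combined with a risk impact
# weight, and tests are ranked riskiest-first.

def failure_probability(results):
    """Fraction of historical runs that failed (results: list of bools, True = failed)."""
    return sum(results) / len(results) if results else 0.0

def dynamic_risk_profile(tests, w_fail=0.6, w_impact=0.4):
    """Score each test by a weighted sum of failure probability and risk impact."""
    scored = []
    for name, history, impact in tests:
        score = w_fail * failure_probability(history) + w_impact * impact
        scored.append((name, score))
    # Execute the riskiest tests first.
    return sorted(scored, key=lambda t: t[1], reverse=True)

tests = [
    ("login_test",   [True, True, False, True],   0.9),  # flaky, high impact
    ("report_test",  [False, False, False],       0.2),  # stable, low impact
    ("payment_test", [False, True, False, False], 1.0),  # occasionally fails, critical
]
ranked = dynamic_risk_profile(tests)
```

With these illustrative weights, the flaky high-impact test outranks the critical-but-mostly-stable one, which in turn outranks the stable low-impact test.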
20150309919 | SYSTEM AND METHOD FOR GENERATING SYNTHETIC DATA FOR SOFTWARE TESTING PURPOSES - According to one aspect, it is appreciated that it may be useful and particularly advantageous to provide a data generator that creates more realistic data for testing purposes, especially in data systems where large volumes of data are necessary. In one implementation, a data generator is provided that produces relationally consistent data for testing purposes. For instance, a synthetic data generation process may be performed that produces any number of relationally consistent data table structures. Further, in another implementation, generation of the data can be statistically influenced so that the data generated can take on the “look and feel” of production data. Also, data may be produced as needed, and its generation may be performed in parallel, depending on interdependencies in the data. | 10-29-2015 |
20150309922 | ON-DEMAND SOFTWARE TEST ENVIRONMENT GENERATION - A method and a system to create a software test environment on demand are described. An example system includes a dependency module to, upon receiving a command identifying a primary function to be created in a test environment, identify one or more dependencies of the primary function. The dependencies are other functions or databases that the primary function depends upon. The dependency module generates a topology of the test environment that indicates the relationship of the dependencies to the primary function. A provisioning module provisions a plurality of pools based on the topology. An enterprise service bus (ESB) routing module updates ESB routing of the primary function to route to the plurality of pools in the test environment. A credentials module provides credentials of the pools in the test environment. | 10-29-2015 |
20150317234 | SYSTEM, METHOD, APPARATUS AND COMPUTER PROGRAM FOR AUTOMATIC EVALUATION OF USER INTERFACES IN SOFTWARE PROGRAMS - A method includes inputting an application program to be tested to a data processing system; linking the application program to a software library; performing, in cooperation with the software library, a static analysis of a user interface of the application program, without executing the application program, to generate a set of static analysis results; performing, in cooperation with the software library, a dynamic analysis of the user interface of the application program while executing the application program to generate a set of dynamic analysis results and, based on the set of static analysis results and the set of dynamic analysis results, a step of determining if the user interface of the application program violates one or more user interface policy rules. Also disclosed is a computer program product that implements the method and a system configured to execute the computer program product in accordance with the method. | 11-05-2015 |
20150317240 | TESTING IMPLEMENTATION PARAMETERS OF A COMPUTER PROGRAM IN A DISTRIBUTED ENVIRONMENT - A method of testing implementation parameters of a computer program in a distributed environment, the method comprising: testing alternative implementation parameters in parallel in the distributed environment, and providing a time-out mechanism that aborts testing processes when one of the following abort conditions is satisfied: a time allowed for testing has expired; and testing processes for a predefined number of alternative implementations are complete; wherein the time-out mechanism includes a hardware interface, which is arranged to cause a hardware supported abort. | 11-05-2015 |
20150317241 | TEST DESIGN ASSISTANCE DEVICE, TEST DESIGN ASSISTANCE METHOD, AND COMPUTER-READABLE MEDIUM - A test design assistance device | 11-05-2015 |
20150317243 | SYSTEM AND METHOD FOR A DIAGNOSTIC SOFTWARE SERVICE - Systems and methods for a diagnostic software service that utilizes a subscription model to distribute diagnostic software to diagnostic tools. A diagnostic application is installed on a mobile device. The mobile device communicates with an adapter which can be coupled to a vehicle. An application server provides software modules that are available to be subscribed to by a technician and, once subscribed, can be utilized via the diagnostic application. Subscription to software modules enables the technician to add and utilize specific diagnostic functionality in an a la carte manner. | 11-05-2015 |
20150324273 | GENERATING PRODUCTION SERVER LOAD ACTIVITY FOR A TEST SERVER - Replicating on a test server a production load of a production server. The production load can be created on the production server by processing client requests received from clients. While the client requests are processed, in real time, the production load can be replicated to generate a replicated production load that represents the client requests and defines state information representing unique states formed between the production server and the respective clients. In real time, the replicated production load can be communicated in order to replicate the production load on the test server. | 11-12-2015 |
20150324277 | Compliance Testing Engine for Integrated Computing System - A technique tests whether an integrated computing system having server, network and storage components complies with a configuration benchmark expressed as rules in a first set of markup-language statements such as XML. The rules are parsed to obtain test definition identifiers identifying test definitions in a second set of markup-language statements, each test definition including a test value and an attribute identifier of a system component attribute. A management database is organized as an integrated object model of all system components. An interpreter invoked with the test definition identifier from each rule processes each test definition to (a) access the management database using the attribute identifier to obtain the actual value for the corresponding attribute, and (b) compare the actual value to the test value of the test definition to generate a comparison result value that can be stored or communicated as a compliance indicator to a human or machine user. | 11-12-2015 |
20150331737 | Evaluating Reliability of a Software Module Using Development Life Cycle - Reliability of one or more software modules is projected according to a current state in a development life cycle of the software modules and any of various additional indicators. Preferably, a data processing support provider separate from the service-providing enterprise maintains historical field support data concerning significant field defect events with respect to various resources, and uses this data for projecting reliability of the resources. Preferably, software module reliability projections are used to support an analysis of risk of degradation of a service specified in a service requirements specification when provided by a configuration of data processing resources specified in a configuration specification. | 11-19-2015 |
20150331738 | PERFORMING DIAGNOSTIC TRACING OF AN EXECUTING APPLICATION TO IDENTIFY SUSPICIOUS POINTER VALUES - Arrangements described herein relate to performing diagnostic tracing of an executing application. A trace entry in trace data can be identified, the trace entry comprising a pointer that refers to a memory address. Whether a value that is, or has been, stored at the memory address is an erroneous value can be determined. Responsive to determining that the value that is, or has been, stored at the memory address is an erroneous value, the pointer can be indicated as being a suspicious value. | 11-19-2015 |
20150331786 | PATH EXECUTION REDUCTION IN SOFTWARE PROGRAM VERIFICATION - A method of software program verification including receiving at least a portion of a software program that may further include a function under analysis (FUA). The method includes creating an FUA path based at least partially on a path through one or more functions of the received portion of the software program. The method includes determining whether the FUA path generates new coverage for the FUA. In response to the FUA path generating new coverage, the method includes selecting an FUA path statement from the FUA path. The method includes determining whether an uncovered code fragment of the FUA is reachable from the selected FUA path statement based at least partially on a set of covered FUA code fragments. In response to the uncovered code fragment being reachable from the selected FUA path statement, the method includes adding the selected FUA path statement to a set of covered statements. | 11-19-2015 |
20150331788 | SYSTEM FOR TESTING A BROWSER-BASED APPLICATION - A system for testing multiple language versions of a browser-based application. A host language Hypertext Transfer Protocol (HTTP) request issued by a host language browser is received. The host language HTTP request is configured to be sent to a host server address. The host language HTTP request comprises parameter strings in a host language. A target language HTTP request is generated by replacing each host parameter string of at least one host parameter string of the parameter strings in the received HTTP request with a respective target parameter string associated with a target language that differs from the host language. The generated target language HTTP request is configured to be sent to a target server address associated with and different from the host server address. | 11-19-2015 |
20150339216 | Providing Testing Environments Using Virtualization - Methods, systems, computer-readable media, and apparatuses for providing testing environments using virtualization are presented. In one or more embodiments, a computer system may receive, from a client computing device, a software application. Subsequently, the computer system may receive, from the client computing device, a set of one or more testing parameters for testing the software application. Then, the computer system may create, based on the set of one or more testing parameters for testing the software application, a testing environment for the software application using a native hardware layer that represents hardware on which the software application is configured to be executed. Thereafter, the computer system may initiate a testing session in which the software application is executed in the testing environment. Subsequently, the computer system may provide, to the client computing device, a control interface for controlling the testing session. | 11-26-2015 |
20150339218 | MERGING AUTOMATED TESTING REPORTS - According to one embodiment of the present invention, a method for analyzing test results is provided. The method for analyzing test results may include a computer, determining a first snapshot from a first set of snapshots, wherein the first snapshot is associated with a first set of data. The method may further include the computer determining a second snapshot from a second set of snapshots, wherein the second snapshot is substantially similar to the first snapshot, and wherein the second snapshot is associated with a second set of data. The method may further include the computer associating the first set of data and the second set of data with a third snapshot, responsive to determining that the second snapshot is substantially similar to the first snapshot, wherein the third snapshot is substantially similar to the first snapshot. | 11-26-2015 |
20150347270 | AUTOMATIC TEST SYSTEM AND TEST METHOD FOR COMPUTER, RECORD MEDIUM, AND PROGRAM PRODUCT - An automatic test method for a computer includes the following steps: reading a keyboard signal or mouse signal and a delay time in an event file in a system test directory; sending the keyboard signal or mouse signal to a to-be-tested system according to the delay time; the to-be-tested system that operates according to the keyboard signal or mouse signal sending at least one response; and verifying the response by comparing the response, which is in the form of a character string, with a character string in a correct text file in the system test directory, or sending an image acquisition signal to the to-be-tested system according to the at least one response, to acquire a screenshot, converting the screenshot into a screenshot image file, and verifying the screenshot image file corresponding to the to-be-tested system by comparing the screenshot image file with a correct screenshot image file in the system test directory. | 12-03-2015 |
20150347278 | IDENTIFYING TEST GAPS USING CODE EXECUTION PATHS - Systems and techniques are described for identifying test gaps. A described technique includes identifying production code paths for an application. Each production code path specifies a respective sequence of code of the application that was executed in a production environment. Test code paths are identified for the application. Each test code path specifies a respective sequence of code of the application that was tested in a test environment. The production code paths are compared to the test code paths to identify a set of first test gaps for the application. Each first test gap specifies a respective production code path that is not included in the test code paths. Test gap data specifying the first test gaps for the application can be provided for presentation to a user. | 12-03-2015 |
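The test-gap entry above compares production code paths against tested code paths and reports production paths absent from the test set. A small sketch of that set-difference comparison follows; representing a code path as a tuple of executed code locations is an assumption for illustration, and all path names are hypothetical.

```python
# Illustrative test-gap detection: a "test gap" is any production code
# path that does not appear among the paths exercised in the test
# environment.

def find_test_gaps(production_paths, test_paths):
    """Return production code paths that were never executed in testing."""
    tested = set(test_paths)
    return [p for p in production_paths if p not in tested]

production = [
    ("checkout", "validate_cart", "apply_coupon", "charge_card"),
    ("checkout", "validate_cart", "charge_card"),
    ("profile", "load_user", "render"),
]
tested = [
    ("checkout", "validate_cart", "charge_card"),
    ("profile", "load_user", "render"),
]
gaps = find_test_gaps(production, tested)
```

Here the coupon-handling path runs in production but was never tested, so it is reported as a gap for presentation to the user.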
20150347279 | METHODOLOGY AND TOOL SUPPORT FOR TEST ORGANIZATION AND MIGRATION FOR EMBEDDED SOFTWARE - A method of establishing traceability for embedded software systems. A design code database is provided for an embedded software system. A test suite database including a plurality of test cases is structured for testing design code of the embedded software system. The structuring of the test cases provides a correspondence from a respective test case to a respective portion of the design code. A processor receives a design code modification to the embedded software. An associated test case for testing the modified design code is identified based on traceability data. The associated test case is revised to accommodate the modified design code. The modified test cases are integrated into the test suite. A traceability database, which establishes a one-to-one correspondence between the modified design code and the modified test case, is updated. | 12-03-2015 |
20150347280 | AUTONOMOUS PROPAGATION OF SYSTEM UPDATES - A method, system, and/or computer program product propagates system upgrades to peer computers in a peer community. A peer community is defined by identifying peer computers that each have a copy of a same system component. Each of the peer computers in the peer community is autonomous, such that no peer computer controls another peer computer. A test computer is selected from the peer computers. An upgrade to a system component on the test computer is installed and tested. In response to the upgrade to the system component functioning properly within the test computer, a message is sent to other peer computers within the peer community recommending that they install the upgrade. | 12-03-2015 |
20150347285 | DETECTING ANOMALOUS FUNCTION EXECUTION IN A PROGRAM - Methods and systems for detecting anomalous function execution in a program, such as a video game or simulation program, are described herein. Certain methods attempt to isolate and score functions that behave in a particular manner that is deemed to be problematic within a repetitive program. Other methods can use the repetitive nature of the program to directly compare and isolate problematic functions. | 12-03-2015 |
20150347286 | HEALTH MONITORING USING SNAPSHOT BACKUPS THROUGH TEST VECTORS - Technologies are described for health monitoring using snapshot backups through test vectors. In some examples, health of an application deployed at a datacenter may be monitored and key metrics recorded in the metadata of progressive backup snapshots of an instance of the application such that warning metrics can be reviewed retrospectively upon failure of the instance and a snapshot can be automatically selected for restoration of the application instance based on lack of high incidence of suspect metric values. Moreover, an operating state associated with snapshot backups may be assessed as the snapshots are captured and selected ones with operating conditions desired as part of a test suite may be saved for use as test scenarios. In particular, state information from added or existing deployment monitoring may be used by a test logic process to evaluate whether each snapshot is needed for testing scenarios. | 12-03-2015 |
20150350341 | APPLICATION GATEWAY FOR CLOUD COMPUTING SYSTEMS - The present disclosure involves systems, software, and computer-implemented methods for certifying applications for execution in cloud computing systems. An example method includes identifying an application for execution in a cloud computing system; determining a set of application characteristics associated with the application based at least in part on an automatic analysis of the application; determining whether the application is suitable to be executed in the cloud computing system based at least in part on the determined set of application characteristics; and in response to determining that the application is suitable for use in the cloud computing system, storing the application and at least a portion of the determined set of application characteristics in an application repository. | 12-03-2015 |
20150355993 | DETECTING POTENTIAL CLASS LOADER PROBLEMS USING THE CLASS SEARCH PATH SEQUENCE FOR EACH CLASS LOADER - A method, system and computer program product for identifying potential class loader problems prior to or during the deployment of the classes to the production environment. A set of class loaders is loaded into memory. The set of class loaders is arranged hierarchically into parent-child relationships. The class search path sequence for each class loader in the hierarchy is generated to detect and identify potential class loader problems. Those class loaders with a duplicate class in its class search path sequence are identified as those class loaders that may pose a potential problem. A message may then be displayed to the user identifying these class loaders as posing a potential problem. By identifying these class loaders prior to or during the deployment of the classes to the production environment, class loader problems may be prevented from occurring. | 12-10-2015 |
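The class-loader entry above flags a loader as a potential problem when its class search path sequence contains a duplicate class. A toy sketch of that check follows; the loader hierarchy is modeled as a simple child-to-parent map with parent-first delegation, which is a simplification for illustration, and all class and loader names are hypothetical.

```python
# Illustrative duplicate-class detection: flatten a loader's search path
# (parent-first, mirroring typical delegation) and flag any class name
# that appears more than once in the resulting sequence.

def search_path_sequence(loader, hierarchy, classes):
    """Walk from the root parent down to this loader, concatenating class lists."""
    chain = []
    while loader is not None:
        chain.append(loader)
        loader = hierarchy.get(loader)   # map: child -> parent
    seq = []
    for l in reversed(chain):            # parent-first order
        seq.extend(classes.get(l, []))
    return seq

def find_duplicates(seq):
    """Return class names appearing more than once, in first-duplicate order."""
    seen, dupes = set(), []
    for cls in seq:
        if cls in seen and cls not in dupes:
            dupes.append(cls)
        seen.add(cls)
    return dupes

hierarchy = {"app": "ext", "ext": "boot", "boot": None}
classes = {
    "boot": ["java.lang.String"],
    "ext":  ["com.example.Util"],
    "app":  ["com.example.Util", "com.example.App"],  # shadows the ext copy
}
dupes = find_duplicates(search_path_sequence("app", hierarchy, classes))
```

The application loader's copy of `com.example.Util` shadows the extension loader's copy, so the `app` loader would be reported as posing a potential problem.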
20150356002 | DEPLOYMENT PATTERN MONITORING - A computer system can detect a request for status information relating to a particular deployment pattern; query, in response to the request, a deployment pattern registry for deployment configuration information about the particular deployment pattern; test deployment capabilities for the particular deployment pattern by: verifying installation files for the particular deployment pattern are accessible; identifying one or more candidate deployment components for a hypothetical deployment of the particular deployment pattern; installing, on the one or more candidate deployment components, a virtual machine that is configured to test computing resources of the one or more candidate deployment components; and deleting the virtual machine in response to receiving test results regarding the resources of the one or more candidate deployment components. The system can generate a notification in response to detecting a failure in the testing. | 12-10-2015 |
20150363293 | EXECUTING DEBUG PROGRAM INSTRUCTIONS ON A TARGET APPARATUS PROCESSING PIPELINE - A target apparatus | 12-17-2015 |
20150363297 | PERFORMANCE TESTING OF SOFTWARE APPLICATIONS - Identifying performance issues in an application under test (AUT). The AUT executes on a system under test (SUT) in a test environment, and uses one or more context parameters of the SUT and/or the test environment. A rule engine identifies performance antipatterns in trace data generated by the AUT when executing a set of test suites, based on a set of performance antipattern definition rules, each performance antipattern associated with one or more context parameters. One or more performance test suites are identified that cause the AUT to use at least one of the one or more context parameters associated with the identified antipatterns. The list of identified performance test suites is ranked, based on respective priority values associated with each identified antipattern. | 12-17-2015 |
20150363298 | AUTOMATED TESTING OF WEBSITES BASED ON MODE - Examples of techniques for testing websites are described herein. In one example, a method for testing a website includes receiving, via a processor, a website address of the website to be tested. The method can include determining, via the processor, whether the website is in a staging mode or a production mode. The method can also include configuring, via the processor, a testing application to test the website according to the determined mode. | 12-17-2015 |
20150363299 | PERFORMANCE TESTING OF SOFTWARE APPLICATIONS - Identifying performance issues in an application under test (AUT). The AUT executes on a system under test (SUT) in a test environment, and uses one or more context parameters of the SUT and/or the test environment. A rule engine identifies performance antipatterns in trace data generated by the AUT when executing a set of test suites, based on a set of performance antipattern definition rules, each performance antipattern associated with one or more context parameters. One or more performance test suites are identified that cause the AUT to use at least one of the one or more context parameters associated with the identified antipatterns. The list of identified performance test suites is ranked, based on respective priority values associated with each identified antipattern. | 12-17-2015 |
20150363300 | GENERATING SOFTWARE TEST SCRIPT FROM VIDEO - Methods and apparatus are disclosed to generate software test script from video. Example methods disclosed herein include determining a user action in a frame of a video comprising recorded testing of software. The example method also includes identifying an action parameter corresponding to the user action. The example method also includes based on the action parameter, generating without user intervention a script to execute on the software. | 12-17-2015 |
20150370685 | DEFECT LOCALIZATION IN SOFTWARE INTEGRATION TESTS - Defect localization can be performed in integration tests to more efficiently determine if recent source code changes caused a defect. Change locations are identified that represent code changes (e.g., source code changes) that occurred since a last integration test run. Code coverage information can be obtained indicating lines of code actually tested during the integration test. A search can be performed to find an intersection between the code changes and the code actually tested to determine one or more candidate code changes that may have caused a defect in the integration test. The candidate code changes can be ranked based on one or more different ranking algorithms. | 12-24-2015 |
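The defect-localization entry above intersects recent code changes with the lines actually covered during the failing integration test to produce candidate changes. A minimal sketch follows; ranking the intersection by change recency is one possible heuristic, since the abstract only states that candidates can be ranked by one or more algorithms, and all file names are hypothetical.

```python
# Illustrative defect localization: intersect changed source locations
# with the locations the integration test actually executed, then rank
# candidates with the most recent changes first.

def candidate_changes(changed_lines, covered_lines):
    """changed_lines: {(file, line): commits_since_change};
    covered_lines: set of (file, line) executed by the test."""
    hits = {loc: age for loc, age in changed_lines.items() if loc in covered_lines}
    # Most recent changes (smallest age) first.
    return sorted(hits, key=lambda loc: hits[loc])

changed = {
    ("billing.py", 42): 1,   # changed 1 commit ago
    ("billing.py", 97): 5,
    ("ui.py", 10): 2,        # changed, but never executed by the test
}
covered = {("billing.py", 42), ("billing.py", 97), ("core.py", 3)}
candidates = candidate_changes(changed, covered)
```

The change in `ui.py` is excluded because the failing test never executed it, narrowing the search to the two billing changes.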
20150370686 | METHODS AND APPARATUS FOR DEBUGGING OF REMOTE SYSTEMS - Methods and apparatus for debugging of remote systems are disclosed. An example apparatus includes an activator to establish a connection between a first computer system and a second computer system, a data fetcher to transfer values of a first set of data elements from the second computer system to the first computer system via the connection, an executor to execute a first software code on the first computer system using the transferred values of the first set of data elements after the connection is closed, and a debugger to debug the first software code on the first computer system after the executor executes the first software code on the first computer system. | 12-24-2015 |
20150370691 | SYSTEM TESTING OF SOFTWARE PROGRAMS EXECUTING ON MODULAR FRAMEWORKS - According to an aspect of the present disclosure, a test case specifying multiple tasks is run on a software program executing on a modular framework, with the performance of each task (by the software program) being designed to cause invocation of some of the modules of the framework. A set of modules of the modular framework as being of interest in the running of the test case is identified. Accordingly, icons representing the identified set of modules are displayed during the performance of the tasks of the test case. Upon occurrence of an error condition, a module of interest causing the error condition is diagnosed, and the icon representing the module of interest is highlighted to indicate to a user that the module is the source of the error condition. Thus, a user is enabled to perform the system testing of software programs executing on a modular framework. | 12-24-2015 |
20150378862 | SELECTION METHOD FOR SELECTING MONITORING TARGET PROGRAM, RECORDING MEDIUM AND MONITORING TARGET SELECTION APPARATUS - A computer identifies a program in which a command history issued to an operating system meets a specific pattern from among a plurality of programs run in a monitoring target system, and selects one or more residual programs as a monitoring target, the one or more residual programs being obtained by excluding the identified program from the plurality of programs. | 12-31-2015 |
20150378867 | DETECTING THE USE OF STALE DATA VALUES DUE TO WEAK CONSISTENCY - An apparatus and method detect the use of stale data values due to weak consistency between parallel threads on a computer system. A consistency error detection mechanism uses object code injection to build a consistency error detection table during the operation of an application. When the application is paused, the consistency error detection mechanism uses the consistency error detection table to detect consistency errors where stale data is used by the application. The consistency error detection mechanism alerts the user/programmer to the consistency errors in the application program. | 12-31-2015 |
20150378868 | TECHNOLOGIES FOR DETERMINING BINARY LOOP TRIP COUNT USING DYNAMIC BINARY INSTRUMENTATION - Technologies for binary loop trip count computation include a computing device that dynamically instruments binary code, executes the instrumented code, and records execution statistics during execution of the instrumented code. The computing device may instrument only instructions affecting local control flow within functions of the binary code. The computing device may combine execution statistics from multiple threads or process instances of the binary code. After completing execution of the instrumented code, the computing device generates a control flow graph indicative of control flow of the binary code and recursively detects binary loops within the binary code. The computing device calculates a trip count for each detected binary loop using the recorded execution statistics. Other embodiments are described and claimed. | 12-31-2015 |
20150378869 | MEASURING THE LOGGING QUALITY OF A COMPUTER PROGRAM - Techniques are described for measuring or quantifying the logging behavior in the source code of a computer program. In particular, the techniques select a method identified as exhibiting the ideal logging behavior in a computer program and then compute the overall logging quality score for the entire computer program based on the deviation in logging behaviors between the selected method and all other methods in the source code of the project. This overall logging quality score can be compared to various benchmarks of existing projects with high logging quality. If the software logging quality is found to be low, various steps can be taken by the developers to improve the logging before the software release. | 12-31-2015 |
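The logging-quality entry above scores a program by the deviation in logging behavior between an "ideal" method and every other method. A toy sketch of that scoring follows; the density metric (log statements per line) and the scoring formula are assumptions chosen for illustration, not the technique's actual measures, and all method names are hypothetical.

```python
# Illustrative logging-quality scoring: compare each method's logging
# density against a method selected as exhibiting ideal logging, and
# derive an overall score from the mean deviation.

def log_density(method):
    """Log statements per source line for one method."""
    return method["log_stmts"] / method["lines"]

def logging_quality(methods, ideal_name):
    """Score in [0, 1]; 1.0 means every method matches the ideal's density."""
    ideal = next(m for m in methods if m["name"] == ideal_name)
    target = log_density(ideal)
    others = [m for m in methods if m["name"] != ideal_name]
    deviation = sum(abs(log_density(m) - target) for m in others) / len(others)
    return max(0.0, 1.0 - deviation / target)

methods = [
    {"name": "process_order", "lines": 50, "log_stmts": 5},  # ideal: 0.1 logs/line
    {"name": "load_config",   "lines": 40, "log_stmts": 4},  # matches the ideal
    {"name": "retry_loop",    "lines": 30, "log_stmts": 0},  # never logs
]
score = logging_quality(methods, "process_order")
```

A score like this could then be compared against benchmarks from existing well-logged projects, flagging the release for logging improvements when it falls below them.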
20150378873 | AUTOMATICALLY RECOMMENDING TEST SUITE FROM HISTORICAL DATA BASED ON RANDOMIZED EVOLUTIONARY TECHNIQUES - Disclosed herein are a system and a method for automated test suite optimization and recommendation, based on historical data, using randomized evolutionary techniques. The system analyzes historical data pertaining to file change patterns and test case execution history to identify test cases that match the application being tested. Further, based on the test cases identified, the system generates optimized test suite recommendations to the user. | 12-31-2015 |
20150378876 | VISUAL GRAPHICAL USER INTERFACE VERIFICATION - An automated testing system is described for efficient visual verification of graphical user interfaces of software applications. A pattern is formed for the user interface of a page of the application indicating regions of the page where user interface elements should be located and identifying which user interface element should be located in which region. During test execution, image recognition is performed using previously stored snapshots of user interface elements to determine whether the application's user interface elements appear in correct positions on the page. | 12-31-2015 |
20150378877 | METHOD AND SYSTEM FOR TESTING SOFTWARE - In one embodiment, a method of testing software is disclosed. The method comprises: providing an input event to the software under test, wherein the software under test is associated with a time delay between an input event and an output event; identifying one or more discrete time instances based on the time delay between the input event and the output event; and testing the software under test by synthetically setting a clock to the one or more discrete time instances. | 12-31-2015 |
20150378879 | METHODS, SOFTWARE, AND SYSTEMS FOR SOFTWARE TESTING - An embodiment of a method of testing software can include, as performed by at least one computing device, evaluating a first criterion for a plurality of software components, selecting a subset of the plurality of software components based on the evaluated first criterion, evaluating a second criterion for a plurality of test cases each defining a respective test to evaluate functionality of at least one of the software components, selecting a subset of the plurality of test cases based on the evaluated second criterion, and testing the selected subset of the plurality of software components utilizing the selected subset of the plurality of test cases. | 12-31-2015 |
20160004623 | ROLE-ORIENTED TESTBED ENVIRONMENTS FOR USE IN TEST AUTOMATION - A configuration manager reads a testbed description file for a particular testbed environment under test to identify multiple roles each specified by a separate role identifier. The configuration manager instantiates, for each separate role identifier defined in the test script, a separate role identifier entity referring to a separate host description file for a separate host assigned to the separate role identifier in the testbed description file. The configuration manager manages an abstraction between each separate role identifier referred to by each separate operation in the test script and each separate host currently hosting each separate role identifier for the testbed environment using the separate host description file instantiated for the separate role identifier entity for the separate host. | 01-07-2016 |
20160004629 | USER WORKFLOW REPLICATION FOR EXECUTION ERROR ANALYSIS - Examples of workflow replication and execution error analysis are provided herein. Data describing how a user interacts with a software application and describing the context within which the user is working is recorded and provided to a user workflow replication system when an execution error occurs. A simulation of the execution error can be performed by replicating a configuration of the software application and/or the computer system that executed the software application and then performing functions specified in the provided data. The results of the simulation of the execution error can then be analyzed according to a number of scenarios. | 01-07-2016 |
20160011957 | REDUCING RESOURCE OVERHEAD IN VERBOSE TRACE USING RECURSIVE OBJECT PRUNING PRIOR TO STRING SERIALIZATION | 01-14-2016 |
20160011959 | EVENT-DRIVEN SOFTWARE TESTING | 01-14-2016 |
20160019134 | ERROR ASSESSMENT TOOL - Embodiments of the invention are directed to a system, method, and computer program product for assessing error notifications associated with one or more application functions. An exemplary embodiment includes receiving an indication of an error associated with at least one function in an application; extracting information associated with the application from one or more sources; and initiating a presentation of a second user-interface to enable a user to resolve the error, wherein the second user-interface comprises at least one of an aggregation of the information extracted from the one or more sources. | 01-21-2016 |
20160026554 | INDICATING A READINESS OF A CHANGE FOR IMPLEMENTATION INTO A COMPUTER PROGRAM - A fix defining at least one unique change to at least a portion of a computer program can be identified. The fix can be applied to the computer program to generate a test version of the computer program. As each of the unique changes is applied, program code units in the computer program that changed can be identified. A number of test cases available to test the program code units changed can be determined by matching each of the program code units changed to corresponding data entries. A test readiness index indicating a readiness of the fix to be tested can be generated. The test readiness index can be based on a number of unique changes to the computer program defined by the fix and a number of test cases available to test the unique changes to the computer program defined by the fix. The test readiness index can be output. | 01-28-2016 |
20160026559 | INDICATING A READINESS OF A CHANGE FOR IMPLEMENTATION INTO A COMPUTER PROGRAM - A fix defining at least one unique change to at least a portion of a computer program can be identified. The fix can be applied to the computer program to generate a test version of the computer program. As each of the unique changes is applied, program code units in the computer program that changed can be identified. A number of test cases available to test the program code units changed can be determined by matching each of the program code units changed to corresponding data entries. A test readiness index indicating a readiness of the fix to be tested can be generated. The test readiness index can be based on a number of unique changes to the computer program defined by the fix and a number of test cases available to test the unique changes to the computer program defined by the fix. The test readiness index can be output. | 01-28-2016 |
20160026560 | Functional Test Automation for Gesture-Based Mobile Applications - A method for cloud-based functional testing of a mobile application includes running a functional test program on a server. The functional test program provides a graphical user interface (GUI) that allows a user to select a mobile application and a mobile computing device having a touch-sensitive display screen for receiving user input. The mobile computing device is located remote to the server. The functional test program launches the mobile application on the mobile computing device via a wireless network connection. The server receives precision elements of each gesture-based input on the touch-sensitive display screen, the precision elements being captured and transmitted from the mobile computing device to the server during execution of the mobile application. The precision elements of each gesture-based input are then recorded in a test clip. | 01-28-2016 |
20160026561 | RESOLVING NONDETERMINISM IN SYSTEM UNDER TEST BEHAVIOR MODELS - Methods, systems, and computer-readable storage media for resolving nondeterminism in a behavior model of a computing system under test (SUT). In some implementations, actions include: receiving a behavior model relating to a SUT, the behavior model including two or more nondeterministic transitions; obtaining trace data associated with execution of the SUT across the two or more nondeterministic transitions; determining based on the trace data, two or more transition guards that resolve nondeterminism of the two or more nondeterministic transitions; and associating the two or more transition guards with the two or more nondeterministic transitions to provide an extended behavior model. | 01-28-2016 |
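The abstract above describes deriving transition guards from trace data to resolve nondeterminism in a behavior model. One simple way to realize that step, collecting the variable values under which each candidate transition was observed to fire, can be sketched in Python (the trace shape and single-variable guard are illustrative assumptions, not details from the publication):

```python
from collections import defaultdict

def infer_guards(trace):
    """Infer a guard per transition from execution traces of the SUT.

    trace: list of (variable_value, transition_id) observations taken at
    a nondeterministic choice point in the behavior model.

    The inferred guard for a transition is the set of values under which
    it was observed to fire; if the sets are disjoint, associating them
    with the transitions resolves the nondeterminism.
    """
    guards = defaultdict(set)
    for value, transition in trace:
        guards[transition].add(value)
    return dict(guards)
```

Disjointness of the returned value sets would then be checked before attaching the guards to the extended behavior model.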
20160026562 | SYSTEM AND METHOD FOR TESTING SOFTWARE - A computer-implemented method, computer program product, and system is provided for testing software. In an implementation, a method may include executing at least one test group during testing of a software application in a multi-platform testing environment. The method may also include detecting an error in the software application based upon, at least in part, execution of the at least one test group. The method may further include resolving the error during execution of the at least one test group in the multi-platform testing environment. | 01-28-2016 |
20160034337 | Failure Mode Identification and Reporting - When a software component is starting, such as but not limited to a task or a subtask, the component pushes its identification (ID) onto a stack. The component executes its other instructions. If the component completes its instructions so that it can terminate normally, it pops the stack, which removes its ID from the stack. If the component fails, such as by not being able to complete its instructions, it will not be able to pop the stack so its ID will remain in the stack. Another software process can read the IDs in the stack to identify which components have failed and can automatically take a specified action, such as by sending an email message to, sending a text message to, or calling by telephone, a person or persons responsible for that software component. | 02-04-2016 |
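The push-on-start, pop-on-clean-exit mechanism described in the abstract above is straightforward to sketch; this minimal Python version (all names hypothetical) shows how IDs left on the stack after a run identify failed components:

```python
class FailureStack:
    """Components push their ID on start and pop on normal termination.
    IDs still present after a run identify the components that failed."""

    def __init__(self):
        self._ids = []

    def enter(self, component_id):
        self._ids.append(component_id)

    def exit(self):
        self._ids.pop()

    def failed_components(self):
        return list(self._ids)

def run_component(stack, component_id, work):
    """Execute a component's work between push and pop."""
    stack.enter(component_id)
    work()        # if this raises, exit() below is never reached
    stack.exit()  # normal termination removes the ID
```

A monitoring process could read `failed_components()` and notify the owners of the remaining IDs, as the abstract suggests.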
20160034378 | METHOD AND SYSTEM FOR TESTING PAGE LINK ADDRESSES - Testing page link addresses is disclosed including searching in a page to locate a link having an empty link address based on empty link attribute features, performing simulated triggering on the located link having the empty link address, determining whether the empty link address within the located link opens a new page upon the simulated triggering to obtain a determination result, and determining whether the located link having the empty link address was erroneously set as an empty link based on the determination result. | 02-04-2016 |
20160041897 | GENERATION OF AUTOMATED UNIT TESTS FOR A CONTROLLER LAYER SYSTEM AND METHOD - A method, computer program product, and computer system for receiving, by a computing device, a selection of one or more files for which to have one or more automated unit tests generated for an application under test. An action in the application under test is received while the application under test is used. Behavior data of how the application under test responds to the action is tracked. An automated unit test of the one or more automated unit tests is generated for underlying code of the application under test invoked when receiving the action based upon, at least in part, the behavior data of how the application under test responds to the action. | 02-11-2016 |
20160041898 | GENERATION OF AUTOMATED UNIT TESTS FOR A CONTROLLER LAYER SYSTEM AND METHOD - A method, computer program product, and computer system for receiving, by a computing device, a selection of one or more files for which to have one or more automated unit tests generated for an application under test. An action in the application under test is received while the application under test is used. Behavior data of how the application under test responds to the action is tracked. An automated unit test of the one or more automated unit tests is generated for underlying code of the application under test invoked when receiving the action based upon, at least in part, the behavior data of how the application under test responds to the action. | 02-11-2016 |
20160041899 | GUIDED REMEDIATION OF ACCESSIBILITY AND USABILITY PROBLEMS IN USER INTERFACES - A user interface is analyzed to identify a problem element in the user interface. A problem in the user interface is related to an initial value of an attribute of the problem element. A changed value of the attribute of the problem element is computed. A determination is made that the changed value satisfies a compliance rule applicable to the user interface. A first record and a second record are selected from a historical data. The first record includes a first value of a metric usable with the user interface, and the second record includes a second value of the metric. A difference between the second value and the first value is associated with the changed value as an expected change in the metric due to the changed value. The changed value and the expected change in the metric are presented as a remedy for the problem. | 02-11-2016 |
20160041900 | TESTING INTEGRATED BUSINESS SYSTEMS - Methods, systems, and computer readable media are disclosed to test a first business system and a second business system. A test of the first business system is performed, wherein the first business system is integrated with the second business system. One or more calls are recorded from the first business system to the second business system during the test of the first business system. The one or more calls from the first business system are identified for a test of the second business system. | 02-11-2016 |
20160055072 | METHOD, DEVICE, AND PROGRAM STORAGE DEVICE FOR AUTONOMOUS SOFTWARE LIFE CYCLE MANAGEMENT - A method of searching for and installing a software product on a device is provided. One or more capabilities needed by the device to be served by a software product are determined. The one or more capabilities needed by the device are communicated from a software life cycle management agent on the device to a yellow pages agent outside the device, the communicating comprising formulating a request comprising a list of the capabilities encoded in a description language that defines the capabilities semantically. Then locations of one or more software products matching the one or more capabilities needed by the device may be received from the yellow pages agent. One of the one or more software products to install may be selected based on automatically evaluated criteria. Then the selected software product may be downloaded using its received location, and the selected software product may be installed on the device. | 02-25-2016 |
20160055077 | METHOD, DEVICE, AND PROGRAM STORAGE DEVICE FOR AUTONOMOUS SOFTWARE PRODUCT TESTING - A method of testing a software product is performed. The software product is downloaded to a sandbox located on a device, the sandbox constructed so that actions taken by software inside the sandbox do not affect operations of modules on the device located outside of the sandbox. Information about the software product is obtained. Then one or more test libraries are automatically generated, based on the information, each of the test libraries containing one or more executable functions to test the software product. Then the software product is tested in the sandbox using the one or more test libraries and test data, producing test results, wherein the testing includes obtaining information from one or more components of the device outside of the sandbox. Based at least on the test results, it is determined that the software product should be installed fully on the device. | 02-25-2016 |
20160062810 | METHODS AND APPARATUS FOR DETECTING SOFTWARE INTERFERENCE - The present application relates to an apparatus for detecting software interference and a method of operating the same. A processor and at least one shared resource form a computing shell to execute a first, functional safety critical application and at least one second application in time-shared operation. One or more performance counters are provided to adjust a counter value in response to a performance related event. A reference value storage stores one or more threshold values, each of which is associated with one of the performance counters. A comparator receives the performance counter values, compares the performance counter values with the respective threshold values and generates at least one comparison signal in response to results of the comparisons. An interference indication generator receives the at least one comparison signal and generates at least one interference indication in response to the at least one received comparison signal. | 03-03-2016 |
20160062877 | GENERATING COVERAGE METRICS FOR BLACK-BOX TESTING - Generating coverage metrics for black-box testing includes performing static analysis of a program code to be tested. The static analysis includes identifying variables whose value depends on inputs of the program code. Code blocks are inserted into the program code to be tested. The code blocks insert vulnerabilities into the code at locations where the variables are modified. The code blocks violate one or more properties to be tested. A testing scan is applied to the program code and vulnerabilities are located by the test. A coverage metric is output based on the ratio of the located vulnerabilities to the total number of inserted vulnerabilities in the program code. | 03-03-2016 |
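The coverage metric described in the abstract above reduces to a ratio of located to seeded vulnerabilities. A minimal Python sketch of that final step (the function name and argument shapes are illustrative assumptions, not details from the publication):

```python
def coverage_metric(located, inserted):
    """Coverage of a black-box scan over seeded vulnerabilities.

    located: IDs of vulnerabilities the testing scan reported.
    inserted: IDs of all vulnerabilities the static-analysis pass
        inserted at input-dependent locations in the program code.

    Returns the fraction of seeded vulnerabilities the scan found.
    """
    if not inserted:
        raise ValueError("no vulnerabilities were inserted")
    found = set(located) & set(inserted)  # ignore unrelated findings
    return len(found) / len(set(inserted))
```

A scan that finds two of four seeded vulnerabilities would thus receive a coverage metric of 0.5.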
20160062879 | TESTING A MOBILE APPLICATION - The present invention discloses a manager, a test agent installed on a personal mobile device, and methods thereof. The manager comprises: a first network connection module configured to establish a connection with the mobile device through the Internet, the mobile device being installed with a test agent for performing test operations on a mobile application on the mobile device; and a security module configured to communicate with the test agent through the first network connection module to make the test agent perform security control on the mobile device. According to the manager, mobile devices, and methods of the present invention, costs such as data center maintenance costs and mobile device purchase costs can be reduced dramatically. It is not necessary to analyze market demand, since the mobile devices owned by their users are the very devices that the tester needs to test. | 03-03-2016 |
20160062880 | Methods and Systems for the Use of Synthetic Users To Performance Test Cloud Applications - A method and system for testing the end-to-end performance of cloud based applications. Real workload is created for the cloud based applications using synthetic users. The load and length of demand may be adjusted to address different traffic models allowing the measurement and analysis of user performance metrics under specified conditions. | 03-03-2016 |
20160070638 | AUTOMATED DEBUG TRACE SPECIFICATION - Debugging target software by: (i) generating a first log file set, including at least one log file, based upon how the computer hardware set executes the instructions of the computer software set; (ii) responsive to a first error in the execution of the computer software set, examining at least a portion of the first log file set; and (iii) creating, based at least in part upon the examination of the first log file set, augmented logging instructions for generating augmented logging information, which is helpful for debugging. | 03-10-2016 |
20160085663 | AUTOMATIC IDENTIFICATION OF SOFTWARE TEST CASES - A method for identifying test cases for software testing is disclosed. The method receives a test case of a plurality of test cases associated with a software application. The test case includes a test input for processing by the software application. The test input is designed for verifying compliance with a specific requirement. The method further generates mapping data for the test case. The mapping data associates one or more parts of a source code of the software application to the test case. | 03-24-2016 |
20160085664 | GENERATING A FINGERPRINT REPRESENTING A RESPONSE OF AN APPLICATION TO A SIMULATION OF A FAULT OF AN EXTERNAL SERVICE - Examples disclosed herein relate to generating a fingerprint representing a response of an application to a simulation of a fault of an external service. Examples include causing simulation of a fault of an external service in a simulation of the external service, and generating a testing application fingerprint representing a response of an application to the simulation of the fault of the external service during the testing of the application. | 03-24-2016 |
20160085665 | INTELLIGENT SOFTWARE TEST AUGMENTING - Augmenting a software module test suite is provided, which includes: providing a test suite including test cases for a module to convert an N-dimensional space into an output space, where N≧2, the cases covering a first portion of the N-dimensional space; exploring the N-dimensional space by repeating: partitioning a further portion of the N-dimensional space by exploring a partition of the further portion, the partition including partition boundaries defined by a constant value of one of the N input values, each partition having a partition boundary bordering one of the test cases; evaluating the partition and generating a further test case if the evaluation reveals that the partition is not covered by the test cases; and adding the partition to the first portion; until the further portion has been explored or a termination criterion is met; and producing an augmented test suite including the generated further test cases. | 03-24-2016 |
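The partition-and-cover idea in the abstract above can be illustrated for N = 2 with axis-aligned partitions: any partition no existing test case falls into yields a new test case. This is a simplified sketch under those assumptions (centre-point generation and the box representation are not details from the publication):

```python
def augment_suite(cases, partitions):
    """Add a test case for every uncovered partition of a 2-D input space.

    cases: list of (x, y) test inputs already in the suite.
    partitions: list of ((xlo, xhi), (ylo, yhi)) boxes produced by
        splitting the input space at constant values of one input.

    For each partition that contains no existing case, a new case at the
    partition's centre point is appended to the suite.
    """
    new_cases = []
    for (xlo, xhi), (ylo, yhi) in partitions:
        covered = any(xlo <= x < xhi and ylo <= y < yhi for x, y in cases)
        if not covered:
            new_cases.append(((xlo + xhi) / 2, (ylo + yhi) / 2))
    return cases + new_cases
```

In a real implementation the exploration would be recursive and would stop on a termination criterion, as the abstract describes.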
20160085666 | AUTO-DEPLOYMENT AND TESTING OF SYSTEM APPLICATION TEST CASES IN REMOTE SERVER ENVIRONMENTS - A method for executing a system application test case of a runtime system in a server integrated environment is provided. The method includes establishing a transmission control protocol connection between a client development environment and a server integrated environment, to initiate execution of the system application test case in the server integrated environment. The method further includes issuing a data transfer protocol transmission request to the server integrated environment for a description script of the system application test case. The method further includes transmitting an extensible markup language of the requested description script. The method further includes issuing a data transfer protocol transmission request to execute a test of the system application test case. The method further includes executing the system application test case in the server integrated environment. The method further includes transmitting the extensible markup language document of the compiled test results to the client development environment. | 03-24-2016 |
20160092346 | COVERAGE GUIDED TECHNIQUE FOR BUG FINDING IN CONTROL SYSTEMS AND SOFTWARE - A computer-implemented method for automatically identifying a faulty behavior of a control system. The method includes receiving, at a test processor, a description of the faulty behavior. The method also includes selecting, using the test processor, a goal state based on a heuristic decision. The method also includes selecting, using the test processor, a selected system state. The method also includes selecting, using the test processor, a selected variable to the control system based on the goal state. The method also includes loading, from a memory, a control model of the control system. The method also includes performing, using the test processor, a simulation of the control model using the selected variable and the selected system state as parameters of the simulation. The method also includes determining, using the test processor, whether the faulty behavior was observed based on the simulation. | 03-31-2016 |
20160098340 | METHOD AND SYSTEM FOR COMPARING DIFFERENT VERSIONS OF A CLOUD BASED APPLICATION IN A PRODUCTION ENVIRONMENT USING SEGREGATED BACKEND SYSTEMS - An application is implemented in the production environment in which the application will be used. Two or more backend systems are used to implement different versions of the application using the production environment in which the application will actually be used and accessed. Actual user data is received. A first portion of the actual user data is routed and processed in the production environment using a first version of the application and a first backend system of the two or more backend systems. A second portion of the actual user data is also routed and processed in the production environment but using a second version of the application and a second backend system of the two or more backend systems. The results data is then analyzed to evaluate the various versions of the application in the production environment. | 04-07-2016 |
20160098341 | WEB APPLICATION PERFORMANCE TESTING - A system for performance testing a web application initializes, in response to a request, a subset of methods of the web application to be instrumented, and then tests the application based on the subset of methods. The system generates an instrumented call tree and corresponding stack traces for each request in response to the testing, and determines one or more methods that take longer than a predetermined time period to execute using the instrumented call trees and the stack traces. The system then determines additional methods to be tested and adds the determined additional methods to the subset of methods and repeats the testing. | 04-07-2016 |
20160098343 | SYSTEM AND METHOD FOR SMART FRAMEWORK FOR NETWORK BACKUP SOFTWARE DEBUGGING - A system for network software debugging comprises a processor, an input interface, and an output interface. The processor is configured to determine a set of available components of a selected component type, and determine a set of backup processes running on the component. The input interface is configured to receive a selection of a backup process of the set of backup processes. The output interface is configured to provide an indication of a change of verbosity level. | 04-07-2016 |
20160103756 | APPLICATION ARCHITECTURE ASSESSMENT SYSTEM - A system stores a plurality of chapters, a plurality of sections each associated with a chapter, a plurality of control points each associated with a section, a plurality of assessment points each associated with a control point, and a plurality of attributes each associated with an assessment point. The system retrieves application information corresponding to an application. The system determines that one of the plurality of stored attributes applies to the application and assigns an attribute score to the application based on the determination. The system calculates various scores based on the attribute score and other scores including, an assessment point score, a control point score, a section score, and a chapter score. Based at least in part upon at least one of these scores, the system determines a strength of the application. | 04-14-2016 |
20160110281 | SYSTEM AND METHOD FOR DEBUGGING FIRMWARE/SOFTWARE BY GENERATING TRACE DATA - A method for debugging firmware/software by generating trace data includes the following steps: running a debug module in a power-on stage in a test system, to record a load address and a branch instruction execution record set of a tested module into an area for temporary storage; accessing, by an analyzer, in an operating system stage in the area for temporary storage, the load address and the branch instruction execution record set and accessing a program debug symbol table, where the program debug symbol table is generated when source program code is compiled; and finding, by the analyzer, an original source file, a function name, and line numbers of executed codes from the program debug symbol table according to the load address and the branch instruction execution record set to generate an analysis report that includes a program execution path and a program code coverage. | 04-21-2016 |
20160117235 | SOFTWARE AUTOMATION AND REGRESSION MANAGEMENT SYSTEMS AND METHODS - An automation and regression management method for testing software in a highly-complex cloud-based system with a plurality of nodes, through an automation and regression management system, includes receiving a plurality of requests for automated test runs on nodes in the highly-complex cloud-based system; managing the plurality of requests by either starting an automated test run on a node or queuing the automated test run if another automated test run is already operating on the node; determining details of each of the automated test runs subsequent to completion; storing the details of each of the automated test runs in a database; and providing the details of each of the automated test runs to a requesting user. | 04-28-2016 |
20160124835 | DIAGNOSTIC WORKFLOW FOR PRODUCTION DEBUGGING - A diagnostic workflow file can be used to control the future diagnostic actions taken by a debugger without user interaction with the debugger when it executes. The diagnostic workflow file is used by a debugger during a debug session. The debugger performs the actions directed by the diagnostic workflow file to simulate an interactive live debug session. The diagnostic workflow file can include conditional diagnostic operations whose execution depends on the state of program variables, diagnostic variables and diagnostic primitives in the debug session. | 05-05-2016 |
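The abstract above describes a workflow file of conditional diagnostic operations that a debugger executes without user interaction. The control structure reduces to evaluating each step's condition against the session state; a minimal Python sketch (the step representation and state keys are illustrative assumptions, not details from the publication):

```python
def run_workflow(steps, state):
    """Execute the conditional steps of a diagnostic workflow.

    steps: list of (condition, action) pairs, as might be parsed from a
        workflow file; each condition is evaluated against the program
        and diagnostic variables in `state`.
    state: dict of variable names to current values in the debug session.

    Returns the results of the actions whose conditions held, in order.
    """
    actions_taken = []
    for condition, action in steps:
        if condition(state):
            actions_taken.append(action(state))
    return actions_taken
```

A real workflow file would encode the conditions and actions declaratively rather than as callables, but the evaluation loop is the same.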
20160132418 | OPTIMIZED GENERATION OF DATA FOR SOFTWARE PROBLEM ANALYSIS - A computer optimizes the prospective generation of data used for analysis of a software problem. The computer generates data in accordance with data generation parameters and a software problem is analyzed with reference to the data so generated. The problem analysis produces a report that details specifics of the software problem, the data that was available for analysis, a flag to indicate success or failure of the analysis to identify a root cause, and information about whether the data supplied was insufficient, sufficient, or superfluous with respect to identifying a root cause of the software problem. The method then uses the analysis report to modify the data generation parameters, thereby iteratively optimizing the data that are generated for analysis of subsequent software problems. | 05-12-2016 |
20160132419 | OPTIMIZED GENERATION OF DATA FOR SOFTWARE PROBLEM ANALYSIS - A computer optimizes the prospective generation of data used for analysis of a software problem. The computer generates data in accordance with data generation parameters and a software problem is analyzed with reference to the data so generated. The problem analysis produces a report that details specifics of the software problem, the data that was available for analysis, a flag to indicate success or failure of the analysis to identify a root cause, and information about whether the data supplied was insufficient, sufficient, or superfluous with respect to identifying a root cause of the software problem. The method then uses the analysis report to modify the data generation parameters, thereby iteratively optimizing the data that are generated for analysis of subsequent software problems. | 05-12-2016 |
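The two abstracts above describe an iterative loop in which the analysis report's verdict on the supplied data (insufficient, sufficient, or superfluous) feeds back into the data generation parameters. The feedback step might look like this sketch (the parameter name and report keys are hypothetical, not from the publications):

```python
def tune_parameters(params, report):
    """Adjust data generation parameters from an analysis report.

    params: data generation parameters, here a single hypothetical
        trace_level controlling how much diagnostic data is produced.
    report: analysis report containing, among other details, a verdict
        on whether the supplied data sufficed to identify a root cause.

    Insufficient data raises the trace level; superfluous data lowers
    it; sufficient data leaves the parameters unchanged.
    """
    verdict = report["data_supplied"]
    if verdict == "insufficient":
        params["trace_level"] += 1
    elif verdict == "superfluous":
        params["trace_level"] = max(0, params["trace_level"] - 1)
    return params
```

Repeating this after each analyzed problem iteratively converges the generated data toward what root-cause analysis actually needs.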
20160132422 | SYSTEM AND METHOD FOR DETERMINING REQUIREMENTS FOR TESTING SOFTWARE - A computer-implemented method, computer program product, and system is provided for determining requirements for testing software. In an implementation, a method may include inspecting contents of a test case, including source code of the test case. The method may also include identifying at least one of: at least one characteristic of a test machine and at least one characteristic of a resource required to execute the test case correctly. The method may further include compiling a list of requirements for the test case to execute correctly based upon, at least in part, the at least one of the at least one characteristic of the test machine and the at least one characteristic of the resource. | 05-12-2016 |
20160132423 | SYSTEM AND METHOD FOR DETERMINING REQUIREMENTS FOR TESTING SOFTWARE - A computer-implemented method, computer program product, and system is provided for determining requirements for testing software. In an implementation, a method may include inspecting contents of a test case, including source code of the test case. The method may also include identifying at least one of: at least one characteristic of a test machine and at least one characteristic of a resource required to execute the test case correctly. The method may further include compiling a list of requirements for the test case to execute correctly based upon, at least in part, the at least one of the at least one characteristic of the test machine and the at least one characteristic of the resource. | 05-12-2016 |
20160132426 | AUTOMATED GENERATION OF SCRIPTED AND MANUAL TEST CASES - Systems and methods that provide manual test cases and scripted test cases automatically based on metadata included in a software application. In an embodiment, an application may include elements that generate an output file containing information corresponding to one or more forms with one or more fields in an application. The information may be utilized by a test device or application to automatically generate manual test cases, automated scripted test cases, or a combination of manual and automated test cases based on the information. In an embodiment, a manual test case may include a sequence of instructions in a natural language format. In an embodiment, an automated test case may be in a script language configured to interact with the application or an appropriate application emulator. | 05-12-2016 |
20160140016 | EVENT SEQUENCE CONSTRUCTION OF EVENT-DRIVEN SOFTWARE BY COMBINATIONAL COMPUTATIONS - According to an aspect of an embodiment, a method may include determining event sequences of an event-driven software application. The method may further include determining, for each event sequence, a distance with respect to each of one or more target conditions of the event-driven software application. The event sequence distance may indicate a degree to which execution of its corresponding event sequence satisfies a corresponding target condition. The method may also include prioritizing execution of the plurality of event sequences based on the event sequence distances. Further, the method may include exploring, according to the prioritization of execution, an event space that includes one or more of the event sequences and a dependent event that corresponds to the one or more target conditions. | 05-19-2016 |
20160140017 | USING LINKED DATA TO DETERMINE PACKAGE QUALITY - Arrangements described herein relate to determining a quality of a software package. Via linked data, the software package can be linked to at least one test plan and a requirement collection. The software package can be executed in accordance with the test plan using at least one test case. At least one test result of the execution of the software package can be generated. A score can be assigned to the test result and a score can be assigned to the test case based at least on the test result. Based at least on the scores assigned to the test result and the test case, a package quality score can be assigned to the software package. | 05-19-2016 |
20160140020 | REQUEST MONITORING - A method of monitoring requests to a code set is provided, which includes: receiving a request to the code set; creating a trace for the request, the trace defining the path of the request through the code set; accessing a plurality of stored trace patterns, each stored trace pattern defining an acceptable path of a request through the code set; comparing the created trace to the stored trace patterns; and storing the created trace if it does not match one of the stored trace patterns. | 05-19-2016 |
20160140026 | Systems and Methods for Selection of Test Cases for Payment Terminals - The present disclosure proposes a computer implemented method for selecting test cases to be executed on a terminal by creating a configuration code and applying this code to a set of test case selection tuples. The present disclosure also proposes a method for automatically creating a set of test case selection tuples, taking a source code as an input. The created set of test case selection tuples can be used in the above-mentioned method for selecting test cases. Finally, the present disclosure proposes a method for operating a program for selecting test cases having a user interface and a selection logic. The program may apply the above-mentioned method for selecting test cases by creating a configuration code and applying this code to a set of test case selection tuples. | 05-19-2016 |
20160147634 | METHOD AND APPARATUS FOR OBTAINING CONSTRAINTS ON EVENTS - Method and apparatus for obtaining constraints on events. The method includes: obtaining a correspondence between a goal and multiple candidate constraints associated with the event from multiple event sequences including the event, wherein each event sequence among the multiple event sequences is a series of historical events that are executed for achieving the goal; identifying an impact on the goal of at least one part of candidate constraints among the multiple candidate constraints based on the correspondence; and in response to a metric of the impact satisfying a predefined condition, determining the at least one part of candidate constraints as the constraint. An apparatus for determining a constraint on an event and a method and apparatus for generating a Case Management Model from multiple event sequences are also provided. | 05-26-2016 |
20160147635 | Unit Test Infrastructure Generator - A method, a system, and a computer program product for generating test infrastructure for testing of software applications are disclosed. At least one first method associated with an application is determined. A testing version of a second method associated with the application is generated. The first method calls a runtime version of the second method during execution of the application in a runtime environment. The first method is tested using the testing version of the second method in a testing environment associated with the application. | 05-26-2016 |
20160147636 | ENHANCED RESILIENCY TESTING BY ENABLING STATE LEVEL CONTROL FOR REQUEST - A computer implemented method for testing the resiliency of a software application. The computer implemented method can test the resiliency of a software application by monitoring the program state of the software application and triggering a shutdown request when a specified program state has been reached. The shutdown request can be transmitted to the application software and executed to shut down one or more functionalities of the software application. In some examples, the functionality to shut down and the program state at which the shutdown occurs can be specified in an application configuration file. | 05-26-2016 |
20160147646 | METHOD AND SYSTEM FOR EXECUTING AUTOMATED TESTS IN AN INTEGRATED TEST ENVIRONMENT - This technology relates to a method and system for executing automated tests in an integrated test environment comprising a plurality of test environments. The test management module configured in the system creates one or more test sets by grouping the one or more test cases received from the input module. The control module determines the status of the test environment for executing each test set. If the test environment is available, the corresponding test set is executed; if the test environment is not available, an order of execution of the test sets is rearranged. The status of the test environment is checked after a predetermined time interval and, if the test environment is still not available, the control module determines the availability of the virtual response for providing virtual service. If the test environment remains unavailable, the control module creates a ticket indicating failure of the test environment. | 05-26-2016 |
20160147647 | Method And System Of Testing Software Using Real Time Replication - Method and system of testing software using real time replication. At least some illustrative examples include interacting by a human tester with a first software program executed on a first computer system. The interacting causes an operation to be performed on the first software program, and the operation is duplicated on a second software program executed on a second computer system. The duplication on the second computer system is done programmatically in real time with the interacting and the duplicating on the first computing system. A result of the operation on the first computer system is programmatically compared, on the second computing system, against a result of the operation on the second computer system. The human tester is notified when the result of the operation on the second computer system is unexpected. | 05-26-2016 |
20160162385 | CORRELATION OF VIOLATING CHANGE SETS IN REGRESSION TESTING OF COMPUTER SOFTWARE - Embodiments of the invention provide for the correlation of violating change sets during regression testing of a computer program. A method of the invention includes annotating a test case with a reference to logical operations of different programmatic objects of a computer program. Thereafter, change sets are applied to the program and the test case is executed by a development environment such as a debugger to a point of failure. It is then determined from the annotations which change sets relate to the logical operations, and different ones of the determined change sets are sequentially replaced and the test case repeatedly re-executed. As such, the ones of the replaced change sets resulting in failure upon re-execution of the test case are determined to be violating change sets. | 06-09-2016 |
20160162392 | Adaptive Framework Automatically Prioritizing Software Test Cases - An automated, self-adaptive framework prioritizes software testing in a consistent and effective manner. A metric evaluates past test execution information for assigning regression testing priority. The metric may be calculated with reference to one or more of the following factors taken in combination: requirement, coverage, history, and cost. The requirement factor considers customer-assigned priority of testing the code, complexity of implementing the code, and proneness of the code to faults. The coverage factor considers code coverage, feature coverage, and common usage rate. The history factor considers previous bug found rate, case stable rate, and priority to calculate. The cost factor considers test case execution time, and step length. A value of each factor for one test case is measured according to that test case and is not related to other test cases. The calculation result representing the metric for each test case determines a priority of the test case. | 06-09-2016 |
20160162398 | AUTOMATED TEST GENERATION AND EXECUTION FOR TESTING A PROCESS TO CONTROL A COMPUTER SYSTEM - User interactions with a computing system are sensed and recorded. The recording represents a process for controlling a computer system. The computer system actions that are taken based upon the sensed user interactions are also recorded. The recording is parsed and a test for testing the recorded process is generated and automatically executed from the recording. | 06-09-2016 |
20160170822 | HIGH-VOLUME DISTRIBUTED SCRIPT ERROR HANDLING | 06-16-2016 |
20160170868 | METHOD AND APPARATUS FOR THE AUTOMATED TESTING OF A SUBSYSTEM OF A SAFETY CRITICAL SYSTEM | 06-16-2016 |
20160179652 | INTEGRATED PRODUCTION SUPPORT | 06-23-2016 |
20160188445 | CONDUCTING PERFORMANCE SNAPSHOTS DURING TEST AND USING FEEDBACK TO CONTROL TEST BASED ON CUSTOMER EXPERIENCE PARAMETERS - The technology disclosed enables understanding the user experience of accessing a web page under high loads. A testing system generates a simulated load by retrieving and loading a single web object. A performance snapshot is taken of accessing an entire web page from the server under load. The performance snapshot may be performed by emulating a browser accessing a web page's URL, the web page comprising multiple objects that are independently retrieved and loaded. The simulated load is configured with a number of users per region of the world where the user load will originate, and a single object from the web page to retrieve. Performance data such as response time for the single object retrieved, number of hits per second, number of timeouts per second, and errors per second may be recorded and reported. An optimal number of users may be determined to achieve a target user experience goal. | 06-30-2016 |
20160188450 | AUTOMATED APPLICATION TEST SYSTEM - An automated application test system comprises a plurality of clients | 06-30-2016 |
20160196201 | MODULE SPECIFIC TRACING IN A SHARED MODULE ENVIRONMENT | 07-07-2016 |
20160203037 | Second Failure Data Capture in Co-Operating Multi-Image Systems | 07-14-2016 |
20160203074 | SYSTEM TO ENABLE MULTI-TENANCY TESTING OF BUSINESS DATA AND VALIDATION LOGIC ON THE CLOUD | 07-14-2016 |
20160378618 | RISK FORMULA FOR ERRONEOUS SOFTWARE COMPONENTS DETECTION - A method for performing software error detection and prediction. The method includes identifying a plurality of software components in a computer software product. For each of the software components of the plurality of software components, the risk-relevant historical data pertaining to the respective software component is measured, then classified into at least a set of risk-increasing data and a set of risk-decreasing data. The set of risk-increasing data and the set of risk-decreasing data are then normalized, and a failure risk value for the respective software component is calculated by subtracting a weighted sum of the normalized values for the risk-decreasing data from a weighted sum of the normalized values for the risk-increasing data. | 12-29-2016 |
20160378644 | FLEXIBLE CONFIGURATION AND CONTROL OF A TESTING SYSTEM - A method is provided to achieve high test coverage through a large number of test cases with a minimum number of test programs. Tests are performed flexibly in various environments, using parameters in multiple dimensions. The parameters can be dynamically extracted from the machine or simulator, either by controlling scripts or by the test program itself. Multiple ways are offered to execute subsets of the test combinations. | 12-29-2016 |
20160378645 | GENERATING DATA TABLES - The method includes identifying a first data table that includes a set of rows and a structure. The method further includes creating a second data table and a third data table having a structure matching that of the first data table. The method further includes distributing the set of rows of the first data table, wherein the set of rows is distributed between one or more of the second data table and the third data table based upon preset parameters. The method further includes generating one or more operations for the set of rows. The method further includes executing one of the one or more generated operations on the second data table and the third data table. | 12-29-2016 |
20160378646 | METHOD AND SYSTEM FOR GENERATING FUNCTIONAL TEST CASES FOR SOFTWARE SYSTEMS - A method and system are provided for automated generation of functional test cases for testing a software system. In an embodiment, the invention provides an expressive decision table (EDT), a requirement specification notation designed to reduce translation effort, and implements a novel scalable row-guided random algorithm with fuzzing (RGRaF) (pronounced R-graph) to generate test cases. The invention also implements two new coverage criteria targeted at requirements and requirement interactions. The invention also provides fuzzing at time boundaries to achieve scalability. According to an embodiment, the invention also generates an error in case the generated functional test case corresponds to a system property violation of the software system. According to another embodiment, the system can also reject the functional test case if there is an improbable condition of the software system. | 12-29-2016 |
20160378648 | DYNAMIC RANKING OF PERFORMANCE ISSUES FOR APPLICATIONS - Identification and dynamic ranking of performance issues. For an instance of a performance anti-pattern, identifying and recording information relating to a resultant performance issue, quantifying the magnitude of the performance issue, and dynamically ranking the performance issue against other performance issues. | 12-29-2016 |
20180024915 | USER INTERFACE AUTOMATION FRAMEWORK | 01-25-2018 |
20190146901 | COGNITIVE MANUFACTURING SYSTEMS TEST REPAIR ACTION | 05-16-2019 |
20190146903 | TEST CASE MANAGEMENT SYSTEM AND METHOD | 05-16-2019 |
20190146904 | Optimizing Execution Order of System Interval Dependent Test Cases | 05-16-2019 |
20220138023 | MANAGING ALERT MESSAGES FOR APPLICATIONS AND ACCESS PERMISSIONS - Managing alert messages and access permissions for applications. In one embodiment, a method is provided. The method includes determining that one or more errors have occurred in a set of applications executing in a set of containers. The method also includes identifying a set of users in view of one or more of the set of containers and a set of files for the set of applications. The method further includes sending, via a set of messaging systems, a set of messages to the set of users to indicate that the one or more errors have occurred in the set of applications. | 05-05-2022 |
20220138094 | COMPUTER-IMPLEMENTED METHOD AND TEST UNIT FOR APPROXIMATING A SUBSET OF TEST RESULTS - The invention relates to a computer-implemented method for approximating a subset of test results of a virtual test of a device for the at least partially autonomous guidance of a motor vehicle. The invention further relates to a test unit for approximating a subset of test results of a virtual test of a device for the at least partially autonomous guidance of a motor vehicle. The invention also relates to a computer program and a computer-readable data carrier. | 05-05-2022 |
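Of the entries above, 20160378618 (RISK FORMULA FOR ERRONEOUS SOFTWARE COMPONENTS DETECTION) states its computation explicitly: normalize the risk-increasing and risk-decreasing metrics, then subtract a weighted sum of the risk-decreasing values from a weighted sum of the risk-increasing values. The sketch below illustrates that arithmetic only; the function names, the choice of min-max normalization across components, and the example weights are illustrative assumptions, not details taken from the patent application.

```python
def minmax_normalize(column):
    """Min-max scale one metric's raw values (one per component) into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:  # constant metric carries no ranking information
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

def failure_risks(inc_columns, dec_columns, w_inc, w_dec):
    """Per-component failure risk: weighted sum of normalized risk-increasing
    metrics minus weighted sum of normalized risk-decreasing metrics.

    inc_columns / dec_columns: lists of metric columns, each column holding
    one raw value per software component, paired with weights w_inc / w_dec.
    """
    inc_norm = [minmax_normalize(col) for col in inc_columns]
    dec_norm = [minmax_normalize(col) for col in dec_columns]
    n = len(inc_columns[0])
    risks = []
    for i in range(n):
        inc = sum(w * col[i] for w, col in zip(w_inc, inc_norm))
        dec = sum(w * col[i] for w, col in zip(w_dec, dec_norm))
        risks.append(inc - dec)
    return risks

# Two components; e.g. change frequency and defect history as risk-increasing
# metrics (hypothetical), test coverage as risk-decreasing (hypothetical).
risks = failure_risks([[0, 10], [2, 2]], [[5, 0]], [1.0, 1.0], [0.5])
```

A higher value flags a component as more likely to be erroneous; how the weights themselves are chosen is outside what the abstract describes.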