26th week of 2016 patent application highlights part 50
Patent application number | Title | Published |
20160188357 | SOFTWARE APPLICATION PLACEMENT USING COMPUTING RESOURCE CONTAINERS - Embodiments associate software applications with computing resource containers based on a placement rule and a selected failure correlation. A placement rule indicates either that a first software application is to be co-located with a second software application during execution of the first and second software applications, or that the first software application is to be separated from the second software application during their execution. Failure correlations are determined for a plurality of computing resources associated with the first software application. A computing resource with a lowest failure correlation is selected from the plurality of computing resources, and the second software application is associated with the selected computing resource despite the association violating the placement rule. | 2016-06-30 |
20160188358 | METHOD FOR SHARING RESOURCE USING A VIRTUAL DEVICE DRIVER AND ELECTRONIC DEVICE THEREOF - A method of sharing a resource using a virtual device driver and an electronic device thereof are provided. The method includes generating a virtual device driver, which corresponds to a real device driver of a host electronic device, in the client electronic device, receiving a resource from the host electronic device by using the virtual device driver through a first communication mechanism designated in the host electronic device, and after the first communication mechanism is changed to a second communication mechanism designated in the host electronic device, receiving the resource from the host electronic device by using the virtual device driver. | 2016-06-30 |
20160188359 | LOCATION-AWARE VIRTUAL SERVICE PROVISIONING IN A HYBRID CLOUD ENVIRONMENT - A sense of location for distributed virtual switch components is provided in the service provisioning scheme to reduce the latency observed in conducting policy evaluations across a network in a hybrid cloud environment. A management application in a first virtual network subscribes to virtual network services provided by a second virtual network. A first message is sent to the second virtual network, the first message comprising information configured to start a virtual switch in the second virtual network that switches network traffic for one or more virtual machines in the second virtual network that are configured to extend services provided by the first virtual network into the second virtual network. A second message is sent to the second virtual network, the second message comprising information configured to start a virtual service node in the second virtual network that provides network traffic services for the one or more virtual machines. | 2016-06-30 |
20160188360 | REQUEST PROCESSING TECHNIQUES - A computer system implements a hypervisor which, in turn, implements one or more computer system instances and a controller. The controller and a computer system instance share a memory. A request is processed using facilities of both the computer system instance and the controller. As part of request processing, information is passed between the computer system instance and the controller via the shared memory. | 2016-06-30 |
20160188361 | SYSTEMS AND METHODS FOR DETERMINING DESKTOP READINESS USING INTERACTIVE MEASURES - Systems and methods described herein facilitate determining desktop readiness using interactive measures. A host is in communication with a server, and the host includes a virtual desktop and a virtual desktop agent. The virtual desktop agent is configured to perform one or more injecting events via one or more monitoring agents, wherein each of the injecting events is a simulated input device event. The desktop agent is further configured to receive, via a display module, a response to the injecting event(s), wherein the response is a display update causing pixel color values for the display module to change. The desktop agent is also configured to identify, via the monitoring agent(s), whether the response to the injecting event(s) is an expected response, and to determine, via the monitoring agent(s), a readiness of the virtual desktop based on the expected response. | 2016-06-30 |
20160188362 | LIBRARY APPARATUS FOR REAL-TIME PROCESS, AND TRANSMITTING AND RECEIVING METHOD THEREOF - A library transmission method for a real-time process in a client, which includes extracting a next target address (NextTargetAddress) from workflow data, and transmitting data to an agent of a library apparatus that corresponds to a corresponding target address. | 2016-06-30 |
20160188363 | METHOD, APPARATUS, AND DEVICE FOR MANAGING TASKS IN MULTI-TASK INTERFACE - A method for managing a task in a terminal device is provided. The method includes displaying a multi-task interface. The multi-task interface includes one or more task interfaces, where at least one of the task interfaces includes a task presenting area and a task operation area, and the task operation area includes an operating element for performing a function of the task. The method may further include, based on a user selection of the operating element in the task operation area, running an application corresponding to the task and executing an operation corresponding to the operating element. | 2016-06-30 |
20160188364 | DYNAMIC REDUCTION OF STREAM BACKPRESSURE - Techniques are described for eliminating backpressure in a distributed system by changing the rate at which data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. | 2016-06-30 |
20160188365 | COMPUTATIONAL UNIT SELECTION - A system and method for computing including compute units to execute a computing event, the computing event being a server application or a distributed computing job. A power characteristic or a thermal characteristic, or a combination thereof, of the compute units is determined. One or more of the compute units is selected to execute the computing event based on a selection criterion and on the determined characteristic. | 2016-06-30 |
20160188366 | Preemptive Operating System Without Context Switching - A device, such as a constrained device that includes a processing device and memory, schedules user-defined independently executable functions to execute from a single stack common to all user-defined independently executable functions, according to the availability and priority of each function relative to the others, and preempts a currently running user-defined independently executable function by placing the preempting function on the single stack that holds the register values of the currently running function. | 2016-06-30 |
20160188367 | METHOD FOR SCHEDULING USER REQUEST IN DISTRIBUTED RESOURCE SYSTEM, AND APPARATUS - According to a method for scheduling a user request in a distributed resource system, an apparatus, and a system that are provided by embodiments of the present invention, in a T | 2016-06-30 |
20160188368 | TASK PROCESSING UTILIZING QUEUES - A system includes a plurality of queues configured to hold tasks and state information associated with such tasks. The system further includes a plurality of listeners configured to query one of the plurality of queues for a task, receive, in response to querying one of the plurality of queues for a task, a task together with state information associated with the task, effect processing of the received task, and communicate a result of the received task to another queue of the plurality of queues, the another queue of the plurality of queues being selected based on the processing of the received task. | 2016-06-30 |
20160188369 | Computing Resource Inventory System - Systems and methods of managing computing resources of a computing system are described. A computing resource list and computing resource information may be stored at a data store. The computing resource list may identify a set of computing resources of a computing system, and the computing resource information may respectively describe the computing resources. The computing resource list may be updated in response to a new computing resource being added to the computing system or in response to an existing computing resource being removed from the computing system. Evaluation tasks for the computing resources may be performed, and a resource evaluation report may be generated during performance of at least one of the evaluation tasks. | 2016-06-30 |
20160188370 | Interface for Orchestration and Analysis of a Computer Environment - A host server is configured to receive information related to metrics and configurations associated with computer resources of a computer infrastructure, derive and resolve the information into capacity, performance, reliability, and efficiency, as related to attributes associated with the computer resources, including compute attributes such as application, virtual machine (VM) attributes, storage attributes, and network attributes. The host server provides the metrics and attributes in a matrix configuration as a graphical user interface (GUI) on an output device, such as a display. The GUI is configured to provide a user with a single point of view into the computer infrastructure by converging application, compute, storage, and network attributes into capacity, performance, reliability, and efficiency concepts. With such a configuration, the GUI allows the end user to readily review the environments for potential issues in a time efficient manner, as well as solutions provided by the GUI. | 2016-06-30 |
20160188371 | APPLICATION PROGRAMMING INTERFACES FOR DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS - A method and an apparatus for a parallel computing program calling APIs (application programming interfaces) in a host processor to perform a data processing task in parallel among compute units are described. The compute units are coupled to the host processor including central processing units (CPUs) and graphic processing units (GPUs). A program object corresponding to a source code for the data processing task is generated in a memory coupled to the host processor according to the API calls. Executable codes for the compute units are generated from the program object according to the API calls to be loaded for concurrent execution among the compute units to perform the data processing task. | 2016-06-30 |
20160188372 | BINARY TRANSLATION FOR MULTI-PROCESSOR AND MULTI-CORE PLATFORMS - Technologies for partial binary translation on multi-core platforms include a shared translation cache, a binary translation thread scheduler, a global installation thread, and a local translation thread and analysis thread for each processor core. On detection of a hotspot, the thread scheduler first resumes the global thread if suspended, next activates the global thread if a translation cache operation is pending, and last schedules local translation or analysis threads for execution. Translation cache operations are centralized in the global thread and decoupled from analysis and translation. The thread scheduler may execute in a non-preemptive nucleus, and the translation and analysis threads may execute in a preemptive runtime. The global thread may be primarily preemptive with a small non-preemptive nucleus to commit updates to the shared translation cache. The global thread may migrate to any of the processor cores. Forward progress is guaranteed. Other embodiments are described and claimed. | 2016-06-30 |
20160188373 | SYSTEM MANAGEMENT METHOD, MANAGEMENT COMPUTER, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A system management method for a management computer coupled to a computer system, the computer system including a plurality of computers, an operations system being built on the computer system, the operations system including a plurality of task nodes each having allocated thereto computer resources, the system management method including: a step of analyzing a configuration of the computer system for specifying an important node, which is an important task node in the operations system; a step of changing an allocation amount of the computer resources allocated to the important node for measuring a load of the operations system; a step of calculating a first weighting representing a strength of associations among the plurality of task nodes based on a measurement result of the load; and a step of specifying a range impacted by a change in the load of the important node based on the calculated first weighting. | 2016-06-30 |
20160188374 | METHOD AND SYSTEM FOR APPLICATION PROFILING FOR PURPOSES OF DEFINING RESOURCE REQUIREMENTS - Disclosed are a method of and system for profiling a computer program. The method comprises the steps of using a utility application to execute the computer program; and on the basis of said execution of the computer program, identifying specific performance requirements of the computer program. A profile of the computer program is determined from said identified performance requirements; and based on said determined profile, resources for the computer program are selected from a grid of computer services. | 2016-06-30 |
20160188375 | ENERGY EFFICIENT SUPERCOMPUTER JOB ALLOCATION - A technique for defragmenting jobs on processor-based computing resources including: (i) determining a first defragmentation condition, which first defragmentation condition will be determined to exist when it is favorable under a first energy consideration to defragment the allocation of jobs as among a set of processor-based computing resources of a supercomputer (for example, a compute-card-based supercomputer); and (ii) on condition that the first defragmentation condition exists, defragmenting the jobs on the set of processor-based computing resources. | 2016-06-30 |
20160188376 | Push/Pull Parallelization for Elasticity and Load Balance in Distributed Stream Processing Engines - The stream processing engine uses the Actor programming paradigm for defining the application in terms of a graph built with processing elements (PEs) that use hash-based partitioning of data, where events (key, value) are pushed towards the next element in the operator; in case of an overloaded PE, the method changes to a producer/consumer model where new workers pull events from a buffer queue in order to relieve the traffic load on the overloaded PE. The programmer defines a sequential version of the PE and another, parallel version that recovers events from a buffer and, if the operator is stateless, sends the result to the next PE, or, if the operator is stateful, sends the result to an aggregator PE before moving to the next stage of the pipeline. Strategies for triggering changes in the graph are defined in an administrator module to provide the right amount of elasticity and load balance in the distributed stream processing engine, using queue analysis from the monitoring module. | 2016-06-30 |
20160188377 | CLASSIFICATION BASED AUTOMATED INSTANCE MANAGEMENT - Systems, apparatuses, and methods for classification based automated instance management are disclosed. Classification based automated instance management may include automatically commissioning an application instance based on a plurality of classification metrics, and automatically monitoring the application instance based on the plurality of classification metrics. Automatically monitoring the application instance may include identifying a plurality of instance monitoring policies associated with the application instance based on the plurality of classification metrics. Automatically monitoring the application instance may also include automatically suspending the application instance based on the plurality of instance monitoring policies and automatically decommissioning the application instance based on the plurality of instance monitoring policies. | 2016-06-30 |
20160188378 | Method of Facilitating Live Migration of Virtual Machines - Embodiments pertain to facilitation of live migration of a virtual machine in a network system. The network system includes a first host, a second host, a first appliance for providing service to the first host, a second appliance for providing service to the second host, and a third appliance. At least one virtual machine is disposed on the first host and has an ongoing first network flow. The first appliance has generated state information about the first network flow. During the migration of the at least one virtual machine to the second host, the third appliance obtains a copy of the state information about the first network flow; and the third appliance takes over from the first appliance to serve the first network flow during the migration of the at least one virtual machine, until the first network flow is terminated. | 2016-06-30 |
20160188379 | ADJUSTMENT OF EXECUTION OF TASKS - A system and method for distributed computing, including executing a job of distributed computing on compute nodes. The speeds of the parallel tasks of the job executing on the compute nodes are adjusted to increase performance of the job or to lower power consumption of the job, or both, wherein the adjusting is based on imbalances of the respective speeds of the parallel tasks. | 2016-06-30 |
20160188380 | PROGRESS METERS IN PARALLEL COMPUTING - Systems and methods may provide a set of cores capable of parallel execution of threads. Each of the cores may run code that is provided with a progress meter that calculates the amount of work remaining to be performed on threads as they run on their respective cores. The data may be collected continuously, and may be used to alter the frequency, speed or other operating characteristic of the cores as well as groups of cores. The progress meters may be annotated into existing code. | 2016-06-30 |
20160188381 | METHOD AND SYSTEM FOR ENSURING INTEGRITY OF CRITICAL DATA - A method and system for ensuring integrity of manipulatable critical data, including a processor configured to execute at least one restartable processing thread module, and a shared memory communicatively coupled with the processor and having at least some manipulatable critical data, wherein when a request to restart the at least one restartable processing thread module is received, the at least one restartable processing thread module is restarted. | 2016-06-30 |
20160188382 | SYSTEMS, APPARATUSES, AND METHODS FOR DATA SPECULATION EXECUTION - Systems, methods, and apparatuses for data speculation execution (DSX) are described. In some embodiments, a hardware apparatus for performing DSX comprises a hardware decoder to decode an instruction, the instruction to include an opcode and an operand to store a portion of a fallback address, execution hardware to execute the decoded instruction to initiate a data speculative execution (DSX) region by activating DSX tracking hardware to track speculative memory accesses and detect ordering violations in the DSX region, and storing the fallback address. | 2016-06-30 |
20160188383 | Composing Applications on a Mobile Device - Methods, systems, and computer program products for composing applications on a mobile device are provided herein. A method includes exposing multiple capabilities from a set of multiple applications installed on an operating system of a user device to a configuration module executing on the operating system of the user device; defining one or more rules associated with using each of the multiple exposed capabilities; and invoking a combination of two or more of the multiple exposed capabilities, based on said one or more defined rules, to execute a user-defined task, wherein said invoking is executed by a super application executing on the operating system of the user device. | 2016-06-30 |
20160188384 | PROVIDING RANDOM DATA TO A GUEST OPERATING SYSTEM - Implementations for providing random data to a guest operating system are disclosed. In one implementation, a method of the disclosure comprises: receiving, by a processing device of a host computer system, a first random data item from an external computer system; updating an entropy pool using the first random data item; and providing a virtual machine running on the host computer system with a second random data item derived from the entropy pool. | 2016-06-30 |
20160188385 | OPTIMIZED SYSTEM FOR ANALYTICS (GRAPHS AND SPARSE MATRICES) OPERATIONS - A graph processing system includes a graph API (Application Program Interface), as executed on a processor of a computer and as capable of implementing any of a plurality of graph operators to express computations of input graph analytics applications. A run-time system, executed by the processor, implements graph operators specified by each graph API function and deploys the implemented graph operators to a selected computing system. A library contains multiple implementations for each graph API function, each implementation predetermined as being optimal for a specific set of conditions met by a graph being processed, for functional capabilities of a specific computing system on which the graph is being processed, and for resources available on that specific computing system. | 2016-06-30 |
20160188386 | METHOD AND SYSTEM FOR COMMUNICATING INFORMATION BETWEEN A MOBILE DEVICE AND AN ENTERPRISE SYSTEM - A method for communicating information between a mobile device and a computer system includes receiving a request from the mobile device to invoke a process of a legacy API of the computer system. The request specifies one or more input values associated with required input parameters of the process. The input values are provided in a first format that is different from a second format utilized by the legacy API for communicating data. The computer system determines required input parameters of the process, generates an input data structure in the second format that includes an entry for each of the required input parameters, determines parameters that are associated with the one or more input values communicated in the request, and sets values of entries in the input data structure associated with the one or more determined parameters to corresponding one or more input values in the request. A message call to the legacy API that includes the input data structure formatted in the second format is generated. | 2016-06-30 |
20160188387 | MANAGEMENT OF DISPLAY OF COMPUTER POP-UP NOTIFICATIONS - A method performed by a computer for managing display of pop-up notifications when the displayed content of the computer is being shared. A notification request from an application running on the computer is received by the computer. The computer determines that displayed content of the computer is viewable for multiple users. Based upon the determining, the computer blocks display of the pop-up notification to prevent leaking of sensitive information to other users. | 2016-06-30 |
20160188388 | METHOD OF PROCESS CONTEXT-AWARENESS - A process context-awareness method analyzes events arising from a process according to context concepts, comparing and analyzing the entity contents of the events, event types, applicable contextual situations, and rules, so as to subsequently trigger other activities or yield results. The method applies to enterprise information systems, project scheme execution, or any other operational requirement; suits different enterprise operational contexts; and gains insight into the dynamic circumstances of the enterprise context to thereby identify flexible solutions. Hence, process information systems created with the method save the manpower, time, and costs otherwise incurred in constructing customized systems for use by different tenants, and enhance system maintainability. | 2016-06-30 |
20160188389 | COALESCING STAGES IN A MULTIPLE STAGE COMPLETION SEQUENCE - Embodiments are directed to systems and methodologies for allowing a computer program code to efficiently respond to and process events. For events having a multiple stage completion sequence, and wherein several of the events occur within relatively close time proximity to each other, portions of the multiple stages may be coalesced without adding latency, thereby maintaining responsiveness of the computer program. The disclosed coalescing systems and methodologies include state machines and counters that in effect “replace” certain stages of the event sequence when the frequency of events increases. | 2016-06-30 |
20160188390 | HIGH-PERFORMANCE VIRTUAL MACHINE NETWORKING - A virtual machine (VM) runs on system hardware, which includes a physical network interface device that enables transfer of packets between the VM and a destination over a network. A virtual machine monitor (VMM) exports a hardware interface to the VM and runs on a kernel, which forms a system software layer between the VMM and the system hardware. Pending packets (both transmit and receive) issued by the VM are stored in a memory region that is shared by, that is, addressable by, the VM, the VMM, and the kernel. Rather than always transferring each packet as it is issued, packets are clustered in the shared memory region until a trigger event occurs, whereupon the cluster of packets is passed as a group to the physical network interface device. Optional mechanisms are included to prevent packets from waiting too long in the shared memory space before being transferred to the network. An interrupt offloading mechanism is also disclosed for use in multiprocessor systems such that it is in most cases unnecessary to interrupt the VM in order to request a VMM action, and the need for VMM-to-kernel context transitions is reduced. | 2016-06-30 |
20160188391 | SOPHISTICATED RUN-TIME SYSTEM FOR GRAPH PROCESSING - A graph processing system includes a graph API (Application Programming Interface), as executed by a processor on a computer, and that includes a plurality of graph operators to create graphs and to execute graph analytic applications on the created graphs, the graph operators supporting a creation and manipulation of multi-dimensional properties of graphs. A run-time system is executed by the processor and implements routines that dynamically adjust a plurality of representations and algorithms to execute sequences of operations on graph data. A library is accessible to the run-time system and stores a specification of calling signatures for the graph operators such that the graph operators can be called from any of various computer programming languages such that top-level algorithms received in an input graph application can be understood in the graph processing system when received in any of the various computer programming languages. Thereby the top-level algorithms written to the graph API are portable across multiple implementations. | 2016-06-30 |
20160188392 | FAST APPROXIMATE CONFLICT DETECTION - The present disclosure is directed to fast approximate conflict detection. A device may comprise, for example, a memory, a processor and a fast conflict detection module (FCDM) to cause the processor to perform fast conflict detection. The FCDM may cause the processor to read a first and second vector from memory, and to then generate summaries based on the first and second vectors. The summaries may be, for example, shortened versions of write and read addresses in the first and second vectors. The FCDM may then cause the processor to distribute the summaries into first and second summary vectors, and may then determine potential conflicts between the first and second vectors by comparing the first and second summary vectors. The summaries may be distributed into the first and second summary vectors in a manner allowing all of the summaries to be compared to each other in one vector comparison transaction. | 2016-06-30 |
20160188393 | AUTOMATIC PHASE DETECTION - Systems, apparatus, and methods may provide for a significant change identifier to output a value indicating a prediction error function from input system characteristics. A filter may be used to analyze the value indicating the prediction error function to identify a potential phase marker to indicate at least a beginning or end of a process phase. An extractor may obtain phase properties for a phase delineated by at least a beginning phase marker, and a predictor may determine an upcoming phase or estimate an ongoing phase from the obtained phase properties. | 2016-06-30 |
20160188394 | ERROR COORDINATION MESSAGE FOR A BLADE DEVICE HAVING A LOGICAL PROCESSOR IN ANOTHER SYSTEM FIRMWARE DOMAIN - Examples disclosed herein relate to an error coordination message for a blade device having a logical processor in another system firmware (SFW) domain. Examples include a partition of a blade system to run an operating system (OS) utilizing blade devices including respective logical processors operating in different SFW domains. Examples further include an error coordination message made available to one of the blade devices by another of the blade devices. | 2016-06-30 |
20160188395 | ROBUST SERDES WRAPPER - Light-weight, configurable error detection in a satellite communication system that detects invalid SerDes lanes via hash codes appended to packets of data in the lanes. An indication can be passed back upstream about the invalid lane so that the lane can be reset. Error correction can be provided by reconstructing the bit data in the invalid SerDes lane based on parity information in an optional parity lane. | 2016-06-30 |
20160188396 | TEMPORAL ANOMALY DETECTION ON AUTOMOTIVE NETWORKS - An anomaly detector for a Controller Area Network (CAN) bus performs state space classification on a per-message basis of messages on the CAN bus to label messages as normal or anomalous, and performs temporal pattern analysis as a function of time to label unexpected temporal patterns as anomalous. The anomaly detector issues an alert if an alert criterion is met that is based on the outputs of the state space classification and the temporal pattern analysis. The temporal pattern analysis may compare statistics of messages having analyzed arbitration IDs with statistics for messages having those analyzed arbitration IDs in a training dataset of CAN bus messages, and a temporal pattern is anomalous if there is a statistically significant deviation from the training dataset. The anomaly detector may be implemented on a vehicle Electronic Control Unit (ECU) communicating via a vehicle CAN bus. The anomaly detector does not rely on a database of messages and their periodicity from manufacturers (dbc files) and in that sense is truly a zero knowledge detector. | 2016-06-30 |
20160188397 | INTEGRITY OF FREQUENTLY USED DE-DUPLICATION OBJECTS - Disclosed herein are a system, non-transitory computer-readable medium, and method to check the integrity of de-duplication objects. An integrity check of the most frequently referenced or used de-duplication objects is given higher priority. | 2016-06-30 |
20160188398 | REESTABLISHING SYNCHRONIZATION IN A MEMORY SYSTEM - Embodiments relate to reestablishing synchronization across multiple channels in a memory system. One aspect is a system that includes a plurality of channels, each providing communication with a memory buffer chip and a plurality of memory devices. A memory control unit is coupled to the plurality of channels. The memory control unit is configured to perform a method that includes receiving an out-of-synchronization indication associated with at least one of the channels. The memory control unit performs a first stage of reestablishing synchronization that includes selectively stopping new traffic on the plurality of channels, waiting for a first time period to expire, resuming traffic on the plurality of channels based on the first time period expiring, and verifying that synchronization is reestablished for a second time period. | 2016-06-30 |
20160188399 | VALIDATE WRITTEN DATA - Data and a first error detection code related to the data are received. That the received data is written correctly to a memory is validated based on the first error detection code and/or a comparison of the written data to the received data. An alert is generated if it is determined that the written data is incorrect. | 2016-06-30 |
20160188400 | METHODS, SYSTEMS AND PRODUCTS FOR DATA BACKUP - Methods, systems and computer program products automatically back-up data. Communication is established among a first device, a second device, and a network-based storage device. Key words associated with uniform resource locators are identified and stored in the network-based storage device. When corruption is detected of the data stored in the first device, the key words are automatically retrieved from the network-based storage device and listed in a user interface displayed at the second device. | 2016-06-30 |
20160188401 | System and Method for Utilizing History Information in a Memory Device - Systems and methods for controlling blocks in a memory device using a health indicator (such as the failed bit count) for the blocks are disclosed. However, the health indicator may exhibit noise, thereby resulting in an unreliable indicator of the health of the blocks in the memory device. In order to filter out the noise, a rolling average of the health indicator may be determined, and compared to the current health indicator. The comparison with the rolling average may indicate whether the current health indicator is an outlier, and thus should not be used. The health indicator may also be used to predict a future health indicator for different blocks in the memory device. Using the predicted future health indicator, the use of the blocks may be changed in order to more evenly wear the blocks. | 2016-06-30 |
20160188402 | PROCESSING DEVICE WITH SELF-SCRUBBING LOGIC - An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic. | 2016-06-30 |
20160188403 | CRC COUNTER NORMALIZATION - The ability to accurately and efficiently calculate and report communication errors is becoming more important than ever in today's communications environment. More specifically, calculating and reporting CRC anomalies in a consistent manner across a plurality of communications connections in a network is crucial to accurate error reporting. Through a normalization technique applied to a CRC computation period (e.g., the PERp value), accurate error identification and reporting for each individual connection can be achieved. | 2016-06-30 |
20160188404 | OPTIMIZING RECLAIMED FLASH MEMORY - A memory system or flash card may optimize usage of reclaimed memory. The optimization may include lists for Uncorrectable Error Correction Code (UECC) and Correctable Error Correction Code (CECC) that can be used along with a dual programming scheme. Dual programming may be utilized for blocks on the lists, but not for blocks that are not on the lists. The lists can be updated by reading data programmed to blocks on the lists. | 2016-06-30 |
20160188405 | ADAPTIVE ECC TECHNIQUES FOR FLASH MEMORY BASED DATA STORAGE - Adaptive ECC techniques for use with flash memory enable improvements in flash memory lifetime, reliability, performance, and/or storage capacity. The techniques include a set of ECC schemes with various code rates and/or various code lengths (providing different error correcting capabilities), and error statistic collecting/tracking (such as via a dedicated hardware logic block). The techniques further include encoding/decoding in accordance with one or more of the ECC schemes, and dynamically switching encoding/decoding amongst one or more of the ECC schemes based at least in part on information from the error statistic collecting/tracking (such as via a hardware logic adaptive codec receiving inputs from the dedicated error statistic collecting/tracking hardware logic block). The techniques further include selectively operating a portion (e.g., page, block) of the flash memory in various operating modes (e.g. as an MLC page or an SLC page) over time. | 2016-06-30 |
20160188406 | INTRA-RACK AND INTER-RACK ERASURE CODE DISTRIBUTION - Methods, computing systems and computer program products implement embodiments of the present invention that include detecting multiple sets of storage objects stored in a data facility including multiple server racks, each of the server racks including a plurality of server computers, each of the storage objects in each set being stored in a separate one of the server racks and including one or more data objects and one or more protection objects. A specified number of the storage objects are identified in a given server rack, each of the identified storage objects being stored in a separate one of the server computers, and one or more server computers in the given server rack not storing any of the identified storage objects are identified. Finally, in the identified one or more server computers, an additional protection object is created and managed for the identified storage objects. | 2016-06-30 |
20160188407 | ARCHITECTURE FOR IMPLEMENTING ERASURE CODING - Disclosed is an improved approach to implement erasure coding, which can address multiple storage unit failures in an efficient manner. The approach can effectively address multiple failures of storage units by implementing diagonal parity sets. | 2016-06-30 |
20160188408 | PROTECTION OF MEMORIES, DATAPATH AND PIPELINE REGISTERS, AND OTHER STORAGE ELEMENTS BY DISTRIBUTED DELAYED DETECTION AND CORRECTION OF SOFT ERRORS - This invention is a data processing apparatus and method. Data is protected from corruption by generating an error correction code corresponding to the data. In this invention the data and the corresponding error correction code are carried forward to another set of registers without regenerating the error correction code or using the error correction code for error detection or correction. Only later are error detection and correction actions taken. The differing data/error correction code registers may be in differing pipeline phases in the data processing apparatus. This invention forwards the error correction code with the data through the entire datapath that carries the data. This invention provides error protection to the whole datapath without requiring extensive hardware or additional time. | 2016-06-30 |
20160188409 | REDUCED UNCORRECTABLE MEMORY ERRORS - Uncorrectable memory errors may be reduced by determining a logical array address for a set of memory arrays and transforming the logical array address to at least two unique array addresses based, at least in part, on logical locations of at least two memory arrays within the set of memory arrays. The at least two memory arrays are then accessed using the at least two unique array addresses, respectively. | 2016-06-30 |
20160188410 | STRIPE RECONSTITUTING METHOD PERFORMED IN STORAGE SYSTEM, METHOD OF PERFORMING GARBAGE COLLECTION BY USING THE STRIPE RECONSTITUTING METHOD, AND STORAGE SYSTEM PERFORMING THE STRIPE RECONSTITUTING METHOD - A stripe reconstituting method in a storage system, a garbage collection method employing the stripe reconstituting method, and the storage system performing the stripe reconstituting method are provided. The stripe reconstituting method includes the operations of selecting a target stripe in which an imbalance between valid page ratios of memory blocks included in the target stripe exceeds an initially-set threshold value, from among stripes produced in a log-structured storage system; and reconstituting a stripe by regrouping the memory blocks included in the target stripe such that the imbalance between the valid page ratios of the memory blocks included in the target stripe is reduced. | 2016-06-30 |
20160188411 | CONTRACTING AND/OR DE-CONTRACTING STORED PARAMETERS - Briefly, methods and/or systems of contracting and/or de-contracting stored parameters are disclosed. | 2016-06-30 |
20160188412 | MANAGEMENT OF MICROCODE ERRORS IN A STORAGE OPERATION - Embodiments of the present disclosure relate to a system and computer program product for managing a microcode error in a storage operation. Embodiments include receiving an error code that corresponds to the microcode error and receiving a received error path signature for the error code. Embodiments also include identifying a metadata error path signature for the error code within a metadata table and determining whether the received error path signature for the error code substantially matches the metadata error path signature for the error code. Embodiments also include initiating a mitigation action in response to the received error path signature for the error code substantially matching the metadata error path signature for the error code. | 2016-06-30 |
20160188413 | VIRTUAL MACHINE DISTRIBUTED CHECKPOINTING - A method, system and computer program product for checkpointing virtual machines (VMs). The system includes a primary computer hosting a hypervisor and a primary VM. The hypervisor is configured to instantiate the primary VM, divide the state of the primary VM into a plurality of memory blocks, and generate an error correction block based on the plurality of memory blocks. The system further includes a plurality of secondary computers. Each of the secondary computers stores a secondary VM and one of either the memory blocks or the error correction block. | 2016-06-30 |
20160188414 | FAULT TOLERANT AUTOMATIC DUAL IN-LINE MEMORY MODULE REFRESH - Methods and apparatus for fault-tolerant Automatic DIMM (Dual In-line Memory Module) Refresh (ADR) are described. In an embodiment, a processor includes non-volatile memory to store data from one or more volatile buffers of the processor. The data from the one or more volatile buffers of the processor are stored into the non-volatile memory in response to occurrence of an event that is to lead to a system reset or shut down. Other embodiments are also disclosed and claimed. | 2016-06-30 |
20160188415 | METHODS AND SYSTEMS FOR CLONE MANAGEMENT - Methods and systems for storage services are provided. An inventory view listing a plurality of application objects of an application from among a plurality of applications is provided on a display device by a management device that interfaces with a plurality of application plugins executed by one or more host computing devices that interface with the plurality of applications for managing backup, restore and clone operations involving objects that are stored on behalf of the plurality of applications by a storage system. A clone dataset object for an application object is selected from the plurality of application objects of the application. A clone lifecycle option for the clone dataset object is selected for managing lifecycle of a clone of a backup of the selected application object. | 2016-06-30 |
20160188416 | DEDICATED CLIENT-SIDE SIGNATURE GENERATOR IN A NETWORKED STORAGE SYSTEM - A storage system according to certain embodiments includes a client-side signature repository that includes information representative of a set of data blocks stored in primary storage. During storage operations of a client, the system can generate signatures corresponding to data blocks that are being stored in primary storage. The system can store the generated signatures in the client-side signature repository along with information regarding the location of the corresponding data block within primary storage. As additional instances of the data block are stored in primary storage, the system can store the location of the additional instances in the client-side signature repository. | 2016-06-30 |
20160188417 | CENTRALIZED MANAGEMENT CENTER FOR MANAGING STORAGE SERVICES - Methods and systems for providing storage services in a networked environment are provided. A management device interfaces with a plurality of management layers that communicate with a plurality of application plugins executed by a plurality of computing devices. Each application plugin is associated with an application for providing storage services for stored objects managed by a storage system. A same request and response format is used by the management device to obtain information from the plurality of management layers regarding storage space used by the plurality of applications for storing the stored objects, and the management device maintains storage space information as a storage resource object for virtual storage resources and physical storage resources used by the plurality of applications for storing the stored objects. | 2016-06-30 |
20160188418 | DATA LOADING TOOL - In an exemplary embodiment of this disclosure, a method for loading data from a backup image of a database includes selecting a subset statement defining a subset of the data in the database. Tables of the database are identified based on metadata of the database. A target database is written having the structure but not the data of the identified tables. One or more table statements are constructed, by a computer processor, defining a subset of each identified table based on the subset statement. Selected data is unloaded from a backup image into the target database using respective table statements as filters. | 2016-06-30 |
20160188419 | SYSTEM AND METHOD FOR SELECTIVE COMPRESSION IN A DATABASE BACKUP OPERATION - Differential or selective data transformation, which can include compression and/or encryption, is applied to selected data subsets, such as selected table spaces, of a database during a single database operation. In response to a received backup command, a backup utility of a database management system obtains data from a number of data subsets of a source database that are specified for inclusion in a backup image. At least one of the data subsets is specified for data transformation while other subsets are not. The data from the specified data subsets is identified in the obtained data, and transformed prior to writing a single backup image to archive media. The backup image therefore contains both transformed and untransformed data. The selection of data subsets for transformation can be made automatically, without requiring user specification, according to predefined data characteristics including subset size, data type, compressibility, or encryption. | 2016-06-30 |
20160188420 | METHODS AND APPARATUS FOR MULTI-PHASE RESTORE - Methods and apparatus to identify at least a first portion and a second portion of resources to restore to a device are described. The first portion of the resources may be restored atomically to the device before the second portion of the resources. The device may not respond to at least one user input during the restoration of the first portion of the resources. If the restoring of the first portion is successful, the second portion of the resources may be restored. The device may respond to the user input during the restoring of the second portion of the resources. | 2016-06-30 |
20160188421 | CENTRALIZED GRAPHICAL USER INTERFACE AND ASSOCIATED METHODS AND SYSTEMS FOR A CENTRALIZED MANAGEMENT CENTER FOR MANAGING STORAGE SERVICES IN A NETWORKED STORAGE ENVIRONMENT - Methods and systems for a networked storage environment are provided. For example, a method includes interfacing by a management device with a plurality of management layers that communicate with a plurality of application plugins executed by a plurality of computing devices, where each application plugin is associated with an application for providing storage services for stored objects managed by a storage system for the plurality of applications; for managing the plurality of computing devices, presenting selectable options for adding an application plugin for a computing device, configuring the application plugin, migrating the application plugin from one location to another and placing the computing device in a maintenance mode; and providing a summary for a plurality of storage service operations and a data protection summary. | 2016-06-30 |
20160188422 | ONLINE RESTORATION OF A SWITCH SNAPSHOT - One embodiment of the present invention provides a switch. The switch includes one or more ports, a persistent storage module, a restoration module, and a retrieval module. The persistent storage module stores configuration information associated with the switch in a data structure, which includes one or more columns for attribute values of the configuration information, in a local persistent storage. The restoration module instantiates a restoration database instance in the persistent storage from an image of the persistent storage. The retrieval module retrieves attribute values from a data structure in a current database instance and the restoration database instance in the persistent storage. The restoration module then applies the differences between attribute values of the restoration database instance and the current database instance in the persistent storage to switch modules of the switch, and operates the restoration database instance as the current database instance in the persistent storage. | 2016-06-30 |
20160188423 | SYNCHRONIZATION AND ORDER DETECTION IN A MEMORY SYSTEM - Embodiments relate to out-of-synchronization detection and out-of-order detection in a memory system. One aspect is a system that includes a plurality of channels, each providing communication with a memory buffer chip and a plurality of memory devices. A memory control unit is coupled to the plurality of channels. The memory control unit is configured to perform a method that includes receiving frames on two or more of the channels. The memory control unit identifies alignment logic input in each of the received frames and generates a summarized input to alignment logic for each of the channels of the received frames based on the alignment logic input. The memory control unit adjusts a timing alignment based on a skew value per channel. Each of the timing adjusted summarized inputs is compared. Based on a mismatch between at least two of the timing adjusted summarized inputs, a miscompare signal is asserted. | 2016-06-30 |
20160188424 | DATA STORAGE SYSTEM EMPLOYING A HOT SPARE TO STORE AND SERVICE ACCESSES TO DATA HAVING LOWER ASSOCIATED WEAR - A controller monitors access frequencies of address ranges mapped to a data storage array. Based on the monitoring, the controller identifies frequently accessed ones of the address ranges that have lower associated wear, for example, those that are read more often than written. In response to the identifying, the controller initiates copying of a dataset associated with the identified address ranges from the data storage array to a spare storage device while refraining from copying other data from the data storage array onto the spare storage device. The controller directs read input/output operations (IOPs) targeting the identified address ranges to be serviced by access to the spare storage device. In response to a failure of a failed storage device among the plurality of primary storage devices, the controller rebuilds contents of the failed storage device on the spare storage device in place of the dataset associated with the identified address ranges. | 2016-06-30 |
20160188425 | DEPLOYING SERVICES ON APPLICATION SERVER CLOUD WITH HIGH AVAILABILITY - Techniques are disclosed for deploying services in a server cluster environment. Certain techniques are disclosed for deploying services to a cluster based on a replication policy that includes a plurality of configurable parameters. In some embodiments, the configurable parameters (also referred to herein as replication factors) can define a number of nodes to which a service is to be deployed, a number of nodes to which a service is to be prepared, and/or a number of nodes to which a service is replicated. Based on the configurable parameters, the replication policy enables users and/or cluster providers to guarantee different levels of performance and/or reliability. | 2016-06-30 |
20160188426 | SCALABLE DISTRIBUTED DATA STORE - Described is a framework that manages a clustered, distributed NoSQL data store across multiple server nodes. The framework may include daemons running on every server node, providing auto-sharding and unified data service such that user data can be stored and retrieved consistently from any node. The framework may further provide capabilities such as automatic fail-over and dynamic capacity scaling. | 2016-06-30 |
20160188427 | FAILURE RESISTANT DISTRIBUTED COMPUTING SYSTEM - A failure resistant distributed computing system includes primary and secondary datacenters each comprising a plurality of computerized servers. A control center selects orchestrations from a predefined list and transmits the orchestrations to the datacenters. Transmitted orchestrations include less than all machine-readable actions necessary to execute the orchestrations. The datacenters execute each received orchestration by referencing a full set of actions corresponding to the received orchestration as previously stored or programmed into the computerized server and executing the referenced full set of actions. At least one of the orchestrations comprises a failover operation from the primary datacenter to the secondary datacenter. Failover shifts performance of tasks from a set of processing nodes of the primary datacenter to a set of processing nodes of the secondary datacenter, such tasks including managing storage accessible by one or more remote clients and running programs on behalf of remote clients. | 2016-06-30 |
20160188428 | INFORMATION PROCESSING METHOD, COMPUTER-READABLE RECORDING MEDIUM, AND INFORMATION PROCESSING SYSTEM - An information processing method includes executing a processing corresponding to a first request of a terminal apparatus using a first information processing apparatus, when a fault occurs in the first information processing apparatus, transmitting an apparatus information that identifies the first information processing apparatus from a second information processing apparatus to the terminal apparatus, after receiving the apparatus information by the terminal apparatus, discarding data transmitted from the first information processing apparatus to the terminal apparatus, transmitting, from the terminal apparatus to the second information processing apparatus, a response notification indicating that the apparatus information is received by the terminal apparatus, and after receiving the response notification by the second information processing apparatus, executing the processing corresponding to a second request of the terminal apparatus using the second information processing apparatus. | 2016-06-30 |
20160188429 | MEMORY CONTROL CIRCUIT, CACHE MEMORY AND MEMORY CONTROL METHOD - A memory control circuit has error determination circuitry to determine whether an error-bit number is larger than a predetermined threshold value set based on a maximum number of error bits correctable by the error correction circuitry, when it is detected by the error detector that an error is contained in data read for verification of data written to the first memory or in data read from the first memory, and an access controller to control access to a second memory having an access priority lower than the first memory when it is determined that the error-bit number is larger than the threshold value, and to control access to the first memory without accessing the second memory when it is determined that the error-bit number is equal to or less than the threshold value. | 2016-06-30 |
20160188430 | Electronic Device and Firmware Recovery Program That Ensure Recovery of Firmware - An electronic device includes a first nonvolatile memory, a second nonvolatile memory, and a control circuit. The first nonvolatile memory includes an area to store firmware. The firmware includes a first kernel. The second nonvolatile memory includes an area to store an update program, the update program including a second kernel. The control circuit boots one of the first and second kernels, and ensures that the booted kernel can write data to the first nonvolatile memory. When the firmware is incapable of being read, the control circuit reads the update program, performs the boot process to boot the second kernel, and writes updating data of the firmware to the first nonvolatile memory, which is writable by the booted second kernel. | 2016-06-30 |
20160188431 | PREDICTING PERFORMANCE OF A SOFTWARE APPLICATION OVER A TARGET SYSTEM - A system and method for predicting performance of a software application over a target system is disclosed. The method comprises generating a benchmark suite such that each benchmark indicates a combination of workloads applied over a set of standard software applications running on a source system. The method further comprises identifying a benchmark of the benchmark suite, wherein the benchmark has performance characteristics the same as those of the software application. The method further enables remotely executing the set of standard software applications associated with the benchmark on the target system with the combination of workloads as specified by the benchmark. The method further enables recording a performance of the set of standard software applications on the target system. Based on the performance of the standard software applications on the target system, the performance of the software application is predicted. | 2016-06-30 |
20160188432 | Method and Apparatus for Intercepting Implanted Information in Application - The present invention discloses a method and apparatus for intercepting implanted information in an application. The method comprises: determining an Application Programming Interface (API) invoked by an implanted information code as a key API in accordance with information collected in advance, wherein the key API is the API provided by an implanted information provider; after starting a target application, monitoring an act of the target application invoking the key API by hooking the key API; and if the target application initiates a request to invoke the key API, determining that the implanted information code is contained in the target application and intercepting the request to invoke the key API so as to stop the implanted information code from running and to realize the interception of the implanted information in the target application. | 2016-06-30 |
20160188433 | TESTING AND MITIGATION FRAMEWORK FOR NETWORKED DEVICES - The present disclosure generally relates to the automated testing of a system that includes software or hardware components. In some embodiments, a testing framework generates a set of test cases for a system under test using a grammar. Each test case may perform an action, such as provide an input to the system under test, and result in an output from the system under test. The inputs and outputs are then compared to the expected results to determine whether the system under test is performing correctly. The data can then be interpreted in the grammar system or used as input to a fault isolation engine to determine anomalies in the system under test. Based on identified faults, one or more mitigation techniques may be implemented in an automated fashion. | 2016-06-30 |
20160188434 | METHOD AND DEVICE FOR DETERMINING PROGRAM PERFORMANCE INTERFERENCE MODEL - A method and a device for determining a program performance interference model are described. The method includes: selecting programs from a determined sample program set to form multiple subsets; acquiring a value of performance interference imposed on each program in each subset and a total occupancy rate of a shared resource occupied by all the programs in each subset; dividing all the subsets into multiple analytical units; performing a regression analysis on the value of performance interference on each sample program included in each analytical unit and a total occupancy rate corresponding to a subset in which the sample program is loaded, and acquiring a target function model; and acquiring a performance interference model corresponding to a target program according to the target function model. The performance interference model may be used for preventing another program whose mutual interference is relatively strong from running together with the target program. | 2016-06-30 |
20160188435 | FIXING ANTI-PATTERNS IN JAVASCRIPT - Methods, storage systems and computer program products implement embodiments of the present invention that include receiving, by a computer, source code for an application, the source code including multiple instructions to be executed in a single thread. A first static analysis is performed on the application source code in order to identify a given instruction including an asynchronous handler, and a plurality of entry points to the application. Based on the static analysis, an order of execution of the multiple instructions is determined, and an intermediate representation is generated that includes the multiple instructions arranged in the determined order of execution. In some embodiments, a second static analysis can be performed on the intermediate representation that can identify an anti-pattern in the intermediate representation, and then correct the anti-pattern in the source code. | 2016-06-30 |
20160188436 | METHOD AND APPARATUS FOR PRODUCING REGULATORY-COMPLIANT SOFTWARE - A system for producing a clinical trial software application includes a processor, comprising a validation service and an audit service, and a platform, configured to prove-in an infrastructure on which the software application operates. The software application operating on the infrastructure is the same as the software application previously validated in a validation portal. The proving-in of the infrastructure comprises receiving infrastructure requirements from a software application supplier, building the software application supplier's instances, logging an installation report to the validation portal, and comparing the log to the frozen, validated software in the validation portal. The validation service is configured to validate the software application, freeze the validated software application in the validation portal, and generate documentation that satisfies compliance rules for the clinical trial software application. The validation service receives software code, testable requirements, and test results from a software application supplier and generates documentation regarding the validation of the software application. The audit service is configured to provide evidence of operational change management for a regulatory agency according to compliance rules of the regulatory agency. A method for producing regulatory-compliant software is also described and claimed. | 2016-06-30 |
20160188437 | TESTING FUNCTIONAL CORRECTNESS AND IDEMPOTENCE OF SOFTWARE AUTOMATION SCRIPTS - Various embodiments automatically test software automation scripts. In one embodiment, at least one software automation script is obtained. The software automation script is configured to automatically place a computing system into a target state. A plurality of test cases for the software automation script is executed. Each of the plurality of test cases is a separate instance of the software automation script configured based at least on one or more different states of the computing system. The software automation script is determined to be one of idempotent and non-idempotent and/or one of convergent and non-convergent based on executing the plurality of test cases. | 2016-06-30 |
20160188438 | TESTING FUNCTIONAL CORRECTNESS AND IDEMPOTENCE OF SOFTWARE AUTOMATION SCRIPTS - Various embodiments automatically test software automation scripts. In one embodiment, at least one software automation script is obtained. The software automation script is configured to automatically place a computing system into a target state. A plurality of test cases for the software automation script is executed. Each of the plurality of test cases is a separate instance of the software automation script configured based at least on one or more different states of the computing system. The software automation script is determined to be one of idempotent and non-idempotent and/or one of convergent and non-convergent based on executing the plurality of test cases. | 2016-06-30 |
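The idempotence/convergence classification in the two abstracts above can be illustrated with a toy harness (an assumption-laden sketch, not the patented system): run a script from several start states, apply it twice, and compare the resulting states against each other and against the target.

```python
def classify(script, start_states, target):
    """Return (idempotent, convergent) for `script` over the start states."""
    idempotent = convergent = True
    for s0 in start_states:
        s1 = script(dict(s0))   # first run
        s2 = script(dict(s1))   # second run, from the resulting state
        if s2 != s1:
            idempotent = False  # re-running changed the state
        if s1 != target:
            convergent = False  # this start state missed the target
    return idempotent, convergent

def ensure_installed(state):
    """Well-behaved step: force the package into the installed state."""
    state["pkg"] = "installed"
    return state

def append_line(state):
    """Buggy step: blindly appends config, so each run changes state."""
    state["conf"] = state.get("conf", "") + "opt=1\n"
    return state

good = classify(ensure_installed,
                [{"pkg": "absent"}, {"pkg": "installed"}],
                {"pkg": "installed"})
bad = classify(append_line, [{}], {"conf": "opt=1\n"})
```

`ensure_installed` is both idempotent and convergent; `append_line` reaches the target once but keeps mutating state on re-runs, so it is convergent here yet non-idempotent.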
20160188439 | MANAGING ASSERTIONS WHILE COMPILING AND DEBUGGING SOURCE CODE - The present disclosure relates to maintaining assertions in an integrated development environment (IDE) tool. According to one embodiment, while the IDE tool is compiling the source code of a development project, the IDE tool generates at least a first compiler warning. The first compiler warning generally corresponds to at least one line of source code in a first source code component of the development project. A first set of assertions to add to the source code of the development project is determined based on the line of source code that resulted in the first compiler warning. The IDE tool adds the first set of assertions to the source code of the development project. The first set of assertions are compiled as part of the source code of the development project. | 2016-06-30 |
20160188440 | MANAGING ASSERTIONS WHILE COMPILING AND DEBUGGING SOURCE CODE - The present disclosure relates to maintaining assertions in an integrated development environment (IDE) tool. According to one embodiment, while the IDE tool is compiling the source code of a development project, the IDE tool generates at least a first compiler warning. The first compiler warning generally corresponds to at least one line of source code in a first source code component of the development project. A first set of assertions to add to the source code of the development project is determined based on the line of source code that resulted in the first compiler warning. The IDE tool adds the first set of assertions to the source code of the development project. The first set of assertions are compiled as part of the source code of the development project. | 2016-06-30 |
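One plausible shape for the warning-to-assertion step described above is sketched below. The warning format and the null-check assertion are invented for illustration; real compilers and IDE tools emit different diagnostics.

```python
import re

WARNING_RE = re.compile(r"line (\d+): variable '(\w+)' may be null")

def assertions_for(warnings):
    """Derive (line, assertion-source) pairs from compiler warnings."""
    out = []
    for w in warnings:
        m = WARNING_RE.search(w)
        if m:
            out.append((int(m.group(1)), f"assert {m.group(2)} is not None"))
    return out

def weave(source_lines, assertions):
    """Insert each assertion just before its flagged line (1-based),
    working bottom-up so earlier line numbers stay valid."""
    for line, text in sorted(assertions, reverse=True):
        source_lines.insert(line - 1, text)
    return source_lines

src = ["x = lookup()", "print(x.name)"]
woven = weave(src, assertions_for(["line 2: variable 'x' may be null"]))
```

The woven assertions then compile as ordinary source lines of the project, which is the key point of the abstract: the checks live in the code the build already processes.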
20160188441 | TESTING MULTI-THREADED APPLICATIONS - In one example, a method for testing a multi-threaded application includes running an initial test of the multi-threaded application, collecting thread generation data, and determining the thread hierarchy. The thread execution is then modified to produce a modified configuration and a second test is run with the modified configuration. A device for testing multi-threaded applications is also provided. | 2016-06-30 |
20160188442 | RECORDING PROGRAM EXECUTION - Among other things, a method includes, at a computer system on which one or more computer programs are executing, receiving a specification defining types of state information, receiving an indication that an event associated with at least one of the computer programs has occurred, the event associated with execution of a function of the computer program, collecting state information describing the state of the execution of the computer program when the event occurred, generating an entry corresponding to the event, the entry including elements of the collected state information, the elements of state information formatted according to the specification, and storing the entry in a log. The log can be parsed to generate a visualization of computer program execution. | 2016-06-30 |
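A minimal sketch of specification-driven entry generation follows. The field names and spec shape are invented; the abstract does not disclose the actual format.

```python
SPEC = {"fields": ["function", "args", "timestamp"]}  # hypothetical spec

LOG = []

def record(spec, event_state):
    """Build a log entry containing only the state fields the
    specification names, then store it."""
    entry = {f: event_state[f] for f in spec["fields"] if f in event_state}
    LOG.append(entry)
    return entry

entry = record(SPEC, {
    "function": "parse",
    "args": ("a.txt",),
    "timestamp": 0,          # fixed value for reproducibility
    "locals": {"tmp": 3},    # dropped: not listed in the spec
})
```

Because every entry follows the same specification, a later pass can parse the log mechanically to drive a visualization, as the abstract notes.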
20160188443 | TESTING APPLICATION INTERNAL MODULES WITH INSTRUMENTATION - Testing internal modules of application code includes applying, via a computer processor, instrumentation hooks to internal module interface points and external module interface points of the application code, executing the application code and recording values received at the instrumented interface points, determining an accessible internal module input point and a constraint based on the recorded values from the instrumented external module interface points, and testing the accessible internal module input point based on the constraint. | 2016-06-30 |
20160188444 | DETECTING RACE CONDITION VULNERABILITIES IN COMPUTER SOFTWARE APPLICATIONS - Testing computer software applications is performed by identifying first and second executable portions of the computer software application, where the portions are configured to access a data resource, and where at least one of the portions is configured to write to the data resource, instrumenting the computer software application by inserting one or more instrumentation instructions into one or both of the portions, where the instrumentation instruction is configured to cause execution of the portion being instrumented to be extended by a randomly-determined amount of time, and testing the computer software application in multiple iterations, where the computer software application is executed in multiple parallel execution threads, where the portions are independently executed at least partially in parallel in different threads, and where the computer software application is differently instrumented in each of the iterations. | 2016-06-30 |
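The randomized-delay instrumentation described above can be sketched like this (not the patented code): a wrapper extends each execution of a code portion by a random delay, widening the window of a lost-update race on a shared counter.

```python
import random
import threading
import time

class Shared:
    def __init__(self):
        self.value = 0

def instrumented(section, max_delay=0.0005):
    """Extend each execution of `section` by a randomly determined delay."""
    def wrapper(*args):
        time.sleep(random.uniform(0.0, max_delay))
        return section(*args)
    return wrapper

def unsafe_increment(shared):
    v = shared.value      # read
    time.sleep(0)         # yield between read and write: race window
    shared.value = v + 1  # write back (lost-update hazard)

def run_iteration(n_threads=4, n_ops=25):
    """One test iteration; a result below n_threads * n_ops means an
    update was lost, i.e. the race fired."""
    shared = Shared()
    section = instrumented(unsafe_increment)
    def worker():
        for _ in range(n_ops):
            section(shared)
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared.value

result = run_iteration()
```

Running many iterations with fresh random delays, as the abstract proposes, makes the interleavings differ each time and greatly raises the odds of exposing the vulnerability.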
20160188445 | CONDUCTING PERFORMANCE SNAPSHOTS DURING TEST AND USING FEEDBACK TO CONTROL TEST BASED ON CUSTOMER EXPERIENCE PARAMETERS - The technology disclosed enables understanding the user experience of accessing a web page under high loads. A testing system generates a simulated load by retrieving and loading a single web object. A performance snapshot is taken of accessing an entire web page from the server under load. The performance snapshot may be performed by emulating a browser accessing a web page's URL, the web page comprising multiple objects that are independently retrieved and loaded. The simulated load is configured with a number of users per region of the world where the user load will originate, and a single object from the web page to retrieve. Performance data such as response time for the single object retrieved, number of hits per second, number of timeouts per second, and errors per second may be recorded and reported. An optimal number of users may be determined to achieve a target user experience goal. | 2016-06-30 |
20160188446 | COLLABORATIVE COMPUTER AIDED TEST PLAN GENERATION - Arrangements described herein relate to generation of test plans. A list of test case selection criteria can be presented to each of a plurality of stakeholders. At least one user input is received from each of the plurality of stakeholders selecting at least one test case selection criterion from the list of test case selection criteria and, for each selected test case selection criterion, assigning a criterion priority. Test cases, which correspond to the selected test case selection criteria, can be automatically selected to include in a candidate test plan. A candidate priority can be automatically assigned to each test case selected to be included in the candidate test plan. The processor selects the test cases to include in the candidate test plan and assigns the candidate priorities to the selected test cases based on processing the criterion priorities assigned to the selected test case selection criteria by the stakeholders. | 2016-06-30 |
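One plausible aggregation scheme for the stakeholder priorities above (the abstract does not fix the exact formula): sum each criterion's priorities across stakeholders, then score every test case by the criteria it covers.

```python
def build_candidate_plan(test_cases, stakeholder_choices):
    """test_cases: {name: set of criteria the case covers};
    stakeholder_choices: one {criterion: priority} dict per stakeholder."""
    totals = {}
    for choices in stakeholder_choices:
        for crit, prio in choices.items():
            totals[crit] = totals.get(crit, 0) + prio
    scored = {}
    for name, crits in test_cases.items():
        covered = crits & totals.keys()
        if covered:  # include only cases matching a selected criterion
            scored[name] = sum(totals[c] for c in covered)
    # highest candidate priority first; name breaks ties deterministically
    return sorted(scored.items(), key=lambda kv: (-kv[1], kv[0]))

plan = build_candidate_plan(
    {"login_smoke": {"security", "regression"},
     "report_perf": {"performance"},
     "legacy_ui": {"deprecated"}},
    [{"security": 3, "performance": 1},
     {"security": 2, "regression": 2}],
)
```

Cases matching no selected criterion (here `legacy_ui`) are simply excluded from the candidate plan.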
20160188447 | SYSTEM TESTING USING NESTED TRANSACTIONS - A computer system includes a processor and a data store coupled to the processor. An application component is operably coupled to the processor and the data store and is configured to run one or more applications stored in the data store. A test framework is coupled to the processor and the data store and is configured to perform at least one test relative to a component of the computer system that interacts with a database. A savepoint manager is configured to responsively generate at least one savepoint in the database prior to the at least one test and to roll back the at least one savepoint after the at least one test. Methods of testing the computer system are also provided. | 2016-06-30 |
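The savepoint-wrapped test flow above can be sketched with SQLite (an assumption for the example; the patent is not tied to any particular database):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def savepoint_test(conn, name="test_sp"):
    """Create a savepoint before the test, roll it back afterwards."""
    conn.execute(f"SAVEPOINT {name}")
    try:
        yield conn
    finally:
        conn.execute(f"ROLLBACK TO SAVEPOINT {name}")
        conn.execute(f"RELEASE SAVEPOINT {name}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

with savepoint_test(conn) as c:
    c.execute("INSERT INTO users VALUES (2, 'bob')")  # the test's writes
    during = c.execute("SELECT COUNT(*) FROM users").fetchone()[0]

after = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Because savepoints nest, a test framework built this way can wrap tests inside an application's own open transaction without disturbing it, which is the "nested transactions" angle of the title.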
20160188448 | DISCOVERY OF APPLICATION STATES - Some aspects of the disclosure provide a method comprising obtaining machine executable code of an application, the application operable to achieve a set of application states, pre-processing the machine executable code to generate reviewable code, identifying, from the reviewable code, a set of state access instructions configured to invoke or assist in invoking one of the set of application states of the application, the set of state access instructions indicating a first state access instruction configured to invoke a first state of the set of application states and a second state access instruction configured to invoke a second state of the set of application states that is different from the first state, each of the set of state access instructions including an application resource identifier referencing an application and indicating an operation for the application to perform, and storing the set of state access instructions. | 2016-06-30 |
20160188449 | SOFTWARE AGING TEST SYSTEM, SOFTWARE AGING TEST METHOD, AND PROGRAM FOR SOFTWARE AGING TEST - A load test is executed with an appropriate frequency that avoids both a decrease in software development efficiency and a decrease in the precision of software aging detection. A load test of a version of software under test is executed in accordance with an execution criterion; the presence or absence of a software aging problem is detected by comparing the result of that load test with the result of a load test of a previous version of the software; and the frequency of subsequent load tests is adjusted by changing the execution criterion based on the detection result. | 2016-06-30 |
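The feedback loop in the abstract above might look like the following sketch (thresholds and the halving/doubling policy are invented assumptions): detecting aging tightens the load-test interval, while a clean comparison relaxes it.

```python
def adjust_interval(interval, aging_detected, lo=1, hi=16):
    """Return the next execution interval, e.g. the number of builds
    between load tests, clamped to [lo, hi]."""
    if aging_detected:
        return max(lo, interval // 2)   # problem found: test more often
    return min(hi, interval * 2)        # clean result: test less often

tightened = adjust_interval(8, aging_detected=True)
relaxed = adjust_interval(8, aging_detected=False)
floor = adjust_interval(1, aging_detected=True)   # clamped at lo
```

This keeps test overhead low while the software looks healthy, yet reacts quickly once a version-over-version comparison shows degradation.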
20160188450 | AUTOMATED APPLICATION TEST SYSTEM - An automated application test system comprises a plurality of clients (…) | 2016-06-30 |
20160188451 | SOFTWARE TESTING - Embodiments of the present disclosure provide a method, a computer program product and a computing device for software testing, wherein a computing device creates at least one virtual hardware component, each virtual hardware component simulating a behavior of a hardware component associated with to-be-tested software, and tests the to-be-tested software based on the behavior simulated by the at least one virtual hardware component. | 2016-06-30 |
20160188452 | EFFICIENT AND SECURE DIRECT STORAGE DEVICE SHARING IN VIRTUALIZED ENVIRONMENTS - A method, system and computer program product are disclosed for direct storage device sharing in a virtualized environment. In an embodiment, the method comprises assigning each of a plurality of virtual functions an associated memory area of a physical memory, and executing the virtual functions in a single root-input/output virtualization environment to provide each of a plurality of guests with direct access to the physical memory. In one embodiment, each of the guests is associated with a respective one of the virtual functions; and the assigning each of the plurality of virtual functions an associated memory area includes maintaining a per-virtual function mapping table identifying a respective one mapping function for each of the virtual functions, each of the mapping functions mapping one of the memory areas of the physical memory to an associated virtual memory. | 2016-06-30 |
20160188453 | MEMORY POOL MANAGEMENT METHOD FOR SHARING MEMORY POOL AMONG DIFFERENT COMPUTING UNITS AND RELATED MACHINE READABLE MEDIUM AND MEMORY POOL MANAGEMENT APPARATUS - A memory pool management method includes: allocating a plurality of memory pools in a memory device according to information about a plurality of computing units, wherein the computing units are independently executed on a same processor; and assigning one of the memory pools to one of the computing units, wherein at least one of the memory pools is shared among different computing units of the computing units. | 2016-06-30 |
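A minimal sketch of the shared-pool idea above (class and method names are invented; `bytearray` stands in for device memory): pools are carved from one memory device, and a single pool may serve several independently executing computing units.

```python
class MemoryPoolManager:
    def __init__(self, pool_sizes):
        # allocate one pool per entry of the configuration
        self.pools = {name: bytearray(size)
                      for name, size in pool_sizes.items()}
        self.assignment = {}

    def assign(self, unit, pool_name):
        """Assign a computing unit to a pool; pools may be shared
        among different units."""
        self.assignment[unit] = pool_name
        return self.pools[pool_name]

mgr = MemoryPoolManager({"private_dsp": 1024, "shared": 4096})
a = mgr.assign("codec", "shared")
b = mgr.assign("vision", "shared")     # same pool, different unit
c = mgr.assign("dsp", "private_dsp")   # a pool kept private to one unit
```

The sharing shows up as two units receiving the very same buffer object, while a third keeps a pool of its own.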
20160188454 | MEMORY MANAGEMENT MODEL AND INTERFACE FOR NEW APPLICATIONS - A memory management system is described herein that receives information from applications describing how memory is being used and that allows an application host to exert more control over application requests for using memory. The system provides an application memory management application-programming interface (API) that allows the application to specify more information about memory allocations that is helpful for managing memory later. The system also provides an ability to statically and/or dynamically analyze legacy applications to give applications that are not modified to work with the system some ability to participate in more effective memory management. The system provides application host changes to leverage the information provided by applications and to manage memory more effectively using the information and hooks into the application's use of memory. Thus, the system provides a new model for managing memory that improves application host behavior and allows applications to use computing resources more efficiently. | 2016-06-30 |
20160188455 | Systems and Methods for Choosing a Memory Block for the Storage of Data Based on a Frequency With Which the Data is Updated - Systems and methods for choosing a memory block for the storage of data based on a frequency with which data is updated are disclosed. In one implementation, a memory management module of a non-volatile memory system receives a request to open a free memory block for the storage of data. The memory management module determines a frequency with which the data is updated. The memory management module then opens a memory block of a first portion of a free block list that is associated with low program/erase cycle counts in response to determining that the data will be frequently updated, or opens a memory block of a second, different portion of the free block list that is associated with high program/erase cycle counts in response to determining that the data is not frequently updated. The memory management module then stores the data in the open memory block of the non-volatile memory. | 2016-06-30 |
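The block-selection policy above reduces to a simple sketch (the list structure is assumed, not taken from the patent): keep the free block list sorted by program/erase count, give "hot" data a low-cycle block from the front and "cold" data a high-cycle block from the back.

```python
def open_free_block(free_list, frequently_updated):
    """free_list: (block_id, pe_count) pairs sorted ascending by
    program/erase count. Hot data gets a fresh block; cold data gets
    a worn one, evening out wear across the device."""
    return free_list.pop(0) if frequently_updated else free_list.pop()

free = [("blk7", 12), ("blk3", 480), ("blk9", 950)]
hot = open_free_block(free, frequently_updated=True)    # lowest P/E count
cold = open_free_block(free, frequently_updated=False)  # highest P/E count
```

This is the classic wear-leveling rationale: frequently rewritten data should land on blocks with remaining endurance, while rarely updated data can safely occupy blocks that are near their cycle limit.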
20160188456 | NVRAM-AWARE DATA PROCESSING SYSTEM - In one form, a computer system includes a central processing unit, a memory controller coupled to the central processing unit and capable of accessing non-volatile random access memory (NVRAM), and an NVRAM-aware operating system. The NVRAM-aware operating system causes the central processing unit to selectively execute selected ones of a plurality of application programs, and is responsive to a predetermined operation to cause the central processing unit to execute a memory persistence procedure using the memory controller to access the NVRAM. | 2016-06-30 |