9th week of 2018 patent application highlights, part 52
Patent application number | Title | Published |
20180060068 | MACHINE LEARNING TO FACILITATE INCREMENTAL STATIC PROGRAM ANALYSIS - Techniques for facilitating incremental static program analysis based on machine learning techniques are provided. In one example, a system comprises a feature component that, in response to an update to a computer program, generates feature vector data representing the update, wherein the feature vector data comprises feature data representing a feature of the update derived from an abstract state of the computer program, and wherein the abstract state is based on a mathematical model of the computer program that is generated in response to static program analysis of the computer program. The system can further comprise a machine learning component that employs a classifier algorithm to identify an affected portion of the mathematical model that is affected by the update. The system can further comprise an incremental analysis component that incrementally applies the static program analysis to the computer program based on the affected portion. | 2018-03-01 |
20180060069 | APPARATUS AND METHODS RELATED TO MICROCODE INSTRUCTIONS - The present disclosure includes apparatuses and methods related to microcode instructions. One example apparatus comprises a memory storing a set of microcode instructions. Each microcode instruction of the set can comprise a first field comprising a number of control data units, and a second field comprising a number of type select data units. Each microcode instruction of the set can have a particular instruction type defined by a value of the number of type select data units, and particular functions corresponding to the number of control data units are variable based on the particular instruction type. | 2018-03-01 |
20180060070 | COMPARISON-BASED SORT IN A RECONFIGURABLE ARRAY PROCESSOR HAVING MULTIPLE PROCESSING ELEMENTS FOR SORTING ARRAY ELEMENTS - An array processor includes a managing element having a load streaming unit coupled to multiple processing elements. The load streaming unit provides input data portions to each of a first subset of the processing elements and also receives output data from each of a second subset of the processing elements based on a comparatively sorted combination of the input data portions provided to the first subset of processing elements. Furthermore, each of the processing elements is configurable by the managing element to compare input data portions received from either the load streaming unit or two or more of the other processing elements, wherein the input data portions are stored for processing in respective queues. Each processing element is further configurable to select an input data portion to be output data based on the comparison, and in response to selecting the input data portion, remove a queue entry corresponding to the selected input data portion. Each processing element may be further configured to provide the selected output data portion to either the managing element or as an input to one of the processing elements. | 2018-03-01 |
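The merge behavior described in this abstract — each processing element comparing the heads of input queues, emitting the winner, and removing its queue entry — can be sketched in software. This is a hypothetical model, not the patented hardware: `merge_element` and `sort_tree` are illustrative names, and the two-level tree of four pre-sorted streams is an assumed configuration.

```python
from collections import deque

def merge_element(in_a, in_b):
    """Hypothetical software model of one processing element: compare
    the heads of two input queues, emit the smaller value as output,
    and remove the corresponding queue entry."""
    out = []
    while in_a or in_b:
        if in_a and (not in_b or in_a[0] <= in_b[0]):
            out.append(in_a.popleft())
        else:
            out.append(in_b.popleft())
    return out

def sort_tree(streams):
    """Two levels of merge elements comparatively sort four pre-sorted
    input data portions, as the managing element might configure them."""
    lo = merge_element(deque(streams[0]), deque(streams[1]))
    hi = merge_element(deque(streams[2]), deque(streams[3]))
    return merge_element(deque(lo), deque(hi))
```

In the hardware, the managing element would wire each element's output to either another element's queue or back to itself; the model above simply materializes each stage's output as a list.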
20180060071 | ASSOCIATING WORKING SETS AND THREADS - Associating working sets and threads is disclosed. An indication of a stalling event is received. In response to receiving the indication of the stalling event, a state of a processor associated with the stalling event is saved. At least one of an identifier of a guest thread running in the processor and a guest physical address referenced by the processor is obtained from the saved processor state. | 2018-03-01 |
20180060072 | VECTOR CROSS-COMPARE COUNT AND SEQUENCE INSTRUCTIONS - Systems and methods are provided for executing an instruction. The method may include loading a first vector into a first location, the first vector including a plurality of first data elements and loading a second vector into a second location, the second vector including a plurality of second data elements. The method may further include comparing the plurality of first data elements of the first vector to the plurality of second data elements of the second vector and performing one or more operations on the plurality of first and second data elements based on at least one vector cross-compare instruction. The one or more operations include counting a number of data elements of the plurality of first and second data elements that satisfy at least one condition, counting a number of times specified values occur in the plurality of first and second data elements, and generating sequence counts for duplicated values. | 2018-03-01 |
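The three operations this abstract names can be sketched as scalar reference code. This is a hypothetical illustration assuming equality as the compare condition; the actual instruction semantics and names (`cross_compare` is invented here) are not taken from the patent.

```python
def cross_compare(v1, v2, specified):
    """Hypothetical reference model of the three operations:
    count cross-matches between the two vectors, count occurrences of
    specified values, and generate sequence counts for duplicates."""
    matches = sum(1 for a in v1 for b in v2 if a == b)
    combined = list(v1) + list(v2)
    occurrences = sum(combined.count(s) for s in specified)
    seen, seq = {}, []
    for x in combined:
        seen[x] = seen.get(x, 0) + 1
        seq.append(seen[x])          # nth time this value has appeared
    return matches, occurrences, seq
```

A vector unit would perform the pairwise comparisons in parallel; the nested loop here just makes the cross-compare semantics explicit.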
20180060073 | BRANCH TARGET BUFFER COMPRESSION - Techniques for improving branch target buffer (“BTB”) operation. A compressed BTB is included within a branch prediction unit along with an uncompressed BTB. To support prediction of up to two branch instructions per cycle, the uncompressed BTB includes entries that each store data for up to two branch predictions. The compressed BTB includes entries that store data for only a single branch instruction for situations where storing that single branch instruction in the uncompressed BTB would waste space in that buffer. Space would be wasted in the uncompressed BTB due to the fact that, in order to support two branch lookups per cycle, prediction data for two branches must have certain features in common (such as cache line address) in order to be stored together in a single entry. | 2018-03-01 |
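The placement decision the abstract describes — pairing two branches in an uncompressed entry only when they share a feature such as the cache line address, otherwise storing a lone branch compactly — can be sketched as a simple policy function. This is a hypothetical sketch; the function name, tuple layout, and 64-byte line size are assumptions.

```python
def install_btb_entry(branch_pc, pending_pc, line_bytes=64):
    """Hypothetical placement policy: two predictions may share one
    uncompressed entry only when they fall in the same cache line;
    a lone branch instead goes to the compressed BTB so half of an
    uncompressed entry is not wasted."""
    same_line = (pending_pc is not None
                 and pending_pc // line_bytes == branch_pc // line_bytes)
    if same_line:
        return ("uncompressed", (pending_pc, branch_pc))
    return ("compressed", (branch_pc,))
```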
20180060074 | METHOD AND DEVICE FOR DETERMINING BRANCH HISTORY FOR BRANCH PREDICTION - Disclosed are a method and a processing device directed to determining global branch history for branch prediction. The method includes shifting first bits of a branch signature into a current global branch history and performing a bitwise exclusive-or (XOR) function on second bits of the branch signature and shifted bits of the current global branch history. In this way, the current global branch history is updated. The processing device implements the method using a shift logic configured to store and shift bits representing a current global branch history, a register configured to store the current global branch history, decision circuitry configured to determine whether or not a branch is taken, and XOR gates. | 2018-03-01 |
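The shift-then-XOR update of the global branch history can be modeled in a few lines. This is a hypothetical sketch: the 16-bit history width and the way the signature is split into "first" and "second" bit fields are assumptions, not details from the patent.

```python
HIST_BITS = 16  # assumed history register width

def update_history(history, signature, shift_n, xor_bits):
    """Hypothetical model of the update: shift the first shift_n bits
    of the branch signature into the history, then XOR the next
    xor_bits bits of the signature into the shifted history."""
    first = signature & ((1 << shift_n) - 1)
    shifted = ((history << shift_n) | first) & ((1 << HIST_BITS) - 1)
    second = (signature >> shift_n) & ((1 << xor_bits) - 1)
    return shifted ^ second
```

In hardware this would be a shift register feeding a bank of XOR gates, as the abstract's implementation suggests; the mask arithmetic above stands in for the fixed register width.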
20180060075 | METHOD FOR REDUCING FETCH CYCLES FOR RETURN-TYPE INSTRUCTIONS - An apparatus is disclosed, the apparatus including a branch target cache configured to store one or more branch addresses, a memory configured to store a return target stack, and a circuit. The circuit may be configured to determine, for a group of one or more fetched instructions, a prediction value indicative of whether the group includes a return instruction. In response to the prediction value indicating that the group includes a return instruction, the circuit may be further configured to select a return address from the return target stack. The circuit may also be configured to determine a hit or miss indication in the branch target cache for the group, and to, in response to receiving a miss indication from the branch target cache, select the return address as a target address for the return instruction. | 2018-03-01 |
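The selection logic in this abstract — use the return target stack when the group is predicted to contain a return and the branch target cache misses — can be sketched as follows. This is a hypothetical model; the dictionary-as-BTB and the predictor callback are stand-ins for the hardware structures.

```python
def predict_target(group_pc, btb, return_stack, predicts_return):
    """Hypothetical sketch: if the fetch group is predicted to contain
    a return and the branch target cache misses, select the return
    address from the return target stack as the target address."""
    if predicts_return(group_pc) and group_pc not in btb and return_stack:
        return return_stack.pop()
    return btb.get(group_pc)
```

Selecting from the return stack on a BTB miss is what saves the extra fetch cycles the title refers to: the fetch unit can redirect immediately instead of waiting to decode the return instruction.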
20180060076 | METHOD FOR IMPLEMENTING A REDUCED SIZE REGISTER VIEW DATA STRUCTURE IN A MICROPROCESSOR - A method of managing a reduced size register view data structure in a processor, where the method includes receiving an incoming instruction sequence using a global front end, grouping instructions from the incoming instruction sequence to form instruction blocks, populating a register view data structure, wherein the register view data structure stores register information referenced by the instruction blocks as a set of register templates, generating a set of snapshots of the register templates to reduce a size of the register view data structure, and tracking a state of the processor to handle a branch misprediction using the register view data structure in accordance with execution of the instruction blocks. | 2018-03-01 |
20180060077 | TRUSTED PLATFORM MODULE SUPPORT ON REDUCED INSTRUCTION SET COMPUTING ARCHITECTURES - Exemplary features pertain to providing trusted platform module (TPM) support for ARM®-based systems or other Reduced Instruction Set Computing (RISC) systems. In some examples, secure firmware (e.g., TrustZone firmware) operates as a shim between an unsecure high level operating system (HLOS) and a discrete TPM chip or other trusted execution environment component. The secure firmware reserves a portion of non-secure memory for use as a command response buffer (CRB) control block accessible by the HLOS. The secure firmware translates and relays TPM commands/responses between the HLOS and the TPM via the non-secure CRB memory. The system may also include various non-secure firmware components such as Advanced Configuration and Power Interface (ACPI) and Unified Extensible Firmware Interface (UEFI) components. Among other features, the exemplary system can expose the TPM to the HLOS via otherwise standard UEFI protocols and ACPI tables in a manner that is agnostic to the HLOS. | 2018-03-01 |
20180060078 | METHOD FOR BOOTING A HETEROGENEOUS SYSTEM AND PRESENTING A SYMMETRIC CORE VIEW - A heterogeneous processor architecture and a method of booting a heterogeneous processor is described. A processor according to one embodiment comprises: a set of large physical processor cores; a set of small physical processor cores having relatively lower performance processing capabilities and relatively lower power usage relative to the large physical processor cores; and a package unit, to enable a bootstrap processor. The bootstrap processor initializes the heterogeneous physical processor cores, while the heterogeneous processor presents the appearance of a homogeneous processor to a system firmware interface. | 2018-03-01 |
20180060079 | Initializing and Reconfiguring Replacement Motherboards - Systems and methods for initializing and reconfiguring replacement motherboards are described. In some embodiments, an Information Handling System (IHS) may include: a motherboard, a processor mounted on the motherboard, and a Basic Input/Output System (BIOS) mounted on the motherboard and coupled to the processor, the BIOS having program instructions stored thereon that, upon execution by the processor, cause the IHS to: determine, while operating in a service mode, whether prefill data is available in a memory device distinct from any component mounted on the motherboard, where the prefill data is usable by the BIOS to automatically fill out at least a portion of a service menu provided by the BIOS; validate the prefill data; and in response to the validated prefill data having changed since a previous booting of the IHS, store updated prefill data in the memory device. | 2018-03-01 |
20180060080 | Hardware for System Firmware Use - A method and a system for reserving a device for a system are provided herein. The method includes accessing a reserved device, where a basic input/output system (BIOS) uses the reserved device. The method includes initializing a register, via the BIOS firmware, to disable a port that connects to the reserved device. The method includes disabling the port that connects to the reserved device. The disabling may occur before the BIOS firmware transfers control of the system to an operating system. The disabling may hide the reserved device from the operating system and reserve the reserved device for the BIOS firmware without interference from the operating system. | 2018-03-01 |
20180060081 | INFORMATION PROCESSING APPARATUS WITH SEMICONDUCTOR INTEGRATED CIRCUITS, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM - An information processing apparatus which reduces production costs. The information processing apparatus has a first semiconductor device, a second semiconductor device, a ROM that stores both a first boot program and a second boot program, and an interface for communicating with the ROM. In response to the first semiconductor device being reset, the first semiconductor device reads out the first boot program from the ROM via the interface. In response to the second semiconductor device being reset, the second semiconductor device reads out the second boot program from the ROM via the interface. While the first semiconductor device is reading out the first boot program from the ROM, an output from the second semiconductor device to the interface is controlled to have high impedance. | 2018-03-01 |
20180060082 | SYSTEMS AND METHODS INVOLVING CONTROL-I/O BUFFER ENABLE CIRCUITS AND/OR FEATURES OF SAVING POWER IN STANDBY MODE - Systems and methods are disclosed involving control I/O buffer enable circuitry and/or features of saving power in standby mode. In illustrative implementations, aspects of the present innovations may be directed to providing low standby power consumption, such as providing low standby power consumption in high-speed synchronous SRAM and RLDRAM devices. | 2018-03-01 |
20180060083 | PORTABLE BOOT CONFIGURATION SOLUTION FOR THIN CLIENT DEVICE - Certain aspects direct to systems and methods for performing boot configuration of a thin client device with a portable storage device, such as a universal serial bus (USB) storage device. The system includes a computing device functioning as a thin client device, which has an interface under a protocol, such as the USB interface, allowing the portable storage device to be connected to the computing device via the interface. The portable storage device stores configuration data for configuring the computing device. Before booting, the computing device checks if the configuration data exists in a local storage device. If not, the computing device attempts to access the portable storage device, in order to automatically retrieve the configuration data from the portable storage device. Once the configuration data is obtained, the computing device may proceed with booting, and configure the computing device based on information of the configuration data without manual intervention. | 2018-03-01 |
20180060084 | TECHNIQUES FOR BRIDGING BIOS COMMANDS BETWEEN CLIENT AND HOST VIA BMC - In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be an embedded-system device. The embedded-system device receives a first message including first command or data from a client. The embedded-system device triggers a BIOS of a host of the embedded-system device to communicate with the embedded-system device. The embedded-system device receives a request from the BIOS. The embedded-system device sends the first command or data to the BIOS in response to the request. | 2018-03-01 |
20180060085 | Processor To Pre-Empt Voltage Ramps For Exit Latency Reductions - In one embodiment, a processor includes a plurality of cores and a power controller. This power controller in turn may include a voltage ramp logic to pre-empt a voltage ramp of a voltage regulator from a first voltage to a second voltage, responsive to a request for a second core to exit a low power state. Other embodiments are described and claimed. | 2018-03-01 |
20180060086 | Short-Circuiting Normal Grace-Period Computations In The Presence Of Expedited Grace Periods - A technique for short-circuiting normal read-copy update (RCU) grace period computations in the presence of expedited RCU grace periods. The technique may include determining during normal RCU grace period processing whether at least one expedited RCU grace period elapsed during a normal RCU grace period. If so, the normal RCU grace period is ended. If not, the normal RCU grace period processing is continued. Expedited RCU grace periods may be implemented by expedited RCU grace period processing that periodically awakens a kernel thread that implements the normal RCU grace period processing. The expedited RCU grace period processing may conditionally throttle wakeups to the kernel thread based on CPU utilization. | 2018-03-01 |
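The short-circuit check the abstract describes can be sketched as a counter snapshot: normal grace-period processing records the expedited-grace-period count when it starts and ends early if that count has since advanced. This is a hypothetical Python model of the control flow, not the Linux kernel implementation; the class and method names are invented.

```python
class RcuState:
    """Hypothetical sketch: normal grace-period processing snapshots an
    expedited grace-period counter and ends early if the counter has
    advanced, since an elapsed expedited grace period implies all
    readers have already quiesced."""
    def __init__(self):
        self.expedited_completed = 0  # bumped by expedited GP processing

    def normal_gp_step(self, snapshot):
        if self.expedited_completed != snapshot:
            return "end"       # short-circuit the normal grace period
        return "continue"      # keep waiting on CPUs as usual
```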
20180060087 | Optimizing User Interface Requests for Backend Processing - A computer-implemented method of user interface control includes receiving a request to display data in a user interface and displaying data in a visible part of the user interface. Data requests in a hidden part of the user interface can be assigned to bins. Data requests assigned to a first bin can be transmitted to the backend computing system and a responsive output of the backend system can be displayed in the user interface. If the display request is still active and all of the data requests assigned to the first bin have been transmitted, data requests assigned to a second bin can be transmitted to the backend computing system and a responsive output of the backend computing system can be displayed in the user interface. Related apparatus, systems, techniques and articles are also described. | 2018-03-01 |
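The binning scheme above can be sketched as: assign each hidden-part request to a bin, always send the first bin, and send later bins only while the display request remains active. This is a hypothetical sketch with invented names; lower bin numbers are assumed to mean higher priority.

```python
def transmit_binned(hidden_requests, bin_of, send, display_active):
    """Hypothetical sketch: hidden-part data requests are assigned to
    bins; the first bin is always transmitted, and later bins are
    transmitted only while the display request is still active."""
    bins = {}
    for req in hidden_requests:
        bins.setdefault(bin_of(req), []).append(req)
    responses = []
    for key in sorted(bins):
        if key > 0 and not display_active():
            break                      # display request no longer active
        responses.extend(send(req) for req in bins[key])
    return responses
```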
20180060088 | Group Interactions - Techniques for group interactions are described. In at least some implementations, content associated with a group identity is presented based on priority settings for each user from a group of users. According to various implementations, priority settings are determined for each user based on an individual identity for each user and the group identity. Thus, a group of users can interact with content optimized for priority settings of the group associated with the group identity in a single location. | 2018-03-01 |
20180060089 | SYSTEM AND COMPUTER-IMPLEMENTED METHOD FOR IN-PAGE REPORTING OF USER FEEDBACK ON A WEBSITE OR MOBILE APP - Computer-implemented techniques are disclosed for presenting an in-page console on a website for reviewing interaction data captured during user interaction with one or more web pages of the website. The web browser activates the in-page console via an activation procedure. One or more of the web pages of the website are selected after activation of the in-page console. A feedback badge on the website can be replaced with a reporting badge upon activation of the in-page console, with the reporting badge displaying an indicator of interaction data captured for the selected web page. The in-page console is overlaid on one or more of the selected web pages. The in-page console displays the interaction data, or recordings of user interaction, captured during user interaction with the selected web page to enable review of the captured interaction data for the selected web page overlaid on the selected web page. | 2018-03-01 |
20180060090 | HYBRID SERVER-SIDE AND CLIENT-SIDE PORTAL AGGREGATION AND RENDERING - Rendering of a portal page that is displayable on a client system includes receiving a request for a portal page by a web portal engine, monitoring server-side aggregation and rendering performance by the web portal engine, and comparing a measured performance parameter value of the server-side aggregation and rendering against a pre-defined threshold value. The server-side aggregation and rendering is interrupted, based upon the comparison, once the threshold value is exceeded. Further, an intermediate result of the portal page is prepared based on the server-side aggregation and rendering for sending, such that a client-side processing completes the interrupted aggregation and rendering of the portal page. | 2018-03-01 |
20180060091 | EFFECTIVE MANAGEMENT OF VIRTUAL CONTAINERS IN A DESKTOP ENVIRONMENT - A method and system are provided for identifying installed software components in a container running in a virtual execution environment. The container is created by instantiating image data. The method includes determining a respective identifier for each of individual layers of a layered structure of the image data. The method further includes retrieving from a repository storage arrangement, information identifying at least one of the installed software components in the container, based on the respective identifier for at least one of the individual layers. | 2018-03-01 |
20180060092 | Group Data and Priority in an Individual Desktop - Techniques for a group data and priority in an individual desktop are described. In at least some implementations, content associated with a group identity is presented in an individual desktop based on priority settings. According to various implementations, priority settings are determined for each user of a group based on an individual identity for each user and the group identity. Thus, a group of users can interact with content associated with the group identity in an individual environment. | 2018-03-01 |
20180060093 | Platform Support For User Education Elements - A computing device platform monitors the operation of an application that runs on the computing device platform. The computing device platform includes a user education element presentation system that can access stored presentation criteria associated with a stored user education element for the application. The user education element presentation system determines, based on the monitoring, when the presentation criteria has been satisfied, and presents the associated user education element in response to the presentation criteria for the user education element being satisfied. | 2018-03-01 |
20180060094 | HELP SYSTEM AND HELP PRESENTATION METHOD - A help system includes: an apparatus management server; one or more image forming apparatuses; a help server; and a mobile terminal, in which the mobile terminal acquires a serial number of the image forming apparatus of which the user wants to know help information among the one or more image forming apparatuses, and sends the serial number to the apparatus management server, the apparatus management server searches an apparatus information database based on the received serial number, extracts the corresponding apparatus information, and sends the extracted apparatus information to the help server, the help server receives the apparatus information sent from the apparatus management server, extracts help information relating to the sent apparatus information among the help information stored in a manual database, and sends the extracted help information to the mobile terminal, and the mobile terminal presents the received help information to the user. | 2018-03-01 |
20180060095 | APPARATUS AND METHOD FOR PROVIDING ADAPTIVE CONNECTED SERVICE - An audio-video-navigation (AVN) system for vehicles includes: a communication unit connected with a service or device; a memory storing at least one of first operation parameters or at least one of second operation parameters corresponding to a connected service or device; and a controller determining whether a normal operation is performed with the at least one loaded operation parameter and changing the at least one loaded parameter based on whether a normal operation is performed with the at least one loaded operation parameter. The controller performs a control operation to store at least one part of adaptively changeable predetermined parameters as the at least one of second operation parameters based on whether a normal operation is performed with the at least one loaded operation parameter. | 2018-03-01 |
20180060096 | RECONFIGURABLE LOGICAL CIRCUIT - A reconfigurable logical circuit includes a data processing unit; a memory in which plural combinations of configuration control bits are stored; and a selector unit that selectively switches the plural combinations of configuration control bits stored in the memory and supplies a selected one of the plural combinations of configuration control bits to the data processing unit to reconfigure processing contents of the data processing unit. | 2018-03-01 |
20180060097 | HYPER-THREADING BASED HOST-GUEST COMMUNICATION - A system and method for hyper-threading based host-guest communication includes storing, by a guest, at least one request on a shared memory. A physical processor, in communication with the shared memory, includes a first hyper-thread and a second hyper-thread. The method also includes starting, by a hypervisor, execution of a VCPU on the first hyper-thread and sending a first interrupt to the second hyper-thread to signal a request to execute a slave task on the second hyper-thread. The slave task includes an instruction to poll the shared memory. The method further includes executing, by the second hyper-thread, the slave task on the second hyper-thread and executing the at least one request stored on the shared memory. | 2018-03-01 |
20180060098 | VOLUME MANAGEMENT BY VIRTUAL MACHINE AFFILIATION AUTO-DETECTION - Embodiments for volume management in a data storage environment. A network sniffing operation between virtual machines is performed to detect relationships between the virtual machines and thereby identify candidates for subsequent storage volume affiliation operations. | 2018-03-01 |
20180060099 | DETECTING BUS LOCKING CONDITIONS AND AVOIDING BUS LOCKS - A processor may include a register to store a bus-lock-disable bit and an execution unit to execute instructions. The execution unit may receive an instruction that includes a memory access request. The execution unit may further determine that the memory access request requires acquiring a bus lock, and, responsive to detecting that the bus-lock-disable bit indicates that bus locks are disabled, signal a fault to an operating system. | 2018-03-01 |
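The decision this abstract describes can be modeled concisely. A common case requiring a bus lock is a locked access that crosses a cache-line boundary (a split lock), which is the assumed trigger in this hypothetical sketch; the 64-byte line size and the "cache-lock" fallback for line-contained accesses are also assumptions.

```python
LINE_BYTES = 64  # assumed cache-line size

def locked_access(addr, width, bus_lock_disable):
    """Hypothetical model: a locked access spanning a cache-line
    boundary would require a bus lock; with the bus-lock-disable bit
    set, the execution unit signals a fault to the OS instead."""
    needs_bus_lock = addr // LINE_BYTES != (addr + width - 1) // LINE_BYTES
    if needs_bus_lock and bus_lock_disable:
        return "fault"
    return "bus-lock" if needs_bus_lock else "cache-lock"
```

Faulting instead of silently locking the bus lets the OS identify and police software whose locked accesses would otherwise stall every other agent on the bus.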
20180060100 | Virtual Machine Migration Acceleration With Page State Indicators - Methods, systems, and computer program products are included for migrating a virtual machine. An example method of migrating a virtual machine includes transmitting, from a hypervisor, a migration indicator to a guest. The hypervisor receives a free memory page indicator from the guest that identifies one or more free memory pages corresponding to a virtual machine. The hypervisor then places the virtual machine in a suspended state. While the virtual machine is in the suspended state, the hypervisor modifies a dirty status and a migration status corresponding to one or more dirty memory pages. After resuming operation of the virtual machine from the suspended state, the hypervisor modifies a migration status corresponding to the one or more free memory pages to exclude the one or more free memory pages from a migration. The hypervisor migrates the one or more dirty memory pages. | 2018-03-01 |
20180060101 | METHOD FOR CONNECTING A LOCAL VIRTUALIZATION INFRASTRUCTURE WITH A CLOUD-BASED VIRTUALIZATION INFRASTRUCTURE - In a computer-implemented method for connecting a local virtualization infrastructure with a cloud-based virtualization infrastructure, a first view comprising a control for connecting the local virtualization infrastructure to the cloud-based virtualization infrastructure is displayed within a graphical user interface for managing the local virtualization infrastructure. Responsive to receiving a user selection to connect the local virtualization infrastructure to the cloud-based virtualization infrastructure, at least one workflow for effectuating a connection between the local virtualization infrastructure and the cloud-based virtualization infrastructure is displayed. Responsive to receiving a command to connect the local virtualization infrastructure to the cloud-based virtualization infrastructure at the workflow for effectuating a connection between the local virtualization infrastructure and the cloud-based virtualization infrastructure, a connection between the local virtualization infrastructure and the cloud-based virtualization infrastructure is established according to the at least one workflow. Responsive to establishing a connection between the local virtualization infrastructure and the cloud-based virtualization infrastructure, management of the local virtualization infrastructure and the cloud-based virtualization infrastructure through the graphical user interface for managing the local virtualization infrastructure is provided. | 2018-03-01 |
20180060102 | METHOD FOR MIGRATING A VIRTUAL MACHINE BETWEEN A LOCAL VIRTUALIZATION INFRASTRUCTURE AND A CLOUD-BASED VIRTUALIZATION INFRASTRUCTURE - In a computer-implemented method for migrating a virtual machine between a local virtualization infrastructure and a cloud-based virtualization infrastructure, within a graphical user interface for managing the local virtualization infrastructure, a first view comprising a control for migrating a virtual machine between the local virtualization infrastructure and the cloud-based virtualization infrastructure is displayed. Responsive to receiving a user selection to migrate a virtual machine between the local virtualization infrastructure and the cloud-based virtualization infrastructure, a workflow for effectuating a migration of the virtual machine between the local virtualization infrastructure and the cloud-based virtualization infrastructure is displayed. Responsive to receiving a command to migrate the virtual machine between the local virtualization infrastructure and the cloud-based virtualization infrastructure at the workflow for effectuating a migration of the virtual machine between the local virtualization infrastructure and the cloud-based virtualization infrastructure, the virtual machine is migrated between the local virtualization infrastructure and the cloud-based virtualization infrastructure. Responsive to migrating the virtual machine between the local virtualization infrastructure and the cloud-based virtualization infrastructure, management of the virtual machine through the graphical user interface for managing the local virtualization infrastructure is provided. | 2018-03-01 |
20180060103 | GUEST CODE EMULATION BY VIRTUAL MACHINE FUNCTION - Systems and methods are provided for emulating guest code by a virtual machine function. An example method includes detecting, by a hypervisor, a request by a guest to access a resource. The guest includes a virtual machine function and kernel code, and runs on a virtual machine. The virtual machine and the hypervisor run on a host machine, which includes virtual machine function memory. The method also includes in response to detecting the request to access the resource, transferring, by the hypervisor, control of a virtual central processing unit (CPU) allocated to the guest to the virtual machine function. The method further includes receiving an indication that the virtual machine function has completed the access request on behalf of the guest. The virtual machine function may modify a state of the virtual CPU in virtual machine function memory. The method also includes synchronizing, by the hypervisor, a virtual machine function memory with the virtual CPU state. | 2018-03-01 |
20180060104 | PARENTLESS VIRTUAL MACHINE FORKING - Instructions to fork a source VM are received, and execution of the source VM is temporarily stunned. A destination VM is created, and a snapshot of a first virtual disk of the source VM is created. A checkpoint state of the source VM is transferred to the destination VM. The source VM has one or more virtual disks. One or more virtual disks associated with the destination VM are created and reference the one or more virtual disks of the source VM. Execution of the destination VM is restored using the transferred checkpoint state and the virtual disks of the destination VM in a way that allows the source VM to also resume execution. Forking VMs using the described operation provisions destination VMs in a manner that makes efficient use of memory and disk space, while enabling source VMs to continue execution after completion of the fork operation. | 2018-03-01 |
20180060105 | VIRTUAL MACHINE CONTROL DEVICE, METHOD FOR CONTROLLING VIRTUAL MACHINE CONTROL DEVICE, MANAGEMENT DEVICE, AND METHOD FOR CONTROLLING MANAGEMENT DEVICE - A virtual machine control device that controls a C-Plane base station virtual machine for providing a base station function of a virtual base station. The virtual machine control device includes: a virtual machine controller configured to activate a clone of the C-Plane base station virtual machine that is a target of software update; and a virtual base station switching controller configured to assign a network assigned to the C-Plane base station virtual machine to the clone after performing the software update to the clone. | 2018-03-01 |
20180060106 | MULTI-TIERED-APPLICATION DISTRIBUTION TO RESOURCE-PROVIDER HOSTS BY AN AUTOMATED RESOURCE-EXCHANGE SYSTEM - The current document is directed to a resource-exchange system that facilitates resource exchange and sharing among computing facilities. The currently disclosed methods and systems employ efficient, distributed-search methods and subsystems within distributed computer systems that include large numbers of geographically distributed data centers to locate resource-provider computing facilities that match the resource needs of resource-consumer computing facilities based on attribute values associated with the needed resources, the resource providers, and the resource consumers. The resource-exchange system monitors and controls resource exchanges on behalf of participants in the resource-exchange system in order to optimize resource usage within participant data centers and computing facilities. Virtual machines that provide the execution environment for multi-tiered applications described by hierarchically organized multi-tiered-application specifications are automatically distributed across one or more resource-provider-computing-facility hosts by the resource-exchange system. | 2018-03-01 |
20180060107 | MULTI-HYPERVISOR VIRTUAL MACHINES - Standard nested virtualization allows a hypervisor to run other hypervisors as guests, i.e. a level-0 (L0) hypervisor can run multiple level-1 (L1) hypervisors, each of which can run multiple level-2 (L2) virtual machines (VMs), with each L2 VM restricted to running on only one L1 hypervisor. Span provides a multi-hypervisor VM in which a single VM can simultaneously run on multiple hypervisors, which permits a VM to benefit from different services provided by multiple hypervisors that co-exist on a single physical machine. Span allows (a) the memory footprint of the VM to be shared across two hypervisors, and (b) the responsibility for CPU and I/O scheduling to be distributed between the two hypervisors. Span VMs can achieve performance comparable to traditional (single-hypervisor) nested VMs for common benchmarks. | 2018-03-01 |
20180060108 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes hosts that execute a VMM, where each VMM supports execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a broadcast operation, including: pausing, by the root VMM, execution of one or more VMs supported by the root VMM; sending, by the root VMM, to other VMMs in the tree topology, a message indicating a pending transfer of the paused VMs; and transferring the paused VMs from the root VMM to the other VMMs. | 2018-03-01 |
20180060109 | SOFTWARE-DEFINED COMPUTING SYSTEM REMOTE SUPPORT - Methods, computing systems and computer program products implement embodiments of the present invention that include initializing, by a hypervisor executing on a processor, first and second virtual machines. A first software application configured to provide a service is executed on the first virtual machine, and a logical data connection is established between the first and the second virtual machines. Examples of the logical connection include physical and virtual serial connections, and physical and virtual data networking connections. A second software application configured to enable remote monitoring of the first software application via the logical data connection is executed on the second virtual machine. In some embodiments, the second software application can remotely monitor the first software application via an interface such as a command line interface, a graphical user interface and an application programming interface. | 2018-03-01 |
20180060110 | COLLECTING PERFORMANCE METRICS FROM JAVA VIRTUAL MACHINES - Embodiments include methods, computing systems, and computer program products for collecting performance metrics from Java virtual machines. Aspects include: setting up a virtual storage structure of a collector on a computing system for collecting performance metrics data from one or more Java virtual machines; pushing, at each of the Java virtual machines through a corresponding performance monitoring Java agent, performance metrics data collected by the Java virtual machine to the virtual storage structure of the collector; pulling, at a performance monitoring system through a collector API, the performance metrics data collected by the Java virtual machines from the virtual storage structure of the collector; analyzing the performance metrics data pulled from the virtual storage structure of the collector by the performance monitoring system; and generating, at the performance monitoring system, a performance alert when the analyzed performance metrics data indicates one or more system abnormalities. | 2018-03-01 |
20180060111 | Method and Apparatus for Online Upgrade of Kernel-Based Virtual Machine Module - An apparatus and a method for online upgrade of a kernel-based virtual machine module are disclosed. The method includes reorganizing and compiling a kernel-based virtual machine module to obtain a first running module, the first running module supporting a dual-active mode; causing a machine virtualizer to use the first running module; obtaining a second running module by compiling an upgraded version of the code of the first running module, wherein the second running module is an upgrade version of the first running module; and causing the machine virtualizer to switch to using the second running module. | 2018-03-01 |
20180060112 | METHOD AND APPARATUS FOR RESOLVING CONTENTION AT THE HYPERVISOR LEVEL - Aspects relate to a computer system and a computer implemented method for resolving abnormal contention on the computer system. The method includes detecting, using a processor and at a hypervisor level of the computer system, abnormal contention of a serially reusable resource caused by a first virtual machine. The abnormal contention includes the first virtual machine experiencing resource starvation of computer system resources used for processing the first virtual machine, causing the first virtual machine to block the serially reusable resource from a second virtual machine that is waiting to use the serially reusable resource. The method also includes adjusting resource allocation at the hypervisor level of the computer system resources for the first virtual machine, processing the first virtual machine based on the resource allocation, and releasing the serially reusable resource by the first virtual machine in response to the first virtual machine processing. | 2018-03-01 |
20180060113 | CONCURRENT EXECUTION OF A COMPUTER SOFTWARE APPLICATION ALONG MULTIPLE DECISION PATHS - Managing the execution of a computer software application by duplicating a primary instance of a computer software application during its execution in a primary execution context to create multiple duplicate instances of the computer software application in corresponding duplicate execution contexts, and effecting a selection of a different candidate subset of predefined elements for each of the duplicate instances. | 2018-03-01 |
20180060114 | CONCURRENT EXECUTION OF A COMPUTER SOFTWARE APPLICATION ALONG MULTIPLE DECISION PATHS - Managing the execution of a computer software application by duplicating a primary instance of a computer software application during its execution in a primary execution context to create multiple duplicate instances of the computer software application in corresponding duplicate execution contexts, and effecting a selection of a different candidate subset of predefined elements for each of the duplicate instances. | 2018-03-01 |
20180060115 | DYNAMIC PREDICTION OF HARDWARE TRANSACTION RESOURCE REQUIREMENTS - A transactional memory system dynamically predicts the resource requirements of hardware transactions. A processor of the transactional memory system predicts the resource requirements of a hardware transaction to be executed based on a resource hint, a type of hardware transaction associated with the transaction, and a previous execution of a prior hardware transaction associated with that type. The processor allocates resources for the hardware transaction based on the predicted resource requirements and initiates execution of the transaction using at least a portion of the allocated resources. | 2018-03-01 |
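The prediction scheme this abstract describes could be sketched roughly as follows. This is a minimal illustration only; the class name, the (read, write) set-size representation, the "take the max of hint and history" blending rule, and the default of 64 words are all assumptions, not details from the patent.

```python
class TxnResourcePredictor:
    """Predict (read_words, write_words) for a hardware transaction from
    (a) a programmer-supplied resource hint, and (b) the last observed
    execution of a transaction of the same type."""

    def __init__(self):
        self.history = {}  # txn_type -> last observed (read_words, write_words)

    def predict(self, txn_type, hint=None):
        if txn_type in self.history:
            observed = self.history[txn_type]
            if hint is None:
                return observed
            # Blend: trust whichever estimate is larger, to avoid
            # under-allocating and aborting the transaction.
            return tuple(max(h, o) for h, o in zip(hint, observed))
        # No history for this type: fall back to the hint, then a
        # conservative default (assumed value).
        return hint if hint is not None else (64, 64)

    def record(self, txn_type, read_words, write_words):
        self.history[txn_type] = (read_words, write_words)
```

A predictor like this would be consulted before starting the transaction and updated after each successful execution of that transaction type.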
20180060116 | EXECUTION OF TASK INSTANCES RELATING TO AT LEAST ONE APPLICATION - According to one aspect, there is provided an apparatus comprising at least one processing unit and at least one memory. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to cause display of executed task instances relating to at least one application on a graphical user interface on a display, detect a storing command associated with the executed task instances, and store task information relating to the executed task instances in a task file in the at least one memory for later resumption of execution of the task instances. | 2018-03-01 |
20180060117 | LIVE MIGRATION OF VIRTUAL COMPUTING INSTANCES BETWEEN DATA CENTERS - A method of migrating a virtualized computing instance between source and destination virtualized computing systems includes executing a first migration workflow in the source virtualized computing system between a source host computer and a first mobility agent simulating a destination host, executing a second migration workflow in the destination virtualized computing system between a second mobility agent simulating a source host and a destination host computer, sending, as part of the first migration workflow, a configuration of the migrated virtualized computing instance to the destination virtualized computing system, translating, as part of the second migration workflow, infrastructure-dependent information in the configuration of the migrated virtualized computing instance, and transferring, during execution of the first and second migration workflows, migration data including the virtualized computing instance between the source host and the destination host over a network. | 2018-03-01 |
20180060118 | METHOD AND SYSTEM FOR PROCESSING DATA - Embodiments of the present invention relate to a method and system for processing data. Specifically, there is provided a method for processing data, comprising: in response to receiving an adjustment request for adjusting the number of consumer instances from a first number to a second number, determining an adjustment policy on adjusting a first distribution of states associated with the first number of consumer instances to a second distribution of the states associated with the second number of consumer instances, the states being intermediate results of processing the data; migrating the states between the first number of the consumer instances and the second number of the consumer instances according to the adjustment policy; and processing the data based on the second distribution of the states at the second number of the consumer instances. In other embodiments, there are further provided a device and system for processing data. | 2018-03-01 |
20180060119 | OPERATING SYSTEM MIGRATION WHILE PRESERVING APPLICATIONS, DATA, AND SETTINGS - An enterprise management system is described for efficient operating system migration, preserving applications, data, and settings. A staging area, such as an empty folder, is created on a client device. A base layer for the new operating system and application layers for applications that will be installed on the computing device are downloaded to the staging area. After the base layer and application layers are downloaded, the layers are merged onto the computing device to instantly install the operating system and the applications. User settings, data, and other applications can be migrated to corresponding locations in the new operating system from the old operating system. | 2018-03-01 |
20180060120 | RESOURCE MIGRATION NEGOTIATION - Resource migration negotiation is disclosed. A request is received, from a remote physical node in a plurality of physical nodes, for a resource. An operating system is run collectively across the plurality of physical nodes. The request includes information pertaining to a guest thread running on the remote physical node. Based at least in part on at least some of the information included in the request, it is determined whether to send the requested resource or reject the request. A response is provided based at least in part on the determination. | 2018-03-01 |
20180060121 | DYNAMIC SCHEDULING - Dynamic scheduling is disclosed. A plurality of physical nodes is included in a computer system. Each node includes a plurality of processors. Each processor includes a plurality of hyperthreads. In response to receiving an indication of an event occurring, a search is performed for a queue in a set of queues on which to place a virtual processor that had been waiting on the event. Queues in the set of queues correspond to hyperthreads in a physical node in the plurality of physical nodes. The queues in the set of queues are visited according to a predetermined traversal order. | 2018-03-01 |
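The queue search this abstract describes (visiting per-hyperthread queues in a predetermined traversal order to place a waiting virtual processor) could be sketched as below. The function name, the depth threshold, and the least-loaded fallback are illustrative assumptions, not details from the patent.

```python
def place_vcpu(queues, traversal_order, max_depth=2):
    """queues: list of per-hyperthread run queues (lists of vCPU ids).
    Visit the queues in `traversal_order`; place the vCPU on the first
    queue whose depth is under `max_depth`. If every queue is at least
    that deep, fall back to the shortest queue in the traversal order.
    Returns the index of the chosen queue."""
    for i in traversal_order:
        if len(queues[i]) < max_depth:
            return i
    return min(traversal_order, key=lambda i: len(queues[i]))
```

The traversal order itself would encode locality (e.g. sibling hyperthreads of the core the vCPU last ran on first, then the rest of the node).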
20180060122 | METHOD AND SYSTEM FOR PREDICTING TASK COMPLETION OF A TIME PERIOD BASED ON TASK COMPLETION RATES OF PRIOR TIME PERIODS USING MACHINE LEARNING - A request is received from a client for determining task completion of a first set of tasks associated with attributes, the first set of tasks scheduled to be performed within a first time period. For each of the attributes, a completion rate of one or more of a second set of tasks is calculated that are associated with the attribute. The second set of tasks has been performed during a second time period in the past. An isotonic regression operation and/or temporal smoothing are performed on the completion rates associated with the attributes of the second set of tasks that have been performed during the second time period to calibrate the completion rates. Possible completion for the attributes of the first set of tasks to be performed in the first time period is calculated based on the calibrated completion rates of the second set of tasks. | 2018-03-01 |
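The two calibration steps named in this abstract (isotonic regression and temporal smoothing of historical completion rates) could be sketched as follows. This is a generic illustration of those standard techniques under assumed interfaces, not the patent's actual implementation; the pool-adjacent-violators formulation and the smoothing factor are assumptions.

```python
def isotonic_calibrate(rates):
    """Pool-Adjacent-Violators: return the non-decreasing sequence that
    best fits `rates` in the least-squares sense."""
    merged = []  # blocks of [value, weight]
    for r in rates:
        merged.append([r, 1])
        # Merge adjacent blocks while the ordering constraint is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2 = merged.pop()
            v1, w1 = merged.pop()
            merged.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for v, w in merged:
        out.extend([v] * w)
    return out

def temporal_smooth(rates, alpha=0.5):
    """Simple exponential smoothing over time-ordered completion rates."""
    smoothed, prev = [], rates[0]
    for r in rates:
        prev = alpha * r + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed
```

Calibrated rates from the prior period would then be looked up per attribute to score the tasks scheduled in the upcoming period.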
20180060123 | Controlling A Performance State Of A Processor Using A Combination Of Package And Thread Hint Information - In one embodiment, a processor includes: a first storage to store a set of common performance state request settings; a second storage to store a set of thread performance state request settings; and a controller to control a performance state of a first core based on a combination of at least one of the set of common performance state request settings and at least one of the set of thread performance state request settings. Other embodiments are described and claimed. | 2018-03-01 |
20180060124 | HETEROGENEOUS PARALLEL PRIMITIVES PROGRAMMING MODEL - With the success of programming models such as OpenCL and CUDA, heterogeneous computing platforms are becoming mainstream. However, these heterogeneous systems are low-level, not composable, and their behavior is often implementation defined even for standardized programming models. In contrast, the method and system embodiments for the heterogeneous parallel primitives (HPP) programming model disclosed herein provide a flexible and composable programming platform that guarantees behavior even in the case of developing high-performance code. | 2018-03-01 |
20180060125 | INTELLIGENT CONTROLLER FOR CONTAINERIZED APPLICATIONS - A system includes a plurality of storage drives configured to store data associated with at least one of homogeneous and heterogeneous applications running in containers; and a controller configured to balance workloads of the containers by grouping the containers based on characteristics of the workloads of the containers. | 2018-03-01 |
20180060126 | RESOURCE MANAGEMENT FOR UNTRUSTED PROGRAMS - Embodiments include method, systems and computer program products for resource management of untrusted programs. In some embodiments, a first request to process an asynchronous event by an untrusted application may be received. The first request may include a host memory address. A counter may be incremented in response to receiving the first request. A device memory address may be retrieved from a device translation table using the host memory address. Processing the first request by a device using the device memory address may be facilitated. A second request to unregister the host memory address may be received. The counter may be determined to be non-zero. An action may be implemented in response to determining that the counter is non-zero. | 2018-03-01 |
20180060127 | RESERVATION OF HARDWARE RESOURCES IN A COMPUTER SYSTEM BASED ON UTILIZATION MEASUREMENTS DURING TIME RANGES - A resource management computer node obtains hardware utilization values measured for a hardware resource of a computer system being used by a software application. For a set of the utilization values that were measured during a same time-of-day range on a same day of week, the node determines a count value indicating a number of times the utilization values in the set exceed a count threshold, determines a count percentage based on a ratio of the count value to a sum of count values determined for the same day of week, compares the count percentage to a busy threshold, and, responsive to the count percentage exceeding the busy threshold, sets a busy indicator object at a location in a resource utilization data structure having a defined correspondence to the time-of-day range. The node controls reservation of hardware resources for the software application responsive to whether the busy indicator object has been set. | 2018-03-01 |
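The counting logic this abstract walks through could be sketched as below. The function name, the sample format, and the two threshold values are illustrative assumptions; the patent only requires some count threshold, a per-day ratio, and a busy threshold.

```python
from collections import defaultdict

COUNT_THRESHOLD = 80.0   # utilization % that counts as a "high" sample (assumed)
BUSY_THRESHOLD = 0.25    # fraction of a day's high counts marking a slot busy (assumed)

def build_busy_map(samples):
    """samples: iterable of (day_of_week, time_of_day_range, utilization_pct).
    Returns a dict mapping (day, range) -> True for slots considered busy."""
    counts = defaultdict(int)      # (day, range) -> high-sample count
    day_totals = defaultdict(int)  # day -> sum of high-sample counts
    for day, hour, util in samples:
        if util > COUNT_THRESHOLD:
            counts[(day, hour)] += 1
            day_totals[day] += 1
    busy = {}
    for (day, hour), c in counts.items():
        # Count percentage: this slot's share of the day's high samples.
        if day_totals[day] and c / day_totals[day] > BUSY_THRESHOLD:
            busy[(day, hour)] = True
    return busy
```

A reservation manager would then refuse (or time-shift) new reservations that overlap slots flagged in the busy map.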
20180060128 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR RESOURCE MANAGEMENT IN A DISTRIBUTED COMPUTATION SYSTEM - A method, system, and computer program product include determining a task resource consumption predicted for each of one or more tasks being executed on a node, wherein the task resource consumption is a function of time, and predicting a node resource consumption of the node based at least on the predicted task resource consumption, wherein the node resource consumption is a function of time. | 2018-03-01 |
20180060129 | TERMINATION POLICIES FOR SCALING COMPUTE RESOURCES - Approaches are described for enabling a user to specify one or more termination policies that can be used to select which instances in a group of virtual machines (or other compute resources) allocated to the user should be terminated first when scaling down the group of virtual machine instances. The termination policies can be utilized by an automatic scaling service when managing the resources in a multitenant shared resource computing environment, such as a cloud computing environment. | 2018-03-01 |
20180060130 | Speculative Loop Iteration Partitioning for Heterogeneous Execution - Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing speculative loop iteration partitioning (SLIP) for heterogeneous processing devices. A computing device may receive iteration information for a first partition of iterations of a repetitive process and select a SLIP heuristic based on available SLIP information and iteration information for the first partition. The computing device may determine a split value for the first partition using the SLIP heuristic, and partition the first partition using the split value to produce a plurality of next partitions. | 2018-03-01 |
20180060131 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - An information processing system includes a memory and processors. The memory stores flow information and flow identification information for each sequence of processes performed by using electronic data. The flow information defines program identification information identifying programs for executing the sequence of processes, and an execution order of the programs. The processors execute computer-executable instructions stored in the memory to execute a process including receiving information relating to the electronic data and flow identification information, from a device coupled to the information processing system; acquiring the flow information stored in association with the received flow identification information; provisionally executing the sequence of processes based on the received information and the acquired flow information; and executing the sequence of processes based on the received information and the acquired flow information, upon determining that an error has not occurred in the provisional execution of the sequence of processes. | 2018-03-01 |
20180060132 | STATEFUL RESOURCE POOL MANAGEMENT FOR JOB EXECUTION - Stateful resource pool management may be implemented for executing jobs. Metrics for pools of computing resources that are configured to execute jobs on behalf of network-based services may be collected. The metrics may be evaluated to detect a modification event for a pool of computing resources. The pool of computing resources may then be modified according to the detected modification event for the pool. Evaluation of metrics may be performed automatically as part of monitoring a resource pool, in some embodiments. | 2018-03-01 |
20180060133 | EVENT-DRIVEN RESOURCE POOL MANAGEMENT - Event-driven management may be implemented for resource pools. Pool management events may be detected at computing resources in a resource pool. Operations based on the pool management events may then be performed at the computing resources. In some embodiments, pool management events may trigger operations to recycle a computing resource for reuse in a resource pool or perform other resource lifecycle operations. | 2018-03-01 |
20180060134 | RESOURCE OVERSUBSCRIPTION BASED ON UTILIZATION PATTERNS IN COMPUTING SYSTEMS - Techniques of managing oversubscription of network resources are disclosed herein. In one embodiment, a method includes receiving resource utilization data of a virtual machine hosted on a server in a computing system. The virtual machine is configured to perform a task. The method also includes determining whether a temporal pattern of the resource utilization data associated with the virtual machine indicates one or more cycles of resource utilization as a function of time and in response to determining that the temporal pattern associated with the virtual machine indicates one or more cycles of resource utilization as a function of time, causing the virtual machine to migrate to another server that is not oversubscribed by virtual machines in the computing system. | 2018-03-01 |
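Detecting "one or more cycles of resource utilization as a function of time", as this abstract describes, is commonly done with autocorrelation; a minimal sketch under that assumption (the lag range and the 0.8 threshold are illustrative, not from the patent):

```python
def has_cycle(util, min_lag=2, threshold=0.8):
    """Detect a repeating cycle in a utilization time series by checking
    the normalized autocorrelation at candidate lags."""
    n = len(util)
    mean = sum(util) / n
    centered = [u - mean for u in util]
    denom = sum(c * c for c in centered)
    if denom == 0:
        return False  # flat series: no cycle
    for lag in range(min_lag, n // 2 + 1):
        num = sum(centered[i] * centered[i + lag] for i in range(n - lag))
        # Rescale by the overlap length so longer lags aren't penalized.
        if num / denom * (n / (n - lag)) > threshold:
            return True
    return False
```

A VM whose utilization series trips this detector would be a candidate for migration off an oversubscribed server, per the abstract.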
20180060135 | Network Management - According to an example aspect of the present invention, there is provided a system comprising a memory configured to store information characterizing network management actions that have occurred in the past, and at least one processing core configured to initiate a network management action based at least in part on the stored information, the network management action involving at least one virtualized network function. | 2018-03-01 |
20180060136 | TECHNIQUES TO DYNAMICALLY ALLOCATE RESOURCES OF CONFIGURABLE COMPUTING RESOURCES - Examples may include techniques to coordinate the sharing of resources among virtual elements, including service chains, supported by a shared pool of configurable computing resources based on relative priority among the virtual elements and service chains. Information including indications of the performance of the service chains and also the relative priority of the service chains may be received. The resource allocation of portions of the shared pool of configurable computing resources supporting the service chains can be adjusted based on the received performance and priority information. | 2018-03-01 |
20180060137 | CONSTRAINED PLACEMENT IN HIERARCHICAL RANDOMIZED SCHEDULERS - A distributed scheduler for a virtualized computer system has a hierarchical structure and includes a root scheduler as the root node, one or more branch schedulers as intermediate nodes, and a plurality of hosts as leaf nodes. A request to place a virtual computing instance is propagated down the hierarchical structure to the hosts that satisfy placement constraints of the request. Each host that receives the request responds with a score indicating resource availability on that host, and the scores are propagated back up the hierarchical structure. Branch schedulers that receive such scores compare the received scores and further propagate a “winning” score, such as the highest or lowest score, up the hierarchical structure, until the root scheduler is reached. The root scheduler makes a similar comparison of received scores to select the best candidate among the hosts to place the virtual computing instance. | 2018-03-01 |
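The score propagation this abstract describes (requests flow down the scheduler tree, per-host scores flow back up, each branch keeps only the winning score) could be sketched as below. The class and field names, the use of free CPU as the score, and "higher is better" are assumed conventions for illustration.

```python
class Host:
    """Leaf node: responds with a score only if it satisfies the
    request's placement constraints."""
    def __init__(self, name, free_cpu, constraints=frozenset()):
        self.name, self.free_cpu, self.constraints = name, free_cpu, constraints

    def place(self, request):
        if not request["constraints"] <= self.constraints:
            return None  # constraint not satisfied: no response
        return (self.free_cpu, self.name)  # score = resource availability

class BranchScheduler:
    """Intermediate (or root) node: propagates the request down and
    forwards only the winning score upward."""
    def __init__(self, children):
        self.children = children

    def place(self, request):
        scores = [s for c in self.children if (s := c.place(request)) is not None]
        return max(scores) if scores else None
```

At the root, the same comparison selects the final candidate host for the virtual computing instance.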
20180060138 | Load Balancing Systems and Methods - Methods, systems, computer-readable media, and apparatuses for performing, providing, managing, executing, and/or running a spatially-optimized simulation are presented. In one or more embodiments, the spatially-optimized simulation may comprise a plurality of worker modules performing the simulation, a plurality of entities being simulated among the plurality of worker modules, a plurality of bridge modules facilitating communication between workers and an administrative layer including a plurality of chunk modules, at least one receptionist module, and at least one oracle module. The spatially-optimized simulation may be configured to provide a distributed, persistent, fault-tolerant and spatially-optimized simulation environment. In some embodiments, load balancing and fault tolerance may be performed using transfer scores and/or tensile energies determined among the candidates for transferring simulation entities among workers. In some embodiments, the plurality of bridge modules may expose an application programming interface (API) for communicating with the plurality of worker modules. | 2018-03-01 |
20180060139 | APPLICATION PROCESSING ALLOCATION IN A COMPUTING SYSTEM - A method for allocating processing of an application performed by a computing system made up of a plurality of interconnected physical computing devices includes executing an application on a first application server associated with the computing system, the application having a number of modular software components; while executing the application, measuring processing resources consumed by one of the modular software components; and in response to one of the modular software components consuming an amount of processing resources defined by a criterion, deploying the one of the modular software components to a second application server associated with the computing system. | 2018-03-01 |
20180060140 | Short-Circuiting Normal Grace-Period Computations In The Presence Of Expedited Grace Periods - A technique for short-circuiting normal read-copy update (RCU) grace period computations in the presence of expedited RCU grace periods. The technique may include determining during normal RCU grace period processing whether at least one expedited RCU grace period elapsed during a normal RCU grace period. If so, the normal RCU grace period is ended. If not, the normal RCU grace period processing is continued. Expedited RCU grace periods may be implemented by expedited RCU grace period processing that periodically awakens a kernel thread that implements the normal RCU grace period processing. The expedited RCU grace period processing may conditionally throttle wakeups to the kernel thread based on CPU utilization. | 2018-03-01 |
20180060141 | HIERARCHICAL HARDWARE OBJECT MODEL LOCKING - A method, executed by a computer, includes locking a system mutex of a system target, locking a node group with a single node group write-lock, wherein the node group comprises a plurality of nodes that are all locked by the single node group write-lock, and wherein each node of the plurality of nodes has a plurality of descendants, and locking the plurality of descendants corresponding to a node with a single node write-lock. A computer system and computer program product corresponding to the above method are also disclosed herein. | 2018-03-01 |
20180060142 | MIXED CRITICALITY CONTROL SYSTEM - A control system includes a multi-core processor configured to operate plural different applications performing different operations for controlling a controlled system. The applications are associated with different levels of criticality based on the operations performed by the applications. The processor is configured to provide a single hardware platform providing both spatial and temporal isolation between the different applications based on the different levels of criticality associated with the different applications. The processor also is configured to synchronize communications of the applications operating in a real time operating system with scheduled communications of a time sensitive network (TSN). | 2018-03-01 |
20180060143 | DISTRIBUTED SHARED LOG STORAGE SYSTEM HAVING AN ADAPTER FOR HETEROGENOUS BIG DATA WORKLOADS - A distributed shared log storage system employs an adapter that translates APIs for a big data application to APIs of the distributed shared log storage system. An instance of an adapter is configured for different big data applications in accordance with a profile thereof, so that the big data applications can take on a variety of added characteristics to enhance the application and/or to improve the performance of the application. Included in the added characteristics are global or local ordering of operations, replication of operations according to different replication models, making the operations atomic and caching. | 2018-03-01 |
20180060144 | CONTROL METHODS FOR MOBILE ELECTRONIC DEVICES IN DISTRIBUTED ENVIRONMENTS - The present invention provides methods and systems for controlling electronic devices through digital signal processor (DSP) and handler control logic. DSPs and handlers are connected by at least one signal adapter, with each signal adapter making use of partial DSP functionalities, and at least one device sensor. The present invention makes use of a device profiling database, to optimize device performance. | 2018-03-01 |
20180060145 | MESSAGE CACHE MANAGEMENT FOR MESSAGE QUEUES - A method and apparatus for message cache management for message queues is provided. A plurality of messages from a plurality of enqueuers are enqueued in a queue comprising one or more shards, each shard comprising one or more subshards. A message cache is maintained in memory. Enqueuing a message includes enqueuing the message in a current subshard of a particular shard, which includes storing the message in a cached subshard corresponding to the current subshard of the particular shard. For each dequeuer-shard pair, a dequeue rate is determined. Estimated access time data is generated that includes an earliest estimated access time for each of a plurality of subshards based on the dequeuer-shard pair dequeue rates. A set of subshards is determined for storing as cached subshards in the message cache based on the earliest estimated access times for the plurality of subshards. | 2018-03-01 |
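The caching decision this abstract describes (derive an earliest estimated access time per subshard from per-dequeuer-shard dequeue rates, then cache the subshards with the earliest times) could be sketched as below. The function signature, the linear position/rate model, and the dictionary layouts are assumptions for illustration.

```python
import heapq

def plan_cache(dequeuer_positions, dequeue_rates, subshards_per_shard, capacity):
    """Pick the `capacity` subshards with the earliest estimated access times.

    dequeuer_positions: {(dequeuer, shard): current subshard index}
    dequeue_rates:      {(dequeuer, shard): subshards consumed per second}
    Returns a list of (shard, subshard) keys to keep cached."""
    earliest = {}  # (shard, subshard) -> earliest estimated access time
    for (dq, shard), pos in dequeuer_positions.items():
        rate = dequeue_rates[(dq, shard)]
        for sub in range(pos, subshards_per_shard):
            eta = (sub - pos) / rate  # seconds until this dequeuer reaches `sub`
            key = (shard, sub)
            # Keep the earliest ETA over all dequeuers of this shard.
            earliest[key] = min(earliest.get(key, eta), eta)
    return heapq.nsmallest(capacity, earliest, key=earliest.get)
```

Subshards outside the returned set would be eviction candidates; the plan would be recomputed as dequeue rates drift.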
20180060146 | MESSAGE PATTERN DETECTION AND PROCESSING SUSPENSION - A transaction suspension system rapidly determines whether messages received by a data transaction processing system correspond to a stored message pattern. Stored message patterns may relate to a transaction type associated with each message, which sources transmitted the messages, and when messages were received by the data transaction processing system. The transaction suspension system may prevent the processing of messages, e.g., messages from a specific source, even if the messages would have otherwise qualified for processing or execution. | 2018-03-01 |
20180060147 | PROCESSOR SYSTEM AND METHOD FOR MONITORING PROCESSORS - A processor system includes an application processor, which has a processor core and hardware performance counters, and a monitoring processor, which is coupled to the application processor by a data transmission interface. The monitoring processor has a look-up table, in which target performance profiles of the progression over time of performance events of the hardware performance counters are stored for an application which is to be executed on the application processor and monitored. The monitoring processor has an evaluating logic which is linked to the look-up table and is configured to record the progression over time of performance events of the hardware performance counters during the execution of the application to be monitored on the application processor and to compare the progression with the target performance profiles stored in the look-up table. | 2018-03-01 |
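The core comparison the monitoring processor performs — recorded counter progression versus a stored target profile — reduces to a sample-by-sample tolerance check. A minimal sketch, assuming a simple relative tolerance (the tolerance scheme and names are illustrative, not from the application):

```python
def matches_profile(observed, target, tolerance=0.10):
    """Compare an observed progression of a hardware performance counter
    against a stored target profile, sample by sample, within a
    relative tolerance. Any out-of-band sample is a mismatch."""
    if len(observed) != len(target):
        return False
    for obs, ref in zip(observed, target):
        bound = abs(ref) * tolerance
        if abs(obs - ref) > bound:
            return False
    return True
```

A mismatch would then trigger whatever corrective action the monitoring processor is configured for.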
20180060148 | BAD BLOCK DETECTION AND PREDICTIVE ANALYTICS IN NAND FLASH STORAGE DEVICES - Utilities for use in actively detecting the occurrence of bad blocks in NAND flash storage devices and diagnosing the devices as faulty at some point before complete failure of the devices (e.g., before a number of allowable bad blocks has been reached) to allow a corresponding service processor to continue to write to available blocks for a period of time until a replacement NAND flash device can be identified. The utilities may also be utilized to predict the future occurrence of bad blocks in NAND flash devices, such as during the “burn-in” process of the devices (e.g., which tests the quality of the NAND flash device before being placed into service to weed out devices with defects). | 2018-03-01 |
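Predicting when a NAND device will exhaust its allowable bad blocks is, at its simplest, trend extrapolation over the bad-block count. A hedged sketch using a least-squares fit (the linear model is an assumption for illustration; the patent does not specify the predictive method):

```python
def predict_failure_time(samples, max_bad_blocks):
    """Fit a least-squares line to (time, bad_block_count) samples and
    extrapolate the time at which the allowable bad-block limit would be
    reached. Returns None if the count is not growing."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_c = sum(c for _, c in samples) / n
    num = sum((t - mean_t) * (c - mean_c) for t, c in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den if den else 0.0
    if slope <= 0:
        return None  # bad-block count stable; no failure predicted
    intercept = mean_c - slope * mean_t
    return (max_bad_blocks - intercept) / slope
```

Diagnosing the device as faulty some margin before the predicted time would leave the service processor a window to keep writing to good blocks while a replacement is identified.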
20180060149 | Interface Tool for Asset Fault Analysis - Disclosed herein are systems, devices, and methods related to analyzing faults across a population of assets. In particular, examples involve receiving a selection of variables each corresponding to an asset attribute type, accessing data associated with the selected variables, determining the number of fault occurrences across the population of assets for each combination of values of the selected variables, and facilitating the identification of outlier combination(s) that correspond to an abnormally large number of fault occurrences relative to other combination(s). | 2018-03-01 |
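The outlier identification described above — tallying fault occurrences per combination of selected variable values and flagging abnormally large counts — can be sketched with a simple mean-plus-k-sigma threshold (the threshold rule is an assumed stand-in; the application leaves the outlier criterion open):

```python
from collections import Counter
import statistics

def find_outlier_combinations(faults, variables, k=2.0):
    """Tally fault occurrences per combination of values of the selected
    variables, then flag combinations whose count exceeds
    mean + k * population stdev of all combination counts."""
    counts = Counter(tuple(f[v] for v in variables) for f in faults)
    values = list(counts.values())
    if len(values) < 2:
        return []
    threshold = statistics.mean(values) + k * statistics.pstdev(values)
    return [combo for combo, c in counts.items() if c > threshold]
```

Each `faults` entry here is a dict of asset attributes; selecting `['model']` would surface models with disproportionately many faults.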
20180060150 | ROOT CAUSE ANALYSIS - Systems, methods and tools for performing a root cause analysis and improvements to the root cause detection by changing the way analysts and troubleshooters interact with the error reporting files to detect injection points that indicate the root cause of a system error. The systems, methods and tools record the observable behavior of users as the users review files to identify behavioral clues of the user to infer a level of interest in sections of the files being viewed. The systems identify correlations between user behavior and emotive expression to calculate a probability of event data being the root cause of an error. The systems may manually or automatically generate one or more tags in the reviewed file for each of the sections of the file that has a probability of being a root cause of a defect and the tags may vary as a function of the probability. | 2018-03-01 |
20180060151 | TECHNIQUE FOR VALIDATING A PROGNOSTIC-SURVEILLANCE MECHANISM IN AN ENTERPRISE COMPUTER SYSTEM - The disclosed embodiments relate to a system for validating a prognostic-surveillance mechanism, which detects anomalies that arise during operation of a computer system. During operation, the system obtains telemetry data comprising a set of raw signals gathered from sensors in the computer system during operation of the computer system, wherein the telemetry signals are gathered over a monitored time period. Next, for each raw signal in the set of raw signals, the system decomposes the raw signal into deterministic and stochastic components. The system then generates a corresponding set of synthesized signals based on the deterministic and stochastic components of the raw signals, wherein the synthesized signals are generated for a simulated time period, which is longer than the monitored time period. Finally, the system uses the set of synthesized signals to validate one or more performance metrics of the prognostic-surveillance mechanism. | 2018-03-01 |
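The decompose-then-synthesize step above can be illustrated minimally: split each raw signal into a deterministic component plus a stochastic residual, then generate a longer synthetic signal from the two. The moving-average decomposition and residual resampling below are assumptions for illustration; the application does not fix a particular decomposition.

```python
import random

def decompose(signal, window=5):
    """Split a raw telemetry signal into a deterministic component
    (centered moving average) and a stochastic residual."""
    half = window // 2
    det = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        det.append(sum(signal[lo:hi]) / (hi - lo))
    sto = [s - d for s, d in zip(signal, det)]
    return det, sto

def synthesize(det, sto, length, seed=0):
    """Generate a longer synthetic signal by tiling the deterministic
    component and adding residuals resampled with replacement, so the
    simulated period can exceed the monitored period."""
    rng = random.Random(seed)
    return [det[i % len(det)] + rng.choice(sto) for i in range(length)]
```

Running the prognostic-surveillance mechanism against the synthesized signals then exercises it over a far longer (simulated) period than was actually monitored.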
20180060152 | AUTOMATED DATA STORAGE LIBRARY SNAPSHOT FOR HOST DETECTED ERRORS - Embodiments for automated data storage library snapshot for host detected errors by a processor. A host related triggering event associated with a host of an automated data storage library may be detected. The triggering event may be unrecognized or undetected as a library error by the automated data storage library. A snapshot of one or more logs in the automated data storage library may be captured upon detection of the host related triggering event. The snapshot of the one or more logs may be stored by the automated data storage library. | 2018-03-01 |

20180060153 | Sensor Web for Internet of Things Sensor Devices - Concepts and technologies are disclosed herein for a sensor web for Internet of Things (“IoT”) devices. According to one aspect disclosed herein, a system can monitor a health status of an IoT sensor device of a plurality of IoT sensor devices. The system can determine that the health status of the IoT sensor device indicates a sensor malfunction experienced by the IoT sensor device, and in response, can generate and send an alert to a forensic analytics module. The alert can identify the sensor malfunction. In response to the alert, the forensic analytics module can determine a last known location of the IoT sensor device. The system can obtain a set of satellite images of the last known location of the IoT sensor device, and can utilize the set of satellite images of the last known location to determine a cause of the sensor malfunction. | 2018-03-01 |
20180060154 | TECHNIQUES FOR DYNAMICALLY BENCHMARKING CLOUD DATA STORE SYSTEMS - In various embodiments, a benchmarking engine automatically tests a data store to assess functionality and/or performance of the data store. The benchmarking engine generates data store operations based on dynamically adjustable configuration data. As the benchmarking engine generates the data store operations, the data store operations execute on the data store. In a complementary fashion, as the data store operations execute on the data store, the benchmarking engine generates statistics based on the results of the executed data store operations. Advantageously, because the benchmarking engine adjusts the number and/or type of data store operations that the benchmarking engine generates based on any changes to the configuration data, the workload that executes on the data store may be fine-tuned as the benchmarking engine executes. | 2018-03-01 |
20180060155 | FAULT DETECTION USING DATA DISTRIBUTION CHARACTERISTICS - Certain embodiments may include a method, system, apparatus, and/or machine accessible storage medium to: obtain baseline data associated with a device, wherein the baseline data comprises an indication of an expected performance of the device during healthy operation; obtain status data associated with the device, wherein the status data is obtained based on operational information monitored by a sensor; compute delta data based on a delta between the status data and the baseline data; compute a standard deviation of the delta data; compute a plurality of standard deviation bands based on the standard deviation of the delta data; compute a statistical distribution of the delta data based on the plurality of standard deviation bands; and detect a fault in the device based on the statistical distribution of the delta data. | 2018-03-01 |
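The claimed pipeline above is unusually concrete: delta data, standard deviation bands, a distribution over those bands, then a fault decision. A minimal sketch, assuming a healthy-reference sigma and a simple "too many deltas outside 3 sigma" criterion (the specific thresholds are illustrative assumptions):

```python
def band_distribution(deltas, sigma):
    """Fraction of delta samples falling within 1, 2, and 3 standard
    deviation bands of zero (healthy operation implies deltas near zero)."""
    n = len(deltas)
    return tuple(sum(1 for d in deltas if abs(d) <= k * sigma) / n
                 for k in (1, 2, 3))

def detect_fault(status, baseline, healthy_sigma, min_within_3sigma=0.99):
    """Flag a fault when too many status-vs-baseline deltas land outside
    the 3-sigma band of the healthy delta distribution."""
    deltas = [s - b for s, b in zip(status, baseline)]
    dist = band_distribution(deltas, healthy_sigma)
    return dist[2] < min_within_3sigma
```

Comparing the band distribution against what healthy data should produce (roughly the 68-95-99.7 rule) is what separates a genuine degradation from ordinary noise.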
20180060156 | ANALYSIS METHOD, ANALYSIS APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM STORING ANALYSIS PROGRAM - Normal and abnormal states are calculated from log data for each of a plurality of processes in which shared modules exist. A timing of a change of the states is calculated. Based on the calculated timing, a time interval in which the normal and abnormal states are not mixed is separated out for each of the plurality of processes. Within that time interval, an abnormal module is detected based on relationship information between the plurality of processes and the modules. | 2018-03-01 |
20180060157 | GENERATING TAILORED ERROR MESSAGES - A computer system may encounter an error and receive information regarding the error and the user. The system may use information about the user to generate a message generation profile for the user. The system may use the message generation profile and the information about the error to generate a user-tailored message. The system may monitor the user's reaction to an error message and consider information associated with that reaction when subsequently generating user-tailored error messages. | 2018-03-01 |
20180060158 | MULTIPLE PATH ERROR DATA COLLECTION IN A STORAGE MANAGEMENT SYSTEM - In one aspect, multiple data path error collection is provided in a storage management system. In one embodiment, an error condition in a main data path between the storage controller and at least one of a host and a storage unit is detected, and in response, a sequence of error data collection operations to collect error data through a main path is initiated. In response to a failure to collect error data at a level of the sequential error data collection operations, error data is collected through an alternate data path as a function of the error data collection level at which the failure occurred. Other aspects are described. | 2018-03-01 |
20180060159 | PROFILING AND DIAGNOSTICS FOR INTERNET OF THINGS - A computing device and method for profiling and diagnostics in an Internet of Things (IoT) system, including matching an observed solution characteristic of the IoT system to an anomaly in an anomaly database. | 2018-03-01 |
20180060160 | Low-Latency Decoder for Reed Solomon Codes - A decoder includes a syndrome calculator, a Key Equation Solver (KES) and an error corrector. The syndrome calculator is configured to receive an n-symbol code word encoded using a Reed Solomon (RS) code to include (n−k) redundancy symbols, and to calculate for the code word 2t syndromes Si, where t=(n−k)/2 is the maximal number of correctable erroneous symbols. The KES is configured to derive an error locator polynomial | 2018-03-01 |
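The syndrome calculation named above evaluates the received polynomial r(x) at successive powers of a primitive element: S_i = r(alpha^i) for i = 1..(n−k), all zero for an error-free code word. A toy sketch over the prime field GF(929) with primitive element 3 (the field used by PDF417; real low-latency decoders typically work over GF(2^8) with dedicated hardware, so this is illustrative only):

```python
def rs_syndromes(codeword, n_minus_k, alpha=3, p=929):
    """Toy Reed-Solomon syndrome computation over the prime field GF(929):
    S_i = r(alpha^i) for i = 1..(n-k), where r(x) is the received
    polynomial with coefficients given highest-degree first.
    All-zero syndromes mean no detectable error."""
    syndromes = []
    for i in range(1, n_minus_k + 1):
        x = pow(alpha, i, p)
        acc = 0
        for coeff in codeword:  # Horner's rule evaluation of r(x) at alpha^i
            acc = (acc * x + coeff) % p
        syndromes.append(acc)
    return syndromes
```

Nonzero syndromes feed the KES, which derives the error locator polynomial from them.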
20180060161 | PATTERNED BIT IN ERROR MEASUREMENT APPARATUS AND METHOD - A method includes detecting different data patterns in data read from a portion of a non-transitory data storage medium. Bit errors in the different data patterns are then determined. Further, bits in error for a total number of bits in each of the different data patterns are calculated from the determined bit errors in the different data patterns. | 2018-03-01 |
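The per-pattern bits-in-error calculation above amounts to accumulating (errors, total bits) per detected pattern and dividing. A minimal sketch, with pattern names and the bit-string representation as illustrative assumptions:

```python
from collections import defaultdict

def patterned_bit_error_rates(occurrences):
    """Accumulate bit errors per data pattern from (pattern, expected, read)
    tuples of equal-length bit strings, returning each pattern's bits in
    error divided by the total number of bits seen for that pattern."""
    tally = defaultdict(lambda: [0, 0])  # pattern -> [errors, total_bits]
    for pattern, expected, read in occurrences:
        errors = sum(e != r for e, r in zip(expected, read))
        tally[pattern][0] += errors
        tally[pattern][1] += len(expected)
    return {p: errs / total for p, (errs, total) in tally.items()}
```

Breaking the error rate out by pattern (rather than one aggregate rate) is what lets a drive vendor see which recorded patterns the medium handles worst.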
20180060162 | Auto-Recovery of Media Cache Master Table Data - Apparatus and method for managing a media cache of a data storage device. In some embodiments, a media cache master table is maintained in a memory as a data structure having a plurality of entries that describe data sets stored in a non-volatile media cache memory. A first timecode stamp value is written to respective first and second locations in the table at the commencement of a data transfer operation to transfer data associated with the plurality of entries in the table. The first location is updated with a new, second timecode stamp value responsive to detection of an error condition that interrupts the data transfer operation. An error recovery operation is subsequently performed responsive to a detected mismatch between the timecode stamp values in the first and second locations. | 2018-03-01 |
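The dual-timecode consistency check described above can be sketched as a small state machine: both locations receive the same stamp when a transfer begins, an interrupting error bumps only the first, and a mismatch on inspection signals that recovery is needed. Class and method names below are hypothetical.

```python
class MasterTable:
    """Minimal sketch of the two-location timecode stamp scheme for a
    media cache master table (illustrative, not the device firmware)."""

    def __init__(self):
        self.stamp_a = 0   # first location
        self.stamp_b = 0   # second location
        self._clock = 0

    def _next_stamp(self):
        self._clock += 1
        return self._clock

    def begin_transfer(self):
        # Both locations get the same stamp at transfer commencement.
        stamp = self._next_stamp()
        self.stamp_a = stamp
        self.stamp_b = stamp

    def on_error(self):
        # An interrupting error condition updates only the first location.
        self.stamp_a = self._next_stamp()

    def needs_recovery(self):
        # A mismatch between the two locations triggers error recovery.
        return self.stamp_a != self.stamp_b
```

A clean shutdown leaves the stamps equal, so the mismatch survives a power cycle as durable evidence of the interrupted transfer.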
20180060163 | ERROR CORRECTION HARDWARE WITH FAULT DETECTION - Error correction code (ECC) hardware includes write generation (Gen) ECC logic and a check ECC block coupled to an ECC output of a memory circuit with read Gen ECC logic coupled to an XOR circuit that outputs a syndrome signal to a syndrome decode block coupled to a single bit error correction block. A first MUX that receives the write data is in series with an input to the write Gen ECC logic, or a second MUX that receives the read data from the memory circuit is in series with an input of the read Gen ECC logic. A cross-coupling connector couples the read data from the memory circuit to a second input of the first MUX or couples the write data to a second input of the second MUX. An ECC bit comparator compares an output of the write Gen ECC logic to the read Gen ECC logic output. | 2018-03-01 |
20180060164 | INTEGRATED CIRCUITS AND METHODS FOR DYNAMIC ALLOCATION OF ONE-TIME PROGRAMMABLE MEMORY - An integrated circuit includes a one-time programmable (OTP) memory having a plurality of pages and address translation circuitry. A first line of each page is configured to store error policy bits. When a first bit of the first line has a first value, the page is configured to store data with error correction code (ECC) bits, and when the first bit has a second value, at least a portion of the page is configured to store data with redundancy. The address translation circuitry is configured to, in response to receiving an access address, use the first line of an accessed page of the plurality of pages accessed by the access address to determine a physical address in the accessed page which corresponds to the access address. | 2018-03-01 |
20180060165 | SEMICONDUCTOR DEVICES - A semiconductor device may include an error correction circuit and a fuse signal generation circuit. The error correction circuit may be configured to generate a syndrome signal from data using an error correction code. The fuse signal generation circuit may be configured to receive the syndrome signal to generate a fuse signal for repairing a cell array storing the data. | 2018-03-01 |
20180060166 | SEMICONDUCTOR SYSTEMS - A semiconductor system includes a host and a media controller. The host may generate first host parities from first host data based on an error check matrix. The media controller may include a first input/output (I/O) circuit and a second I/O circuit. The media controller may generate first media data and first media parities based on the first host data and the first host parities. The first I/O circuit may generate, based on the error check matrix, first internal data by correcting errors in the first host data using the first host parities. The second I/O circuit may generate the first media data and the first media parities from the first internal data. | 2018-03-01 |
20180060167 | REDUCING UNCORRECTABLE ERRORS BASED ON A HISTORY OF CORRECTABLE ERRORS - In some embodiments, a computer-implemented method includes maintaining two or more error indicators for correctable errors occurring at two or more memory components. Each of the error indicators may be associated with a corresponding memory component. A correctable error may be detected as occurring during a first memory fetch operation at a first memory component. A first error indicator corresponding to the first memory component may be set, responsive to the correctable error at the first memory component. An uncorrectable error may be detected during a second memory fetch operation. It may be detected that the first error indicator is set. The first memory component may be marked, responsive to the uncorrectable error and to detecting that the first error indicator is set. The two or more error indicators for correctable errors may thus determine which memory component to mark due to the uncorrectable error. | 2018-03-01 |
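The marking logic described above is straightforward to sketch: keep a correctable-error indicator per memory component, and when an uncorrectable error arrives, mark the component whose indicator is set, on the theory that the earlier correctable error points at the failing part. Names below are hypothetical.

```python
class MemoryErrorTracker:
    """Illustrative sketch: per-component correctable-error (CE)
    indicators decide which component to mark when an uncorrectable
    error (UE) occurs."""

    def __init__(self, components):
        self.ce_indicator = {c: False for c in components}
        self.marked = set()

    def correctable_error(self, component):
        # A CE during a fetch sets that component's indicator.
        self.ce_indicator[component] = True

    def uncorrectable_error(self):
        # On a UE, mark every component whose CE indicator is set.
        for component, flagged in self.ce_indicator.items():
            if flagged:
                self.marked.add(component)
        return set(self.marked)
```

Since a UE often cannot by itself be attributed to one component, the CE history is what disambiguates which part to take out of service.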