52nd week of 2018 patent application highlights part 53 |
Patent application number | Title | Published |
20180373515 | DIFFERENTIATED STATIC ANALYSIS FOR DYNAMIC CODE OPTIMIZATION - A mechanism for generating optimized native code for a program having dynamic behavior uses a static analysis of the program to predict the likelihood that different elements of the program will be used when the program executes. The static analysis is performed prior to execution of the program and marks certain elements of the program with confidence indicators that classify the elements with either a high level of confidence or a low level of confidence. The confidence indicators are then used by an ahead-of-time native compiler to generate native code and to optimize the code for faster execution and/or a smaller-sized native code. | 2018-12-27 |
20180373516 | TECHNIQUES FOR DISTRIBUTING CODE TO COMPONENTS OF A COMPUTING SYSTEM - Techniques and apparatus for distributing code via a translation process are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to determine a source code element to be translated to a target code element, determine source code information for the source code element, provide a translation request corresponding to the source code to a translation service, receive the target code element from the translation service, and execute the target code element in place of the source code element. Other embodiments are described and claimed. | 2018-12-27 |
20180373517 | SYSTEMS, METHODS, AND APPARATUSES FOR DOCKER IMAGE DOWNLOADING - The disclosure provides methods, apparatuses, and systems for downloading Docker images using a P2P distribution system. In one embodiment, a method comprises receiving, by a supernode from a client device, a download request for a layer of a container image file, the supernode selected from a supernode list comprising a plurality of supernodes; generating, by the supernode, slice information of each slice of the layer; and transmitting, by the supernode, the slice information and at least one target node to the client device, the transmitting of the slice information and the target node causing the client device to initiate a download of slices from the supernode to the at least one target node. By means of embodiments of the disclosure, the efficiency and stability of Docker image downloading can be improved. | 2018-12-27 |
20180373518 | Multiple Virtual Machines in a Mobile Virtualization Platform - Systems and methods are described for embodiments of a mobile virtualization platform (MVP) that may be embedded in an end user mobile device or comprise part of the firmware loaded on the device. The MVP may implement a thin layer of software embedded on the device to decouple applications and data from the underlying hardware, thus enabling the device to concurrently run multiple operating systems. Furthermore, the MVP may enable applications to run concurrently per baseband. | 2018-12-27 |
20180373519 | INFORMATION PROCESSING APPARATUS, STORAGE MEDIUM, AND CONTROL METHOD - An information processing apparatus according to embodiments of the present invention installs a printer driver by specifying a name of a logical printer, adds customization information for changing a setting of the printer driver to a database, and deletes a logical printer of the specified name from the operating system if it is determined that addition of the customization information to the database has failed with respect to the logical printer. | 2018-12-27 |
20180373520 | SERVER FOR PROVIDING SOFTWARE PLATFORM AND METHOD OF OPERATING SERVER - A method of operating a server for providing a software platform includes the operations of receiving, from a client device, information about an electronic device on which the software platform is to be mounted; transmitting, to the client device, information about software packages mountable on the electronic device; receiving, from the client device, a request for information about a first software package selected from among the software packages; detecting a second software package associated with the first software package; transmitting, to the client device, the information about the first software package and information about the second software package; and creating a platform image, based on software packages selected by the client device. | 2018-12-27 |
20180373521 | SAFE AND AGILE ROLLOUTS IN A NETWORK-ACCESSIBLE SERVER INFRASTRUCTURE USING SLICES - Methods, systems, and apparatuses manage rolling out of updates in a network-accessible server infrastructure which operates a plurality of instances of a supporting service. The supporting service is comprised by a plurality of service portions. The instances of the supporting service each include the service portions. The instances of the supporting service are partitioned into a plurality of slices. Each instance is partitioned to include one or more of the slices, and each slice of an instance includes one or more of the service portions. A software update is deployed to the instances by applying the software update to the slices in a sequence such that the software update is applied to a same slice in parallel across the instances containing that same slice before being applied to a next slice, and waiting a wait time before applying the software update to a next slice in the sequence. | 2018-12-27 |
20180373522 | IN-VEHICLE UPDATING DEVICE, UPDATING SYSTEM, AND UPDATE PROCESSING PROGRAM - Provided are an in-vehicle updating device, an updating system and an update processing program that are able to efficiently perform update processing of an in-vehicle communication device connected to a plurality of communication lines. Update processing of an ECU connected to communication lines is performed, by a gateway transmitting repro data for use in updating to the ECU. A repro tool stores divided data of the repro data, and transmits the divided data to the gateway. The repro tool attaches, to the plurality of divided data, sequential order information for use in restoring the divided data, and transmits the resultant data to the gateway. The gateway determines a communication state of each communication line, appropriately distributes the plurality of divided data to the plurality of communication lines, according to the determined communication states, and transmits the plurality of divided data to the ECU via the plurality of communication lines. | 2018-12-27 |
20180373523 | APPLICATION UPDATE METHOD AND APPARATUS - When an application client is started, if an application patch file package exists for the application client, the device invokes a DexClassLoader to load one or more executable files generated from one or more class files for which an updated version and a current version of the application client have a difference. The device initializes the application client by inserting each of the one or more executable files in front of existing executable files of a corresponding application component in the current version of the application client, such that invocation of corresponding class files for the one or more classes in the current version of the application client is bypassed during the initializing of the application client. The present disclosure resolves a technical problem that when an application is updated, a current operation needs to be interrupted to enter an installation interface, consequently reducing application update efficiency. | 2018-12-27 |
20180373524 | SERIAL BOOTLOADING OF POWER SUPPLIES - One example of a system includes a server, a plurality of power supplies, and a system controller. The plurality of power supplies are electrically coupled to the server and each power supply includes machine readable instructions. The system controller updates the machine readable instructions of each of the plurality of power supplies one at a time while maintaining power to the system controller from at least one of the plurality of power supplies. | 2018-12-27 |
20180373525 | CONSTRUCTING BUILD ENVIRONMENTS FOR SOFTWARE - Build environments for software can be constructed. For example, a computing device can receive a file indicating a first software component to be installed in a build environment and a second software component to be built in the build environment. The computing device can perform a first setup phase for creating part of the build environment by causing the first software component to be installed in the build environment. The computing device can also determine that the first setup phase is complete. Based on determining that the first setup phase is complete, the computing device can perform a second setup phase for completing the build environment by causing the second software component to be built in the build environment. | 2018-12-27 |
20180373526 | MAINTAINING THE INTEGRITY OF PROCESS CONVENTIONS WITHIN AN ALM FRAMEWORK - At least one ALM artifact, indexed by a unified data store, that does not comply with at least one process convention can be identified. Responsive to identifying the ALM artifact, indexed by the unified data store, that does not comply with the process convention, a determination can be made by a process convention agent executed by a processor as to whether script code is available to update the ALM artifact to comply with the process convention. Responsive to the process convention agent determining that script code is available to update the ALM artifact to comply with the process convention, the process convention agent can automatically execute the script code to update the ALM artifact to comply with the process convention. | 2018-12-27 |
20180373527 | WEIGHTING STATIC ANALYSIS ALERTS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing weights for source code alerts. One of the methods includes generating a respective sample of alerts for each feature of a plurality of features. One or more feature values are computed for alerts having a same respective attribute value for each feature of a plurality of features. An importance distribution that maps each feature value to a respective measure of importance for an alert having the feature value is used to compute a respective feature score for the feature using one or more feature values computed for the alert. A respective weight is computed for each alert by combining the plurality of feature scores computed for the alert. | 2018-12-27 |
20180373528 | PREDICTED NULL UPDATES - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373529 | FINE-GRAINED MANAGEMENT OF EXCEPTION ENABLEMENT OF FLOATING POINT CONTROLS - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373530 | EMPLOYING PREFIXES TO CONTROL FLOATING POINT OPERATIONS - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373531 | PREDICTED NULL UPDATES - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373532 | FINE-GRAINED MANAGEMENT OF EXCEPTION ENABLEMENT OF FLOATING POINT CONTROLS - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373533 | EMPLOYING PREFIXES TO CONTROL FLOATING POINT OPERATIONS - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373534 | COMPILER CONTROLS FOR PROGRAM REGIONS - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373535 | METHODS AND APPARATUSES FOR CALCULATING FP (FULL PRECISION) AND PP (PARTIAL PRECISION) VALUES - A method for calculating FP (Full Precision) and PP (Partial Precision) values, performed by an ID (Instruction Decode) unit, contains at least the following steps: decoding an instruction request from a compiler; executing a loop m times to generate m microinstructions for calculating first-type data, or n times to generate n microinstructions for calculating second-type data according to the instruction mode of the instruction request, thereby enabling ALGs (Arithmetic Logic Groups) to execute lanes of a thread. m is less than n and the precision of the first-type data is lower than the precision of the second-type data. | 2018-12-27 |
20180373536 | APPARATUSES FOR INTEGRATING ARITHMETIC WITH LOGIC OPERATIONS - An apparatus for integrating arithmetic with logic operations contains at least a calculation device and a post-logic unit. The calculation device calculates source data to generate and output first destination data. The post-logic unit, coupled to the calculation device, performs a comparison operation for comparing the first destination data with 0 and outputs a comparison result. | 2018-12-27 |
20180373537 | COMPILER CONTROLS FOR PROGRAM REGIONS - Setting or updating of floating point controls is managed. Floating point controls include controls used for floating point operations, such as rounding mode and/or other controls. Further, floating point controls include status associated with floating point operations, such as floating point exceptions and/or others. The management of the floating point controls includes efficiently updating the controls, while reducing costs associated therewith. | 2018-12-27 |
20180373538 | COLLAPSING OF MULTIPLE NESTED LOOPS, METHODS, AND INSTRUCTIONS - In an embodiment, the present invention is directed to a processor including a decode logic to receive a multi-dimensional loop counter update instruction and to decode the multi-dimensional loop counter update instruction into at least one decoded instruction, and an execution logic to execute the at least one decoded instruction to update at least one loop counter value of a first operand associated with the multi-dimensional loop counter update instruction by a first amount. Methods to collapse loops using such instructions are also disclosed. Other embodiments are described and claimed. | 2018-12-27 |
20180373539 | SYSTEM AND METHOD OF MERGING PARTIAL WRITE RESULTS FOR RESOLVING RENAMING SIZE ISSUES - A processor including a physical register file with multiple physical registers, mapping logic, and a merge system. The mapping logic maps up to a first maximum number of the physical registers for each architectural register specified in received program instructions and stores corresponding mappings in a rename table. The merge system generates a merge instruction for each architectural register that needs to be merged, inserts each generated merge instruction into the program instructions to provide a modified set of instructions, and that issues the modified set of instructions in consecutive issue cycles based on a take rule. In one embodiment, the first maximum number may be two. | 2018-12-27 |
20180373540 | CLUSTER GRAPHICAL PROCESSING UNIT (GPU) RESOURCE SHARING EFFICIENCY BY DIRECTED ACYCLIC GRAPH (DAG) GENERATION - Embodiments for graphical processing unit (GPU) resource sharing in a computing cluster, by a processor device. Resource-specific stages are dynamically generated in a directed acyclic graph (DAG) using a DAG interpreter for a set of tasks by creating equivalence stages in the DAG having an associated inserted set of shuffle stages, the equivalence stages created based on a determined cost of each stage of the set of shuffle stages. Backlog tasks are scheduled and tasks within the set of tasks are shifted among respective stages in the equivalence stages according to the determined cost to avoid overlapping allocation of the GPU resource during central processing unit (CPU) execution of the respective tasks of the set of tasks. | 2018-12-27 |
20180373541 | SYSTEM AND METHOD FOR NON-SPECULATIVE REORDERING OF LOAD ACCESSES - Methods and systems for maintaining validity of a memory model in a multiple core computer system are described. A first core prevents a store instruction from being performed by another core until a condition is met which enables reordered instructions to validly execute. | 2018-12-27 |
20180373542 | METHOD AND APPARATUS FOR DECLARATIVE ACTION ORCHESTRATION - A method, an Activation Node, a computer program and a computer program product for orchestration of activation actions are provided. The solution provides for avoiding an imperative way of specifying the logic and manually defining its level of parallelism. The Activation Node is configured to deploy or fetch a specification, the specification mapping dependencies between a data model for an activation request to a data model of lower layer resources, to be used for orchestrating execution of the activation actions; receive an activation request; match the activation request with a specific flow of activation actions to be executed in accordance with the specification; and execute the logic of the flow of activation actions ordered based on the dependencies between the data models. | 2018-12-27 |
20180373543 | System and Method for Providing Fine-Grained Memory Cacheability During a Pre-OS Operating Environment - An information handling system includes a memory with a cache, and a processor to execute pre-operating system (pre-OS) code before the processor executes boot loader code. The pre-OS code sets up a Memory Type Range Register (MTRR) to define a first memory type for a memory region of the memory, sets up a page attribute table (PAT) with an entry to define a second memory type for the memory region, disables the PAT, and pass execution by the processor to the boot loader code. The first memory type specifies a first cacheability setting on the processor for data from the memory region, and the second memory type specifies a second cacheability setting on the processor for data from the memory region. | 2018-12-27 |
20180373544 | APPLICATION ACTIVATION METHOD AND APPARATUS - An application activation method is provided. The method includes obtaining a first compressed file, where the first compressed file contains activation information of an application and compressed content of a code package of the application. The method also includes extracting the compressed content from the first compressed file; generating a second compressed file by using the compressed content without decompressing the compressed content; and loading the second compressed file, and activating the application according to the activation information in the first compressed file. | 2018-12-27 |
20180373545 | ENSURING DETERMINISM DURING PROGRAMMATIC REPLAY IN A VIRTUAL MACHINE - Aspects of an application program's execution which might be subject to non-determinism are performed in a deterministic manner while the application program's execution is being recorded in a virtual machine environment so that the application program's behavior, when played back in that virtual machine environment, will duplicate the behavior that the application program exhibited when originally executed and recorded. Techniques disclosed herein take advantage of the recognition that only minimal data needs to be recorded in relation to the execution of deterministic operations, which actually can be repeated “verbatim” during replay, and that more highly detailed data should be recorded only in relation to non-deterministic operations, so that those non-deterministic operations can be deterministically simulated (rather than attempting to re-execute those operations under circumstances where the outcome of the re-execution might differ) based on the detailed data during replay. | 2018-12-27 |
20180373546 | HYBRID SOFTWARE AND GPU ENCODING FOR UI REMOTING - Frames of a virtual desktop are encoded using a hybrid approach that combines the strength of software encoding by a central processing unit (CPU) and hardware encoding by a graphics processing unit (GPU). A method of encoding frame data of one or more virtual desktops in hardware and in software and transmitting the encoded frame data to one or more client devices, includes the steps of encoding a first portion of the frame data in the GPU to generate a first encoded frame data, encoding a second portion of the frame data in software, i.e., programmed CPU, during encoding of the first portion, to generate a second encoded frame data, and transmitting the first encoded frame data and the second encoded frame data from a host computer of the one or more virtual desktops to the one or more client devices as separate video streams. | 2018-12-27 |
20180373547 | SYSTEMS AND METHODS FOR PROVIDING A VIRTUAL ASSISTANT TO ACCOMMODATE DIFFERENT SENTIMENTS AMONG A GROUP OF USERS BY CORRELATING OR PRIORITIZING CAUSES OF THE DIFFERENT SENTIMENTS - Systems and methods are disclosed herein for providing, to a group of users, a virtual assistant with customized avatar sentimental and behavioral characteristics to accommodate different sentiments among the group of users. When the media guidance application is configured to serve a group of users, who may exhibit different moods or sentiments, the media guidance application may configure the virtual assistant to accommodate the different sentiments of the group of users. For example, the media guidance application may determine sentimental and behavioral characteristics to configure the virtual assistant based on a context of the split sentiments of the group of users, a particular sentiment that has a higher priority, and/or the like. | 2018-12-27 |
20180373548 | System And Method For Configuring Equipment That Is Reliant On A Power Distribution System - A system and method for configuring data-center equipment that is reliant on a power distribution system. A data-processing system receives an indication of a device whose electrical power is being configured. The data-processing system identifies a plurality of candidate electrical outlets that are available for use and then evaluates those candidate outlets. The data-processing system can evaluate the candidate outlets based on one or more of i) the metadata of each electrical outlet in the plurality of candidate outlets, ii) the power redundancy associated with each electrical outlet, and iii) the power capacity associated with a power chain of each electrical outlet. This results in a set of identifiers of qualifying candidate electrical outlets. The data-processing system can rank the outlets in terms of their distances from the device and/or effects on electrical phase balance. The data-processing system then displays identifiers of the ranked, qualifying outlets. | 2018-12-27 |
20180373549 | BUS ARRANGEMENT AND METHOD FOR OPERATING A BUS ARRANGEMENT - A bus arrangement includes a coordinator that has a non-volatile memory; a first node that has a first serial number; a second node that has a second serial number; and a bus. The bus includes a first signal line, which couples the first node and the coordinator; a second signal line, which connects the second node to the first node; and at least one bus line, which connects the coordinator to the first and the second nodes. The coordinator is configured such that, in a configuration phase, it establishes a connection to the first node, queries the first serial number, and stores the first serial number in the non-volatile memory, and establishes a connection to the second node, queries the second serial number, and stores the second serial number in the non-volatile memory. | 2018-12-27 |
20180373550 | INFORMATION PROCESSING APPARATUS - In an apparatus, in a case where a confirming unit confirms that remote desktop connection is made and a software screen is set to be displayed on a foreground, a setting unit cancels the setting for displaying the software screen on the foreground. | 2018-12-27 |
20180373551 | SYSTEMS AND METHODS FOR USING DYNAMIC TEMPLATES TO CREATE APPLICATION CONTAINERS - The disclosed computer-implemented method for using dynamic templates to create application containers may include (i) identifying an application that is to be deployed in a container, (ii) creating a dynamic template that comprises at least one variable parameter and that defines at least a portion of an operating environment of the container, (iii) generating a value of the variable parameter during deployment of the application, (iv) processing the dynamic template to create a configuration file that comprises the value of the variable parameter, and (v) triggering a container initialization system to create, based on the configuration file, the container such that the container isolates a user space of the application from other software on a host system while sharing a kernel space with the other software. Various other methods, systems, and computer-readable media are also disclosed. | 2018-12-27 |
20180373552 | CONSISTENT VIRTUAL MACHINE PERFORMANCE ACROSS DISPARATE PHYSICAL SERVERS - Embodiments are directed to ensuring that VM behavior and characteristics are maintained as datacenter hardware changes and tenant VMs are migrated to newer hardware. Virtual machine resources are modeled and constraints defined on individual resources for each generation of physical server hardware. Constraints may be expressed as absolute limits (e.g., memory size), as some fraction of the physical resource (e.g., a percentage of the physical processor performance), or in terms of a behavior profile (e.g., performance variations with usage patterns, such as a disk drive behavior profile). When appropriately modeled, performance can be normalized across different server hardware generations and the cloud service provider can deploy the same virtual machine on different hardware. | 2018-12-27 |
20180373553 | TECHNIQUES TO MIGRATE A VIRTUAL MACHINE USING DISAGGREGATED COMPUTING RESOURCES - Examples may include techniques to live migrate a virtual machine (VM) using disaggregated computing resources including compute and memory resources. Examples include copying data between allocated memory resources that serve as near or far memory for compute resources supporting the VM at a source or destination server in order to initiate and complete the live migration of the VM. | 2018-12-27 |
20180373554 | HYPERVISOR REMEDIAL ACTION FOR A VIRTUAL MACHINE IN RESPONSE TO AN ERROR MESSAGE FROM THE VIRTUAL MACHINE - Exemplary methods, apparatuses, and systems include a hypervisor receiving an error message from an agent within a first virtual machine run by the hypervisor. In response to the error message, the hypervisor determines and initiates a corrective action for the hypervisor to take in response to the error message. An exemplary corrective action includes initiating a reset of the first virtual machine or a reset of a second virtual machine. | 2018-12-27 |
20180373555 | Management of IoT Devices in a Virtualized Network - Specialized, service optimized virtual machines are assigned to handle specific types of Internet of Things (IoT) devices. An IoT context mapping policy engine within the context of a virtualized network function manages IoT context mapping policy functions in load balancers. The IoT context mapping policy functions select service optimized virtual machines based on IoT device IDs, and assign those virtual machines to handle the devices. The IoT context mapping policy functions provide load data to the IoT context mapping policy engine. Based on the load data, the IoT context mapping policy engine maintains appropriate scaling by creating or tearing down instances of the virtual machines. | 2018-12-27 |
20180373556 | APPARATUS AND METHOD FOR PATTERN-DRIVEN PAGE TABLE SHADOWING FOR GRAPHICS VIRTUALIZATION - An apparatus and method are described for pattern driven page table updates. For example, one embodiment of an apparatus comprises a graphics processing unit (GPU) to process graphics commands and responsively render a plurality of image frames; a hypervisor to virtualize the GPU to share the GPU among a plurality of virtual machines (VMs); a first guest page table managed within a first VM, the first guest page table comprising a plurality of page table entries; a first shadow page table managed by the hypervisor and comprising page table entries corresponding to the page table entries of the first guest page table; and a command parser to analyze a current working set of commands submitted from the first VM to the GPU, the command parser to responsively update the first shadow page table responsive to determining a set of page table entries predicted to be used based on the analysis of the working set of commands. | 2018-12-27 |
20180373557 | System and Method for Virtual Machine Live Migration - A system for virtual machine live migration includes a management node, a source server, a destination server, a peripheral component interconnect express (PCIe) switch, and an single root input/output virtualization (SR-IOV) network adapter, where the source server includes a virtual machine (VM) before live migration; the destination server includes a VM after live migration; the management node is adapted to configure, using the PCIe switch, a connection relationship between a virtual function (VF) module used by the VM before live migration and the source server as a connection relationship between the VF module and the destination server; and the destination server, using the PCIe switch and according to the connection relationship with the VF module configured by the management node, uses the VF module to complete virtual machine live migration. By switching the connection relationships, the system ensures that a data packet receiving and sending service is not interrupted. | 2018-12-27 |
20180373558 | PERFORMANCE-BASED PUBLIC CLOUD SELECTION FOR A HYBRID CLOUD ENVIRONMENT - A hybrid cloud solution for securely extending a private cloud or network to a public cloud can be enhanced with tools for evaluating the resources offered by multiple public cloud providers. In an example embodiment, a public cloud evaluation system can be used to create a virtual machine (VM) in a public cloud to serve the function of a public cloud evaluation agent. The public cloud evaluation agent can instantiate one or more VMs and other resources in the public cloud, and configure the VMs and resources to execute performance evaluation software. The results of the performance evaluation software can be transmitted to a private enterprise network, and analyzed to determine whether the public cloud is an optimal public cloud for hosting an enterprise application. | 2018-12-27 |
20180373559 | AGENT-BASED END-TO-END TRANSACTION ANALYSIS - A method for agent-based transaction analysis which includes: building an instrumented binary code of a software application for a transaction; configuring an analysis agent for the software application; starting the software application in an application process environment with the instrumented binary code; attaching the analysis agent to the instrumented binary code of the software application; extracting, by the analysis agent, metadata from the software application; sending the metadata to a central analysis server in an environment separate from the application process environment; and building, by the central analysis server, an end-to-end description of the transaction from the metadata. | 2018-12-27 |
20180373560 | SNAPSHOT ISOLATION IN GRAPHICAL PROCESSING UNIT HARDWARE TRANSACTIONAL MEMORY - Snapshot Isolation (SI) is an established model in the database community, which permits write-read conflicts to pass and aborts transactions only on write-write conflicts. With the Write Skew Anomaly (WSA) correctly eliminated, SI can reduce the occurrence of aborts, save the work done by transactions, and greatly benefit long transactions involving complex data structures. Embodiments include a multi-versioned memory subsystem for hardware-based transactional memory (HTM) on the GPU that incorporates SI, together with a method for eliminating the WSA on the fly. The GPU HTM can provide reduced compute time for some compute tasks. | 2018-12-27 |
20180373561 | HIERARCHICAL STALLING STRATEGIES - Hierarchical stalling strategies are disclosed. An indication is received of a stalling event caused by a requested resource being inaccessible. In response to receiving the indication of the stalling event, a set of cost functions usable to determine how to handle the stalling event is selected based at least in part on a type of the stalling event. The stalling event is handled based at least in part on an evaluation of the set of cost functions selected based at least in part on the type of the stalling event. | 2018-12-27 |
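The selection of cost functions by stalling-event type can be pictured as a small dispatch table. The event types, strategies, and cost models below are hypothetical illustrations, not taken from the application:

```python
# Hypothetical sketch: pick a handling strategy for a stall by evaluating the
# set of cost functions selected for that event type. The cost models
# (spin cost grows with expected wait; a context switch has a fixed cost)
# are illustrative assumptions.

COST_FUNCTIONS = {
    "memory": [lambda wait: wait * 2, lambda wait: 50],   # spin vs. switch
    "io":     [lambda wait: wait * 10, lambda wait: 50],  # I/O stalls spin poorly
}
STRATEGIES = ["spin", "switch"]

def handle_stall(event_type, expected_wait):
    """Evaluate the cost functions for this event type; return the cheapest strategy."""
    costs = [f(expected_wait) for f in COST_FUNCTIONS[event_type]]
    return STRATEGIES[costs.index(min(costs))]
```

With these toy costs, a short memory stall favors spinning while an equally long I/O stall favors a context switch.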
20180373562 | CONTROLLING OPERATION OF A GPU - The operation of a GPU is controlled based on one or more deadlines by which one or more GPU tasks must be completed and estimates of the time required to complete the execution of a first GPU task (which is currently being executed) and the time required to execute one or more other GPU tasks (which are not currently being executed). Based on a comparison between the deadline(s) and the estimates, the operating parameters of the GPU may be changed. | 2018-12-27 |
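A minimal sketch of the deadline comparison, assuming execution-time estimates are given at the highest clock and scale inversely with frequency; both assumptions, and the frequency values, are illustrative rather than from the application:

```python
def choose_frequency(now, deadline, remaining_current, queued_estimates,
                     freqs=(400, 600, 800)):
    """Return the lowest clock (MHz) whose scaled time estimate meets the deadline.

    `remaining_current` is the estimated time left for the executing task and
    `queued_estimates` the estimates for tasks not yet executing, all measured
    at the highest frequency (a simplifying assumption for this sketch).
    """
    work_at_max = remaining_current + sum(queued_estimates)
    budget = deadline - now
    for f in freqs:  # try the slowest (cheapest) clock first
        if work_at_max * (max(freqs) / f) <= budget:
            return f
    return max(freqs)  # best effort: no clock meets the deadline, run flat out
```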
20180373563 | METHOD AND SYSTEM FOR PROCESSING COMMUNICATION CHANNEL - There is provided a method and a system for processing a communication channel including a heartbeat channel and a data channel between a master process and a worker process. The method includes determining at least one data channel associated with a heartbeat channel, detecting the determined at least one data channel, disconnecting the heartbeat channel when it is detected that any data channel is in a disconnected state to cause a heartbeat to time out, and ending a current task after it is determined that the heartbeat times out. | 2018-12-27 |
20180373564 | Computer Systems And Computer-Implemented Methods For Dynamically Adaptive Distribution Of Workload Between Central Processing Unit(s) and Graphics Processing Unit(s) - In some embodiments, the present invention provides an exemplary computing device, including at least: a scheduler processor; a CPU; a GPU; where the scheduler processor configured to: obtain a computing task; divide the computing task into: a first set of subtasks and a second set of subtasks; submit the first set to the CPU; submit the second set to the GPU; determine, for a first subtask of the first set, a first execution time, a first execution speed, or both; determine, for a second subtask of the second set, a second execution time, a second execution speed, or both; dynamically rebalance an allocation of remaining non-executed subtasks of the computing task to be submitted to the CPU and the GPU, based, at least in part, on at least one of: a first comparison of the first execution time to the second execution time, and a second comparison of the first execution speed to the second execution speed. | 2018-12-27 |
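The dynamic rebalancing step could, under the simplifying assumption that future throughput stays proportional to the measured execution speeds, split the remaining subtasks as follows. This is a hypothetical sketch, not the claimed scheduler:

```python
def rebalance(remaining, cpu_speed, gpu_speed):
    """Split remaining non-executed subtasks in proportion to measured throughput.

    `cpu_speed` and `gpu_speed` are measured speeds (e.g. subtasks/second) from
    the first subtasks; the faster processor receives proportionally more work.
    """
    total = cpu_speed + gpu_speed
    n_cpu = round(len(remaining) * cpu_speed / total)
    return remaining[:n_cpu], remaining[n_cpu:]   # (CPU share, GPU share)
```

For example, with the GPU measured four times faster than the CPU, the CPU would receive roughly a fifth of the remaining subtasks.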
20180373565 | ALLOCATING RESOURCES TO VIRTUAL MACHINES - A method, executed by a computer, for allocating resources to virtual machines includes monitoring resource usage for a selected resource for one or more capped virtual machines and one or more uncapped virtual machines, and responsive to detecting a first resource violation, the first resource violation corresponding to resource usage for a capped virtual machine and a second resource violation, the second resource violation corresponding to resource usage for an uncapped virtual machine, adjusting allocation of the selected resource for each of the one or more capped virtual machines prior to adjusting allocation of the selected resource for any of the uncapped virtual machines. A computer program product and computer system corresponding to the above method are also disclosed herein. | 2018-12-27 |
20180373567 | DATABASE RESOURCE SCALING - A method, computer system, and a computer program product for resource scaling is provided. The present invention may include receiving a request for a plurality of resources from a virtual device. The present invention may then include estimating a resource allocation based on a predetermined level of service based on the received request. The present invention may also include estimating a benefit curve of a workload for a plurality of tiers of resources based on the estimated resource allocation. The present invention may further include estimating a performance cost of the workload for the plurality of tiers of resources based on the estimated benefit curve. | 2018-12-27 |
20180373568 | Automatic Workflow-Based Device Switching - Methods and systems for receiving an indication that an application running on a first device is ready to perform a task, determining a device capability associated with performing the task, determining one or more devices associated with a user of the first device, wherein each of the one or more devices is associated with the device capability, selecting, based on the task and one or more user preferences associated with the user, a second device from the one or more devices, and sending an instruction to the second device, wherein the instruction causes the second device to perform the task, are described herein. | 2018-12-27 |
20180373569 | SYSTEM FOR LINKING ALTERNATE RESOURCES TO RESOURCE POOLS AND ALLOCATING LINKED ALTERNATIVE RESOURCES TO A RESOURCE INTERACTION - A system for digitally linking alternate resources to resource pools and allocating linked alternate resources to resource interactions. In specific embodiments, in response to allocating the alternate resource to the resource interaction, resources are re-allocated to the resource pool from which the resource interaction was initiated. | 2018-12-27 |
20180373570 | APPARATUS AND METHOD FOR CLOUD-BASED GRAPHICS VALIDATION - An apparatus and method are described for intelligent cloud based testing of graphics hardware and software. For example, one embodiment of an apparatus comprises: a hardware pool comprising a plurality of test machines to perform cloud-based graphics validation operations; a virtual resource pool comprising data associated with a plurality of different graphics hardware resources; a resource manager to coordinate between the hardware pool and the virtual resource pool to cause one or more virtual machines (VMs) to be executed on one or more of the test machines using resources from the virtual resource pool; and a task dispatcher to dispatch graphics validation tasks to the VMs responsive to user input. | 2018-12-27 |
20180373571 | METHOD OF ALLOCATING EXECUTION RESOURCES - A method of allocating execution resources, by a virtualized-resources manager entity, for an execution of an application service and of at least one network service. The execution of the application service depends on the concurrent execution of the at least one network service. The method includes: a first request to allocate execution resources by a manager entity of the at least one network service to the virtualized-resources manager entity; a second request to allocate execution resources by a manager entity of the application service to the virtualized-resources manager entity; and, prior to the requests, a notification, by the manager entity of the application service to the network services manager entity, of a forecast of the consumption by the application service of at least one network service provided by the network services manager entity. | 2018-12-27 |
20180373572 | DYNAMICALLY MANAGING WORKLOAD PLACEMENTS IN VIRTUALIZED ENVIRONMENTS BASED ON CURRENT USER GLOBALIZATION CUSTOMIZATION REQUESTS - Multiple workloads from multiple users requesting access to at least one virtualized application are received, wherein each of the workloads is specified with one or more separate globalization characteristics from among multiple globalization characteristics. To dynamically manage workload placement, each of the workloads is dynamically categorized separately for placement in one or more particular virtualized environments from among multiple virtualized environments based on the one or more separate globalization characteristics of each of the workloads, wherein each virtualized environment comprises the at least one virtualized application configured for a separate selection of globalization services from among multiple globalization services for handling a separate selection of the one or more separate globalization characteristics. | 2018-12-27 |
20180373573 | LOCK MANAGER - In some examples, a lock manager may receive a lock release message from a processor. The lock release message may identify a lock that synchronizes control of a shared resource. The lock manager may determine, for the lock identified in the lock release message, multiple processors contending to acquire the lock and select a particular processor among the multiple processors to acquire the lock. | 2018-12-27 |
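The release-time selection the abstract describes can be illustrated with a toy lock manager. A FIFO policy stands in for "select a particular processor among the multiple processors"; the abstract does not specify the actual selection criterion, so the queue discipline here is an assumption:

```python
from collections import defaultdict, deque

class LockManager:
    """Toy lock manager: on a release message, grant the lock to a waiter (FIFO)."""

    def __init__(self):
        self.waiters = defaultdict(deque)   # lock id -> contending processors
        self.holder = {}                    # lock id -> current holder

    def request(self, lock_id, proc):
        """Grant an uncontended lock immediately; otherwise queue the contender."""
        if lock_id not in self.holder:
            self.holder[lock_id] = proc
            return proc
        self.waiters[lock_id].append(proc)
        return None

    def release(self, lock_id):
        """Handle a lock-release message: select and return the next holder, if any."""
        if self.waiters[lock_id]:
            nxt = self.waiters[lock_id].popleft()
            self.holder[lock_id] = nxt
            return nxt
        del self.holder[lock_id]
        return None
```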
20180373574 | CLOUD-BASED ENTERPRISE-CUSTOMIZABLE MULTI-TENANT SERVICE INTERFACE - A method for transparently providing a customized enterprise-specific interface application in a cloud-hosted computing system environment includes, at application runtime, selecting a core application defined for a group of enterprises requiring a same core application functionality. The core application is packaged for deployment. On identification of a specific enterprise associated with the core application, according to the identified specific enterprise one or more predefined stored functionalities are applied to the core application to provide an identified-enterprise-specific application. | 2018-12-27 |
20180373575 | APPLICATION CONVERGENCE METHOD AND APPARATUS - Embodiments of the present invention disclose an application convergence method and apparatus. Multiple convergence parameter interfaces are provided, and multiple convergence parameters registered by an application by using the convergence parameter interfaces are received. Therefore, when a convergence operation request of a user or an apparatus for multiple applications is received, multiple convergence parameters of the multiple applications can be obtained from the multiple convergence parameter interfaces; and the multiple convergence parameters of the multiple applications are separately converged, so as to implement convergence of the multiple applications. | 2018-12-27 |
20180373576 | INFORMATION PROCESSING METHOD, DEVICE, SYSTEM, AND TERMINAL DEVICE - Information processing is disclosed including determining an information type of target information, providing a processing interface for an extended function based on the information type, receiving a trigger instruction for an operation portal, sending the target information to an application supporting the extended function or a server based on the trigger instruction, and presenting the results of the application processing the target information. | 2018-12-27 |
20180373577 | Method for Operating a Computer System, Computer Program With an Implementation of the Method, and Computer System Configured to Implement the Method - A method for operating a computer system, a computer program for implementing the method and a computer system which executes the computer program, wherein at least one software application which functions as a host application and at least one software application which functions as a guest application are executed on the computer system, where at least one guest application offers at least one addressable software function, the host application uses at least one addressable software function of the at least one guest application based on a configuration, and where a position of the use of the at least one software function is specified on a display unit as part of the configuration. | 2018-12-27 |
20180373578 | SYSTEM AND METHOD FOR PREDICTIVE TECHNOLOGY INCIDENT REDUCTION - Systems and methods for predictive technology incident reduction are disclosed. In one embodiment, in an information processing apparatus comprising at least one computer processor, a method for predictive technology incident reduction may include: (1) receiving a change record for a proposed change to a computer application or a computer network infrastructure; (2) analyzing the proposed change for an adverse potential cross impact with another computer application or a computer system; (3) predicting a probability of failure and an impact of the proposed change using a model; (4) in response to a low predicted probability of failure, or a high predicted probability of failure with a low predicted impact: approving the proposed change; and implementing the proposed change; and (5) in response to a high predicted probability of failure and a high predicted impact, rejecting the proposed change. | 2018-12-27 |
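The approval logic in steps (4) and (5) reduces to a two-threshold decision: reject only when both the predicted failure probability and the predicted impact are high. A minimal sketch with hypothetical thresholds:

```python
def review_change(p_fail, impact, p_threshold=0.5, impact_threshold=0.5):
    """Approve unless the change is both likely to fail and high-impact.

    The 0.5 thresholds are illustrative assumptions; the abstract does not
    define what counts as "high" or "low".
    """
    if p_fail >= p_threshold and impact >= impact_threshold:
        return "reject"
    return "approve"
```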
20180373579 | PROCESSING DATA TO IMPROVE A QUALITY OF THE DATA - A first device may receive data from a set of second devices to be processed to determine a quality of the data. The data may include first data stored by the set of second devices, second data provided toward a third device, or third data related to fourth data. The first device may process the data using a first set of techniques to prepare the data for processing. The first device may process the data using a second set of techniques to improve the quality of the data and to form processed data. The first device may provide the processed data toward the set of second devices to replace the data stored by the set of second devices to permit the set of second devices to use the processed data. The first device may perform an action after providing the processed data toward the set of second devices. | 2018-12-27 |
20180373580 | Method And System For Real-Time And Scalable Anomaly Detection And Classification Of Multi-Dimensional Multivariate High-Frequency Transaction Data In A Distributed Environment - A system and method for the distributed analysis of high frequency transaction trace data to constantly categorize incoming transaction data, identify relevant transaction categories, create per-category statistical reference and current data and perform statistical tests to identify transaction categories showing overall statistically relevant performance anomalies. The relevant transaction category detection considers both the relative transaction frequency of categories compared to the overall transaction frequency and the temporal stability of a transaction category over an observation duration. The statistical data generated for the anomaly tests contains, in addition to data describing the overall performance of transactions of a category, data describing the transaction execution context, such as the number of concurrently executed transactions or the transaction load during an observation period. Anomaly tests consider current and reference execution context data in addition to statistic performance data to determine if detected statistical performance anomalies should be reported. | 2018-12-27 |
20180373581 | SYSTEM AND METHODS FOR OPTIMAL ERROR DETECTION IN PROGRAMMATIC ENVIRONMENTS - System and methods are provided for optimal error detection in programmatic environments through the utilization of at least one user-defined condition. Illustratively, the conditions can include one or more triggers initiating the collection of log data for methods associated with the provided at least one condition. Operatively, the disclosed systems and methods observe the run-time of the programmatic environment and initiate the collection of log data based on the occurrence of a condition trigger. A rank score can also be calculated to rank the methods associated with the defined condition to isolate those methods that have higher probability of causing the defined condition. Dynamic instrumentation of the methods associated with the user defined conditions during run time are used to calculate the rank score, which is used for ranking the methods. | 2018-12-27 |
20180373582 | DATA ACCESS DEVICE AND ACCESS ERROR NOTIFICATION METHOD - Error notification by a bus master for a speculative access and error notification by a bus slave for a non-speculative access are achieved while the circuit scale of the bus master is kept small. A bus request includes mode information for selecting whether error notification for an access is performed by the bus slave or the bus master. In a case where the mode information indicating that error notification is performed by the bus slave is included in the bus request, when an error for an access in that bus request has occurred, the bus slave performs error notification. In a case where execution of an instruction of a speculative load access has been fixed and error information for the load access has been received from the bus slave, the bus master performs error notification based on the error information. | 2018-12-27 |
20180373583 | DATA INTEGRATION PROCESS REFINEMENT AND REJECTED DATA CORRECTION - Data integration process/tool refinement and correction of rejected data. A method acquires rejected data from a data integration tool, the rejected data rejected by the data integration tool during a data integration process. The method applies machine learning to a cognitive system, the machine learning being based at least in part on at least some of the acquired rejected data, and the machine learning including training the cognitive system to identify corrections to data elements to facilitate data element acceptance by the data integration tool. The method analyzes a data element of the acquired rejected data and identifies a correction to apply to the data element. The method applies the correction to the data element to obtain a corrected data element. The method also provides the corrected data element to the data integration tool for acceptance by the data integration tool and provision to a target. | 2018-12-27 |
20180373584 | METHOD AND DEVICE FOR PROGRAMMING NON-VOLATILE MEMORY - A method for programming a non-volatile memory in a programming operation is provided. The non-volatile memory has a number of cells and each of part of the cells stores data having at least 2 bits at least corresponding to a first page and a second page. The first programming-verifying operation including programming the first page and verifying whether the first page is successfully programmed is performed. When a first original fail-bit number for the first page is more than a predetermined fail-bit value, a second programming-verifying operation to the first page is performed to obtain a first over-counting fail-bit number for the first page and reduce the first original fail-bit number by the first over-counting fail-bit number. When the reduced first original fail-bit number is not more than the predetermined fail-bit value, the first page is set as successfully programmed. | 2018-12-27 |
20180373585 | CHECKSUM TREE GENERATION FOR IMPROVED DATA ACCURACY VERIFICATION - A data management system verifies the accuracy of data retrieved from a primary data store using a checksum tree stored by a secondary data store. A checksum tree is a tree graph that represents a hierarchy of checksums. Leaf nodes of the checksum tree can store checksums for data blocks stored by the primary data store and secondary data store, and parent nodes can represent checksums of their respective child nodes. The data management system can compare reference subtrees within the checksum tree to comparison subtrees that are generated from data retrieved from the primary data store to determine whether the retrieved data is accurate. The data management system can also use the checksum tree to identify which, if any, of the retrieved data blocks are inaccurate. | 2018-12-27 |
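The leaf-level comparison the abstract describes can be sketched with a simple Merkle-style tree: build checksums over retrieved blocks, compare against the reference leaves, and report the indices that disagree. The helper names and the choice of SHA-256 are illustrative assumptions:

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the levels of a checksum tree over a power-of-two block count,
    leaves first; each parent is the checksum of its two children."""
    level = [checksum(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [checksum(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def bad_blocks(reference_leaves, retrieved_blocks):
    """Indices of retrieved blocks whose checksum disagrees with the reference."""
    return [i for i, b in enumerate(retrieved_blocks)
            if checksum(b) != reference_leaves[i]]
```

Comparing subtree roots first (the upper levels returned by `build_tree`) lets the comparison skip whole regions whose root checksums already match.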
20180373586 | MEMORY SYSTEM AND OPERATING METHOD THEREFOR - Disclosed is a memory controller comprising: a memory unit including tables in which various segments are stored; a calculator configured to update a parity for the segments stored in each of the tables whenever that table is updated by a newly input segment, and to detect an error in the table based on a previously updated parity and a currently updated parity corresponding to the table; and a bit inverter configured to correct the detected error. An operating method therefor is also disclosed. | 2018-12-27 |
20180373587 | SINGLE QUORUM VERIFICATION OF ERASURE CODED DATA - Techniques described and suggested herein include various methods and systems for verifying integrity of redundancy coded data, such as erasure coded data shards. In some embodiments, a quantity of redundancy coded data elements, hereafter referred to as data shards (e.g., erasure coded data shards), sufficient to reconstruct the original data element from which the redundancy coded data elements are derived, is used to generate reconstructed data shards to be used for checking the validity of analogous data shards stored for the original data element. | 2018-12-27 |
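The verification idea, rebuilding a shard from a quorum of the others and comparing it to the stored copy, can be illustrated with simple XOR parity standing in for a real erasure code such as Reed-Solomon. This is a hedged sketch of the principle, not the patented scheme:

```python
def xor_shards(shards):
    """XOR equal-length byte strings together (toy single-parity erasure code)."""
    out = bytes(len(shards[0]))
    for s in shards:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

def verify_shard(all_shards, index):
    """Rebuild shard `index` from the remaining shards and compare to the stored copy.

    With k data shards plus one XOR parity shard, any shard equals the XOR of
    all the others, so a single quorum of the other shards suffices to check it.
    """
    rebuilt = xor_shards([s for i, s in enumerate(all_shards) if i != index])
    return rebuilt == all_shards[index]
```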
20180373588 | BAD BIT REGISTER FOR MEMORY - A memory device, a memory system, and corresponding methods are provided. The memory device includes a non-volatile random access memory. The non-volatile memory includes a suspect bit register configured to store addresses of bits that are determined to have had errors. The non-volatile memory further includes a bad bit register configured to store addresses of bits that both (i) appeared in the suspect bit register due to a first error and (ii) are determined to have had a second error. Hence, the memory device overcomes the aforementioned intrinsic write-error-rate by identifying the bad bits so they can be fused out, thus avoiding errors during use of the non-volatile random access memory. | 2018-12-27 |
20180373589 | BAD BIT REGISTER FOR MEMORY - A memory device, a memory system, and corresponding methods are provided. The memory device includes a non-volatile random access memory. The non-volatile memory includes a suspect bit register configured to store addresses of bits that are determined to have had errors. The non-volatile memory further includes a bad bit register configured to store addresses of bits that both (i) appeared in the suspect bit register due to a first error and (ii) are determined to have had a second error. Hence, the memory device overcomes the aforementioned intrinsic write-error-rate by identifying the bad bits so they can be fused out, thus avoiding errors during use of the non-volatile random access memory. | 2018-12-27 |
20180373590 | SCALING LARGE DRIVES USING ENHANCED DRAM ECC - The present disclosure generally relates to solid state storage device and techniques for conserving storage capacity associated therewith. Several embodiments are presented, including a data storage device, a data storage controller, and methods for using the same. A data storage device includes: a plurality of memory devices, a controller coupled to the plurality of memory devices and configured to program data to and read data from the plurality of memory devices, a memory including a logical-to-physical address translation map configured to enable the controller to determine a physical location of stored data in the plurality of memory devices, where the logical-to-physical address translation map contains at least one entry that merges at least two addresses that map, respectively, to at least two physical locations in the plurality of memory devices, where the controller is configured to encode each merged entry with an error-correcting code. | 2018-12-27 |
20180373591 | METHOD AND SYSTEM FOR SCANNING FOR ERASED FLASH MEMORY PAGES - The subject technology provides for scanning blocks of a flash memory device for erased pages. A first codeword read from a page of a block in a flash memory device is received and provided to a first decoder for decoding. In response to receiving a first success indicator from the first decoder indicating that the first codeword was successfully decoded, first decoded data is provided from the first decoder to a second decoder for verification of the first decoded data. In response to receiving a first failure indicator from the second decoder indicating that the first decoded data was not verified, the page of the block is identified as being in an erased state based on the first success indicator received from the first decoder and the first failure indicator received from the second decoder. | 2018-12-27 |
20180373592 | MEMORY DEVICE AND METHOD OF CONTROLLING ECC OPERATION IN THE SAME - A memory cell array includes memory cells that are formed in vertical channels extended in a vertical direction with respect to a substrate. The vertical channels are arranged in a zigzag manner in parallel to the first direction. A read-write circuit is connected to the memory cells via bit lines. An address decoder decodes an address to provide decoded address signals to the read-write circuit. The memory cells include outer cells and inner cells. A distance between one of the outer cells and a common source node is smaller than a distance between one of the inner cells and the common source node. Data of the memory cells are distributed among ECC sectors and a data input-output order of the memory cells is arranged such that each ECC sector has substantially the same number of the outer cells and the inner cells. Each ECC sector corresponds to an ECC operation unit. | 2018-12-27 |
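Balancing outer and inner cells across ECC sectors can be pictured as a round-robin interleave, so each sector receives the same share of each cell type. The interleave below is an illustrative assumption about the arrangement, not the claimed circuit:

```python
def assign_sectors(outer_cells, inner_cells, n_sectors):
    """Interleave outer and inner cells so every ECC sector gets an equal share
    of each type (assumes cell counts divide evenly by the sector count)."""
    sectors = [[] for _ in range(n_sectors)]
    for i, cell in enumerate(outer_cells):
        sectors[i % n_sectors].append(cell)
    for i, cell in enumerate(inner_cells):
        sectors[i % n_sectors].append(cell)
    return sectors
```

Because outer and inner cells tend to have different error characteristics, an even mix keeps the error load per ECC operation unit roughly uniform.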
20180373593 | FLASH MEMORY CONTROLLER AND MEMORY DEVICE FOR ACCESSING FLASH MEMORY MODULE, AND ASSOCIATED METHOD - A method for accessing a flash memory module includes: sequentially writing Nth−(N+K)th data to a plurality of flash memory chips of the flash memory module, and encoding the Nth−(N+K)th data to generate Nth−(N+K)th ECCs, respectively, where the Nth−(N+K)th ECCs are used to correct errors of the Nth−(N+K)th data, respectively, and N and K are positive integers; and writing the (N+K+1)th data to the plurality of flash memory chips of the flash memory module, and encoding the (N+K+1)th data with at least one of the Nth−(N+K)th ECCs to generate the (N+K+1)th ECC. | 2018-12-27 |
20180373594 | SEMICONDUCTOR DEVICE AND ERROR MANAGEMENT METHOD - An error management system may be provided. The error management system may include an error analysis unit configured to generate error correction counting values by counting error correction occurrences in a plurality of management blocks and generate permanent error block information for defining whether errors generated in the plurality of management blocks are a permanent error or a temporary error by comparing the error correction counting values and at least one reference value. The error management system may include a block control unit configured to replace an address signal with a new address signal when a management block selected according to the address signal among the plurality of management blocks is previously designated in the permanent error block information. | 2018-12-27 |
20180373595 | MODIFYING ALLOCATION OF STORAGE RESOURCES IN A DISPERSED STORAGE NETWORK - A method for execution by a resource allocation module includes facilitating migration of a first set of encoded data slices stored at a storage unit for decommissioning to a newly commissioned storage unit, and facilitating migration of a remaining set of encoded data slices stored at the storage unit for decommissioning as foster encoded data slices to at least one other storage unit. For each foster encoded data slice, it is determined whether to facilitate migration of the foster encoded data slice to the newly commissioned storage unit. When determining to facilitate the migration of the foster encoded data slice, the migration of the foster encoded data slice to the newly commissioned storage unit is facilitated. An association of the newly commissioned storage unit and identity of the foster encoded data slice is updated in response to detecting successful migration of the foster encoded data slice. | 2018-12-27 |
20180373596 | AUTOMATIC INCREMENTAL REPAIR OF GRANULAR FILESYSTEM OBJECTS - Presented herein are methods, non-transitory computer readable media, and devices triggering a metadata recovery process within a network storage system, which include: dividing metadata into metadata segments, wherein each of the metadata segments is tasked to perform a specific file system operation function; validating each of the metadata segments during the specific file system operation function; upon failure to validate at least one of the metadata segments, triggering an automatic repair process while maintaining the operation function tasked to the metadata segment; and upon finalizing the automatic repair process, resuming the specific file system operation function tasked to the metadata segment. | 2018-12-27 |
20180373597 | LIVE BROWSING OF BACKED UP DATA RESIDING ON CLONED DISKS - Embodiments described herein detect on-the-fly whether requested subclient data resides on a certain type of storage device, such as cloned Windows Dynamic Disks. The system presents mount requests for the identified disks in a manner that allows for mounting of the disks, where the disks would not be otherwise mountable. For instance, in some embodiments the information management system generates substitute metadata for disk mounting purposes, such as for the purposes of browsing and/or restoring data. | 2018-12-27 |
20180373598 | MEMORY DEVICES AND SYSTEMS WITH SECURITY CAPABILITIES - Several embodiments of systems incorporating memory devices are disclosed herein. In one embodiment, a memory device can include a controller, a main memory operably coupled to the controller, and security hardware operably coupled to the controller and to the main memory. The main memory can include a plurality of memory regions and at least one reserved memory region configured to store genuine backups of memory content stored in the plurality of memory regions. In operation, the security hardware is configured to measure memory content of the plurality of memory regions before startup, shutdown, and reset of the memory device; compare the measured value to an expected value; and direct the controller to replace the memory content with a genuine backup of the memory content stored in the at least one reserved memory region if the measured value and the expected value are not in accord. | 2018-12-27 |
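The measure-compare-replace flow can be sketched in a few lines, modeling memory regions as byte strings and using SHA-256 as the measurement; both are illustrative assumptions standing in for the security hardware:

```python
import hashlib

def check_and_repair(regions, expected, backups):
    """Measure each region; restore the genuine backup when the digest differs.

    `regions`, `expected`, and `backups` are illustrative dicts keyed by region
    name; a real device would hold the genuine backups in a reserved memory
    region and run this check before startup, shutdown, and reset.
    """
    repaired = []
    for name, content in regions.items():
        measured = hashlib.sha256(content).hexdigest()
        if measured != expected[name]:        # measured and expected not in accord
            regions[name] = backups[name]     # replace with the genuine backup
            repaired.append(name)
    return repaired
```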
20180373599 | OPTIMIZED BACKUP OF CLUSTERS WITH MULTIPLE PROXY SERVERS - Systems and methods for backing up and restoring virtual machines in a cluster environment. Proxy nodes in the cluster are configured with agents. The agents are configured to perform backup operations and restore operations for virtual machines operating in the cluster. During a backup operation or during a restore operation, a load associated with the backup/restore operation is distributed across at least some of the proxy nodes. The proxy nodes can backup/restore virtual machines on any of the nodes in the cluster. | 2018-12-27 |
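The load distribution across proxy-node agents could be sketched as below; the abstract only says the load is spread across at least some of the proxy nodes, so the round-robin policy and all names here are assumptions:

```python
def distribute_backup_jobs(vms, proxy_nodes):
    """Assign each virtual machine's backup/restore job to a proxy
    node in round-robin order, spreading load across the cluster."""
    assignments = {node: [] for node in proxy_nodes}
    for i, vm in enumerate(vms):
        node = proxy_nodes[i % len(proxy_nodes)]
        assignments[node].append(vm)
    return assignments
```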
20180373600 | HYBRID DATA STORAGE SYSTEM WITH PRIVATE STORAGE CLOUD AND PUBLIC STORAGE CLOUD - Systems and methods are disclosed for accessing data on a storage system. An apparatus, such as a data storage device or a computing device, may include a memory configured to store data. The apparatus is configured to determine an importance level for a file to be stored in the storage system. The data storage system includes one or more private storage clouds and one or more public storage clouds. The apparatus is also configured to generate a set of recovery data chunks based on the file and the importance level. The apparatus is further configured to store the set of recovery data chunks in the set of public storage clouds. The apparatus is further configured to store at least a portion of the file in the private storage cloud. | 2018-12-27 |
20180373601 | INFORMATION MANAGEMENT BY A MEDIA AGENT IN THE ABSENCE OF COMMUNICATIONS WITH A STORAGE MANAGER - A media agent is configured to perform substantially autonomously to initiate, continue, and manage information management operations such as a backup job of a certain client's primary data, manage the operations, and generate and store resultant system-level metadata from the operations, etc. The media agent is configured to do this even when out of communication with the storage manager that manages the information management system. When communications are restored, the media agent reports the relevant metadata to the storage manager. The storage manager comprises corresponding enhancements, including specialized logic for identifying the media agent as an intelligent media agent capable of some autonomous functionality, for transmitting management parameters thereto, and for seamlessly integrating the received metadata into the storage manager's associated management infrastructure such as a management database. | 2018-12-27 |
20180373602 | CONSOLIDATED PROCESSING OF STORAGE-ARRAY COMMANDS USING A FORWARDER MEDIA AGENT IN CONJUNCTION WITH A SNAPSHOT-CONTROL MEDIA AGENT - The illustrative systems and methods consolidate storage-array command channels into a media agent that executes outside the production environment. A “snapshot-control media agent” (“snap-MA”) is configured on a secondary storage computing device that operates apart from client computing devices. A “forwarder” media agent operates on each client computing device that uses the storage array, yet lacks command channels to the storage array. Likewise, a “forwarder” proxy media agent may operate without command channels to the storage array. No third-party libraries or storage-array-command devices are installed or needed on the host computing device. The forwarder media agent forwards any commands directed at the storage array to the snap-MA on the secondary storage computing device. The snap-MA receives and processes commands directed at the storage array that were forwarded by the forwarder media agents. Responses from the storage array are transmitted to the respective forwarder media agent. The snap-MA advantageously pools any number of storage-array-command devices so that capacity limitations in regard to communications channels at the storage array may be avoided. As a result, the snap-MA operating in conjunction with the forwarder media agents enables the illustrative system to consolidate the communication of storage-array commands away from client computing devices and/or proxy media agent hosts and into the secondary storage computing device that hosts the snap-MA. | 2018-12-27 |
20180373603 | Web Application System and Database Utilization Method Therefor - When a user logs in to a web application system, a database is created on a database server. | 2018-12-27 |
20180373604 | SYSTEMS AND METHODS OF RESTORING A DATASET OF A DATABASE FOR A POINT IN TIME - Systems and methods are provided for performing a point-in-time restore of data of a first tenant of a multitenanted database system. Metadata can be located to identify an archival version of first data of the first tenant stored in immutable storage of the database system. The archival version includes a most recently committed version of each datum prior to a first point in time. By using the metadata, a restore reference set is mapped into a target database instance of the database system. The mapping can be performed when all existing data for a tenant is to be the archival version, and where versions of data and records committed after the point in time are not available to the target database instance. | 2018-12-27 |
20180373605 | MEMORY SYSTEM AND METHOD OF OPERATING THE SAME - Provided herein may be a memory system and a method of operating the same. The memory system may include a memory controller, and a plurality of memory devices coupled to the memory controller through a channel. Each of the memory devices may include a plurality of memory blocks, including a first memory block, the plurality of memory devices may constitute different ways, respectively, and a group of the first memory blocks respectively included in the plurality of memory devices may constitute a first super block. When any one of the first memory blocks included in the first super block is determined to be a bad block, the memory controller may be configured to generate a new second super block by replacing the first memory block determined to be the bad block with a second memory block. | 2018-12-27 |
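The super-block rebuild this abstract describes can be sketched as follows (a minimal illustration, assuming blocks are represented as simple identifiers; the spare-block pool and function names are hypothetical):

```python
def rebuild_super_block(super_block, bad_blocks, spare_blocks):
    """Form a new super block by replacing any member block that has
    been determined to be a bad block with a good second block drawn
    from a spare pool; healthy blocks are kept in place."""
    new_sb = []
    for blk in super_block:
        if blk in bad_blocks:
            new_sb.append(spare_blocks.pop())  # swap in a spare block
        else:
            new_sb.append(blk)
    return new_sb
```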
20180373606 | Systems and methods to service an electronic device - The disclosed embodiments include systems and methods to service an electronic device. In one embodiment, the method includes receiving a request to service an electronic device communicatively connected to a test station. The method also includes obtaining a device model and an image group of the electronic device and determining criteria to service the electronic device in accordance with a desired setup, where each image group is associated with one or more different device models. The method further includes transmitting a request to service the electronic device to a management system having an image of applications compatible with the image group of the electronic device. The method further includes receiving at least one of a virtual hard drive storing a copy of the image of the applications and an indication of a location of the virtual hard drive. The method further includes executing the applications to service the electronic device. | 2018-12-27 |
20180373607 | System, Apparatus And Method For Non-Intrusive Platform Telemetry Reporting Using An All-In-One Connector - In one embodiment, an apparatus includes a controller to couple between a system on chip (SoC) and an external connector of a platform. The controller may include: a digitizer to digitize platform telemetry information of the platform; and a control circuit to receive a command from a debug test system and direct the platform telemetry information to a destination in response to the command. Other embodiments are described and claimed. | 2018-12-27 |
20180373608 | PHASE COMPENSATION METHOD AND ASSOCIATED PHASE-LOCKED LOOP MODULE - A phase compensation method applied to a phase-locked loop (PLL) module of a communication device includes determining to output one of a maximum likelihood (ML) phase to an oscillator of the PLL module and a data-aided (DA) phase error to a filter of the PLL module according to an input signal. The ML phase is a phase generated from estimating known data in the input signal by using a ML method, and the DA phase error is a phase error generated from estimating the known data in the input signal by using a DA method. | 2018-12-27 |
20180373609 | Identifying System Device Locations - Technology for identifying a device location in a system or network includes an example method comprising detecting, by a controller, a device being connected to a location of a set of locations connected to the controller. Responsive to detecting the device, the controller can assign a device identifier (ID) to the device. A device ID may be based on a set of component IDs corresponding to components associated with the device and a component ID may be a system ID, a first controller ID, a first port ID, a first bus ID, or an information ID associated with the device. Further, the device ID can identify the location where the device resides. | 2018-12-27 |
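Composing a location-identifying device ID from the component IDs the abstract lists might look like the sketch below (the separator, ordering, and names are assumptions made for illustration):

```python
def make_device_id(system_id, controller_id, port_id, bus_id):
    """Build a device ID from the component IDs associated with the
    device; because the ID encodes the controller, port, and bus, it
    also identifies the location where the device resides."""
    return f"{system_id}.{controller_id}.{port_id}.{bus_id}"
```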
20180373610 | REDUCING CLOCK POWER CONSUMPTION OF A COMPUTER PROCESSOR - Reducing clock power consumption of a computer processor by simulating, in a baseline simulation of a computer processor design using a software model of the computer processor design, performance of an instruction by the computer processor design, to produce a baseline result of the instruction, and identifying a circuit of the computer processor design that receives a clock signal during performance of the instruction, and in a comparison simulation of the computer processor design using the software model of the computer processor design, simulating performance of the instruction by the computer processor design while injecting a corruption signal into the circuit, to produce a comparison result of the instruction, and designating the circuit for clock gating when processing the instruction, if the comparison result of the instruction is identical to the baseline result of the instruction. | 2018-12-27 |
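The baseline-versus-corrupted simulation loop described here can be sketched as below; `simulate(instr, corrupt=...)` is a hypothetical interface to the software model of the processor design, not an API from the patent:

```python
def designate_clock_gating(simulate, instructions, circuits):
    """For each instruction, run a baseline simulation, then re-run
    with a corruption signal injected into each circuit in turn. If
    the comparison result is identical to the baseline result, that
    circuit's clock can be gated when processing the instruction."""
    gating = {}
    for instr in instructions:
        baseline = simulate(instr, corrupt=None)
        gating[instr] = [c for c in circuits
                         if simulate(instr, corrupt=c) == baseline]
    return gating
```

The intuition: if corrupting a circuit's output never changes the instruction's result, that circuit did not contribute, so clocking it during that instruction wastes power.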
20180373611 | REDUCING CLOCK POWER CONSUMPTION OF A COMPUTER PROCESSOR - The present disclosure provides reducing clock power consumption of a computer processor by simulating, in a baseline simulation of a computer processor design using a software model of the computer processor design, performance of an instruction by the computer processor design, to produce a baseline result of the instruction, and identifying a circuit of the computer processor design that receives a clock signal during performance of the instruction, and in a comparison simulation of the computer processor design using the software model of the computer processor design, simulating performance of the instruction by the computer processor design while injecting a corruption signal into the circuit, to produce a comparison result of the instruction, and designating the circuit for clock gating when processing the instruction, if the comparison result of the instruction is identical to the baseline result of the instruction. | 2018-12-27 |
20180373612 | ADAPTIVE APPLICATION PERFORMANCE ANALYSIS - A system performs discovery and instrumentation of processes of an application based on process performance. The system includes one or more processors configured to: determine a duration score for a process indicating a relationship between a duration time for the process and a transaction time for a transaction including the process; determine an instrumentation threshold value; determine whether the duration score satisfies the instrumentation threshold value; and in response to determining that the duration score satisfies the instrumentation threshold value, instrument a second process invoked by the process to receive a second duration time for the second process when execution of the second process is detected in a second transaction trace of a second transaction. In some embodiments, the system prunes instrumented processes that primarily invoke subprocesses, and are thus unimportant for performance monitoring. | 2018-12-27 |
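The duration-score check could be sketched as follows (the exact score definition and the threshold value are assumptions; the abstract says only that the score relates process duration to transaction time):

```python
def should_instrument(duration_ms, transaction_ms, threshold=0.3):
    """Compute a duration score as the fraction of the transaction
    time spent in the process; child processes are instrumented only
    when the score satisfies the instrumentation threshold."""
    score = duration_ms / transaction_ms
    return score >= threshold
```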
20180373613 | SYSTEMS AND METHODS FOR GENERATING AND PRESENTING ALTERNATIVE INPUTS FOR IMPROVING EFFICIENCY OF USER INTERACTION WITH COMPUTING DEVICES - Systems and methods for dynamic user gesture creation are disclosed. According to an aspect, a method includes analyzing, by the processor, a set of inputs of a user into a computing device to achieve a result on the computing device. The method also includes determining, by the processor, whether an efficiency threshold is met if the user utilizes another input to achieve the result rather than the set of inputs. Further, the method includes presenting the other input to the user as an alternative input for achieving the result on the computing device in response to determining that the efficiency threshold is met. | 2018-12-27 |
20180373614 | METHOD, DEVICE AND STORAGE MEDIUM FOR DETERMINING HEALTH STATE OF INFORMATION SYSTEM - The present disclosure relates to a method, a device and a storage medium for determining a health state of an information system. At first, a baseline configuration document corresponding to the information system is received, and data records under inspection of the information system are acquired. The baseline configuration document defines baselines. Then, each of the data records under inspection is compared with at least one baseline defined in the baseline configuration document to obtain a comparing result between each of the data records under inspection and the at least one baseline. At last, the health state of the information system is determined according to the comparing result between each of the data records under inspection and the at least one baseline. A health-determining apparatus relating to the above-mentioned method is also provided. Therefore, by this method and apparatus, the health state of the information system is quantifiable. | 2018-12-27 |
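One way to make the baseline comparison quantifiable, as the abstract claims, is sketched below (the equal-weight scoring scheme and all names are assumptions for illustration):

```python
def health_score(records, baselines):
    """Compare each inspected data record against its baseline and
    express the health state as the fraction of conforming records."""
    if not records:
        return 1.0
    passed = sum(1 for key, value in records.items()
                 if baselines.get(key) == value)
    return passed / len(records)
```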
20180373615 | TUNABLE, EFFICIENT MONITORING OF CAPACITY USAGE IN DISTRIBUTED STORAGE SYSTEMS - The disclosed embodiments provide a system for monitoring resource usage statistics. During operation, the system obtains a set of expiration times associated with usage of the resource. Next, the system selects a first limit to a number of time slots for use in calculating usage statistics for the resource based on a memory efficiency associated with calculating the usage statistics for the resource. The system then populates, up to the first limit, a set of time slots after a current time with the expiration times. When a time slot in the set of time slots includes the current time, the system uses a subset of the expiration times in the time slot to update one or more usage statistics for the resource. Finally, the system outputs the one or more usage statistics for use in managing the usage of the resource. | 2018-12-27 |
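Populating a bounded number of time slots with expiration times, as this abstract describes, resembles a timing wheel and might be sketched as follows (the fixed slot width and the overflow policy of folding late expirations into the last slot are assumptions):

```python
def build_slots(expirations, now, slot_width, max_slots):
    """Bucket expiration times into at most `max_slots` time slots
    after the current time; capping the slot count bounds the memory
    used to calculate usage statistics. Expirations past the limit
    share the final slot, and already-expired times are dropped."""
    slots = [[] for _ in range(max_slots)]
    for t in expirations:
        idx = min(int((t - now) // slot_width), max_slots - 1)
        if idx >= 0:
            slots[idx].append(t)
    return slots
```

When a slot's window includes the current time, its expiration times are drained to update the resource's usage statistics; the limit on slot count is the tunable memory/accuracy trade-off.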