29th week of 2022 patent application highlights part 42 |
Patent application number | Title | Published |
20220229666 | MANAGING DEPLOYMENT MODEL MIGRATIONS FOR ENROLLED DEVICES - Disclosed are various embodiments for managing deployment model migrations for enrolled devices. A client application can transmit a capability status to a management service in an instance in which a plurality of device conditions of the client device are validated. The client application can obtain and execute instructions that cause the client application to manage a migration of the client device from a first configuration to a second configuration. A user interface can be pinned on a display of the client device in an instance in which an enterprise environment endpoint is identified and a migration interface on the client device executed. The client application can transmit samples of device conditions of the second configuration of the client device to the management service. | 2022-07-21 |
20220229667 | Pipelines for Secure Multithread Execution - Described herein are systems and methods for secure multithread execution. For example, some methods include fetching an instruction of a first thread from a memory into a processor pipeline that is configured to execute instructions from two or more threads in parallel using execution units of the processor pipeline; detecting that the instruction has been designated as a sensitive instruction; responsive to detection of the sensitive instruction, disabling execution of instructions of threads other than the first thread in the processor pipeline during execution of the sensitive instruction by an execution unit of the processor pipeline; executing the sensitive instruction using an execution unit of the processor pipeline; and, responsive to completion of execution of the sensitive instruction, enabling execution of instructions of threads other than the first thread in the processor pipeline. | 2022-07-21 |
20220229668 | ACCELERATOR, METHOD OF OPERATING THE ACCELERATOR, AND DEVICE INCLUDING THE ACCELERATOR - A method of operating an accelerator includes receiving, from a central processing unit (CPU), commands for the accelerator and a peripheral device of the accelerator, processing the received commands according to a subject of performance of each of the commands, and transmitting a completion message indicating that performance of the commands is completed to the CPU after the performance of the commands is completed. | 2022-07-21 |
20220229669 | HOST OPERATING SYSTEM IDENTIFICATION USING TRANSPORT LAYER PROBE METADATA AND MACHINE LEARNING - Techniques, methods and/or apparatuses are disclosed that enable detection of an operating system of a host. Through the disclosed techniques, an operating system detection model, which may be a form of a machine learning model, may be trained to detect an operating system. The operating system detection model may be provided to an operating system detector to detect the operating system of a host utilizing transport layer probes, without the need for credentialed access to the host. | 2022-07-21 |
20220229670 | Method and Device for Loading Module of Virtual Reality Equipment Based on Computer Terminal - Embodiments of the present application disclose a method and a device for loading modules of virtual reality equipment based on a PC terminal. The virtual reality equipment is in communication connection with the PC terminal and comprises a plurality of functional devices and functional modules corresponding to the functional devices; the PC terminal comprises experience modules corresponding to the functional devices. The method comprises: step one, sequentially performing a loading operation on each of P functional devices, P being a positive integer, wherein the loading operation comprises: determining whether a functional device is valid and, if so, loading the functional module corresponding to that functional device and recording valid information; step two, collecting M pieces of valid information, wherein the M pieces of valid information correspond to the M functional devices determined to be valid among the P functional devices, M being a positive integer no greater than P; and step three, sending the M pieces of valid information to the PC terminal, so that the PC terminal loads the experience modules corresponding to the M functional devices according to the M pieces of valid information. | 2022-07-21 |
20220229671 | CONFIGURING ACCESSORIES - A computer system is used to initiate a process to configure an external accessory for use with at least a first device management application. The computer system displays a prompt that includes an option to initiate a process to configure the external accessory for use with at least a first device management application. While displaying the prompt, the computer system optionally receives a selection and/or an input corresponding to a selection of an option to initiate a process to configure the external accessory for use with at least a first device management application. | 2022-07-21 |
20220229672 | METHOD FOR ACQUIRING APPLICATION, APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM - Disclosed are a method, an apparatus, and a non-transitory computer-readable storage medium for obtaining applications. The method includes: identifying an external device connected to an interface according to states of pins of the interface; identifying applications that require use of the external device connected to the interface; and displaying at least one of the identified applications. | 2022-07-21 |
20220229673 | LAZY LOADING OF CODE CONTAINERS BASED ON INVOKED ACTIONS IN APPLICATIONS - A system can improve application performance by using lazy loading of code containers based on non-navigational actions in single-page or hybrid applications. A page can launch by loading a main bundle of code. The main bundle can include an action manifest that maps action identifiers to separate code modules. Those separate code modules can include functions for handling the actions. When a non-navigational action occurs, the application can use the action manifest to map a first action identifier of that action to a first code module. The application can then lazy load the first code module asynchronously from the main bundle. The application can also use route guards with filters to determine child actions, validate action routes, and cache the validated routes for later use without a remote server call. | 2022-07-21 |
20220229674 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR SUPERIMPOSING GUIDANCE USING A VIRTUAL SCREEN - An information processing apparatus includes: a processor configured to: perform control that displays a virtual screen superimposed on a real space to a user; and when attaching a device to be added to an electronic substrate, display a guidance for attaching the device to the electronic substrate as the virtual screen, with respect to the electronic substrate and the device in the real space. | 2022-07-21 |
20220229675 | GENERATING ARTWORK TUTORIALS - A computer-implemented method and an apparatus for generating and running a creative tutorial algorithm for creating a visual artwork may include obtaining data defining a creative objective and identifying the creative objective based on the data defining the creative objective, and obtaining information about at least one targeted artistic style and identifying the at least one targeted artistic style based on the information about the at least one targeted artistic style, and accessing a plurality of predetermined artistic styles and identifying, based on the plurality of predetermined artistic styles, at least one predetermined artistic style matching the at least one targeted artistic style, thereby specifying at least one targeted predetermined artistic style, and generating the creative tutorial algorithm. The creative tutorial algorithm is configured to include instructions on how to reproduce the creative objective in terms of the at least one targeted predetermined artistic style. | 2022-07-21 |
20220229676 | GENERATING CONTENT ENDORSEMENTS USING MACHINE LEARNING NOMINATOR(S) - Techniques are disclosed that enable the generation of candidate endorsements for recommended items of content using an ensemble of nominators. Various implementations include each nominator in the ensemble providing a candidate endorsement for each recommended item of content. Additionally or alternatively, an endorsement is selected to present to the user based on a score determined for each candidate endorsement. | 2022-07-21 |
20220229677 | PERFORMANCE MODELING OF GRAPH PROCESSING COMPUTING ARCHITECTURES - A distributed simulation system is provided that includes a timing simulator and functional simulator(s) on different computing nodes to simulate a graph processing system. The functional simulators are to simulate execution of a set of instructions on the graph processing system and to send information associated with the simulated set of instructions to the timing simulator over the network. The timing simulator is to determine timing information associated with execution of the sets of instructions sent by the functional simulators and send the timing information to the functional simulators over the network. The timing simulator may determine a global synchronization point for the functional simulators and send the timing information for the sets of instructions to respective functional simulators at the global synchronization point. Each functional simulator may stall simulation of further instructions until the timing information for its set of instructions is received from the timing simulator. | 2022-07-21 |
20220229678 | DECLARATIVE VM MANAGEMENT FOR A CONTAINER ORCHESTRATOR IN A VIRTUALIZED COMPUTING SYSTEM - An example virtualized computing system includes a host cluster having a virtualization layer executing on hardware platforms of hosts, the virtualization layer supporting execution of virtual machines (VMs), the VMs including pod VMs and native VMs, the pod VMs including container engines supporting execution of containers in the pod VMs, the native VMs including applications executing on guest operating systems; and an orchestration control plane integrated with the virtualization layer, the orchestration control plane including a master server having a pod VM controller to manage lifecycles of the pod VMs and a native VM controller to manage lifecycles of the native VMs. | 2022-07-21 |
20220229679 | MONITORING AND MAINTAINING HEALTH OF GROUPS OF VIRTUAL MACHINES - Monitoring a health of a plurality of virtual machines operating within a group of virtual machines configured to implement an application includes receiving health information from each of the plurality of virtual machines during operation of the group of virtual machines, determining a health score for each of the plurality of virtual machines based on the received health information, establishing a priority queue ranking each of the plurality of virtual machines based on the determined health score thereof, identifying one or more unhealthy virtual machines based on the established priority queue, and sending a message to at least one of the identified unhealthy virtual machines over a communication network to remove the at least one of the identified unhealthy virtual machines from the group of virtual machines when a remaining number of virtual machines in the group of virtual machines is greater than a safety number. | 2022-07-21 |
20220229680 | DATA PROCESSING SYSTEM USING SKELETON VIRTUAL VOLUMES FOR IMPROVED SYSTEM STARTUP - A computer system includes a virtual machine (VM) host computer and a data storage system providing physical storage and mapping logic to store a virtual volume (vVol) for a VM. During a first operating session, first-session working data is stored on the vVol, the working data being session specific and not persisting across operating sessions. At the end of the first operating session, unmap operations are performed to deallocate underlying physical storage of the vVol, leaving the vVol as a skeleton vVol. At the beginning of a subsequent second operating session, and based on the existence of the vVol as the skeleton vVol, the VM host resumes use of the vVol for storing second-session working data of the VM during the second operating session. The retention of the vVol in skeleton form can improve system startup efficiency especially for a boot storm involving simultaneous startup of many VMs. | 2022-07-21 |
20220229681 | DEVICE DISCOVERY IN A VIRTUALIZED ENVIRONMENT - In one aspect, an example methodology implementing the disclosed techniques includes receiving, by a systems management console, a network address of a device in a virtual environment, and determining a network address associated with a virtual environment management console based on the received network address of the device in the virtual environment. The method also includes sending, by the systems management console via a systems management agent to the virtual environment management console using the determined network address associated with the virtual environment management console, a request for network addresses of virtual machine (VM) host servers and VMs in the virtual environment. The method also includes receiving, by the systems management console via the systems management agent from the virtual environment management console, the network addresses of the VM host servers and the VMs in the virtual environment and providing a notification of the discovered VM host servers and VMs. | 2022-07-21 |
20220229682 | SYSTEM AND METHOD FOR NOTEBOOK PROCESSING TO HANDLE JOB EXECUTION IN CROSS-CLOUD ENVIRONMENT - A system for notebook processing to handle job execution in a cross-cloud environment is disclosed. A decision force assistant receives one or more job requests representative of execution of one or more projects and parses the received job requests; a decision force engine launches one or more virtual machines on a cloud-based platform, sends one or more job instructions associated with the job requests to the decision force assistant, and enables the decision force assistant to fetch at least one input file corresponding to the job instructions; a job execution engine runs one or more web-based notebooks in a sequential manner, enables the decision force assistant to fetch the at least one input file for execution of the job requests on the web-based notebooks, and generates a job-associated output to produce a job execution status. | 2022-07-21 |
20220229683 | MULTI-PROCESS VIRTUAL MACHINE MIGRATION IN A VIRTUALIZED COMPUTING SYSTEM - Examples provide a method of migrating a multi-process virtual machine (VM) from at least one source host to at least one destination host in a virtualized computing system. The method includes: copying, by VM migration software executing in the at least one source host, guest physical memory of the multi-process VM to the at least one destination host; obtaining, by the VM migration software, at least one device checkpoint for at least one device supporting the multi-process VM, the multi-process VM including a user-level monitor (ULM) and at least one user-level driver (ULD), the at least one ULD interfacing with the at least one device, the ULM providing a virtual environment for the multi-process VM; transmitting the at least one device checkpoint to the at least one destination host; restoring the at least one device checkpoint; and resuming the multi-process VM on the at least one destination host. | 2022-07-21 |
20220229684 | EARLY EVENT-BASED NOTIFICATION FOR VM SWAPPING - Various embodiments disclosed herein are related to a non-transitory computer readable storage medium. In some embodiments, the medium includes instructions stored thereon that, when executed by a processor, cause the processor to receive, from a user-space application, a request to detect swapping activity satisfying a threshold condition, detect the swapping activity satisfying the threshold condition, and, in response to occurrence of the threshold condition, send a response that indicates that the swapping activity satisfies the threshold condition. | 2022-07-21 |
20220229685 | APPLICATION EXECUTION ON A VIRTUAL SERVER BASED ON A KEY ASSIGNED TO A VIRTUAL NETWORK INTERFACE - In some implementations, a cloud computing system may deploy a virtual server to execute an application associated with an application licensor system. The cloud computing system may identify a virtual network interface that corresponds to the virtual server. The cloud computing system may associate a key received from the application licensor system to the virtual network interface to allow the application to be executed based on the virtual network interface. The cloud computing system may associate the virtual server and the virtual network interface. The cloud computing system may execute the application on the virtual server based on the key that is associated with the virtual network interface and based on associating the virtual server and the virtual network interface. | 2022-07-21 |
20220229686 | SCHEDULING WORKLOADS IN A CONTAINER ORCHESTRATOR OF A VIRTUALIZED COMPUTER SYSTEM - An example method of scheduling a workload in a virtualized computing system including a host cluster having a virtualization layer directly executing on hardware platforms of hosts is described. The virtualization layer supports execution of virtual machines (VMs) and is integrated with an orchestration control plane. The method includes: receiving, at the orchestration control plane, a workload specification for the workload; selecting, at the orchestration control plane, a plurality of nodes for the workload based on the workload specification, each of the plurality of nodes implemented by a host of the hosts; selecting, by the orchestration control plane in cooperation with a virtualization management server managing the host cluster, a node of the plurality of nodes; and deploying, by the orchestration control plane in cooperation with the virtualization management server, the workload on a host in the host cluster implementing the selected node. | 2022-07-21 |
20220229687 | NON-DISRUPTIVE CONTAINER RUNTIME CHANGES - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for migrating from a first container runtime to a second container runtime. One of the methods includes deploying a second control plane virtual machine that is configured to manage containers of a cluster of virtual execution environments using the second container runtime; obtaining, for each container executing workloads hosted by a respective virtual execution environment, a respective container image representing a current state of the container; updating each obtained container image to a format that is compatible with the second container runtime; deploying, for each updated container image, a corresponding container hosted by a virtual execution environment in the cluster, wherein the deployed container is managed by the second control plane virtual machine; decommissioning a first control plane virtual machine and transferring control of the containers of the cluster to the second control plane virtual machine. | 2022-07-21 |
20220229688 | VIRTUALIZED I/O - Distributed I/O virtualization includes receiving, at a first physical node in a plurality of physical nodes, an indication of a request to transfer data from an I/O device on the first physical node to a set of guest physical addresses. An operating system is executing collectively across the plurality of physical nodes. It further includes writing data from the I/O device to one or more portions of physical memory local to the first physical node. It further includes mapping the set of guest physical addresses to the written one or more portions of physical memory local to the first physical node. | 2022-07-21 |
20220229689 | VIRTUALIZATION PLATFORM CONTROL DEVICE, VIRTUALIZATION PLATFORM CONTROL METHOD, AND VIRTUALIZATION PLATFORM CONTROL PROGRAM - A virtualization infrastructure control apparatus ( | 2022-07-21 |
20220229690 | GLOBAL COHERENCE OPERATIONS - A method includes receiving, by an L2 controller, a request to perform a global operation on an L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions. The method further includes, in response to an indication that the pipeline does not contain any pending blocking transactions, preventing new snoop transactions from entering the pipeline while permitting new response transactions and victim transactions to enter the pipeline; in response to an indication that the pipeline does not contain any pending snoop transactions, preventing all new transactions from entering the pipeline; and, in response to an indication that the pipeline does not contain any pending transactions, performing the global operation on the L2 cache. | 2022-07-21 |
20220229691 | Persistent Multi-Word Compare-and-Swap - A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, a list of persistent memory locations of words to be updated, respective expected current values contained in the persistent memory locations, and respective new values to write to the persistent memory locations are provided. The operation atomically performs the process of comparing the existing contents of the persistent memory locations to the respective expected values and, should they match, updating the persistent memory locations with the new values and returning a successful status. Should any of the contents of the persistent memory locations not match a respective expected value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations. | 2022-07-21 |
20220229692 | METHOD AND DEVICE FOR DATA TASK SCHEDULING, STORAGE MEDIUM, AND SCHEDULING TOOL - A method and device for data task scheduling, a storage medium, and a scheduling tool are disclosed. The method includes: acquiring a data task to be executed at present, the data task being configured with a task relationship; in response to the data task to be executed at present satisfying a preset condition, ranking the data task according to the task relationship to create a data task queue; acquiring a load situation of a plurality of node servers to determine a target node server; and sending, based on the data task queue, the data task to be executed at present to the target node server for processing. | 2022-07-21 |
20220229693 | CENTRALIZED HIGH-AVAILABILITY FLOWS EXECUTION FRAMEWORK - Techniques for providing a framework for handling execution of HA flows in an active-active storage node configuration. The techniques include receiving notifications of functional statuses of processes and/or equipment associated with storage nodes in the active-active configuration, making determinations regarding how to address HA events occurring on the processes and/or equipment associated with the storage nodes based on the received notifications, and, in response to a request to execute an HA flow for a respective HA event, determining whether to refuse the request to execute the HA flow, service the request to execute the HA flow, abort one or more HA flows in execution, and/or postpone execution of the HA flow to a later time based on one or more dependencies defining conditions for the HA flow. In this way, mutual interference of HA flows or other process threads in the active-active configuration can be reduced or eliminated. | 2022-07-21 |
20220229694 | SYSTEMS AND METHODS FOR THREAD MANAGEMENT TO OPTIMIZE RESOURCE UTILIZATION IN A DISTRIBUTED COMPUTING ENVIRONMENT - Systems and methods are disclosed for embodiments of load attenuating thread pools (LATP) that may be associated with a service deployed in a distributed computing environment, where that service utilizes a shared resource. A LATP includes a thread pool comprising a number of worker threads servicing requests handled by a service that includes such a LATP. The thread pool is managed by a thread pool manager of the LATP that can attenuate (herein used to mean add, remove, or leave unchanged) the number of worker threads in the thread pool based on a resource utilization metric associated with the shared resource. | 2022-07-21 |
20220229695 | SYSTEM AND METHOD FOR SCHEDULING IN A COMPUTING SYSTEM - An improved multi-level scheduling system and method are disclosed. In one embodiment, the system comprises a coarse scheduler to allocate sets of computing resources at a first level and a set of fine grain schedulers configured to schedule at a second level, wherein the second level comprises individual computing resources within each set of computing resources. The fine grain scheduler may be configured to communicate with the coarse scheduler and monitor performance and utilization of the individual computing resources. The fine grain schedulers may also be configured to implement a different set of allocation rules than the coarse scheduler and request additional sets of resources from the coarse scheduler based on current and predicted utilization of the individual computing resources. | 2022-07-21 |
20220229696 | DATA PROCESSING EXECUTION DEVICE, DATA PROCESSING EXECUTION METHOD AND COMPUTER READABLE MEDIUM - Each of a plurality of engines executes data processing. While any engine of the plurality of engines is executing data processing as an execution engine, an engine selection unit ( | 2022-07-21 |
20220229697 | MANAGEMENT COMPUTER, MANAGEMENT SYSTEM, AND RECORDING MEDIUM - A management computer manages a data processing infrastructure including a server that executes a job and a storage device that is coupled to the server via a network and stores data used for processing in accordance with the job. The management computer includes a disc and a CPU. The disc stores maximum resource amount information, path information, and load information. The CPU computes a free resource amount of components forming a path to data related to execution of a predetermined job, based on the maximum resource amount information, the path information, and the load information, and determines a parallelizable number in a parallel executable processing unit when the predetermined job is executed in the server, based on the free resource amount. | 2022-07-21 |
20220229698 | Autonomous Warehouse-Scale Computers - The subject matter described herein provides systems and techniques to address the challenges of growing hardware and workload heterogeneity using a Warehouse-Scale Computer (WSC) design that improves the efficiency and utilization of WSCs. The WSC design may include an abstraction layer and an efficiency layer in the software stack of the WSC. The abstraction layer and the efficiency layer may be designed to improve job scheduling, simplify resource management, and drive hardware-software co-optimization using machine learning techniques and automation in order to customize the WSC for applications at scale. The abstraction layer may embrace platform/hardware and workload diversity through greater coordination between hardware and higher layers of the WSC software stack in the WSC design. The efficiency layer may employ machine learning techniques at scale to realize hardware/software co-optimizations as a part of the autonomous WSC design. | 2022-07-21 |
20220229699 | DYNAMIC DATABASE THREAD ALLOCATION TO APPLICATIONS - Described herein are systems, methods, and software to manage thread allocation for applications to access a database. In one implementation, a coordination service may identify applications to be deployed in a computing environment and allocate each of the applications to a queue of a plurality of queues based on qualities of service for the applications. The coordination service may further select the applications from the plurality of queues and allocate threads to each of the applications to interact with a database based on requirements for the applications. The coordination service may also monitor resource usage by the applications at the database, determine a thread allocation modification for one or more of the applications based on the resource usage, and initiate a thread allocation modification. | 2022-07-21 |
20220229700 | METHOD FOR SECURELY EXECUTING A WORKFLOW IN A COMPUTER SYSTEM - A method is provided for securely executing a workflow in a computer system which comprises at least a distributed repository network and a number of client systems that are operatively coupled or operatively couplable to this repository network. A model of the workflow is stored in the repository network. The workflow model comprises a number of workflow steps, and execution conditions, events, tasks and notifications are stored in the workflow model for each workflow step. To execute the workflow, a workflow instance is generated from the workflow model and is stored in the repository network. The client systems execute the workflow instance stored in the repository network. | 2022-07-21 |
20220229701 | DYNAMIC ALLOCATION OF COMPUTING RESOURCES - According to implementations of the subject matter, a solution for dynamic management of computing resources is provided. In the solution, a first request for using a target number of computing resources in a set of computing resources is received, wherein at least one free computing resource of the set of computing resources is organized into at least one free resource group. When it is determined that a free matching resource group is absent from the at least one free resource group and a free redundant resource group is present in the at least one free resource group, the target number of computing resources are allocated for the first request by splitting the free redundant resource group, wherein the number of resources in the free redundant resource group is greater than the target number. Therefore, dynamic allocation of computing resources is enabled. | 2022-07-21 |
20220229702 | METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR CREATING ACTION RESOURCES - A method, an apparatus, an electronic device, and a storage medium for creating action resources are disclosed. The method for creating action resources includes: receiving a creation request for a first action resource, wherein the creation request includes a first target resource and a first criteria, the first criteria being used to trigger a first operation for the first target resource according to a first condition, creating the first action resource according to the first criteria and the first target resource; and creating a second action resource, wherein the second action resource includes the first target resource and a second criteria, the second criteria being used to trigger a second operation for the first target resource according to a second condition, and wherein the second operation is different from the first operation. | 2022-07-21 |
20220229703 | ASYMMETRIC TELEMETRY SELECTOR - A system selects a telemetry allocation for the transmission of data from multiple transmitters. An allocation matrix defines a split ratio for data generated by a data source that is split between transmitters. The split ratio is defined for each data source of a plurality of data sources. (A) A storage buffer model is executed based on the allocation matrix to simulate a buffer that stores data from each data source that exceeds a download bandwidth value. (B) Performance parameter values are stored in association with the allocation matrix. (C) The allocation matrix is updated to redefine at least one split ratio. (D) (A) through (C) are repeated with the allocation matrix replaced with the updated allocation matrix until each unique permutation of values for the split ratio is processed. An allocation matrix is selected from the allocation matrices stored in (B) based on the plurality of performance parameter values. | 2022-07-21 |
20220229704 | SYSTEM AND METHOD FOR USE WITH A CLOUD COMPUTING ENVIRONMENT FOR DETERMINING A CLOUD SCORE ASSOCIATED WITH RESOURCE USAGE - In accordance with an embodiment, described herein is a system and method for use with a cloud computing environment, for determining a cloud score associated with a resource configuration, limits, or shape, for example that of a virtual machine or host provided within the environment. In accordance with an embodiment, the described approach provides a set of infrastructure workloads, for use in assessing a cloud infrastructure and resources provided thereby, so that a full spectrum of aspects of the cloud infrastructure can be covered by workload testing. The workloads can be used to generate metrics associated with resource usage. The system can then consider one or more metrics that are associated with performance of a particular resource configuration or shape, for example that of a virtual machine or (e.g., bare metal) host hosted by a cloud provider, and determine a score that is indicative of the relative performance of that configuration or shape for a particular workload configuration. | 2022-07-21 |
20220229705 | GEO-REPLICATED SERVICE MANAGEMENT - Embodiments automatically identify which cloud resources and resource groups correspond to which geo-replicated services and service replicas. Resource groups are represented as vectors having features which may depend on resource types, resource group tags, resource group names, and other data. Vectors are clustered using hierarchical agglomerative clustering, for example, and each cluster is recognized as corresponding to a service. Associations between resources and services are then used for management functions such as updating or testing or suspending or modifying only the resources of a given service, finding configuration inconsistencies, or identifying higher cost replicas. Because two replicas of a given service may have different resource configurations or different constituent resources, similarity measures may be employed to map resources between replicas when defining resource group vectors or analyzing replicas. Automation permits documentation of accurate current associations between resources and services, even when resources are being created or deleted automatically. | 2022-07-21 |
20220229706 | METHODS FOR DYNAMIC THROTTLING TO SATISFY MINIMUM THROUGHPUT SERVICE LEVEL OBJECTIVES AND DEVICES THEREOF - Methods, non-transitory machine readable media, and computing devices that dynamically throttle non-priority workloads to satisfy minimum throughput service level objectives (SLOs) are disclosed. With this technology, a determination is made when a number of detection intervals with a violation within a detection window exceeds a threshold, when a current one of the detection intervals is outside an observation area. The detection intervals are identified as violated based on an average throughput for priority workloads within the detection intervals falling below a minimum throughput SLO. A throttle is then set to rate-limit non-priority workloads, when the number of violated detection intervals within the detection window exceeds the threshold. Advantageously, throughput for priority workloads is more effectively managed and utilized with this technology such that throttling oscillations are reduced, throttling is not deployed in conditions in which it would not improve throughput, and throttling is minimally deployed to maximize throughput. | 2022-07-21 |
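The detection-window logic can be sketched directly; the sliding-window representation and the parameter defaults below are illustrative, not from the filing:

```python
from collections import deque

def should_throttle(interval_throughputs, min_slo, window=10, threshold=3):
    """Walk detection intervals in order, keeping a sliding window of
    violation flags (average priority throughput below the minimum
    SLO), and engage the non-priority throttle once the number of
    violated intervals in the window exceeds the threshold."""
    recent = deque(maxlen=window)
    for tput in interval_throughputs:
        recent.append(tput < min_slo)   # True marks a violated interval
        if sum(recent) > threshold:
            return True                 # rate-limit non-priority work
    return False
```

Requiring several violated intervals, rather than reacting to any single one, is what damps the throttling oscillations the abstract mentions.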
20220229707 | MANAGING MIGRATION OF WORKLOAD RESOURCES - Examples described herein relate to a management node and a method for managing migration of workload resources. The management node may assign a capability tag to each of a plurality of member nodes hosting workload resources. Further, the management node may determine a resource requirement classification of each workload resource of the workload resources based on analysis of runtime performance data of each workload resource. Furthermore, the management node may determine a temporal usage pattern classification of each workload resource. Moreover, the management node may determine a migration plan for a candidate workload resource of the workload resources based on the capability tag of each of the plurality of member nodes, the resource requirement classification and the temporal usage pattern classification of each workload resource. | 2022-07-21 |
20220229708 | Content Sharing Method and Electronic Device - A content sharing method includes displaying, by an electronic device, an interface of a first application, where a text and a picture are displayed in the interface, performing, by a user, an operation on the picture in the interface such that the electronic device displays the picture in a suspended first window, dragging, by the user, the picture in the first window to a second application, and sharing, by the user, the picture with another user using the second application. | 2022-07-21 |
20220229709 | EFFICIENT OPERATIONS OF COMPONENTS IN A WIRELESS COMMUNICATIONS DEVICE - Various embodiments comprise apparatuses and methods including a communications subsystem having an interface module and a protocol module with the communications subsystem being configured to be coupled to an antenna. An applications subsystem includes a software applications module and an abstraction module. The software applications module is to execute an operating system and user applications; the abstraction module is to provide an interface with the software applications module. A controller interface module is coupled to the abstraction module and the interface module and is to convert signals from the applications subsystem into signals that are executable by the communications subsystem. Additional apparatuses and methods are described. | 2022-07-21 |
20220229710 | FILE UPLOAD MODIFICATIONS FOR CLIENT SIDE APPLICATIONS - Methods and systems are provided for a client computing device including a browser that renders a web page. Program code generates a mock upload event and a corresponding mock data transfer object for uploading data using the web page. The mock upload event and the corresponding mock data transfer object are propagated to an upload event listener of the web page and executed. Prior to generating the mock upload event and corresponding mock data transfer object, an embedded upload event listener may receive an upload event, read the upload event, drop the received upload event from an event handler pipeline, and call synchronously or asynchronously, code to perform logic on the received upload event for the generation of the mock upload event and a corresponding mock data transfer object. | 2022-07-21 |
20220229711 | ASYMMETRIC FULFILLMENT OF REMOTE PROCEDURE CALLS BY MULTI-CORE SYSTEMS - A method of performing a remotely-initiated procedure on a computing device is provided. The method includes (a) receiving, by memory of the computing device, a request from a remote device via remote direct memory access (RDMA); (b) in response to receiving the request, assigning processing of the request to one core of a plurality of processing cores of the computing device, wherein assigning includes the one core receiving a completion signal from a shared completion queue (Shared CQ) of the computing device, the Shared CQ being shared between the plurality of cores; and (c) in response to assigning, performing, by the one core, a procedure described by the request. An apparatus, system, and computer program product for performing a similar method are also provided. | 2022-07-21 |
20220229712 | SELF-REGULATING POWER MANAGEMENT FOR A NEURAL NETWORK SYSTEM - A neural network runs a known input data set using an error free power setting and using an error prone power setting. The differences in the outputs of the neural network using the two different power settings determine a high level error rate associated with the output of the neural network using the error prone power setting. If the high level error rate is excessive, the error prone power setting is adjusted to reduce errors by changing voltage and/or clock frequency utilized by the neural network system. If the high level error rate is within bounds, the error prone power setting can remain allowing the neural network to operate with an acceptable error tolerance and improved efficiency. The error tolerance can be specified by the neural network application. | 2022-07-21 |
20220229713 | MONITORING SYSTEM, MONITORING METHOD, AND NON-TRANSITORY STORAGE MEDIUM - The present invention provides a monitoring system ( | 2022-07-21 |
20220229714 | SERIALIZING MACHINE CHECK EXCEPTIONS FOR PREDICTIVE FAILURE ANALYSIS - Upon occurrence of multiple errors in a central processing unit (CPU) package, data indicating the errors is stored in machine check (MC) banks. A timestamp corresponding to each error is stored, the timestamp indicating a time of occurrence for each error. A machine check exception (MCE) handler is generated to address the errors based on the timestamps. The timestamps can be stored in the MC banks or in a utility box (U-box). The MCE handler can then address the errors based on order of occurrence, for example by determining that the first error in time caused the remaining errors. The MCE handler can isolate hardware/software associated with the first error to recover from a failure. The MCE handler can report only the first error to the operating system (OS) or other error management software/hardware. The U-Box may also convert the timestamps into real time to support user debugging. | 2022-07-21 |
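The ordering idea reduces to sorting the logged errors by timestamp and treating the earliest as the root cause; the bank/record layout here is an assumed stand-in:

```python
def build_mce_report(mc_banks):
    """Sort machine-check records by time of occurrence, treat the
    earliest as the likely root cause, and report only that error,
    suppressing the cascade that followed it."""
    ordered = sorted(mc_banks, key=lambda rec: rec["timestamp"])
    first, rest = ordered[0], ordered[1:]
    return {
        "root_cause": first["error"],
        "report_to_os": [first["error"]],          # only the first error
        "suppressed": [rec["error"] for rec in rest],
    }
```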
20220229715 | PRINTER EVENT LOG RETRIEVAL MECHANISM - A method of retrieving printer error logs from a printer or a multi-function printer with a printer memory buffer and internal non-volatile storage involves receiving a print error. A processor determines if a print error is a fatal error or a non-fatal error. On condition the print error is a non-fatal error, the event log located in the printer memory buffer is printed using the printer or the multi-function printer. On condition the print error is a fatal error, the processor determines if there is enough storage capacity to save the fatal error into the internal non-volatile storage. On condition there is enough storage capacity, the processor saves the event log into the internal non-volatile storage. The user is then instructed to power cycle the printer or the multi-function printer. | 2022-07-21 |
20220229716 | EVALUATION DEVICE, SYSTEM, CONTROL METHOD, AND PROGRAM - An evaluation apparatus ( | 2022-07-21 |
20220229717 | DIAGNOSTIC TEST PRIORITIZATION BASED ON ACCUMULATED DIAGNOSTIC REPORTS - According to an aspect, there is provided a method for guiding a user in diagnostic test selection. Initially, one or more diagnostic reports on each of a plurality of computing devices are maintained ( | 2022-07-21 |
20220229718 | AUTOMATED CRASH RECOVERY - Methods for improving operation of a user device executing an application. The methods include collecting a first set of data corresponding to a run time environment of the application, collecting a second set of data corresponding to a crash of the application, identifying a cause of the crash based on the first set of data and a second set of data and determining the cause of the crash is associated with an application feature corresponding to a feature flag. | 2022-07-21 |
20220229719 | ERROR LOGGING DURING SYSTEM BOOT AND SHUTDOWN - Systems and methods are described for improved error logging during system boot and shutdown. A hardware initialization firmware on a computing device can include a logging module. When errors occur during early system booting or late system shutdown, the firmware can create error logs. The logging module can receive the error logs and prioritize them according to a set of rules. The logging module can select error logs of the highest priority up to a predetermined maximum amount. The logging module can modify the error logs using a shorthand form and write them to nonvolatile random-access memory. The firmware can initialize runtime services and launch an operating system. A system logger on the operating system can retrieve the error logs, save them to a file, and erase them from the memory. | 2022-07-21 |
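The prioritize-select-shorten pipeline might look like the following; the rule table, record shape, and shorthand format are illustrative assumptions:

```python
def select_boot_logs(error_logs, priority_rules, max_logs=4, msg_len=16):
    """Rank error logs by a rule table (lower number = higher
    priority), keep at most `max_logs` of the highest priority, and
    rewrite each kept log in a compact shorthand suitable for the
    limited NVRAM budget available during early boot/late shutdown."""
    ranked = sorted(error_logs,
                    key=lambda log: priority_rules.get(log["type"], 99))
    kept = ranked[:max_logs]
    # Shorthand form: type code plus a truncated message.
    return ["%s:%s" % (log["type"], log["msg"][:msg_len]) for log in kept]
```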
20220229720 | Firmware Failure Reason Prediction Using Machine Learning Techniques - Techniques are provided for predicting firmware installation failure reasons using machine learning techniques. One method comprises obtaining log data for a user device, wherein the log data is obtained following a failure of a firmware installation on the user device; extracting a plurality of features from the obtained log data; applying the extracted features to a trained machine learning model to obtain a prediction of whether the firmware installation failure is caused by a hardware-related failure or a software-related failure; and performing an automated remedial action based on a result of the prediction. The trained machine learning model can be trained using historical data for multiple user devices that experienced a firmware installation failure, where the historical data comprises a label indicating whether a given failure comprises a hardware-related failure or a software-related failure. The trained machine learning model can be trained and tested using cross-validation techniques. | 2022-07-21 |
20220229721 | SELECTION OF OUTLIER-DETECTION PROGRAMS SPECIFIC TO DATASET META-FEATURES - Embodiments described herein involve selecting outlier-detection programs that are specific to meta-features of datasets. For instance, a computing system constructs a performance vector from a U vector and a reference V matrix. Vector elements of the performance vector identify estimated performance values of various outlier-detection programs with respect to an input dataset. The U vector is generated using meta-features of the input dataset. The reference V matrix is generated from a training process in which performance values of the various outlier-detection programs with respect to training input datasets are used to obtain the reference V matrix via a UV decomposition. The computing system selects an outlier-detection program having a greater estimated performance value in the performance vector as compared to other outlier-detection programs' respective estimated performance values. | 2022-07-21 |
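The selection step reduces to a dot product of the dataset's U vector with each column of the reference V matrix, followed by an argmax; the row-major list-of-lists layout is an assumption:

```python
def select_detector(u_vector, v_matrix, programs):
    """Build the performance vector u . V (one estimated performance
    value per outlier-detection program) and return the program with
    the greatest estimate, along with all estimates."""
    estimates = [
        sum(u * v_matrix[row][col] for row, u in enumerate(u_vector))
        for col in range(len(programs))
    ]
    best = max(range(len(programs)), key=lambda i: estimates[i])
    return programs[best], estimates
```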
20220229722 | METHOD AND APPARATUS TO IMPROVE PERFORMANCE OF A REDUNDANT ARRAY OF INDEPENDENT DISKS THAT INCLUDES ZONED NAMESPACES DRIVES - High performance parity-based Redundant Array of Independent Disks (RAID) on Zoned Namespaces Solid State Drives (SSD)s with support for high queue depth write Input Output (IO) and Zone Append command is provided in a host system. The host system includes a stripe mapping table to store mappings between parity strips and data strips in stripes on the RAID member SSDs. The host system also includes a Logical to Physical (L2P) table to store data block addresses returned by the Zone Append command. | 2022-07-21 |
20220229723 | LOW OVERHEAD ERROR CORRECTION CODE - Memory requests are protected by encoding memory requests to include error correction codes. A subset of bits in a memory request are compared to a pre-defined pattern to determine whether the subset of bits matches a pre-defined pattern, where a match indicates that a compression can be applied to the memory request. The error correction code is generated for the memory request and the memory request is encoded to remove the subset of bits, add the error correction code, and add at least one metadata bit to the memory request to generate a protected version of the memory request, where the at least one metadata bit identifies whether the compression was applied to the memory request. | 2022-07-21 |
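The encoding can be sketched on bit strings; a single even-parity bit stands in for a real error correction code, and the field layout (metadata bit, check bit, payload) is an illustrative assumption:

```python
def protect_request(bits, pattern):
    """If the leading bits of the request match the pre-defined
    pattern, drop them (compression) to make room, then prepend a
    metadata bit recording whether compression was applied and a
    stand-in check bit (even parity) over the remaining payload."""
    compressed = bits.startswith(pattern)
    payload = bits[len(pattern):] if compressed else bits
    check = str(payload.count("1") % 2)   # stand-in for the real ECC
    meta = "1" if compressed else "0"
    return meta + check + payload
```

When the pattern matches, the bits it frees pay for the added check and metadata bits, which is the point of applying the compression.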
20220229724 | READ RETRY TO SELECTIVELY DISABLE ON-DIE ECC - A memory device that performs internal ECC (error checking and correction) can selectively return read data with application of the internal ECC or without application of the internal ECC, in response to different read commands from the memory controller. The memory device can normally apply ECC and return corrected data in response to a normal read command. In response to a retry command, the memory device can return the read data without application of the internal ECC. | 2022-07-21 |
20220229725 | SOFT ERROR DETECTION AND CORRECTION FOR DATA STORAGE DEVICES - Various implementations described herein relate to systems and methods for detecting soft errors, including but not limited to, reading a codeword from a non-volatile memory, decoding the codeword to obtain at least input data, determining validity of the input data using a first signature after processing the input data through a data path, and in response to determining that the input data is valid using the first signature, sending the input data to a host. | 2022-07-21 |
20220229726 | ERROR CORRECTION OF MEMORY - A memory includes: a data receiving circuit suitable for receiving data during a write operation; a data rotation circuit suitable for changing an order of the data transferred from the data receiving circuit and outputting the data whose order is changed in response to an address during the write operation; an error correction code generation circuit suitable for generating an error correction code based on the data output from the data rotation circuit during the write operation; and a memory core suitable for storing the data received by the data receiving circuit and the error correction code during the write operation. | 2022-07-21 |
20220229727 | ENCODING AND STORAGE NODE REPAIRING METHOD FOR MINIMUM STORAGE REGENERATING CODES FOR DISTRIBUTED STORAGE SYSTEMS - The present disclosure is based on erasure coding, information dispersal, secret sharing and ramp schemes to assure reliability and security. More precisely, the present disclosure combines ramp threshold secret sharing and systematic erasure coding. | 2022-07-21 |
20220229728 | REDUCED PARITY DATA MANAGEMENT - A method includes receiving, by a memory sub-system, host data to be written to a plurality of blocks of a memory device associated with a memory sub-system, where each of the plurality of blocks are coupled to one of a plurality of word lines of the memory device. The method can further include generating parity data for each word line of the block; dividing the parity data into one of either a first word line parity set or a second word line parity set; generating a reduced parity data set with exclusive or parity values for the first word line parity set and for the second word line parity set; and writing the reduced parity data set in the memory sub-system. | 2022-07-21 |
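The divide-and-fold step can be sketched with XOR over byte strings; the even/odd split of word lines into the first and second parity sets is an assumed grouping:

```python
def reduced_parity(wordline_parity_pages):
    """Divide per-word-line parity pages into two sets and XOR-fold
    each set into a single reduced parity page, so only two pages
    (rather than one per word line) are written back."""
    def xor_fold(pages):
        acc = bytes(len(pages[0]))
        for page in pages:
            acc = bytes(a ^ b for a, b in zip(acc, page))
        return acc
    first = [p for i, p in enumerate(wordline_parity_pages) if i % 2 == 0]
    second = [p for i, p in enumerate(wordline_parity_pages) if i % 2 == 1]
    return xor_fold(first), xor_fold(second)
```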
20220229729 | DISTRIBUTED RAID REBUILD - A technique is disclosed for generating rebuild data of a RAID configuration having one or more failed drives. The RAID configuration includes multiple sets of drives coupled to respective computing nodes, and the computing nodes are coupled together via a network. A lead node directs rebuild activities, communicating with the other node or nodes and directing such node(s) to compute partial rebuild results. The partial rebuild results are based on data of the drives of the RAID configuration coupled to the other node(s). The lead node receives the partial rebuild results over the network and computes complete rebuild data based at least in part on the partial rebuild results. | 2022-07-21 |
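For an XOR-parity RAID level, the division of labor between a peer node and the lead node looks roughly like this; the function names and flat byte-string strips are illustrative:

```python
def partial_rebuild(local_strips):
    """On a peer node: XOR-fold the strips held on locally attached
    drives into one partial rebuild result to send over the network."""
    acc = bytes(len(local_strips[0]))
    for strip in local_strips:
        acc = bytes(a ^ b for a, b in zip(acc, strip))
    return acc

def combine_partials(partials):
    """On the lead node: XOR the partial results from all nodes to
    recover the failed drive's data."""
    acc = bytes(len(partials[0]))
    for partial in partials:
        acc = bytes(a ^ b for a, b in zip(acc, partial))
    return acc
```

Only one partial result per node crosses the network, instead of every surviving strip, which is the bandwidth saving of distributing the rebuild.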
20220229730 | STORAGE SYSTEM HAVING RAID STRIPE METADATA - A processing device obtains a write operation which comprises first data and second data to be stored in first and second strips of a given stripe. The processing device stores the first data in the first strip and determines that the second strip is unavailable. The processing device determines a parity based on the first data and the second data and stores the parity in a parity strip. The processing device updates metadata to indicate that the second data was not stored in the second strip. In some embodiments, the updated metadata is non-persistent and the processing device may be further configured to rebuild the given stripe, update persistent metadata corresponding to a sector of stripes including the given stripe and clear the non-persistent metadata based at least in part on a completion of the rebuild. | 2022-07-21 |
20220229731 | NON-VOLATILE STORAGE DEVICE HAVING FAST BOOT CODE TRANSFER WITH LOW SPEED FALLBACK - A storage system comprises a non-volatile memory configured to store boot code and a control circuit connected to the non-volatile memory. In response to a first request from a host to transmit the boot code, the storage system commences transmission of the boot code to the host at a first transmission speed. Before successfully completing the transmission of the boot code to the host at the first transmission speed, it is determined the boot code transmission has failed. Therefore, the host will issue a second request for the boot code. In response to the second request for the boot code, and recognizing that this is a fallback condition because the previous transmission of the boot code failed, the storage apparatus re-transmits the boot code to the host at a lower transmission speed than the first transmission speed. | 2022-07-21 |
20220229732 | INTELLIGENT RE-TIERING OF INCREMENTAL BACKUP DATA STORED ON A CLOUD-BASED OBJECT STORAGE - Described is a system for intelligent re-tiering of backup data stored on a cloud-based object storage. More specifically, the system may re-tier objects such that the system retains the ability to efficiently perform a full restore of backup data even when incremental backups are performed to a cloud-based object storage. To provide such a capability, the system may maintain a specialized metadata database that stores information indicating the backup time for each backup, and a list of objects required to perform a full restore to each of the backup times. Accordingly, when using a threshold time (e.g. expiry) to select object candidates for re-tiering, the system may leverage the metadata database to ensure that objects that may still need to be referenced are not unnecessarily moved to a lower storage tier. | 2022-07-21 |
20220229733 | RULE-BASED RE-TIERING OF INCREMENTAL BACKUP DATA STORED ON A CLOUD-BASED OBJECT STORAGE - Described is a system for rule-based re-tiering of backup data stored on a cloud-based object storage. More specifically, the system may re-tier objects based on one or more storage rules such that the system retains the ability to efficiently perform a full restore of backup data even when incremental backups are performed to a cloud-based object storage. To provide such a capability, the system may maintain a specialized metadata database that stores information indicating the backup time for each backup, and a list of objects required to perform a full restore to each of the backup times. Accordingly, when initiating a re-tiering based on one or more storage rules, the system may intelligently select candidate objects for re-tiering by leveraging the metadata database to ensure that objects that may still need to be referenced are not unnecessarily moved to a lower storage tier. | 2022-07-21 |
20220229734 | SNAPSHOT PERFORMANCE OPTIMIZATIONS - Techniques for creating and using snapshots may include: receiving a request to create a new snapshot of a source object; determining whether a first generation identifier associated with the source object matches a second generation identifier associated with a base snapshot of the source object; determining whether the source object has been modified since the base snapshot was created; and responsive to determining the first generation identifier matches the second generation identifier and also determining that the source object has not been modified since the base snapshot was created, associating the new snapshot with the base snapshot thereby indicating that the new snapshot and the base snapshot have matching content and denote a same point in time copy of the source object. | 2022-07-21 |
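The short-circuit check at the heart of the abstract can be written in a few lines; the dictionary record shapes are assumed for illustration:

```python
def create_snapshot(source, base_snapshot):
    """If the source object's generation id matches the base
    snapshot's and the source is unmodified since the base was
    created, link the new snapshot to the base instead of taking a
    fresh point-in-time copy."""
    if (base_snapshot is not None
            and source["generation"] == base_snapshot["generation"]
            and not source["modified_since_base"]):
        return {"shares_base": base_snapshot["id"], "copied": False}
    return {"shares_base": None, "copied": True}
```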
20220229735 | SPECIALIZED METADATA FOR MANAGING AND SEARCHING BACKUP DATA STORED ON A CLOUD-BASED OBJECT STORAGE - Described is a system (and method) for managing specialized metadata that may be used to manage and search incremental backup data stored on a cloud-based object storage. The system may create and store such metadata as part of a specialized metadata database that includes a data catalog and a backup catalog. The system may leverage the metadata database to initiate operations to efficiently manage incremental backup data stored on the object storage. For example, the metadata may be relied upon to efficiently reconstruct (e.g. synthetically) the client data to a point-in-time of any incremental backup. In addition, the metadata may include properties of the backed-up data, which are maintained separately from the backup data stored as objects. Accordingly, these properties may be searched to identify and locate backup data without having to retrieve the stored objects. | 2022-07-21 |
20220229736 | SYNTHESIZING A RESTORE IMAGE FROM ONE OR MORE SECONDARY COPIES TO FACILITATE DATA RESTORE OPERATIONS TO A FILE SERVER - An illustrative media agent (MA) in a data storage management system instructs a NAS file server (filer) to restore an MA-created synthesized-copy instead of larger filer-created backup copies. The synthesized-copy is designed only for the particular files to be restored and mimics, and is typically much smaller than, a filer-created backup copy. The synthesized-copy is fed to the filer on restore as a “restore data image.” When receiving a restore request for certain backed-up data files, the MA synthesizes the synthesized-copy on the fly. The MA generates a header mimicking a filer-created backup header; extracts files from filer-created backup copies arranging them within the synthesized-copy as if in filer-created backups; and instructs filer to perform a full-volume restore from the synthesized-copy. The MA serves the synthesized-copy piecemeal as available, rather than waiting to synthesize the entire synthesized-copy. The synthesized-copy is not stored at the MA. | 2022-07-21 |
20220229737 | VIRTUAL SERVER CLOUD FILE SYSTEM FOR VIRTUAL MACHINE RESTORE TO CLOUD OPERATIONS - Uploads of restored virtual machine (“VM”) data to cloud storage, e.g., VM restore-to-cloud operations, are performed without having to write whole restored virtual disk files to a proxy server before the virtual disk data begins uploading to cloud. Restored data blocks from a backup source are locally cached, staged for efficiency, and asynchronously uploaded to the cloud page-by-page without tapping mass storage resources on the proxy. Downloads of VM data from cloud storage, e.g., VM backup-from-cloud, are performed without having to download a virtual disk file in its entirety to the proxy server before the backup operation begins generating a backup copy. This speeds up “pulling” VM data from the cloud by pre-fetching and locally caching downloaded data blocks. The cached data blocks are processed for backup and stored page-by-page directly into a secondary copy of the cloud VM virtual-disk file without tapping mass storage resource at the proxy. | 2022-07-21 |
20220229738 | VISUAL MAPPING OF PROTECTION STATUS FOR NETWORKING DEVICES - Embodiments are described for a method and system of applying data protection software mechanisms to network devices to auto-discover the networking equipment, save changes from memory (TCAM) to local storage, backup changes to protection storage, provide auditing and tracking history of changes, and provide the ability to deploy test/development copies of changes using software defined networking techniques. Embodiments include an efficient visual mapping aspect provided through a GUI to display the topography and backup/protection configuration of network devices in a system. | 2022-07-21 |
20220229739 | MANAGEMENT DATABASE LONG-TERM ARCHIVING TO A RECOVERY MANAGER - A storage manager for an information management system determines whether one or more predetermined conditions have been met for transferring metadata of previously performed backup jobs stored in a first management database. A backup job may correspond to a backup operation of a primary storage device of a first client computing device. In response to a determination that one or more of the predetermined conditions have been met, the storage manager may transfer metadata for a second plurality of backup jobs to a second management database of a recovery manager. The recovery manager may receive a request to restore data to the primary storage device of the first client computing device based on the metadata of the second plurality of backup jobs. A media agent managed by the recovery manager may then restore the requested data to the primary storage device of the first client computing device. | 2022-07-21 |
20220229740 | PROTECTING DATABASES IN A DISTRIBUTED AVAILABILITY GROUP - A determination is made that a relational database management system (RDBMS) is configured as a distributed availability group. The distributed availability group spans first and second availability groups. Each availability group includes a cluster of servers hosting replicas of a database. One of the first or second availability groups functions as a primary availability group. Another of the first or second availability groups functions as a secondary availability group that is available as a failover target should the primary availability group become unavailable. A name of the distributed availability group is obtained. A first server in the first availability group is directed to backup a replica of the database being hosted by the first server. The directing includes instructing the first server to index the backup against the name of the distributed availability group. | 2022-07-21 |
20220229741 | PROTECTING DATABASES IN A CLUSTERLESS AVAILABILITY GROUP - A determination is made that a backup of a database in an availability group provided by a relational database management system (RDBMS) should be performed. The availability group includes a node functioning as a primary node and hosting a primary replica of the database and one or more other nodes functioning as secondary nodes and hosting secondary replicas of the database. The availability group is a clusterless availability group in which the one or more other nodes functioning as secondary nodes are not available as automatic failover targets should the primary node become unavailable. A command is issued to a node in the availability group to obtain a globally unique identifier (GUID) of the availability group. The node is instructed to index a backup of the database against the GUID of the availability group. | 2022-07-21 |
20220229742 | WORKFLOW ERROR HANDLING FOR DEVICE DRIVEN MANAGEMENT - Disclosed are various embodiments for workflow error handling for device driven management. A workflow can be received from a management service by a management agent. The workflow can define a sequence of actions to be implemented by the management agent on a client device and a set of error conditions associated with individual actions in the sequence of actions. The management agent can then process the individual actions in the sequence of actions defined by the workflow. Subsequently, the management agent can monitor the individual actions to determine whether the individual actions trigger an error condition in the set of error conditions. Finally, in response to a determination that the individual actions triggered the error condition in the set of error conditions, the management agent can perform an error response specified by the workflow. | 2022-07-21 |
20220229743 | REMOTE SNAPPABLE LINKING - In some examples, a cluster comprises peer nodes and a distributed data store implemented across the peer nodes. A method of remote linking of data objects for data transfer between a first node cluster and a second node cluster among the peer nodes comprises: creating a data object group including multiple remote data objects, wherein a plurality of remote data objects in the data object group represent a same first virtual machine and are registrable on at least the first and second node clusters of the peer DMS nodes; creating or identifying remote links to a plurality of the remote data objects in the data object group; designating a member of the data object group as an active member of the group; and assigning a task to the active member to be completed using remote links. | 2022-07-21 |
20220229744 | Recovering From System Faults For Replicated Datasets - Recovering from system faults for replicated datasets, including: receiving, by the cloud-based storage system, a request to modify a dataset that is stored by the cloud-based storage system, wherein the dataset is synchronously replicated among a plurality of storage systems that includes the cloud-based storage system, wherein a request to modify the dataset is acknowledged as being complete when each of the plurality of storage systems has modified its copy of the dataset; generating recovery information indicating whether the request to modify the dataset has been applied on all storage systems in the plurality of storage systems synchronously replicating the dataset; and after a system fault, applying a recovery action in dependence upon the recovery information indicating whether the request to modify the dataset has been applied on all storage systems in the plurality of storage systems synchronously replicating the dataset. | 2022-07-21 |
20220229745 | SYSTEMS AND METHODS FOR AUTOMATICALLY RESOLVING SYSTEM FAILURE THROUGH FORCE-SUPPLYING CACHED API DATA - Systems and methods for force-supplying cached API call data are disclosed. A system may comprise a memory storing instructions and at least one processor configured to execute instructions to perform operations including: receiving initial order data from a user device, the initial order data comprising a product identifier, a user identifier, and a promotion identifier; determining an initial reduction amount based on the received product identifier and promotion identifier; mapping the initial reduction amount to a cache identifier; caching the initial order data and the cache identifier; receiving an order request from a device associated with the user identifier, the order request being associated with the promotion identifier; calling an API to complete the order request; detecting a failure of the API attempting to complete the order request; retrieving the cache identifier; determining a final reduction amount; and completing the order request using the final reduction amount. | 2022-07-21 |
20220229746 | SELECTING A WITNESS SERVICE WHEN IMPLEMENTING A RECOVERY PLAN - Methods, systems, and computer program products for selection of a witness during virtualization system recovery after a disaster event. A recovery plan is configured to identify a witness that is then used to elect a leader to implement the recovery. Various system, and/or network, and/or component failures and/or various loss of function of components of the virtualization system can trigger initiation of the recovery plan. Based on the particular recovery plan that is invoked upon a determination of a network outage, or component failure or loss of function of a component of the virtualization system, a particular witness corresponding to a subset of entities of the particular recovery plan is selected. The witness is used to elect the leader, and the leader initiates actions of the recovery plan. The implementation of the recovery plan includes consideration of the health of components that would potentially be involved in the recovery actions. | 2022-07-21 |
20220229747 | RECOVERING CONSISTENCY OF A RAID (REDUNDANT ARRAY OF INDEPENDENT DISKS) METADATA DATABASE - Technology is disclosed for recovering the consistency of a RAID (Redundant Array of Independent Disks) metadata database when data corruption is detected in the RAID metadata database. The RAID metadata database includes super sectors, stage sectors, and a data region. Valid data within the data region is a contiguous set of sectors extending from a head sector to a tail sector. In response to data corruption in one of the two super sectors, a set of pointers contained in the other super sector is used to identify the head sector and tail sector. In response to data corruption in both super sectors, the head sector and tail sector are located based on the contents of the sectors in the data region. Techniques are also disclosed for recovering consistency when the data corruption occurs in the stage sectors and/or data region. | 2022-07-21 |
20220229748 | NONVOLATILE MEMORY DEVICES, SYSTEMS AND METHODS FOR FAST, SECURE, RESILIENT SYSTEM BOOT - A storage device can include at least one nonvolatile (NV) memory array that includes a first section having a first physical address range, and a second section having a second physical address range. A nonvolatile fault indication can be set to at least a fault state or a no-fault state. A memory watchdog circuit can be configured to set the fault indication to the fault state in response to an expiration of a predetermined watchdog period, the watchdog period being reset in response to a defer indication. An address mapping circuit can be configured to, in response to the fault indication having the no-fault state, map input addresses to the first physical address range, and in response to the fault indication having the fault state, map the same input addresses to the second physical address range. Corresponding methods and systems are also disclosed. | 2022-07-21 |
20220229749 | ERASURE CODING REPAIR AVAILABILITY - Distributed storage systems frequently use a centralized metadata repository that stores metadata in an eventually consistent distributed database. However, a metadata repository cannot be relied upon for determining which erasure coded fragments are lost because of a storage node(s) failures. Instead, when recovering a failed storage node, a list of missing fragments is generated based on fragments stored in storage devices of available storage nodes. A storage node performing the recovery sends a request to one or more of the available storage nodes for a fragment list. The fragment list is generated, not based on a metadata database, but on scanning storage devices for fragments related to the failed storage node. The storage node performing the recovery merges retrieved lists to create a master list indicating fragments that should be regenerated for recovery of the failed storage node(s). | 2022-07-21 |
20220229750 | MEMORY BLOCK AGE DETECTION - Disclosed herein are related to an age detector for determining an age of a memory block, and a method of operation of the age detector. In one configuration, a memory system includes a memory block and an age detector coupled to the memory block. In one aspect, the memory block generates a first set of data in response to a first power on, and generates a second set of data in response to a second power on. In one configuration, the age detector includes a storage block to store the first set of data from the memory block, and an inconsistency detector to compare the first set of data and the second set of data. In one configuration, the age detector includes a controller to determine an age of the memory block, based on the comparison. | 2022-07-21 |
20220229751 | FAIL COMPARE PROCEDURE - Methods, systems, and devices for a fail compare procedure are described. An apparatus may include a host device coupled with a memory device. An application specific integrated circuit (ASIC) associated with the host device (e.g., included in, coupled with) may include a set of comparators that output first bit information that includes respective states of at least two bits of data read from the memory device. The host device may compare (e.g., at the ASIC) the first bit information to second bit information that includes respective expected states of the at least two bits. Based on the comparison, the host device may determine whether a state of at least one bit of the first bit information is different than a state of a corresponding bit of the second bit information, and may output one or more signals including indications of a fail to a counter of the ASIC. | 2022-07-21 |
20220229752 | GLITCH SUPPRESSION APPARATUS AND METHOD - An apparatus includes a main core processor configured to receive a first signal through a first main buffer, a second signal through a second main buffer, a third signal through a third main buffer and a fourth signal through a fourth main buffer, a shadow core processor configured to receive the first signal through a first shadow buffer, the second signal through a second shadow buffer, the third signal through a third shadow buffer and the fourth signal through a fourth shadow buffer, and a first glitch suppression buffer coupled to a common node of an input of the first main buffer and an input of the first shadow buffer. | 2022-07-21 |
20220229753 | EVENT-BASED OPERATIONAL DATA COLLECTION FOR IMPACTED COMPONENTS - A method comprises receiving a notification of an issue with at least one component of a plurality of components in a computing environment. One or more machine learning algorithms are used to determine one or more components of the plurality of components impacted by the issue with the at least one component. The method further comprises collecting operational data for the at least one component and the one or more impacted components. | 2022-07-21 |
20220229754 | DETERMINING CHANGES TO COMPONENTS OF A COMPUTING DEVICE PRIOR TO BOOTING TO A PRIMARY ENVIRONMENT OF THE COMPUTING DEVICE - An apparatus comprises a processing device configured to receive a request to boot a given computing device to a primary environment and, responsive to receiving the request, to obtain first inventory information for components of the given computing device utilizing a preinstallation environment of the given computing device. The processing device is also configured to analyze the first inventory information and second inventory information to determine whether there are any changes in the components of the given computing device prior to booting the given computing device to the primary environment, the second inventory information being previously stored in a support environment of the given computing device. The processing device is further configured to generate notifications based at least in part on determining that there are one or more changes in the components of the given computing device, and to provide the notifications at a user interface of the given computing device. | 2022-07-21 |
20220229755 | DOCKING STATIONS HEALTH - An example of an electronic device includes a processor to collect data of an input/output (I/O) interface of a docking station to which the electronic device is to couple, use at least one artificial intelligence (AI) processing model to process the collected data of the I/O interface to provide an AI processing model result that calculates past usage and predicts future usage of the I/O interface, and determine an estimated health of the docking station based on the AI processing model result. | 2022-07-21 |
20220229756 | USER EXPERIENCE SCORING AND USER INTERFACE - Systems and methods are described for providing and configuring an overall user experience score. Mobile and desktop user devices can collect and send data to a server about an application installed on the devices and the health of the devices. The server can use the application data and device health information to determine three scores for the application: a mobile score for a mobile version, a desktop score for a desktop version, and a device health score. The server can determine an overall user experience score based on the lowest of the three scores. The server can cause the overall user experience score to be displayed in a first graphical user interface (“GUI”). A second GUI can allow an administrator to reconfigure scoring metrics for the user experience scores by moving elements on a sliding bar that changes thresholds. | 2022-07-21 |
20220229757 | QUANTUM GATE BENCHMARKING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND MEDIUM - A method includes: obtaining a set of characterizing equations for a quantum gate to be benchmarked; in response to the set of characterizing equations including another quantum gate, determining whether the another quantum gate has been benchmarked as trusted; in response to the another quantum gate having been benchmarked as trusted, successively performing a quantum gate operation on each characterizing equation in the set of characterizing equations to obtain, for each characterizing equation, a corresponding measurement result; for each characterizing equation, determining whether the measurement result for the characterizing equation meets an expected result in the characterizing equation for characterization; and in response to the measurement results for each characterizing equation in the set of characterizing equations meeting the expected result, benchmarking the quantum gate to be benchmarked as trusted. | 2022-07-21 |
20220229758 | LOGGING TECHNIQUES FOR THIRD PARTY APPLICATION DATA - Embodiments of the present disclosure present devices, methods, and computer readable medium for techniques for measuring operational performance metrics, and presenting these metrics through an application programming interface (API) for developers to access for optimizing their applications. Exemplary metrics can include central processing unit or graphics processing unit time, foreground/background time, networking bytes (per application), location activity, display average picture luminance, cellular networking condition, peak memory, number of logical writes, launch and resume time, frame rates, and hang time. Regional markers can also be used to measure specific metrics for in application tasks. The techniques provide multiple user interfaces to help developers recognize the important metrics to optimize the performance of their applications. The data can be normalized over various different devices having different battery size, screen size, and processing requirements. The user interfaces can provide an intelligent method for visualizing performance changes for significant changes in application versions. | 2022-07-21 |
20220229759 | METHOD, DEVICE, AND SYSTEM FOR SIMULATION TEST - The present disclosure relates to intelligent driving technology, and provides a method, device, and system for simulation test. The method includes: retrieving data in a first format for a first sensor from a database; processing the data in the first format to obtain corresponding data in a second format, the second format being a format of data collection of the first sensor; and transmitting the data in the second format to a second computing device capable of performing a simulation test based on the data in the second format. The present disclosure can provide a simulation test solution for an intelligent system by providing a simulation test environment closer to the real-world environment. | 2022-07-21 |
20220229760 | LONG RUNNING WORKFLOWS FOR ROBOTIC PROCESS AUTOMATION - Systems and methods for executing a robotic process automation (RPA) workflow are provided. The RPA workflow is executed by a first robot. The execution of the RPA workflow is suspended by the first robot. A current context of the RPA workflow is serialized at a time of the suspension and the current context of the RPA workflow is stored. The execution of the RPA workflow is resumed by a second robot based on a triggering condition by retrieving the current context of the RPA workflow. The first robot and the second robot may be the same robot or different robots. | 2022-07-21 |
20220229761 | SYSTEMS AND METHODS FOR OPTIMIZING HARD DRIVE THROUGHPUT - The disclosed computer-implemented method includes accessing a hard drive to measure operational characteristics of the hard drive. The method next includes deriving hard drive health factors used to control the hard drive that are based on the measured operational characteristics. The derived hard drive health factors include an average per-seek time indicating an average amount of time the hard drive spends seeking specified data that is to be read and an average read speed indicating an average amount of time the hard drive spends reading the specified data. The method next includes determining, based on the hard drive health factors and the operational characteristics, an amount of load servicing capacity currently available at the hard drive, and then includes regulating the amount of load servicing performed by the hard drive according to the determined amount of available load servicing capacity. Various other methods, systems, and computer-readable media are also disclosed. | 2022-07-21 |
20220229762 | Robotic Process Automation (RPA) Debugging Systems And Methods - In some embodiments, a robotic process automation (RPA) robot is configured to identify a runtime target of an automation activity (e.g., a button to click, a form field to fill in, etc.) by searching a user interface for a UI element matching a set of characteristics of the target defined at design-time. When the target identification fails, some embodiments display an error message indicating which target characteristic could not be matched. Some embodiments further display for selection by the user a set of alternative target elements of the runtime interface. | 2022-07-21 |
20220229763 | CROSS-THREAD MEMORY INDEXING IN TIME-TRAVEL DEBUGGING TRACES - Exposing a memory cell value during trace replay prior to an execution time at which the memory cell value was recorded into a trace. A computer system identifies a trace fragment that records an uninterrupted consecutive execution of executable instructions. Based on performing an intra-fragment analysis of the trace fragment, the computer system determines that a memory cell value recorded into the trace fragment is compatible with memory access(es) to the memory cell that occurred during recording, prior to an event that caused the memory cell value to be recorded. The computer system determines that the memory cell value can be exposed, during trace replay, at a first execution time that is prior to a second execution time corresponding to the event that caused the value to be recorded, and generates output data indicating that the memory cell value can be exposed at the first execution time during trace replay. | 2022-07-21 |
20220229764 | AUTOMATED TEST REPLAY WITH SENSITIVE INFORMATION OBFUSCATION - According to examples, an apparatus may include a processor that may identify sensitive information in a recording of an automated test script that is replayed to automatically test a graphical user interface (GUI) of an application under test (AUT). The apparatus may identify the sensitive information during the recording, such that the sensitive information is identified as the automated test is recorded, or afterward based on an analysis of the recording, such as based on user input that identifies the sensitive information (or areas containing the sensitive information), automated text analysis, or automated image analysis such as machine-learning based object detection. Once the sensitive information (or area) is identified, the apparatus may generate and apply a mosaic to obscure the sensitive information (or area). | 2022-07-21 |
20220229765 | METHODS AND SYSTEMS FOR AUTOMATED SOFTWARE TESTING - In one aspect, a computerized method useful for automated software testing comprising: writing a test suite in a human-readable language; implementing an Artificial Intelligence (AI) process test suite; and creating a set of scripts and data, and executing tests. | 2022-07-21 |
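The workflow error handling described in application 20220229742 — a management agent running a sequence of actions, monitoring each action against its declared error conditions, and performing the workflow-specified error response on a match — can be sketched as follows. This is an illustrative reading of the abstract, not the filing's implementation; the workflow schema and all names are invented for the example.

```python
# Sketch of device-driven workflow error handling (per 20220229742):
# each action in the sequence carries a set of declared error conditions,
# and a matching failure triggers the workflow-specified error response.

log = []

def flaky_configure():
    raise TimeoutError("device did not respond")

def run_workflow(workflow):
    for step in workflow["actions"]:
        try:
            step["action"]()
            log.append(f"{step['name']}: ok")
        except Exception as err:
            # Monitor the action for a matching declared error condition.
            for condition, response in step.get("error_conditions", []):
                if isinstance(err, condition):
                    response(err)  # perform the workflow-specified response
                    break
            else:
                raise  # undeclared errors propagate unhandled

workflow = {
    "actions": [
        {"name": "install", "action": lambda: None},
        {"name": "configure", "action": flaky_configure,
         "error_conditions": [
             (TimeoutError, lambda e: log.append(f"configure: retry ({e})")),
         ]},
    ],
}
run_workflow(workflow)
print(log)  # ['install: ok', 'configure: retry (device did not respond)']
```

Keying responses to exception types mirrors the abstract's pairing of "individual actions" with "a set of error conditions"; a real agent would likely match on richer condition descriptors than exception classes.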
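The cache-fallback pattern in application 20220229745 — cache the initial order data under a cache identifier, and if the order-completion API fails, force-supply the cached reduction amount — can be sketched roughly as below. Class and method names are assumptions for illustration only.

```python
# Sketch of force-supplying cached data on API failure (per 20220229745):
# cache the initial reduction keyed by a cache identifier, then fall back
# to it when the order-completion API call fails.

class ApiFailure(Exception):
    """Raised when the order-completion API cannot fulfil a request."""

class OrderService:
    def __init__(self, api_call):
        self._api_call = api_call  # callable that may raise ApiFailure
        self._cache = {}           # cache identifier -> cached reduction

    def cache_initial_order(self, user_id, promotion_id, reduction):
        # Map the initial reduction amount to a cache identifier and cache it.
        self._cache[(user_id, promotion_id)] = reduction

    def complete_order(self, user_id, promotion_id, price):
        try:
            # Normal path: the API determines the final reduction.
            reduction = self._api_call(user_id, promotion_id)
        except ApiFailure:
            # Failure detected: retrieve and force-supply the cached amount.
            reduction = self._cache[(user_id, promotion_id)]
        return price - reduction

def broken_api(user_id, promotion_id):
    raise ApiFailure("promotion service unavailable")

svc = OrderService(broken_api)
svc.cache_initial_order("u1", "promo42", 15)
print(svc.complete_order("u1", "promo42", 100))  # 85: completed from cache
```

The design choice worth noting is that the cache is written on the initial order, before any failure, so the fallback value is already mapped when the API call later fails.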
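The recovery flow in application 20220229749 — requesting fragment lists produced by scanning surviving nodes' storage devices rather than trusting an eventually consistent metadata database, then merging them into a master list — reduces to a set-union merge. The sketch below models only that merge step; the node and fragment identifiers are invented, and the per-node scan is stubbed out.

```python
# Sketch of erasure-coding recovery list merging (per 20220229749): each
# surviving node scans its own storage for fragments related to the failed
# node, and the recovering node merges the returned lists into a master
# list of fragments to regenerate.

def request_fragment_list(node):
    # Stand-in for the RPC that makes a surviving node scan its storage
    # devices (not a metadata database) for fragments related to the
    # failed node.
    return node["related_fragments"]

def build_master_list(surviving_nodes):
    """Merge per-node scan results; duplicate reports from overlapping
    views collapse because the master list is a set."""
    master = set()
    for node in surviving_nodes:
        master |= request_fragment_list(node)
    return sorted(master)

survivors = [
    {"name": "n1", "related_fragments": {("objA", 2), ("objC", 0)}},
    {"name": "n2", "related_fragments": {("objA", 2), ("objB", 1)}},
]
print(build_master_list(survivors))
# [('objA', 2), ('objB', 1), ('objC', 0)]
```

Using a set makes the merge order-independent, which matters since the abstract allows requesting lists from "one or more" of the available nodes with overlapping views of the lost fragments.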