26th week of 2022 patent application highlights part 50
Patent application number | Title | Published |
20220206824 | VIRTUALIZED TRANSACTION TERMINAL PLATFORM - A virtualized transaction terminal platform is provided. A transaction terminal is configured as a thin-client terminal. A virtualized transaction terminal (Virtual Machine (VM)) is instantiated remotely on a cloud or a server over a network connection. Peripherals connected to the thin-client terminal are mapped to virtual peripheral device drivers on the cloud or the server. Physical peripherals connected to the thin-client terminal are mapped inside the VM to the corresponding virtual peripheral device drivers. As transactions are initiated and physical peripherals are operated at the thin-client terminal, the transactions are processed by the VM and inputs/outputs from the physical peripherals are forwarded for processing by the corresponding virtual peripheral device drivers. A remote desktop (RD) agent on the thin-client terminal keeps states of the VM and virtual peripheral device drivers in synchronization with a peripheral display of the thin-client terminal. | 2022-06-30 |
20220206825 | MEMORY-INDEPENDENT AND SCALABLE STATE COMPONENT INITIALIZATION FOR A PROCESSOR - Systems or methods of the present disclosure may provide an initialization technique that enables the initialization of multiple states in an efficient manner. The initialization technique includes a register to track usage of state components of the processor and a decode unit to decode a state initialization instruction. The state initialization instruction indicates that the state components are to be initialized. The initialization technique also includes an execution unit coupled with the decode unit. The execution unit, in response to the state initialization instruction, is to initialize the state components without reading another state component from memory as part of the initialization. | 2022-06-30 |
20220206826 | DETERMINING SEQUENCES OF INTERACTIONS, PROCESS EXTRACTION, AND ROBOT GENERATION USING ARTIFICIAL INTELLIGENCE / MACHINE LEARNING MODELS - Use of artificial intelligence (AI)/machine learning (ML) models is disclosed to determine sequences of user interactions with computing systems, extract common processes, and generate robotic process automation (RPA) robots. The AI/ML model may be trained to recognize matching n-grams of user interactions and/or a beneficial end state. Recorded real user interactions may be analyzed, and matching sequences may be implemented as corresponding activities in an RPA workflow. | 2022-06-30 |
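A minimal sketch of the matching step described in 20220206826, assuming recorded sessions are plain lists of action names; the n-gram length, frequency threshold, `common_sequences` helper, and sample data are invented for illustration, not the application's actual model:

```python
# Sketch: finding repeated n-grams in recorded user-interaction sequences,
# as a stand-in for the AI/ML matching step described in 20220206826.
from collections import Counter
from itertools import islice

def ngrams(seq, n):
    """Yield successive n-grams (tuples) from a sequence of actions."""
    return zip(*(islice(seq, i, None) for i in range(n)))

def common_sequences(recordings, n=3, min_count=2):
    """Count n-grams across all recorded sessions and keep the frequent ones."""
    counts = Counter()
    for session in recordings:
        counts.update(ngrams(session, n))
    return [gram for gram, c in counts.items() if c >= min_count]

recordings = [
    ["open_app", "click_new", "type_name", "click_save", "close_app"],
    ["open_app", "click_new", "type_name", "click_save", "open_report"],
]
for gram in common_sequences(recordings):
    # Each frequent n-gram is a candidate activity sequence for an RPA workflow.
    print(" -> ".join(gram))
```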
20220206827 | AUDIO PLAYBACK DEVICE AND METHOD FOR CONTROLLING OPERATION THEREOF - Provided are an audio reproduction device and a method of controlling an operation thereof, which involve a user interface that allows a user to more effectively control various functions. The audio reproduction device includes a processor configured to obtain function information of the audio reproduction device corresponding to a received user input, using mapping information that maps between user inputs (received via at least one wheel region rotatable clockwise or counterclockwise, at least one touch region, or both) and function information of the audio reproduction device, and to control the audio reproduction device according to the obtained function information. | 2022-06-30 |
20220206828 | INPUT VALUE SETTING ASSISTING APPARATUS, INPUT VALUE SETTING ASSISTING METHOD AND PROGRAM - An input value setting assistance device includes: an acquisition unit that displays a first screen including a plurality of input fields, and acquires an image of the first screen and a coordinate value regarding each of the input fields included in the first screen; and an assistance unit that displays a second screen including the image acquired by the acquisition unit regarding the first screen and setting fields for setting values for each of the input fields of the first screen and, when one of the setting fields is selected, displays a prescribed mark based on the coordinate value acquired by the acquisition unit for the input field related to the setting field. Thereby, the setting work of each of the input values prepared in advance for each of the input fields included in the screen can be effectively assisted. | 2022-06-30 |
20220206829 | VIRTUALIZATION PLATFORM CONTROL DEVICE, VIRTUALIZATION PLATFORM CONTROL METHOD, AND VIRTUALIZATION PLATFORM CONTROL PROGRAM - A virtualization infrastructure control device … | 2022-06-30 |
20220206830 | RUNNING ARBITRARY BINARIES AS UNIKERNELS ON EMBEDDED PROCESSORS - Orchestration of guest unikernel virtual machines on a host device includes determining hardware profile information associated with the host device. It further includes, based at least in part on the determined hardware profile information, configuring orchestration of the guest unikernel virtual machines to be provisioned by a hypervisor running on the host device. | 2022-06-30 |
20220206831 | METHOD AND SYSTEM FOR MANAGING APPLICATIONS ON A VIRTUAL MACHINE - A method and system for managing applications on a virtual machine includes creating a plurality of virtual machines on a computer system. Each virtual machine is isolated from one another. Resources are allocated to each virtual machine based upon a resource requirement of an application executing on each virtual machine. | 2022-06-30 |
20220206832 | CONFIGURING VIRTUALIZATION SYSTEM IMAGES FOR A COMPUTING CLUSTER - A plurality of different virtualization system images are configured for deployment to a plurality of nodes in heterogeneous environments. Individual ones of the virtualization system images are configured such that once deployed, the nodes form a computer cluster having a storage pool that is shared across the nodes. When configuring the virtualization system images, information that describes the heterogeneous computing environments is accessed, and constraints pertaining to the heterogeneous computing environments are reconciled in advance of configuring the different virtualization system images. A common subnet across the heterogeneous environments is established. The plurality of different virtualization system images are configured to access the common subnet once deployed. The common subnet serves as a storage I/O communication path over which a cluster-wide storage pool is implemented. The virtualization system images are configured to correspond to address portions of a contiguous address space that is used to access data in the storage pool. | 2022-06-30 |
20220206833 | PLACING VIRTUAL GRAPHICS PROCESSING UNIT (GPU)-CONFIGURED VIRTUAL MACHINES ON PHYSICAL GPUS SUPPORTING MULTIPLE VIRTUAL GPU PROFILES - In one set of embodiments, a computer system can receive a request to provision a virtual machine (VM) in a host cluster, where the VM is associated with a virtual graphics processing unit (GPU) profile indicating a desired or required framebuffer memory size of a virtual GPU of the VM. In response, the computer system can execute an algorithm that identifies, from among a plurality of physical GPUs installed in the host cluster, a physical GPU on which the VM may be placed, where the identified physical GPU has sufficient free framebuffer memory to accommodate the desired or required framebuffer memory size, and where the algorithm allows multiple VMs associated with different virtual GPU profiles to be placed on a single physical GPU in the plurality of physical GPUs. The computer system can then place the VM on the identified physical GPU. | 2022-06-30 |
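A minimal sketch of the placement decision in 20220206833, under a first-fit assumption the abstract does not spell out; the GPU names, framebuffer sizes, and profile sizes are invented:

```python
# Sketch: pick a physical GPU with enough free framebuffer for a VM's
# virtual-GPU profile, while permitting mixed profiles on one GPU.
class PhysicalGPU:
    def __init__(self, name, framebuffer_mb):
        self.name = name
        self.free_mb = framebuffer_mb
        self.placed = []  # (vm, profile_mb) pairs already on this GPU

def place_vm(gpus, vm, profile_mb):
    """First-fit: return the first GPU whose free framebuffer fits the profile."""
    for gpu in gpus:
        if gpu.free_mb >= profile_mb:
            gpu.free_mb -= profile_mb
            gpu.placed.append((vm, profile_mb))
            return gpu
    return None  # no GPU in the cluster can host this profile

gpus = [PhysicalGPU("gpu0", 16384), PhysicalGPU("gpu1", 16384)]
for vm, mb in [("vm-a", 8192), ("vm-b", 4096), ("vm-c", 8192)]:
    target = place_vm(gpus, vm, mb)
    print(vm, "->", target.name if target else "unplaced")
```

Note that vm-a and vm-b, carrying different profile sizes, land on the same physical GPU — the mixed-profile behavior the abstract highlights.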
20220206834 | DIRECT ACCESS STORAGE FOR PERSISTENT SERVICES IN A DISTRIBUTED STORAGE SYSTEM - An example virtualized computing system includes a cluster of hosts having a virtualization layer executing thereon and configured to manage virtual machines (VMs); first and second local storage devices in a first host, the first local storage device being part of a virtual storage area network (vSAN) and the second local storage device being exclusive of the vSAN; and an orchestration control plane, integrated with the virtualization layer and including a master server managing state of the orchestration control plane, the state including objects representing the hosts and the VMs, the orchestration control plane deploying a persistent application executing on a first VM, the persistent application storing persistent data on the second local storage device; and a virtualization management server configured to manage the cluster and to cooperate with the orchestration control plane to modify the state to notify the master server of a virtual infrastructure (VI) event. | 2022-06-30 |
20220206835 | APPARATUS FOR VIRTUALIZED REGISTERS AND METHOD AND COMPUTER PROGRAM PRODUCT FOR ACCESSING TO THE SAME - The invention relates to an apparatus for virtualized registers. The apparatus includes register space, group selectors, and a block selector. The register space is divided into physical blocks, each of which includes register groups, and each register group contains registers. Each group selector is coupled to a portion of the register groups in a corresponding physical block, and is arranged operably to enable one of the portion of the register groups in the corresponding physical block in accordance with a first control signal corresponding to a virtual device, or a function performed by the virtual device. The block selector, coupled to the group selectors, is arranged operably to enable one of the group selectors in accordance with a second control signal corresponding to a virtual machine instruction. The virtual machine instruction is translated into an operation of the virtual device. | 2022-06-30 |
20220206836 | Method and Apparatus for Processing Virtual Machine Migration, Method and Apparatus for Generating Virtual Machine Migration Strategy, Device and Storage Medium - Embodiments of the present disclosure provide a method and an apparatus for processing virtual machine migration, a method and an apparatus for generating a virtual machine migration strategy, a device and a storage medium. In a case where idle resources on each single one of multiple physical hosts in a system do not meet a resource requirement from a virtualized network function (VNF) but total idle resources on the multiple physical hosts meet the resource requirement from the VNF, a virtual machine migration strategy is determined according to resource information about resources currently occupied on each of the multiple physical hosts and corresponding service information, and live migration may be performed on virtual machines. | 2022-06-30 |
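One plausible reading of the migration planning in 20220206836, sketched with a single scalar standing in for the multi-dimensional resource and service information; the greedy donor-selection heuristic is an assumption, not the claimed strategy:

```python
# Sketch: no single host fits the VNF, but total idle resources do, so
# existing VMs are live-migrated off one host to clear space there.
def plan_migrations(hosts, vnf_demand):
    """Pick the host closest to fitting, then move its VMs elsewhere."""
    target = max(hosts, key=lambda h: h["idle"])
    moves = []
    for vm in sorted(list(target["vms"]), key=lambda v: v["size"]):
        if target["idle"] >= vnf_demand:
            break
        donor = next((h for h in hosts
                      if h is not target and h["idle"] >= vm["size"]), None)
        if donor is None:
            continue  # this VM cannot be moved anywhere right now
        moves.append((vm["name"], target["name"], donor["name"]))
        donor["idle"] -= vm["size"]
        target["idle"] += vm["size"]
        target["vms"].remove(vm)
    return (target["name"] if target["idle"] >= vnf_demand else None), moves

hosts = [
    {"name": "h1", "idle": 4, "vms": [{"name": "vm1", "size": 4}]},
    {"name": "h2", "idle": 6, "vms": [{"name": "vm2", "size": 2}]},
]
placement, moves = plan_migrations(hosts, vnf_demand=8)
print("place VNF on:", placement, "after moves (vm, from, to):", moves)
```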
20220206837 | SYSTEM AND METHOD FOR GENERATING AND USING A CONTEXT BLOCK BASED ON SYSTEM PARAMETERS - A system and method for generating a context block using system parameters. The system parameters include objective parameters, functionality parameters, and interface definitions. Context field definitions are received. The system parameters and context field definitions may be used to determine context fields and context entries. The system parameters may be used to determine context fields and the number of context entries. The context module hardware description may be created using the context fields, number of context entries, and context field definitions. | 2022-06-30 |
20220206838 | ADAPTIVE THREAD GROUP DISPATCH - One or more shader processor inputs (SPIs) provide work items from a thread group for execution on one or more shader engines. A command processor selectively dispatches the work items to the SPIs based on a size of the thread group and a format of cache lines of a cache implemented in the one or more shader engines. The command processor operates in a tile mode in which the command processor schedules the work items in multidimensional blocks that correspond to the format of the cache lines. In some cases, the format of the cache lines is determined by a texture surface format and a swizzle mode for storing texture data. The SPIs (or corresponding drivers) adaptively select wave size, tile size, and wave walk mode based on thread group size and UAV surface format. The SPIs adaptively launch and schedule waves in a thread group based on the selected tile size, wave walk mode, and wave size to improve cache locality, reduce memory accesses, and create address patterns that improve memory efficiency. | 2022-06-30 |
20220206839 | ADDRESS MAPPING-AWARE TASKING MECHANISM - An Address Mapping-Aware Tasking (AMAT) mechanism manages compute task data and issues compute tasks on behalf of threads that created the compute task data. The AMAT mechanism stores compute task data generated by host threads in a set of partitions, where each partition is designated for a particular memory module. The AMAT mechanism maintains address mapping data that maps address information to partitions. Threads push compute task data to the AMAT mechanism instead of generating and issuing their own compute tasks. The AMAT mechanism uses address information included in the compute task data and the address mapping data to determine partitions in which to store the compute task data. The AMAT mechanism then issues compute tasks to be executed near the corresponding memory modules (i.e., in PIM execution units or NUMA compute nodes) based upon the compute task data stored in the partitions. | 2022-06-30 |
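A toy rendering of the AMAT flow in 20220206839, assuming a simple page-interleaved address-to-module mapping; the partition count and task format are illustrative:

```python
# Sketch: task data is pushed into a partition chosen from its address, so
# the eventual compute task runs near the memory module owning that address.
NUM_MODULES = 4

def partition_for(address):
    """Map an address to a memory-module partition (interleaved by 4 KiB page)."""
    return (address >> 12) % NUM_MODULES

partitions = [[] for _ in range(NUM_MODULES)]

def push_task_data(address, payload):
    """Threads push task data here instead of issuing their own tasks."""
    partitions[partition_for(address)].append((address, payload))

def issue_tasks():
    """Issue one batch of near-memory tasks per partition."""
    for module, tasks in enumerate(partitions):
        if tasks:
            print(f"module {module}: issue {len(tasks)} task(s)")
            tasks.clear()

for addr in (0x0000, 0x1000, 0x2000, 0x5000, 0x9000):
    push_task_data(addr, payload="increment")
issue_tasks()
```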
20220206840 | Timer Processing Method, Apparatus, Electronic Device and Computer Storage Medium - Timer processing method, apparatus, electronic device and computer storage medium are provided. The timer processing method includes: determining to perform timer switching on a virtual local timer used by a virtual processor according to preset timer switching condition(s); determining a physical processor that runs the virtual processor, and switching a physical local timer currently used by the physical processor to a physical global timer; and performing a timer configuration for the virtual processor to enable the physical local timer to act as a timer of the virtual processor. Through the embodiments of the present disclosure, the additional overhead imposed on a virtual machine system by switching between virtual and physical timers is avoided. | 2022-06-30 |
20220206841 | DYNAMIC GRAPHICAL PROCESSING UNIT REGISTER ALLOCATION - Systems, apparatuses, and methods for dynamic graphics processing unit (GPU) register allocation are disclosed. A GPU includes at least a plurality of compute units (CUs), a control unit, and a plurality of registers for each CU. If a new wavefront requests more registers than are currently available on the CU, the control unit spills registers associated with stack frames at the bottom of a stack since they will not likely be used in the near future. The control unit has complete flexibility determining how many registers to spill based on dynamic demands and can prefetch the upcoming necessary fills without software involvement. Effectively, the control unit manages the physical register file as a cache. This allows younger workgroups to be dynamically descheduled so that older workgroups can allocate additional registers when needed to ensure improved fairness and better forward progress guarantees. | 2022-06-30 |
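A sketch of the spill policy in 20220206841, treating the register file as a cache and spilling bottom-of-stack frames first; the register counts and frame model are assumptions:

```python
# Sketch: when a new wavefront needs more registers than are free, spill
# registers belonging to stack frames at the bottom of the stack, since
# they are least likely to be used in the near future.
class RegisterFile:
    def __init__(self, total_regs):
        self.free = total_regs
        self.frames = []  # bottom of stack first: (frame_id, regs_held)

    def allocate(self, frame_id, regs_needed):
        while self.free < regs_needed and self.frames:
            old_id, held = self.frames.pop(0)  # spill bottom-most frame
            self.free += held
            print(f"spill frame {old_id} ({held} regs) to memory")
        if self.free < regs_needed:
            return False  # deschedule: even spilling everything is not enough
        self.free -= regs_needed
        self.frames.append((frame_id, regs_needed))
        return True

rf = RegisterFile(total_regs=256)
rf.allocate("wave0", 128)
rf.allocate("wave1", 96)
rf.allocate("wave2", 64)   # forces a spill of wave0's frame
```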
20220206842 | METHODS, APPARATUS, SYSTEMS, AND INSTRUCTIONS TO MIGRATE PROTECTED VIRTUAL MACHINES - Techniques for migration of a source protected virtual machine from a source platform to a destination platform are described. A method of an aspect includes enforcing that bundles of state, of a first protected virtual machine (VM), received at a second platform over a stream, during an in-order phase of a migration of the first protected VM from a first platform to the second platform, are imported to a second protected VM of the second platform, in a same order that they were exported from the first protected VM. Receiving a marker over the stream marking an end of the in-order phase. Determining that all bundles of state exported from the first protected VM prior to export of the marker have been imported to the second protected VM. Starting an out-of-order phase of the migration based on the determination that said all bundles of the state exported have been imported. | 2022-06-30 |
20220206843 | BLOCKING/UNBLOCKING ALGORITHMS FOR SIGNALING OPTIMIZATION IN A WIRELESS NETWORK FOR TRAFFIC UTILIZING PROPRIETARY AND NON-PROPRIETARY PROTOCOLS - A method of optimizing traffic on a mobile device includes determining that an application is inactive based on historical behavior of the application and blocking traffic originating from or directed towards the application that is determined to be inactive based on historical behavior. A related mobile device is also provided. | 2022-06-30 |
20220206844 | SCHEDULING RESOURCE RESERVATIONS IN A CLOUD-BASED COMMUNICATION SYSTEM - Scheduling resource reservations in a cloud-based communications system. One embodiment provides a scheduling server for scheduling resource reservation in a cloud-based communication system. The scheduling server includes an electronic processor configured to monitor events outside of the cloud-based communication system to determine an occurrence of an incident and determine cloud computing resources to be allocated to consuming communication devices assigned to respond to the incident. The electronic processor is also configured to reserve the cloud computing resources such that the cloud computing resources are available to the consuming communication devices for responding to the incident. | 2022-06-30 |
20220206845 | APPARATUS, SYSTEM, AND METHOD FOR MULTI-LEVEL INSTRUCTION SCHEDULING IN A MICROPROCESSOR - Aspects disclosed in the detailed description include multi-level instruction scheduling in a processor. Related methods and systems are also disclosed. In one exemplary aspect, an apparatus is provided that comprises a scheduler circuit comprising a scheduling group circuit, a first selection circuit, and a second selection circuit. The scheduling group circuit comprises a plurality of groups of scheduling entries, each scheduling entry among the groups of scheduling entries comprising an instruction portion and a ready portion, and each group configured to have its scheduling entries written in-order. The scheduling group circuit is further configured to maintain group age information associated with each group of the plurality of groups. The first selection circuit is configured to select a first in-order ready entry from each group. The second selection circuit is configured to select the first in-order ready entry belonging to the oldest group based on the group age information for scheduling. | 2022-06-30 |
20220206846 | DYNAMIC DECOMPOSITION AND THREAD ALLOCATION - Devices and techniques for thread scheduling control and memory splitting in a processor are described herein. An apparatus includes a hardware interface configured to receive a first request to execute a first thread, the first request including an indication of a workload; and processing circuitry configured to: determine the workload to produce a metric based at least in part on the indication; compare the metric with a threshold to determine that the metric is beyond the threshold; divide, based at least in part on the comparison, the workload into a set of sub-workloads consisting of a predefined number of equal parts from the workload; create a second request to execute a second thread, the second request including a first member of the set of sub-workloads; and process a second member of the set of sub-workloads in the first thread. | 2022-06-30 |
20220206847 | SYSTEMS AND METHODS FOR COLLECTING AND SENDING REAL-TIME DATA - Example implementations described herein involve a system that manages a dispatch of data within an Internet of Things (IoT) system that can involve a first process for intaking new data and conducting one of dispatching the new data or queuing the new data; a second process executed at lower priority than the first process involving determining if queued data exceeds a retry count; forwarding the queued data to a third process if the retry count does not exceed the threshold; and popping the queued data into an error process if the queued data exceeds the retry count; and the third process executed after receiving the queued data from the second process, involving attempting to dispatch the queued data. | 2022-06-30 |
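The three-process retry flow of 20220206847, sketched with an in-memory queue; `MAX_RETRIES`, the failure stub, and the scheduling (plain function calls rather than real process priorities) are all illustrative assumptions:

```python
# Sketch: a low-priority process walks the retry queue, hands entries under
# the retry limit back for re-dispatch, and diverts entries over the limit
# to error handling.
from collections import deque

MAX_RETRIES = 3
retry_queue = deque()

def try_dispatch(item):
    """Stand-in for the real network send; pretend odd-numbered items succeed."""
    return item["id"] % 2 == 1

def intake(item):                      # first (high-priority) process
    if not try_dispatch(item):
        item["retries"] = item.get("retries", 0) + 1
        retry_queue.append(item)

def drain_retries():                   # second (lower-priority) process
    for _ in range(len(retry_queue)):
        item = retry_queue.popleft()
        if item["retries"] > MAX_RETRIES:
            print("error process:", item["id"])   # pop into error handling
        else:
            item["retries"] += 1
            if not try_dispatch(item):            # third process re-attempts
                retry_queue.append(item)

for i in range(4):
    intake({"id": i})
for _ in range(5):
    drain_retries()
print("still queued:", [it["id"] for it in retry_queue])
```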
20220206848 | ON-DEMAND CLOUD ROBOTS FOR ROBOTIC PROCESS AUTOMATION - Systems and methods for implementing robotic process automation (RPA) in the cloud are provided. An instruction for managing an RPA robot is received at an orchestrator in a cloud computing environment from a user in a local computing environment. In response to receiving the instruction, the instruction for managing the RPA robot is effectuated. | 2022-06-30 |
20220206849 | HARDWARE SUPPORT FOR LOW LATENCY MICROSERVICE DEPLOYMENTS IN SWITCH - Methods and apparatus for hardware support for low latency microservice deployments in switches. A switch is communicatively coupled via a network or fabric to a plurality of platforms configured to implement one or more microservices. The microservices are used to perform a distributed workload, job, or task as defined by a corresponding graph representation of the microservices including vertices (also referred to as nodes) associated with microservices and edges defining communication between microservices. The graph representation also defines dependencies between microservices. The switch is configured to schedule execution of the graph of microservices on the plurality of platforms, including generating an initial schedule that is dynamically revised during runtime in consideration of performance telemetry data for the microservices received from the platforms and network/fabric utilization monitored onboard the switch. The switch also may include memory in which graph representations, microservice tables, and node-to-microservice maps are stored. | 2022-06-30 |
20220206850 | METHOD AND APPARATUS FOR PROVIDING NON-COMPUTE UNIT POWER CONTROL IN INTEGRATED CIRCUITS - Methods and apparatus employ a plurality of heterogeneous compute units and a plurality of non-compute units operatively coupled to the plurality of compute units. Power management logic (PML) determines a memory bandwidth level associated with a respective workload running on each of a plurality of heterogeneous compute units on the IC, and adjusts a power level of at least one non-compute unit of a memory system on the IC from a first power level to a second power level, based on the determined memory bandwidth levels. Memory access latency is also taken into account in some examples to adjust a power level of non-compute units. | 2022-06-30 |
20220206851 | REGENERATIVE WORK-GROUPS - A method and processing apparatus are provided for executing a program. The processing apparatus comprises memory and a processor. The processor is configured to dispatch a parent work group of a program to be executed and execute a spawn work group instruction to enable a child work group of the parent work group to be executed. The processor is also configured to dispatch the child work group for execution when a sufficient amount of resources are determined to be available to execute the child work group and execute the child work group on one or more compute units. The spawn work group instruction comprises a pointer to a synchronization variable, and the processor is also configured to execute a join workgroup instruction which comprises the pointer to the synchronization variable in the spawn work group instruction. | 2022-06-30 |
20220206852 | LOCKLESS HANDLING OF BUFFERS FOR REMOTE DIRECT MEMORY ACCESS (RDMA) I/O OPERATIONS - Methods, systems and computer program products for lockless acquisition of memory for RDMA operations. A contiguous physical memory region is allocated. The contiguous physical memory region is divided into a plurality of preregistered chunks that are assigned to one or more process threads that are associated with an RDMA NIC. When responding to a request from a particular one of the one or more process threads, a buffer carved from the preregistered chunk of the contiguous physical memory region is assigned to the requesting process thread. Since the memory is pre-registered, and since the associations are made at the thread level, there is no need for locks when acquiring a buffer. Furthermore, since the memory is pre-registered, the threads do not incur registration latency. The contiguous physical memory region can be a contiguous HugePage contiguous region from which a plurality of individually allocatable buffers can be assigned to different threads. | 2022-06-30 |
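A sketch of the lockless carving idea in 20220206852: per-thread chunks from one preregistered region, bump-allocated without locks. Actual RDMA memory registration (e.g., through a verbs library) is not modeled here, and the sizes are invented:

```python
# Sketch: one big preregistered region split into per-thread chunks; each
# thread bump-allocates buffers from its own chunk, so no lock is needed
# and no per-buffer registration latency is incurred.
import threading

CHUNK_SIZE = 1 << 20                 # 1 MiB chunk per thread
region = bytearray(4 * CHUNK_SIZE)   # stands in for the HugePage region
chunks = {}                          # thread name -> [chunk_base, next_free]

def assign_chunk(thread_name, index):
    chunks[thread_name] = [index * CHUNK_SIZE, 0]

def get_buffer(thread_name, size):
    """Bump-allocate from the calling thread's own chunk: no lock required."""
    base, used = chunks[thread_name]
    if used + size > CHUNK_SIZE:
        raise MemoryError("chunk exhausted; would need a new chunk")
    chunks[thread_name][1] = used + size
    return memoryview(region)[base + used: base + used + size]

def worker(name):
    buf = get_buffer(name, 4096)   # ready for an RDMA post, no registration
    buf[:5] = b"hello"

for i, name in enumerate(("t0", "t1")):
    assign_chunk(name, i)
    t = threading.Thread(target=worker, args=(name,))
    t.start(); t.join()
print(bytes(region[:5]), bytes(region[CHUNK_SIZE:CHUNK_SIZE + 5]))
```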
20220206853 | HYBRID LOW POWER HOMOGENOUS GRAPHICS PROCESSING UNITS - In an example, an apparatus comprises a plurality of execution units comprising at least a first type of execution unit and a second type of execution unit and logic, at least partially including hardware logic, to analyze a workload and assign the workload to one of the first type of execution unit or the second type of execution unit. Other embodiments are also disclosed and claimed. | 2022-06-30 |
20220206854 | APPARATUSES, METHODS, AND SYSTEMS FOR INSTRUCTIONS FOR ALIGNING TILES OF A MATRIX OPERATIONS ACCELERATOR - Systems, methods, and apparatuses relating to one or more instructions for element aligning of a tile of a matrix operations accelerator are described. In one embodiment, a system includes a matrix operations accelerator circuit comprising a two-dimensional grid of processing elements, a first plurality of registers that represents a first two-dimensional matrix coupled to the two-dimensional grid of processing elements, and a second plurality of registers that represents a second two-dimensional matrix coupled to the two-dimensional grid of processing elements; and a hardware processor core coupled to the matrix operations accelerator circuit and comprising a decoder circuit to decode a single instruction into a decoded instruction, the single instruction including a first field that identifies the first two-dimensional matrix, a second field that identifies the second two-dimensional matrix, and an opcode that indicates an execution circuit of the hardware processor core is to cause the matrix operations accelerator circuit to generate a third two-dimensional matrix from a proper subset of elements of a row or a column of the first two-dimensional matrix and a proper subset of elements of a row or a column of the second two-dimensional matrix and store the third two-dimensional matrix at a destination in the matrix operations accelerator circuit, and the execution circuit of the hardware processor core to execute the decoded instruction according to the opcode. | 2022-06-30 |
20220206855 | OFFLOADING COMPUTATIONS FROM A PROCESSOR TO REMOTE EXECUTION LOGIC - Offloading computations from a processor to remote execution logic is disclosed. Offload instructions for remote execution on a remote device are dispatched in the form of processor instructions like conventional instructions. In the processor, an offload instruction is inserted in an offload queue. The offload instruction may be inserted at the dispatch stage or the retire stage of the processor pipeline. Metadata for the offload instruction is added to the offload instruction in the offload queue. After retirement of the offload instruction, the processor transmits an offload request generated from the offload instruction. | 2022-06-30 |
20220206856 | STREAM-BASED JOB PROCESSING - Systems and techniques for managing and executing digital workflows are described. A technique described includes obtaining a job record from a job queue from a first server; assigning a node associated with a second server to handle a task indicated by the job record; operating, at the second server, a first action block in the node to produce output results in response to executing the task and to forward the output results to batch blocks; operating, at the second server, the batch blocks in the node to respectively accumulate different batch groups of the output results; operating, at the second server, the batch blocks in the node to respectively forward the different batch groups of the output results to respective second action blocks; and operating, at the second server, the second action blocks in the node to respectively process the different batch groups of the output results. | 2022-06-30 |
20220206857 | TECHNOLOGIES FOR PROVIDING DYNAMIC SELECTION OF EDGE AND LOCAL ACCELERATOR RESOURCES - Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function. | 2022-06-30 |
20220206858 | METHOD AND APPARATUS FOR RELAYING DIGITAL CERTIFICATE PROCESSING TASK, MEDIUM, AND PROGRAM PRODUCT - The present disclosure relates to a method and an apparatus for relaying a digital certificate processing task, a medium, and a program product. The method for relaying the digital certificate processing task includes establishing a first connection with a digital certificate processing apparatus and a second connection with a mining pool, obtaining the digital certificate processing task from a mining pool through the second connection, and assigning the digital certificate processing task to the digital certificate processing apparatus through the first connection. | 2022-06-30 |
20220206859 | System and Method for a Self-Optimizing Reservation in Time of Compute Resources - A system and method of dynamically controlling a reservation of resources within a cluster environment to maximize a response time are disclosed. The method embodiment of the invention comprises receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, evaluating resources within the cluster environment to determine if the response time can be improved and if the response time can be improved, then canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time. | 2022-06-30 |
20220206860 | System and Method for a Self-Optimizing Reservation in Time of Compute Resources - A system and method of dynamically controlling a reservation of resources within a cluster environment to maximize a response time are disclosed. The method embodiment of the invention comprises receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, evaluating resources within the cluster environment to determine if the response time can be improved and if the response time can be improved, then canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time. | 2022-06-30 |
20220206861 | System and Method for a Self-Optimizing Reservation in Time of Compute Resources - A system and method of dynamically controlling a reservation of resources within a cluster environment to maximize a response time are disclosed. The method embodiment of the invention comprises receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, evaluating resources within the cluster environment to determine if the response time can be improved and if the response time can be improved, then canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time. | 2022-06-30 |
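The reserve/evaluate/re-reserve loop shared by the three related applications above (20220206859 through 20220206861), sketched under the assumption that "improved response time" means an earlier group start time; the node model and availability timings are invented:

```python
# Sketch: reserve a first group of resources, keep watching the cluster, and
# if a group with an earlier start time appears, cancel the first reservation
# and take the better one.
def best_group(nodes, count):
    free = sorted((n for n in nodes if not n["reserved"]),
                  key=lambda n: n["available_at"])
    return free[:count]

def start_time(group):
    return max(n["available_at"] for n in group)  # group starts when all are free

def reserve(group, flag=True):
    for n in group:
        n["reserved"] = flag

nodes = [{"name": f"n{i}", "available_at": t, "reserved": False}
         for i, t in enumerate([40, 10, 30])]
current = best_group(nodes, 2)
reserve(current)
print("initial group:", [n["name"] for n in current], "start:", start_time(current))

nodes.append({"name": "n3", "available_at": 5, "reserved": False})  # cluster changed
reserve(current, False)                       # release while re-evaluating
better = best_group(nodes, 2)
if start_time(better) < start_time(current):  # improvement found: switch groups
    current = better
reserve(current)
print("final group:  ", [n["name"] for n in current], "start:", start_time(current))
```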
20220206862 | AUTONOMOUS AND EXTENSIBLE RESOURCE CONTROL BASED ON SOFTWARE PRIORITY HINT - Embodiments of apparatuses, methods, and systems for resource control based on software priority are described. In embodiments, an apparatus includes resource sharing hardware and multiple cores. The resource sharing hardware is to share the shared resource among the cores. A first core includes first execution circuitry to execute multiple threads. The first core also includes registers programmable by software. A first register is to store a first identifier of a first thread and a first priority tag to indicate a first priority of the first thread relative to a second priority of a second thread. A second register to store a second identifier of the second thread and a second priority tag to indicate the second priority of the second thread relative to the first priority of the first thread. The resource sharing hardware is to use the first priority and the second priority to control access to the shared resource by the first thread and the second thread. | 2022-06-30 |
20220206863 | APPARATUS AND METHOD TO DYNAMICALLY OPTIMIZE PARALLEL COMPUTATIONS - The invention provides a method of optimizing a parallel computing system including a plurality of processing element types by applying a generalized Amdahl law relating the speed-up of the system, the numbers of processing elements of each type, and the fraction of the code portion of each concurrency which is parallelizable. The invention can be used to determine the change in accelerator processing elements required to obtain a desired speed-up. | 2022-06-30 |
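A worked reading of the generalized Amdahl relation, using the common form S = 1 / Σ(f_i / n_i), where f_i is the fraction of the code at concurrency i and n_i the number of processing elements serving it; the application's exact formulation may differ:

```python
# Sketch: compute system speed-up for a mix of processing element types and
# sweep the accelerator count to see how many PEs a desired speed-up needs.
def speedup(fractions, pe_counts):
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(f / n for f, n in zip(fractions, pe_counts))

# Assumed workload: 20% serial on 1 CPU, 50% parallel across 8 CPUs,
# 30% offloadable to a variable number of accelerator PEs.
fractions = [0.2, 0.5, 0.3]
for accel in (1, 2, 4, 8, 16):
    s = speedup(fractions, [1, 8, accel])
    print(f"{accel:2d} accelerator PE(s): speed-up = {s:.2f}")
```

The serial fraction bounds the result: even with unlimited accelerator PEs, the speed-up here cannot exceed 1 / (0.2 + 0.5/8) ≈ 3.8.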
20220206864 | WORKLOAD EXECUTION BASED ON DEVICE CHARACTERISTICS - Examples described herein relate to causing execution of a workload on a device based on characteristics of the device and based on metadata associated with the device identifying execution requirements and software and hardware compatibilities between the device and a platform environment. In some examples, an accelerator device is selected to execute a workload based on characteristics of the accelerator device and based on software and hardware compatibilities between the device and a platform environment of the accelerator device. | 2022-06-30 |
20220206865 | DISTRIBUTED ARTIFICIAL INTELLIGENCE FABRIC CONTROLLER - In general, this disclosure describes techniques for configuring and provisioning, with a distributed artificial intelligence (AI) fabric controller, network resources in an AI fabric for use by AI applications. In one example, the AI fabric controller is configured to discover available resources communicatively coupled to a cloud exchange; obtain a set of candidate solutions, each candidate solution of the set of candidate solutions comprising an AI application and a configuration of resources for use by the AI application; filter, based on one or more execution metrics corresponding to each of the candidate solutions, the set of candidate solutions to generate a filtered set of candidate solutions; generate provisioning scripts for the filtered set of candidate solutions; execute the provisioning scripts to provision resources for each candidate solution in the filtered set of candidate solutions; and create an execution environment for each candidate solution in the filtered set of candidate solutions. | 2022-06-30 |
20220206866 | OPTIMAL CALIBRATION OF GATES IN A QUANTUM COMPUTING SYSTEM - A method of performing a quantum computation process includes mapping, by a classical computer, logical qubits to physical qubits of a quantum processor so that quantum circuits are executable using the physical qubits of the quantum processor and a total infidelity of the plurality of quantum circuits is minimized, wherein each of the physical qubits comprises a trapped ion, and each of the plurality of quantum circuits comprises single-qubit gates and two-qubit gates within the plurality of the logical qubits, calibrating, by a system controller, two-qubit gates within a first plurality of pairs of physical qubits, such that infidelity of the two-qubit gates within the first plurality of pairs of physical qubits is lowered, executing the plurality of quantum circuits on the quantum processor, by applying laser pulses that each cause a single-qubit gate operation and a two-qubit gate operation in each of the plurality of quantum circuits on the plurality of physical qubits, measuring, by the system controller, population of qubit states of the physical qubits in the quantum processor after executing the plurality of quantum circuits on the quantum processor, and outputting, by the classical computer, the measured population of qubit states of the physical qubits as a result of the execution of the plurality of quantum circuits, wherein the result of the execution of the plurality of quantum circuits is configured to be displayed on a user interface, stored in a memory of the classical computer, or transferred to another computational device. | 2022-06-30 |
20220206867 | SYSTEM AND METHOD FOR FACILITATING MANAGEMENT OF CLOUD INFRASTRUCTURE BY USING SMART BOTS - A system and method for facilitating management of cloud infrastructure by using smart bots is disclosed. The method includes obtaining one or more insights associated with one or more user accounts on a cloud infrastructure from one or more cloud infrastructure resources and determining one or more cloud infrastructure issues associated with the one or more user accounts by validating the obtained one or more insights based on a set of predefined rules. The method further includes creating one or more customized bots for the determined one or more cloud infrastructure issues based on one or more user parameters by using a rule engine based AI model and deploying the created one or more customized bots on the one or more cloud infrastructure resources. Further, the method includes managing the cloud infrastructure via the deployed one or more customized bots. | 2022-06-30 |
20220206868 | EDGE COMPUTE ENVIRONMENT CONFIGURATION TOOL - A tool is provided to configure an edge compute environment of a network. The edge compute network configuration tool may generate a configuration process for instantiating an edge compute environment at an edge site of a network including configuring one or more of the components of the edge compute environment. The configuration process may include generating automatically executed configuration instructions that communicate with the devices of the edge compute environment to configure operational processes of the devices, provision communication ports, establish one or more network addresses with the devices, etc. In some instances, the edge compute configuration tool may execute one or more micro-services to communicate with and control configuration of the devices of the edge compute environment. In addition, in some instances, a content delivery network may be used to deliver configuration data to the device being configured. | 2022-06-30 |
20220206869 | VIRTUALIZING RESOURCES OF A MEMORY-BASED EXECUTION DEVICE - Virtualizing resources of a memory-based execution device is disclosed. A host processing system orchestrates the execution of two or more offload tasks on a remote execution device. The remote execution device includes a memory array coupled to a processing unit that is shared by concurrent processes on the host processing system. The host processing system provides time-multiplexed access to the processing unit by each concurrent process for completing offload tasks on the processing unit. The host processing system initiates a context switch on the remote execution device from a first offload task to a second offload task. The context state of the first offload task is saved on the remote execution device. | 2022-06-30 |
20220206870 | SYSTEMS AND METHODS OF CREATING AND OPERATING A CLOUDLESS INFRASTRUCTURE OF COMPUTING DEVICES - Aspects involve an apparatus, device, systems, and methods for instantiating and operating a cloudless infrastructure of computing devices that communicate peer-to-peer and mostly off-grid (or otherwise without communicating through a conventional centralized network) to share resources, access, and provide services and applications, store and access data and other information, and the like. The systems may provide services to connecting computing devices, such as user devices, personal computing devices, mobile devices, laptops, personal computers, Internet of Things (IoT) devices etc., in communication with one or more of the nodes of the infrastructure. The infrastructure exchanges or manages communications, transactions, and/or data in a cloudless and/or decentralized environment to freely exchange information between the nodes to allow the infrastructure to scale in response to client demands, adapt the infrastructure to a failed node with minimal impact on connected computing devices, and provide robust security to customer information, communications, and devices. | 2022-06-30 |
20220206871 | TECHNIQUES FOR WORKLOAD BALANCING USING DYNAMIC PATH STATE MODIFICATIONS - Rebalancing the workload of logical devices across multiple nodes may include dynamically modifying preferred paths for one or more logical devices in order to rebalance the I/O workload of the logical devices among the nodes of the data storage system. Determining whether to rebalance the I/O workload between the two nodes may be performed in accordance with one or more criteria. Processing may include monitoring the current workloads of both nodes over time and periodically evaluating, in accordance with the one or more criteria, whether the current workloads of the nodes are imbalanced. Responsive to determining, in accordance with the criteria, that rebalancing of workload between the nodes is needed, the rebalancing may be performed. A notification may be sent to the host regarding any path state changes made as a result of the workload rebalancing. | 2022-06-30 |
20220206872 | TRANSPARENT DATA TRANSFORMATION AND ACCESS FOR WORKLOADS IN CLOUD ENVIRONMENTS - A computer-implemented method of providing data transformation includes installing one or more data transformation plugins in a dataset made accessible for processing an end user's workload. A dataset-specific policy for the accessible dataset is ingested. A data transformation of the accessible dataset is executed by invoking one or more of the data transformation plugins to the accessible dataset based on the dataset-specific policy to generate a transformed dataset. The user's workload is deployed to provide data access for processing using the transformed dataset in accordance with a data governance policy. | 2022-06-30 |
20220206873 | PRE-EMPTIVE CONTAINER LOAD-BALANCING, AUTO-SCALING AND PLACEMENT - A resource usage platform is disclosed. The platform performs preemptive container load balancing, auto scaling, and placement in a computing system. Resource usage data is collected from containers and used to train a model that generates inferences regarding resource usage. The resource usage operations are performed based on the inferences and on environment data such as available resources, service needs, and hardware requirements. | 2022-06-30 |
20220206874 | DETERMINATION OF WORKLOAD DISTRIBUTION ACROSS PROCESSORS IN A MEMORY SYSTEM - A memory system having a set of media, a set of resources, and a controller configured via firmware to use the set of resources in processing requests from a host system to store data in the media or retrieve data from the media. The memory system has a workload manager that analyzes activity records in an execution log for a time period where each of the activity records can indicate whether a processor of the controller is in an idle state during a time slot in the time period. The workload manager identifies idle time slots within the time period during which time slots one or more lightly-loaded processors in the plurality of processors are in the idle state, and adjusts a configuration of the controller to direct tasks from one or more heavily-loaded processors to the one or more lightly-loaded processors. | 2022-06-30 |
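A sketch of the idle-slot analysis in 20220206874, assuming the execution log reduces to one busy/idle bit per processor per time slot; the threshold and log contents are invented:

```python
# Sketch: scan an execution log of per-slot idle/busy states, find the
# lightly- and heavily-loaded processors, and redirect work toward idle ones.
# One row per processor; 1 = busy during that time slot, 0 = idle.
execution_log = {
    "cpu0": [1, 1, 1, 1, 1, 1, 1, 0],
    "cpu1": [1, 0, 0, 1, 0, 0, 0, 0],
    "cpu2": [1, 1, 1, 1, 1, 1, 0, 1],
}

def classify(log, idle_threshold=0.5):
    light, heavy = [], []
    for cpu, slots in log.items():
        idle_ratio = slots.count(0) / len(slots)
        (light if idle_ratio >= idle_threshold else heavy).append(cpu)
    return light, heavy

light, heavy = classify(execution_log)
for src in heavy:
    for dst in light:
        # In the memory system this would be a controller configuration change.
        print(f"redirect some tasks: {src} -> {dst}")
```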
20220206875 | SOFTWARE VISIBLE AND CONTROLLABLE LOCK-STEPPING WITH CONFIGURABLE LOGICAL PROCESSOR GRANULARITIES - A processor is described. The processor includes model specific register space that is visible to software above a BIOS level. The model specific register space is to specify a granularity of a processing entity of a lock-step group. The processor also includes logic circuitry to support dynamic entry/exit of the lock-step group's processing entities to/from lock-step mode including: i) termination of lock-step execution by the processing entities before the program code to be executed in lock-step is fully executed; and, ii) as part of the exit from the lock-step mode, restoration of a state of a shadow processing entity of the processing entities as the state existed before the shadow processing entity entered the lock-step mode and began lock-step execution of the program code. | 2022-06-30 |
20220206876 | Management of Thrashing in a GPU - Systems, apparatuses, and methods for managing a number of wavefronts permitted to concurrently execute in a processing system. An apparatus includes a register file with a plurality of registers and a plurality of compute units configured to execute wavefronts. A control unit of the apparatus is configured to allow a first number of wavefronts to execute concurrently on the plurality of compute units. The control unit is configured to allow no more than a second number of wavefronts to execute concurrently on the plurality of compute units, wherein the second number is less than the first number, in response to detection that thrashing of the register file is above a threshold. The control unit is configured to detect said thrashing based at least in part on a number of registers in use by executing wavefronts that spill to memory. | 2022-06-30 |
20220206877 | DETERMINING A DEPLOYMENT SCHEDULE FOR OPERATIONS PERFORMED ON DEVICES USING DEVICE DEPENDENCIES AND REDUNDANCIES - An apparatus comprises a processing device configured to generate a model of a plurality of devices characterizing relationships between the devices, to build a device dependency chain for the devices based on the model, to predict workload for each of the devices in one or more time slots of a given time period, and to determine a deployment schedule for the devices based on the device dependency chain and the predicted workload. The processing device is also configured to utilize the deployment schedule to select a device of the devices on which to perform an operation, to determine whether the selected device corresponds to an additional device of the devices configured to operate in place of the selected device during performance of the operation, and to control performance of the operation on the selected device responsive to the determination of whether the selected device corresponds to the additional device. | 2022-06-30 |
20220206878 | BOTTLENECK DETECTION FOR PROCESSES - Systems and methods for analyzing an event log for a plurality of instances of execution of a process to identify a bottleneck are provided. An event log for a plurality of instances of execution of a process is received and segments executed during one or more of the plurality of instances of execution are identified from the event log. The segments represent a pair of activities of the process. For each particular segment of the identified segments, a measure of performance is calculated for each of the one or more instances of execution of the particular segment based on the event log, each of the one or more instances of execution of the particular segment is classified based on the calculated measures of performance, and one or more metrics are computed for the particular segment based on the classified one or more instances of execution of the particular segment. The identified segments are compared with each other based on the one or more metrics to identify one of the identified segments that is most likely to have a bottleneck. | 2022-06-30 |
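A compact sketch of the segment analysis in 20220206878, assuming events arrive as (case, activity, timestamp) tuples; the 2x-median classification rule is an illustrative stand-in for the claimed measure of performance:

```python
# Sketch: from an event log, measure each (activity -> activity) segment per
# case, classify instances as slow or normal, and rank segments by how often
# they run slow to find the likely bottleneck.
from collections import defaultdict
from statistics import median

events = [
    (1, "submit", 0), (1, "review", 10), (1, "approve", 12),
    (2, "submit", 0), (2, "review", 50), (2, "approve", 53),
    (3, "submit", 0), (3, "review", 11), (3, "approve", 14),
]

by_case = defaultdict(list)
for case, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
    by_case[case].append((activity, ts))

durations = defaultdict(list)   # segment -> list of per-instance durations
for steps in by_case.values():
    for (a, t1), (b, t2) in zip(steps, steps[1:]):
        durations[(a, b)].append(t2 - t1)

def slow_ratio(times):
    """Classify instances slower than 2x the segment median as slow."""
    m = median(times)
    return sum(t > 2 * m for t in times) / len(times)

bottleneck = max(durations, key=lambda seg: slow_ratio(durations[seg]))
print("likely bottleneck segment:", bottleneck, durations[bottleneck])
```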
20220206879 | PCIe Race Condition secure by Trait Claims and Address Space by using Portable Stimulus - As part of PCIe enumeration, switches and endpoint devices allocate memory from three PCIe slave address spaces of the HOST. Multi-CPUs cause a Race Condition during Enumeration. Suppose CPU1 initiate a transaction on Address Port CF8 data is to be written to Data Port CFC. Another CPU2 may write on Address Port CF8 before the CPU1 has been able to write to Data Port leading to a race condition. This may be avoided using Traits to avoid conflict and Race Condition. Allocation of Address Space region (Contiguous) for TYPE0 or TYPE1 Configuration space is achieved using Byte Addressability by PCIe End Point devices. PCIe devices shall claim this Configuration Space region when it wishes to operate on it. An allocation claim uses a trait to map to Configuration Spaces from address space. Configuration Space with traits that satisfy the claim's trait constraints is the candidate for matching regions | 2022-06-30 |
20220206880 | SYSTEM AND METHOD FOR IMPLEMENTING A SINGLE WINDOW INTEGRATED MODULE - Various methods, apparatuses/systems, and media for implementing a single window integrated platform are disclosed. A processor is operatively connected with one or more memories via a communication network. The processor receives a request from a user via a user computing device to develop a micro service; authenticates the user based on verifying login information of the user; receives information data related to the requested micro service; generates a products application programming interface (API) to display selectable products based on the information data of the requested micro service. The processor also receives input on selected products; triggers a dynamic workflow based on the selected products; interacts with onboarding APIs to develop the micro service in response to the triggering of the dynamic workflow; and transmits a notification to the user computing device when an end state of the dynamic workflow is detected. | 2022-06-30 |
20220206881 | SYSTEMS AND METHODS FOR IDENTIFYING SIMILAR ELECTRONIC CONTENT ITEMS - Systems, methods and non-transitory computer readable media for detecting incidents are disclosed. The method includes receiving a primary issue creation event record for a primary issue, the event record including a description of the primary issue, and encoding the primary issue into a primary vector number based on the description of the primary issue. The method further includes identifying candidate issues and retrieving vector numbers of the identified candidate issues, computing distances between the primary vector number and each of the candidate vector numbers, and determining whether incident criteria is met based on the computed distances. In addition, the method includes determining that an incident has occurred upon determining that the incident criteria is met and generating an alert. | 2022-06-30 |
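The encode/compare/criteria flow of 20220206881, sketched with a bag-of-words cosine distance as a toy stand-in for whatever vector encoder the application contemplates; the distance threshold and minimum-match criterion are assumptions:

```python
# Sketch: encode an issue description into a vector, compare it with vectors
# of recent candidate issues, and flag an incident when enough are close.
import math
from collections import Counter

def encode(description):
    """Toy encoder: bag-of-words counts (a real system would use embeddings)."""
    return Counter(description.lower().split())

def distance(a, b):
    """Cosine distance between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1 - dot / (na * nb)

primary = encode("payment service timeout on checkout")
candidates = [
    encode("checkout payment timeout errors spiking"),
    encode("user cannot reset password"),
    encode("timeout calling payment service"),
]
close = [d for d in (distance(primary, c) for c in candidates) if d < 0.7]
if len(close) >= 2:   # incident criterion: at least two similar recent issues
    print("incident detected; generating alert")
```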
20220206882 | METHOD AND APPARATUS FOR READING AND WRITING CLIPBOARD INFORMATION AND STORAGE MEDIUM - A method for reading and writing clipboard information, applied to a terminal, includes: acquiring a request for reading and writing clipboard information; determining a read permission for clipboard information, in response to the request for reading and writing clipboard information including a request for reading clipboard information, and allowing or refusing reading of the clipboard information based on the read permission; and determining a write permission for the clipboard information, in response to the request for reading and writing clipboard information including a request for writing clipboard information, and allowing or refusing writing of the clipboard information based on the write permission. The read and write permission of the clipboard can be divided into the read permission and the write permission, thereby helping prevent leakage of the clipboard information in the clipboard and improving the security of the clipboard information. | 2022-06-30 |
20220206883 | SCALABLE ACTIONS FOR USER DATA REQUESTS - In some examples, a computing device may receive a user request and may determine a user jurisdiction associated with the received user request. Based at least on a request type and the user jurisdiction, the computing device may select a first policy file from among a plurality of policy files, the plurality of policy files preconfigured for respective different combinations of at least the request type and the user jurisdiction to contain at least one data action for instructing at least one respective target subsystem to perform the at least one data action in response to a respective user request. In addition, the computing device may send, based on a data action included in the first policy file, at least one instruction to at least one target subsystem. | 2022-06-30 |
20220206884 | SYSTEMS AND METHODS FOR CONDUCTING AN AUTOMATED DIALOGUE - A method for conducting an automated dialogue between an inbound automated voice resource and an outbound automated voice resource during a voice communication session according to one embodiment includes receiving at the inbound automated voice resource an initiation of the voice communication session from the outbound automated voice resource; transmitting, by the inbound automated voice resource, a speech communication to the outbound automated voice resource during the voice communication session, wherein a digital watermark is embedded in the speech communication; identifying, by the outbound automated voice resource, the digital watermark in response to analyzing the speech communication; converting, by the outbound automated voice resource, an outbound automated voice resource communication language from speech to machine language in response to determining that the inbound automated voice resource interprets machine language based on the digital watermark; transmitting, by the outbound automated voice resource, a machine language communication to the inbound automated voice resource; converting, by the inbound automated voice resource, an inbound automated voice resource communication language from speech to machine language in response to determining that the outbound automated voice resource interprets machine language based on the machine language communication; and completing the automated dialogue between the inbound automated voice resource and the outbound automated voice resource using machine language. | 2022-06-30 |
20220206885 | SYSTEM AND METHOD FOR N-MODULAR REDUNDANT COMMUNICATION - A fault tolerant consensus generation and communication system and method is described. Each processing node in the system receives a plurality of measurements from a sensor, calculates a consolidated value for the received plurality of measurements, transmits the consolidated value to other processing nodes, receives consolidated values from the other processing nodes, calculates a consensus value based on the calculated consolidated value and the received one or more consolidated values, transmits the calculated consensus value to the other processing nodes, receives consensus values from the other processing nodes, generates a consensus message based on the calculated consensus value, the received one or more consensus values, and a predefined criterion, and, in a case where the consensus message is not present in a consensus queue, adds the consensus message to the consensus queue. | 2022-06-30 |
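A sketch of the consolidate-exchange-consensus rounds in 20220206885, with median consolidation and a majority-within-tolerance rule standing in for the unspecified predefined criterion; the all-local "network" is an assumption:

```python
# Sketch: each node consolidates its own sensor readings, swaps consolidated
# values with peers, derives a consensus value, and enqueues a consensus
# message once the agreement criterion holds.
from statistics import median

class Node:
    def __init__(self, name, readings):
        self.name = name
        self.consolidated = median(readings)  # robust to a few bad samples

    def consensus(self, peer_values, tolerance=0.5):
        values = sorted(peer_values + [self.consolidated])
        agreed = median(values)
        # Criterion: a majority of nodes must sit within tolerance of the median.
        near = sum(abs(v - agreed) <= tolerance for v in values)
        return agreed if near > len(values) // 2 else None

nodes = [Node("n1", [9.9, 10.1, 10.0]),
         Node("n2", [10.2, 10.1, 55.0]),   # one faulty spike, masked by median
         Node("n3", [9.8, 10.0, 10.1])]
consensus_queue = []
for node in nodes:
    peers = [p.consolidated for p in nodes if p is not node]
    value = node.consensus(peers)
    if value is not None and value not in consensus_queue:
        consensus_queue.append(value)   # add only if not already present
print("consensus queue:", consensus_queue)
```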
20220206886 | ROOT CAUSE ANALYSIS OF LOGS GENERATED BY EXECUTION OF A SYSTEM - A system stores logs representing events that occur in the system based on executable instructions executed by the system, for example, by processes executing within the system or by applications. The system analyzes the logs to determine the root cause of the error or event that resulted in generation of the log. The system clusters logs to determine clusters of logs. The system analyzes logs of each cluster to determine a root cause of errors resulting in logs belonging to the cluster. For any new error log that is received, the system determines the cluster to which the error log belongs and takes action based on the root cause associated with the cluster, for example, sending an alert message or performing automatic remediation. | 2022-06-30 |
20220206887 | BUS MONITORING DEVICE AND METHOD, STORAGE MEDIUM, AND ELECTRONIC DEVICE - A bus monitoring device and method, a non-transitory computer-readable storage medium, and an electronic device are disclosed. The bus monitoring method may include: arranging monitoring nodes in a bus, with each monitoring node arranged in one of the subsystems of the bus to be tested, where the monitoring nodes are connected in series in a ring topology. | 2022-06-30 |
20220206888 | ABNORMAL PORTION DETECTING DEVICE, METHOD OF DETECTING ABNORMAL PORTION, AND RECORDING MEDIUM - An abnormal portion detecting device, a method of detecting an abnormal portion, and a recording medium are disclosed. | 2022-06-30 |
20220206889 | AUTOMATIC CORRELATION OF DYNAMIC SYSTEM EVENTS WITHIN COMPUTING DEVICES - Systems and methods are described herein for logging system events within an electronic machine using an event log structured as a collection of tree-like cause and effect graphs. An event to be logged may be received. A new event node may be created within the event log for the received event. One or more existing event nodes within the event log may be identified as having possibly caused the received event. One or more causal links may be created within the event log between the new event node and the one or more identified existing event nodes. The new event node may be stored as an unattached root node in response to not identifying an existing event node that may have caused the received event. | 2022-06-30 |
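The cause-and-effect graph structure might look like the following sketch, where each logged event links back to the existing nodes identified as possible causes and otherwise becomes an unattached root; the node fields and the matching rule are assumptions.

```python
# Event log as a forest of cause/effect trees: new nodes link to their
# identified causes, or stand alone as unattached roots.
from dataclasses import dataclass, field

@dataclass
class EventNode:
    name: str
    caused_by: list["EventNode"] = field(default_factory=list)

event_log: list[EventNode] = []

def log_event(name: str, possible_causes: list[str]) -> EventNode:
    causes = [e for e in event_log if e.name in possible_causes]
    node = EventNode(name, causes)   # empty causes -> unattached root
    event_log.append(node)
    return node

log_event("fan_failure", [])
log_event("cpu_overheat", ["fan_failure"])
crash = log_event("system_crash", ["cpu_overheat"])
print([c.name for c in crash.caused_by])   # -> ['cpu_overheat']
```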
20220206890 | VIDEO PLAYBACK ERROR IDENTIFICATION BASED ON EXECUTION TIMES OF DRIVER FUNCTIONS - According to examples, an apparatus may include a processor and a non-transitory computer-readable medium on which may be stored instructions that, when executed by the processor, may cause the processor to access a data file. In some examples, the data file may include a frame rate of a video and information regarding driver functions associated with driver threads executing during playback of the video. In some examples, the processor may determine whether the frame rate is less than a predefined frame rate. When the frame rate is determined to be less than the predefined frame rate, the processor may determine whether an execution time of a first driver function exceeds a first predefined length of time. In some examples, the processor may output a first error message that includes information associated with the first driver function. | 2022-06-30 |
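The two-stage check reads roughly like this sketch; the threshold values and driver-function names are invented for illustration.

```python
# Flag a driver function only when the frame rate is below the predefined
# rate AND that function's execution time exceeds its time limit.
TARGET_FPS = 30.0      # assumed predefined frame rate
MAX_DRIVER_MS = 8.0    # assumed first predefined length of time

def check_playback(frame_rate: float, driver_times_ms: dict[str, float]):
    if frame_rate >= TARGET_FPS:
        return                      # playback healthy, nothing to report
    for func, elapsed in driver_times_ms.items():
        if elapsed > MAX_DRIVER_MS:
            print(f"error: {func} ran {elapsed} ms during slow playback")

check_playback(22.5, {"driver_submit": 11.3, "driver_present": 2.1})
```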
20220206891 | ERROR HANDLING METHOD AND APPARATUS - An error handling method is performed by a computing device that comprises at least one computing device component and a board management controller (BMC) coupled to the at least one computing device component. The method comprises the BMC detecting an error relating to the at least one computing device component, determining from a database a technical specification to fix the error, and generating information for accessing the technical specification. An error handling apparatus comprises a BMC and at least one computing device component coupled to the BMC. The BMC is configured to detect an error relating to the at least one computing device component, determine from a database a technical specification to fix the error, and generate information for accessing the technical specification. | 2022-06-30 |
20220206892 | MEMORY SYSTEM - A memory system includes a non-volatile memory including at least one memory cell, a buffer, and a memory controller. The memory controller acquires first data from the buffer. The first data includes a plurality of bits of data. The memory controller generates second data by performing a randomization process on the first data, generates a flag that is information used to identify an error suppression encoding process, based on the second data, and stores the flag in the buffer. The memory controller acquires third data and the flag from the buffer. The third data is 1-bit data of the first data. The memory controller generates storage data by performing the error suppression encoding process based on the acquired flag and the randomization process on the third data, and writes the storage data into the memory cell. | 2022-06-30 |
20220206893 | STORAGE CONTROLLER AND STORAGE SYSTEM INCLUDING THE SAME - A storage system may include a memory device including a first region including a single-level cell and a second region different from the first region, and a storage controller configured to read data from the first region at a first gear level of a plurality of gear levels, determine an error level of the read data and a state of the memory device, and change the first gear level to a second gear level of the plurality of gear levels based on the determined error level of the data and the determined state of the memory device. | 2022-06-30 |
20220206894 | METHOD AND SYSTEM FOR FACILITATING WRITE LATENCY REDUCTION IN A QUEUE DEPTH OF ONE SCENARIO - One embodiment provides a system which facilitates data management. During operation, the system processes, by a storage device, a write request and data associated with the write request, wherein the storage device comprises a plurality of channels over which to access a non-volatile memory of the storage device. The system writes the data to a first data buffer of the storage device while bypassing a first interface and a memory controller. The system sends the write request to the memory controller via the first interface. The system writes, via a first channel allocated for host write operations, the data from the first data buffer to the non-volatile memory. The system performs a garbage collection operation on the data, which comprises accessing the data via a second channel allocated for garbage collection operations. | 2022-06-30 |
20220206895 | ERROR CORRECTION ON LENGTH-COMPATIBLE POLAR CODES FOR MEMORY SYSTEMS - Inventive aspects include a polar code encoding system, which includes a partitioning unit to receive and partition input data into partitioned input data units. Encoders encode the partitioned input data units and generate encoded partitioned input data units. Multiplier units perform matrix multiplication on the partitioned input data units and generator matrices, and generate matrix products. Adder units perform matrix addition on the encoded partitioned input data units and the matrix products. A combining unit combines the outputs of the encoders into a target code word X, which may be a length-N code word. | 2022-06-30 |
20220206896 | METHODS AND APPARATUS TO ASSIGN INDICES AND RELOCATE OBJECT FRAGMENTS IN DISTRIBUTED STORAGE SYSTEMS - Methods and apparatus to dynamically assign and relocate object fragments in distributed storage systems are disclosed. In some examples, an apparatus to compile fragments of an object includes a fragment compiler to: compile an object from fragments stored in storage nodes, respective ones of the fragments corresponding to (a) a node index of storage identifiers representative of the storage nodes and (b) a fragment index of fragment identifiers associated with the respective ones of the fragments of the object, respective ones of the fragment identifiers being representative of a sequential order of the fragments of the object, the respective ones of the fragment identifiers to be associated with the respective ones of the storage identifiers to enable verification of storage locations of the respective ones of the fragments of the object relative to respective storage nodes; request a first one of the fragments from a first one of the respective storage nodes; determine if a first fragment index assigned to the first one of the fragments matches a first node index assigned to the first one of the fragments; when the first fragment index matches the first node index, compile the first one of the fragments into the object based on the first node index; and when the first fragment index does not match the first node index, compile the first one of the fragments into the object based on the first fragment index. | 2022-06-30 |
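The index-matching rule could be sketched as below: a fragment whose fragment index matches its node index is compiled in place, while a mismatch means the fragment was relocated and its own fragment index decides its position in the object. The tuple layout is an assumption.

```python
# Compile an object from (node_index, fragment_index, payload) fragments,
# preferring the node index when the two indices agree and falling back
# to the fragment index when the fragment has been relocated.
def compile_object(fragments: list[tuple[int, int, bytes]]) -> bytes:
    placed: dict[int, bytes] = {}
    for node_idx, frag_idx, payload in fragments:
        position = node_idx if node_idx == frag_idx else frag_idx
        placed[position] = payload
    return b"".join(placed[i] for i in sorted(placed))

# Fragment with sequence index 2 currently lives on storage node 1.
print(compile_object([(0, 0, b"he"), (1, 2, b"lo"), (2, 1, b"l")]))
# -> b'hello'
```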
20220206897 | DISTRIBUTED STORAGE SYSTEM, DATA RECOVERY METHOD, AND DATA PROCESSING PROGRAM - User data and redundant codes are stored in a distributed manner, and data is read while suppressing the performance degradation caused by a failure. If a first node in a distributed storage system receives a read request from a host to read user data while a storage device of its own node is blocked, the first node executes a first collection read request, which requests recovery of the target data of the read request from the secondary redundant code; and if at least part of the target data fails to be recovered by the first collection read request, the first node executes a second collection read request, which requests recovery from the secondary redundant code of any insufficient data among the plurality of pieces of data necessary to recover the target data using the primary redundant code. | 2022-06-30 |
20220206898 | METHOD AND APPARATUS FOR PREDICTING HARD DISK FAULT OCCURRENCE TIME, AND STORAGE MEDIUM - Disclosed are a method and apparatus for predicting the hard disk fault occurrence time, and a storage medium. The method includes the steps of: screening a hard disk on the verge of failure from a plurality of hard disks according to acquired state data of the hard disks; calculating the variation quantity and discrete quantity of each piece of state data of the hard disk on the verge of failure acquired over a first preset period of time, to obtain a first predicted data set; and inputting the first predicted data set into a first training model to obtain the probability of failure of the hard disk on the verge of failure over a future second preset period of time. | 2022-06-30 |
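Interpreting the "variation quantity" as the net change over the window and the "discrete quantity" as the dispersion of the samples (both interpretations are assumptions), the feature extraction might look like:

```python
# Per SMART attribute, derive the features fed to the trained model:
# net change over the window and the spread of the sampled values.
import statistics

def features(series: list[float]) -> tuple[float, float]:
    variation = series[-1] - series[0]          # change over the window
    dispersion = statistics.pvariance(series)   # spread of the samples
    return variation, dispersion

reallocated_sectors = [4, 4, 6, 9, 15]   # samples over the preset period
print(features(reallocated_sectors))      # -> (11, 17.04)
```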
20220206899 | METHOD AND APPARATUS TO SUPPORT INSTRUCTION REPLAY FOR EXECUTING IDEMPOTENT CODE IN DEPENDENT PROCESSING IN MEMORY DEVICES - Methods and processing devices are provided for error protection to support instruction replay for executing idempotent instructions at a processing-in-memory (PIM) device. The processing apparatus includes a PIM device configured to execute an idempotent instruction. The processing apparatus also includes a processor, in communication with the PIM device, configured to issue the idempotent instruction to the PIM device for execution at the PIM device, and to reissue the idempotent instruction to the PIM device when either execution of the idempotent instruction at the PIM device results in an error or a predetermined latency period expires from when the idempotent instruction was issued. | 2022-06-30 |
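The replay policy can be read as a simple retry loop, safe precisely because the instruction is idempotent; the timeout, retry count, and error signaling here are assumptions.

```python
# Reissue an idempotent PIM instruction on error or latency expiry;
# re-execution cannot change the final result, so replay is safe.
import time

def issue_with_replay(execute, timeout_s: float = 0.5, retries: int = 3):
    for _ in range(retries):
        start = time.monotonic()
        try:
            result = execute()
        except RuntimeError:
            continue                 # device reported an error: replay
        if time.monotonic() - start <= timeout_s:
            return result
        # latency period expired without confirmation: replay as well
    raise RuntimeError("instruction failed after all replays")

print(issue_with_replay(lambda: 42))   # -> 42
```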
20220206900 | LEADER ELECTION IN A DISTRIBUTED SYSTEM - Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. A leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node, and each node's vote is biased by that vote weight. The node that receives a number of biased votes higher than the maximum possible number of weight-biased votes receivable by any other node in the cluster is selected as the leader node. | 2022-06-30 |
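One way the latency-biased voting could work is sketched below; deriving the weight as the reciprocal of measured latency, and letting each node vote for itself, are simplifying assumptions.

```python
# Weighted leader election: lower measured latency -> higher vote weight,
# and the candidate with the greatest total biased vote wins.
def elect_leader(latency_ms: dict[str, float]) -> str:
    weights = {node: 1.0 / ms for node, ms in latency_ms.items()}
    return max(weights, key=weights.get)

print(elect_leader({"node-a": 2.0, "node-b": 0.5, "node-c": 9.0}))
# -> node-b, the best-connected node, becomes leader
```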
20220206901 | PROVIDING HOST-BASED ERROR DETECTION CAPABILITIES IN A REMOTE EXECUTION DEVICE - Providing host-based error detection capabilities in a remote execution device is disclosed. A remote execution device performs a host-offloaded operation that modifies a block of data stored in memory. Metadata is generated locally for the modified block of data such that the local metadata generation emulates host-based metadata generation. Stored metadata for the block of data is updated with the locally generated metadata for the modified portion of the block of data. When the host performs an integrity check on the modified block of data using the updated metadata, the host does not distinguish between metadata generated by the host and metadata generated in the remote execution device. | 2022-06-30 |
20220206902 | APPLICATION TEMPLATE FOR APPLICATION CONSISTENT BACKUP AND RESTORE OF DATABASE APPLICATIONS IN KUBERNETES - Embodiments are described of an application template process that provides application-consistent backups for a wide range of database applications and deployment configurations. The defined application template allows specifying suspend (quiesce) and restart (unquiesce) commands for each type of database application, and template selectors to select and sequence dependent resources so as to ensure application consistency of the backup operation in a cluster configuration. Prehook and posthook annotations provide entry points for executing appropriate program scripts to suspend and restart the respective resource during execution of the application. | 2022-06-30 |
20220206903 | RECOVERY POINT OBJECTIVE (RPO) DRIVEN BACKUP SCHEDULING IN A DATA STORAGE MANAGEMENT SYSTEM - To perform Recovery Point Objective (RPO) driven backup scheduling, the illustrative data storage management system is enhanced in several dimensions. Illustrative enhancements include: streamlining the user interface to take in fewer parameters; backup job scheduling is largely automated based on several factors, and includes automatic backup level conversion for legacy systems; backup job priorities are dynamically adjusted to re-submit failed data objects with an “aggressive” schedule in time to meet the RPO; only failed items are resubmitted for failed backup jobs. | 2022-06-30 |
20220206904 | DEVICE AND METHOD FOR MANAGING RECOVERY INFORMATION OF AUXILIARY STORAGE DEVICE - A device that can efficiently manage the capacity of a backup auxiliary storage device within an auxiliary storage device, and a method of managing the backup auxiliary storage device, are disclosed. The auxiliary storage device includes an original auxiliary storage device, a backup auxiliary storage device, a user input device, and a controller that controls these devices. The backup auxiliary storage device stores recovery information about the original auxiliary storage device. The user input device receives a user input for switching between a normal mode and a backup mode. When in the normal mode, the controller controls the auxiliary storage device so that a host computer boots using an OS in the original auxiliary storage device and is not able to access the backup auxiliary storage device. | 2022-06-30 |
20220206905 | HYBRID MEMORY SYSTEM WITH CONFIGURABLE ERROR THRESHOLDS AND FAILURE ANALYSIS CAPABILITY - A system and method for configuring fault tolerance in nonvolatile memory (NVM) are operative to set a first threshold value, declare one or more portions of NVM invalid based on an error criterion, track the number of declared invalid NVM portions, determine if the tracked number exceeds the first threshold value, and if the tracked number exceeds the first threshold value, perform one or more remediation actions, such as issue a warning or prevent backup of volatile memory data in a hybrid memory system. In the event of backup failure, an extent of the backup can still be assessed by determining the amount of erased NVM that has remained erased after the backup, or by comparing a predicted backup end point with an actual endpoint. | 2022-06-30 |
20220206906 | DISTRIBUTED SITE DATA RECOVERY IN A GEOGRAPHICALLY DISTRIBUTED DATA STORAGE ENVIRONMENT - The described technology is generally directed towards recovering a chunk (or similar block of data) when the chunk is erasure coded into fragments, and recovery fragments need to be obtained from geographically distributed sites. The recovery fragments needed to perform recovery of a chunk are determined, and assigned to the geographically distributed sites as subtasks. Each site that receives a subtask from the requesting site obtains XOR-related fragments needed to produce the recovery fragment, performs the XOR operations on the XOR-related fragments to produce the recovery fragment, and returns the recovery fragment to the requesting site. When finished, a site receives another subtask until no subtasks remain, such that the fastest site or sites receive the most subtasks. The requesting site recovers the chunk from the received recovery fragments. The shared participation in the chunk recovery among the distributed sites provides for efficient distribution of the recovery-related resources and work. | 2022-06-30 |
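The per-site subtask reduces to XOR-ing the fragments related to the missing one, as in this small self-contained sketch:

```python
# XOR-based fragment recovery: a parity fragment XOR the surviving data
# fragments reproduces the missing fragment, which the remote site then
# returns to the requesting site.
def xor_fragments(fragments: list[bytes]) -> bytes:
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            out[i] ^= b
    return bytes(out)

d1, d2 = b"\x12\x34", b"\xab\xcd"
parity = xor_fragments([d1, d2])          # stored at another site
print(xor_fragments([parity, d2]) == d1)  # -> True: d1 recovered
```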
20220206907 | EXTERNAL DYNAMIC VIRTUAL MACHINE SYNCHRONIZATION - Embodiments disclosed herein include systems and processes for replicating one or more user computing systems of an information management system at an external resource system to create a backup or fallback of the user computing systems. Replicating the user computing systems may include replicating data as well as the applications, operating systems and configuration of the user computing systems. This replicated or fallback user computing system may be implemented on a virtual machine at the external resource system. Thus, if a user computing system becomes inaccessible, a new user computing system can be generated based on the backup copy of the user computing system at the external resource system. Further, in some embodiments, the copy of the user computing system may be interacted with at the external resource system. Thus, certain embodiments disclosed herein can be used to transition an information management system to an external resource system. | 2022-06-30 |
20220206908 | TECHNIQUES FOR REPLICATING STATE INFORMATION FOR HIGH AVAILABILITY - A Network Virtualization Device (NVD) executes a set of Virtual Network Interface Cards (VNICs). The set of VNICs includes a first VNIC that forwards packets for a set of one or more packet flows. The NVD stores first VNIC-related information that includes information identifying a first set of one or more packet flows and associated state information. In response to determining that the state information for the first VNIC is to be synchronized with another NVD, the NVD identifies a first backup NVD for the first VNIC, wherein the first backup NVD is a backup for the first VNIC, and communicates to the first backup NVD a portion of the state information stored by the NVD for the first VNIC. | 2022-06-30 |
20220206909 | SUBSTITUTION APPARATUS, SUBSTITUTION CONTROL PROGRAM, AND SUBSTITUTION METHOD - A substitution apparatus for installation in a vehicle in which a plurality of in-vehicle control apparatuses are implemented, the substitution apparatus including a control unit and a substitute unit. The control unit is configured to control the substitute unit based on transmission data transmitted from the in-vehicle control apparatuses, specify an abnormal in-vehicle control apparatus based on the transmission data, disable the specified abnormal in-vehicle control apparatus, and apply, to the substitute unit, a program for exhibiting functions otherwise normally executed by the specified abnormal in-vehicle control apparatus. The substitute unit is configured to substitute for the disabled in-vehicle control apparatus by executing the applied program. | 2022-06-30 |
20220206910 | DUAL CLASS OF SERVICE FOR UNIFIED FILE AND OBJECT MESSAGING - A storage system has priority queues for real time-class file system messaging and backup-class file system messaging. The storage system includes servers, coupled as a storage cluster, storage devices and a network coupling the servers and the storage devices. The servers have priority queues. The servers operate the priority queues for messaging from the servers to the storage devices via the network in accordance with a real time-class file system and a backup-class file system. A first subset of the priority queues has higher priority on the network for real time-class file system messaging of at least one type. A second subset of the priority queues has lower priority on the network for backup-class file system messaging of at least one type. | 2022-06-30 |
20220206911 | DEVICE MANAGEMENT SYSTEM, NETWORK ADAPTER, SERVER, DEVICE, DEVICE MANAGEMENT METHOD, AND PROGRAM - A device management system, a network adapter, a server, a device, a device management method, and a program are disclosed. | 2022-06-30 |
20220206912 | SYSTEM FOR DETERMINING AUDIO AND VIDEO OUTPUT ASSOCIATED WITH A TEST DEVICE - An enclosure for testing performance of an application contains one or more devices. A first device being tested presents output using a display or a speaker. A camera or microphone, which may be associated with a second device in the enclosure, acquires information regarding the output, such as by acquiring data representing the display output of the first device using a camera. An interface presenting information regarding the performance of the application includes information determined using the camera or microphone, which may be useful when the first device is unable to directly capture the output that is presented. In other cases, a second device in the enclosure may provide a display output or an audio output, and the first device may receive the output using a camera or microphone, enabling the performance of the application relating to receipt of input by the first device to be tested. | 2022-06-30 |
20220206913 | VIRTUAL DEVICE FOR PROVIDING TEST DATA - A virtual device acquires a transaction history between a legacy computing device and a linked device; obtains a first request provided from the legacy computing device based on the transaction history and a first response received from the linked device in response to the first request; receives a second request corresponding to the first request from a new computing device and determines a second response to the second request; and provides test information for the new computing device based on a comparison of the first response and the second response. | 2022-06-30 |
20220206914 | MERGED INFRASTRUCTURE FOR MANUFACTURING AND LIFECYCLE MANAGEMENT OF BOTH HARDWARE AND SOFTWARE - A merged infrastructure for manufacturing and lifecycle management of both hardware and software is disclosed. In various embodiments, a library comprising a superset of device drivers is stored, the superset including for each of a plurality of supported systems a corresponding set of device drivers for devices comprising that supported system. A context in which a processor is deployed is determined, the context being associated with a specific corresponding one of the plurality of supported systems. The library is used to provision based on the determined context at least a subset of devices accessible by the processor in the context in which the processor is deployed. | 2022-06-30 |
20220206915 | METHOD AND SYSTEM FOR OPEN NAND BLOCK DETECTION AND CORRECTION IN AN OPEN-CHANNEL SSD - One embodiment provides a system which facilitates data management. The system allocates a superblock of a storage device, wherein the superblock is in an open state. The system writes data to the superblock. The system monitors, by a controller of the storage device, an amount of time that the superblock remains in the open state. Responsive to detecting a failure associated with a flash translation layer (FTL) module, the system determines that the monitored amount of time exceeds a predetermined threshold, and seals, by the controller, the superblock by writing directly to a respective free page in the superblock while bypassing one or more data-processing modules. | 2022-06-30 |
20220206916 | METHODS AND APPARATUS FOR MANAGING DATA IN STACKED DRAMS - Methods and apparatus manage data in memories disposed in a stacked relation with respect to one or more processors. The method includes receiving at least one hint indicating future processor usage of a software component, where the future processor usage is indicative of future usage of the one or more processors when executing the software component or a code section of the software component. In some implementations, the method includes selecting a memory location in the memories for data used by the software component based on the hint. | 2022-06-30 |
20220206917 | SYSTEM AND METHOD FOR IN-MEMORY COMPUTATION - A method for computing is provided. In some embodiments, the method includes: calculating an advantage score of a first computing task, the advantage score being a measure of the extent to which a plurality of function-in-memory circuits is capable of executing the first computing task more efficiently than one or more extra-memory processing circuits, the first computing task including instructions and data; in response to determining that the advantage score of the first computing task is less than a first threshold, executing the first computing task by the one or more extra-memory processing circuits; and in response to determining that the advantage score of the first computing task is at least equal to the first threshold: compiling the instructions for execution by the function-in-memory circuits, formatting the data for the function-in-memory circuits, and executing the first computing task by the function-in-memory circuits. | 2022-06-30 |
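The threshold dispatch amounts to a two-way branch like the following sketch; the scoring itself, the threshold value, and the task representation are assumptions.

```python
# Route a task to function-in-memory circuits only when its advantage
# score reaches the threshold; otherwise run it on the host processor.
ADVANTAGE_THRESHOLD = 1.0   # assumed first threshold

def run_task(task: dict, advantage_score: float):
    if advantage_score < ADVANTAGE_THRESHOLD:
        return task["host_fn"]()   # extra-memory processing circuits
    # Compile instructions and format data for the in-memory path here.
    return task["pim_fn"]()

task = {"host_fn": lambda: "ran on host", "pim_fn": lambda: "ran in memory"}
print(run_task(task, advantage_score=1.7))   # -> ran in memory
```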
20220206918 | DYNAMIC EMOTION DETECTION BASED ON USER INPUTS - A method by a network device for dynamically detecting emotional states of a user operating a client end station to interact with an application. The method includes receiving information regarding user inputs received by the client end station from the user while the user interacted with the application during a particular time period and determining an emotional state of the user based on analyzing the information and information regarding user inputs received by the client end station from the user while the user interacted with the application during one or more previous time periods that together with the particular time period form a time window. | 2022-06-30 |
20220206919 | SELF-ADAPTIVE CIRCUIT BREAKERS FOR APPLICATIONS THAT INCLUDE EXECUTION LOCATION MARKERS - An example system includes: an application engine comprising code segments and execution location markers which return time indicators identifying the times taken to implement the code segments; a circuit breaker (CB) awareness builder engine to receive the time indicators from the application engine and store them; and a CB awareness engine to: receive a request to process the application engine; retrieve, via the CB awareness builder engine, the time indicators; determine whether a code segment is open or closed based on the time indicators; modify the request to include CB indications of the respective code segments of the application engine that are open; and provide the request, as modified to include the CB indications, to the application engine, causing the application engine to return an exception for open code segments in response to receiving the modified request. | 2022-06-30 |
20220206920 | Data Routing Techniques to Delay Thermal Throttling - Aspects of a storage device are provided which delay thermal throttling in response to temperature increases based on different reliable temperatures for different types of cells, such as SLCs, hybrid SLCs and MLCs. Initially, a controller writes first data to a block of MLCs at a first data rate when a temperature of the block meets a first temperature threshold for MLCs. Subsequently, the controller writes second data to the block at a second data rate lower than the first data rate when the temperature of the block meets a second temperature threshold for SLCs. For hybrid SLCs, the MLCs are each configured to store a first number of bits, and the controller writes a second number of bits smaller than the first number of bits in each of one or more of the cells. Storage device performance is thus improved through delayed thermal throttling without compromising data integrity. | 2022-06-30 |
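The per-cell-type thresholds suggest a routing rule along these lines; the temperature values and mode names are invented for illustration.

```python
# Pick a write mode from per-cell-type temperature thresholds so that
# full thermal throttling is deferred as long as data stays reliable.
MLC_TEMP_C = 70   # assumed first threshold (full MLC writes)
SLC_TEMP_C = 80   # assumed second, higher threshold (fewer bits per cell)

def select_write_mode(block_temp_c: float) -> str:
    if block_temp_c < MLC_TEMP_C:
        return "full-rate MLC write"
    if block_temp_c < SLC_TEMP_C:
        return "reduced-rate write (hybrid SLC, fewer bits per cell)"
    return "thermal throttling engaged"

for t in (65, 75, 85):
    print(t, "->", select_write_mode(t))
```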
20220206921 | BLOCK-BASED ANOMALY DETECTION IN COMPUTING ENVIRONMENTS - An anomaly service receives log data from nodes in a computing environment, which includes a sequence of information indicative of log messages produced by the nodes. The anomaly service identifies dominant patterns in the sequence of information that are representative of non-anomalous blocks of the log messages. Having identified the dominant patterns, the service is able to extract the non-anomalous blocks from the log data to reveal anomalous blocks that do not fit the dominant patterns. The service may then generate anomaly vectors based on the anomalous blocks, which can be distributed to the nodes to detect anomalies. | 2022-06-30 |
20220206922 | BLOCK-BASED ANOMALY DETECTION IN COMPUTING ENVIRONMENTS - An anomaly service receives log data from nodes in a computing environment, which includes a sequence of information indicative of log messages produced by the nodes. The anomaly service identifies dominant patterns in the sequence of information that are representative of non-anomalous blocks of the log messages. Having identified the dominant patterns, the service is able to extract the non-anomalous blocks from the log data to reveal anomalous blocks that do not fit the dominant patterns. The service may then generate anomaly vectors based on the anomalous blocks, which can be distributed to the nodes to detect anomalies. | 2022-06-30 |
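A crude version of the dominant-pattern extraction both of these applications describe could count fixed-size blocks of the event sequence and keep only the rare ones as anomaly candidates; the block size and frequency cutoff are assumptions.

```python
# Frequent block patterns are treated as non-anomalous and removed;
# the rare blocks that remain are the anomaly candidates.
from collections import Counter

def anomalous_blocks(events: list[str], size: int = 2, min_count: int = 2):
    blocks = [tuple(events[i:i + size]) for i in range(0, len(events), size)]
    counts = Counter(blocks)
    return [b for b in blocks if counts[b] < min_count]

log = ["open", "read", "open", "read", "open", "crash"]
print(anomalous_blocks(log))   # -> [('open', 'crash')]
```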
20220206923 | OPERATION LOGS VISUALIZATION DEVICE, OPERATION LOGS VISUALIZATION METHOD AND OPERATION LOGS VISUALIZATION PROGRAM - An operation log visualization device includes processing circuitry configured to store operation logs each containing a captured image of an operation screen captured during an operation and information identifying a position of an operation location in an operation target window on the operation screen, generate images in each of which a portion corresponding to the position in the captured image is highlighted, and generate a flowchart by arranging the generated images in an order of processing of operation logs corresponding to the images. | 2022-06-30 |