33rd week of 2021 patent application highlights part 48 |
Patent application number | Title | Published |
20210255836 | Systems and Methods for Performing Lossless Source Coding - Systems and methods in accordance with various embodiments of the invention perform lossless source coding. In several embodiments, a nested code structure is utilized to perform Random Access Source Coding (RASC), where the number of active encoders is initially unknown. In several embodiments, the decoder can attempt to decode using a number of Slepian-Wolf decoders corresponding to an estimated number of sources. One embodiment includes multiple source encoders configured to receive a start message and transmit a portion of a codeword selected by encoding data from a source until an end of epoch message is received. A source decoder can transmit at least one start message, and receive codeword portions transmitted by the plurality of source encoders. When a decoding rule is satisfied, the source decoder can decode data from multiple source encoders based upon received codeword portions, and cause the broadcast transmitter to transmit an end of epoch message. | 2021-08-19 |
20210255837 | OPC UA SERVER, SYSTEM OPERATING USING OPC UA, AND METHOD OF EXECUTING OPC UA SYSTEM - An Open Platform Communications (OPC) Unified Architecture (UA) server includes: a storage that stores a configuration file written in a compiled programming language and in which an interpreter is embedded; a transceiver that receives, from an OPC UA client, an execution request to execute a calculation defined in the configuration file; and a processor that executes the calculation using the interpreter. | 2021-08-19 |
20210255838 | EXTENSIBLE USER DEFINED FUNCTIONS VIA OBJECT PROPERTIES - Embodiments of extending properties of objects in a designer tool for generating applications are disclosed therein. In one embodiment, a method includes receiving a formula as an input value to a property of an instance of an object surfaced via a graphical user interface of a designer tool. The property corresponds to a function having an input parameter to the function, and the received formula includes the input parameter. The method further includes in response to receiving the formula as the input value to the property, automatically deploying programming codes in the application to incorporate the received formula in order to extend functionality of the instance of the object. | 2021-08-19 |
20210255839 | STANDARDIZED MODEL PACKAGING AND DEPLOYMENT - Standardized model packaging and deployment, including: generating a model package comprising: model definition data for a model; function code facilitating execution of the model; and at least one interface for at least one operating system. | 2021-08-19 |
20210255840 | AUTOMATIC CONTAINERIZATION OF OPERATING SYSTEM DISTRIBUTIONS - Embodiments of the present disclosure relate to containerizing the packages of an operating system. More specifically, a dependency level of each of a plurality of packages included in an operating system may be determined. The plurality of packages may be sorted based on their dependency level, and an image file may be created for each of the plurality of packages sequentially, based on the dependency level of each of the plurality of packages. The image file for each of the plurality of packages may be uploaded to a registry server, and in response to a request to generate a container on which to run an application, generating the container using one or more of the plurality of image files, wherein the one or more of the plurality of image files correspond to one or more of the plurality of packages that the application is dependent on. | 2021-08-19 |
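The sorting step described in this abstract can be modeled as computing a dependency level per package (leaf packages at level 0, each other package one level above its deepest dependency) and building images in ascending-level order. This is a minimal sketch with invented package names; the actual patented build and registry-upload steps are not shown.

```python
# Hypothetical sketch of ordering OS packages by dependency level so each
# image is created only after the images it depends on. Package names and
# the dependency map are illustrative assumptions.

def dependency_level(pkg, deps, memo=None):
    """Level 0 = no dependencies; otherwise 1 + max level of its dependencies."""
    if memo is None:
        memo = {}
    if pkg in memo:
        return memo[pkg]
    requires = deps.get(pkg, [])
    level = 0 if not requires else 1 + max(
        dependency_level(d, deps, memo) for d in requires)
    memo[pkg] = level
    return level

def build_order(deps):
    """Sort packages so image creation proceeds leaf-first."""
    memo = {}
    return sorted(deps, key=lambda p: dependency_level(p, deps, memo))

deps = {
    "glibc": [],
    "openssl": ["glibc"],
    "python3": ["glibc", "openssl"],
    "myapp": ["python3"],
}
print(build_order(deps))  # ['glibc', 'openssl', 'python3', 'myapp']
```

A container for `myapp` would then be assembled from the image files of `myapp` and every package below it in this ordering.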
20210255841 | ELECTRONIC DEVICE PERFORMING RESTORATION ON BASIS OF COMPARISON OF CONSTANT VALUE AND CONTROL METHOD THEREOF - An electronic device includes a memory storing one or more instructions, and at least one processor configured to execute the one or more instructions to identify whether an annotation binding a first type object and a second type object is declared, and bind the first type object and the second type object, and sign both the bound first type object and the bound second type object based on identifying that the annotation is declared. | 2021-08-19 |
20210255842 | Low-Code Development Platform - A computer-implemented low-code development platform is provided including a user interface and having access to a library of step macros configured for user configuration and interconnection via the user interface to generate executable code. Each step macro includes a step configuration generator and an execution code generator. The step configuration generator is configured to generate a step configuration file based on user-configurable data points configurable via the user interface. The execution code generator is configured to generate executable code in the form of a compiled step file configured for storage in memory and execution by a processor of a computing system. The execution code generator receives and inputs the step configuration file into a metaprogramming component configured to interpret the user-configurable data points of the step configuration file and to generate and output the compiled step file. | 2021-08-19 |
20210255843 | Exchange of Data Objects Between Task Routines Via Shared Memory Space - An apparatus includes a processor to: based on data dependencies specified in a job flow definition, identify first and second tasks of the corresponding job flow to be performed sequentially, wherein the first task outputs a data object used as an input to the second; store, within a task queue, at least one message conveying at least an identifier of the first task, and an indication that the data object is to be exchanged through a shared memory space; within a task container, in response to storage of the at least one message within the task queue, sequentially execute first and second task routines to sequentially perform the first and second tasks, respectively, and instantiate the shared memory space to be accessible to the first and second task routines during their executions; and upon completion of the job flow, transmit an indication of completion to another device via a network. | 2021-08-19 |
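The queue-plus-shared-memory hand-off in this abstract can be modeled in a few lines: a message in the task queue names the first task and flags that its output data object travels through a shared memory space to the second task. Here a `dict` stands in for the shared memory space and `queue.Queue` for the task queue; both are modeling assumptions, not the patented apparatus.

```python
# Toy model of two sequential tasks exchanging a data object via shared
# memory, coordinated by a message in a task queue.
import queue

shared_memory = {}        # models the instantiated shared memory space
task_queue = queue.Queue()  # models the task queue

def first_task():
    """First task writes its output data object into the shared space."""
    shared_memory["data_object"] = [1, 2, 3]

def second_task():
    """Second task reads the same data object as its input."""
    return sum(shared_memory["data_object"])

# The message conveys the first task's identifier and the exchange mechanism.
task_queue.put({"task_id": "first", "exchange": "shared_memory"})
msg = task_queue.get()
if msg["exchange"] == "shared_memory":
    first_task()
    result = second_task()
print(result)  # 6
```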
20210255844 | Creating and Using Native Virtual Probes in Computing Environments - Concepts and technologies are disclosed herein for creating and using native virtual probes in computing environments. A request for a service that includes a virtual function can be received, where the virtual function is to be monitored by a native virtual probe. An image of the service can be obtained, where the image can include a first image component for the virtual function and a second image component for the native virtual probe. The image can be deployed. Deployment of the image can result in instantiation of the virtual function on a computing device and instantiation of the native virtual probe on the computing device. | 2021-08-19 |
20210255845 | ON-BOARD UPDATE APPARATUS, PROGRAM, AND METHOD FOR UPDATING PROGRAM OR DATA - An on-board update apparatus is an on-board update apparatus for performing a process for updating a program or data of an on-board ECU, the on-board update apparatus including: a receiving unit configured to receive update information on an update of the program or data; an obtaining unit configured to obtain, based on the update information received by the receiving unit, the program or data from a providing apparatus configured to provide the program or data; a storage unit configured to store a current program or data obtained from the on-board ECU in a predetermined storage area; and a transmitting unit configured to transmit, after the storage unit stores the current program or data, the program or data obtained by the obtaining unit to the on-board ECU. | 2021-08-19 |
20210255846 | COGNITIVELY DETERMINING UPDATES FOR CONTAINER BASED SOLUTIONS - In an approach to cognitively determining and applying image updates to one or more containers, one or more computer processors detect an updated image for a container. The one or more computer processors, responsive to a pull request for the detected updated image, create a set of update information, wherein the set of update information includes one or more of: bug fixes, features of the updated image, developer suggestions, and details of limitations introduced in the updated image. The one or more computer processors calculate a requirement value for the updated image. The one or more computer processors, responsive to exceeding a requirement threshold, update the container with the updated image. | 2021-08-19 |
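The "requirement value vs. threshold" decision can be sketched as a weighted score over the update information. The weights, field names, and threshold below are assumptions for illustration; the patent does not disclose its actual formula.

```python
# Illustrative scoring of an updated container image against a threshold.
# Weights and update-info fields are invented for this sketch.

def requirement_value(update_info, weights=None):
    """Weighted sum of counts of bug fixes, new features, and limitations."""
    weights = weights or {"bug_fixes": 3.0, "features": 1.0, "limitations": -2.0}
    return sum(weights[k] * len(update_info.get(k, [])) for k in weights)

def should_update(update_info, threshold=4.0):
    """Update the container only if the value exceeds the threshold."""
    return requirement_value(update_info) > threshold

info = {"bug_fixes": ["CVE-2021-1234"], "features": ["faster start"], "limitations": []}
print(requirement_value(info), should_update(info))  # 4.0 False
```

Note that limitations introduced by the update lower the score, so a feature-rich image with serious new limitations can still fall below the threshold.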
20210255847 | MODEL-BASED DIFFERENCING TO SELECTIVELY GENERATE AND DEPLOY IMAGES IN A TARGET COMPUTING ENVIRONMENT - A system includes a memory that stores computer-executable components and a processor, operably coupled to the memory, that executes the computer-executable components stored in the memory. The computer-executable components include a deployment generator component that analyzes current component versions of application services and determines differences with previous deployment versions of the application services deployed to a target computing environment. A service deployment output component generates instructions to selectively update the application services to the target computing environment based on the determined differences between the previous deployment versions and the current component versions of the application services. | 2021-08-19 |
20210255848 | SECURELY UPDATING SOFTWARE ON CONNECTED ELECTRONIC DEVICES - This disclosure describes, in part, techniques for securely updating a point-of-sale (POS) system that includes a merchant-facing device and a buyer-facing device. For instance, the merchant-facing device may execute first software that provides first POS functionality and the buyer-facing device may execute second software that provides second POS functionality. To update both devices, the merchant-facing device may receive a software update from a payment service via a network connection, and update the first software using the software update. The merchant-facing device can then cause, via a physical connection, the buyer-facing device to reboot in an update mode and send the software update to the buyer-facing device. In response, the buyer-facing device can update the second software using the software update and then reboot in a payments mode. In some instances, the buyer-facing device can then update a secure enclave on the buyer-facing device using the software update. | 2021-08-19 |
20210255849 | INFORMATION PROCESSING APPARATUS AND METHOD - According to one embodiment, an information processing apparatus includes: a non-volatile first memory configured to store first data relating to an operation of the information processing apparatus; a main memory configured to store the first data loaded from the first memory; a receiving unit configured to receive second data from an external apparatus that manages the second data for updating the first data; a non-volatile second memory configured to store the second data received by the receiving unit; and an update unit configured to update the first data stored in the first memory using the second data stored in the second memory while the information processing apparatus is operating based on the first data loaded to the main memory. | 2021-08-19 |
20210255850 | HOT UPDATES TO CONTROLLER SOFTWARE USING TOOL CHAIN - Disclosed embodiments relate to performing updates to Electronic Control Unit (ECU) software while an ECU of a vehicle is operating. Operations may include receiving, at the vehicle while the ECU of the vehicle is operating, a software update file for the ECU software; writing, while the ECU is operating, the software update file into a first memory location in a memory of the ECU while simultaneously executing a code segment of existing code in a second memory location in the memory of the ECU; and updating a plurality of memory addresses associated with the memory of the ECU based on the software update file and without interrupting the execution of the code segment currently being executed in the second memory location in the memory of the ECU. | 2021-08-19 |
20210255851 | CODE MONITORING AND RESTRICTING OF EGRESS OPERATIONS - One example method of operation may include identifying an attempted action taken to code, determining whether to block the attempted action based on one or more of user profile access rights assigned to a user profile and a code permission assigned to the code, and responsive to determining whether to block the attempted action, blocking one or more of access to the code, access to a file containing the code and a port used to connect to a server hosting the code. | 2021-08-19 |
20210255852 | MERGING CHANGES FROM UPSTREAM CODE TO A BRANCH - A computer-implemented method is provided for program repository management. The method includes identifying commits in an upstream commit log of an upstream branch and commits in a development commit log of a development branch. The method further includes extracting the commits in the development commit log of the development branch. The method also includes identifying, by a hardware processor, code in the upstream commit log which is identical or similar to the extracted commits from the commit log of the development branch. The method additionally includes showing the identified code as a commit candidate of change in an upstream program code. | 2021-08-19 |
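The "identical or similar" matching step can be sketched with plain textual similarity over commit messages. Using `difflib.SequenceMatcher` and a 0.8 cutoff is an assumption for this sketch; the patent does not specify its similarity measure.

```python
# Minimal sketch of matching development commits against an upstream log
# by textual similarity to surface commit candidates.
import difflib

def find_commit_candidates(upstream_log, dev_log, cutoff=0.8):
    """Return (dev, upstream, ratio) triples for similar commit messages."""
    candidates = []
    for dev_msg in dev_log:
        for up_msg in upstream_log:
            ratio = difflib.SequenceMatcher(None, dev_msg, up_msg).ratio()
            if ratio >= cutoff:
                candidates.append((dev_msg, up_msg, round(ratio, 2)))
    return candidates

upstream = ["Fix null pointer in parser", "Add logging to scheduler"]
develop = ["Fix null pointer in the parser", "Refactor config loader"]
print(find_commit_candidates(upstream, develop))
```

In practice the comparison would run over the committed diffs rather than only the messages, but the candidate-surfacing shape is the same.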
20210255853 | VERSION CONTROL MECHANISMS AUGMENTED WITH SEMANTIC ANALYSIS FOR DETERMINING CAUSE OF SOFTWARE DEFECTS - A plurality of metadata corresponding to a plurality of code versions of an application stored in a version control system is generated. A determination is made of a set of changes between a first metadata of a first code version and a second metadata of a second code version. A classification is made of elements in the set of changes into a first category and a second category based on a set of predetermined rules, wherein the elements classified into the first category are better candidates to determine causes of defects in the application than the elements classified into the second category. The elements classified in the first category are used to determine a cause of a defect in the application. | 2021-08-19 |
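The two-category classification over metadata changes can be sketched as a rule table: change kinds that are strong defect-cause candidates go in the first category, everything else in the second. The specific kinds below are invented for illustration, not the patent's rule set.

```python
# Sketch of rule-based classification of version-metadata changes into
# likely defect causes (first category) and noise (second category).

FIRST_CATEGORY_KINDS = {"function_signature", "dependency_version", "config_flag"}

def classify_changes(changes):
    """Split changes by kind according to the predetermined rules."""
    first, second = [], []
    for change in changes:
        (first if change["kind"] in FIRST_CATEGORY_KINDS else second).append(change)
    return first, second

changes = [
    {"kind": "function_signature", "name": "parse()"},
    {"kind": "comment", "name": "README header"},
    {"kind": "dependency_version", "name": "libfoo 1.2 -> 1.3"},
]
first, second = classify_changes(changes)
print(len(first), len(second))  # 2 1
```

Defect triage would then inspect only the first-category elements, shrinking the search space between two code versions.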
20210255854 | AUTOMATED BRANCHING WORKFLOW FOR A VERSION CONTROL SYSTEM - A traditional version control system workflow for branching and release management is not compatible with development environments that require long release cycles. When a release branch from a long release cycle is merged into a master branch, bug fixes made to the release branch and to the master branch are not merged into the development branch in a timely manner. A version control system automatically merges document versions from the release branch into the development branch when changes are merged into the release branch or the master branch, thus keeping the development current with respect to the release and master branches without formally closing the release branch. | 2021-08-19 |
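The automatic propagation this abstract describes can be modeled with branches as sets of change IDs: whenever a change lands on the release or master branch, it is also merged into development. This is a toy model of the workflow, not the version control system's actual merge machinery.

```python
# Toy model of the automated branching workflow: fixes landing on the
# release or master branch are auto-merged into the development branch.

def merge(source, target):
    """Model a merge as a set union of change IDs."""
    target |= source

def land_fix(branches, branch_name, change_id):
    """Apply a change to one branch, then auto-propagate to development."""
    branches[branch_name].add(change_id)
    if branch_name in ("release", "master"):
        merge(branches[branch_name], branches["development"])

branches = {"master": set(), "release": set(), "development": set()}
land_fix(branches, "release", "bugfix-101")
land_fix(branches, "master", "hotfix-202")
print(sorted(branches["development"]))  # ['bugfix-101', 'hotfix-202']
```

The point of the workflow is visible in the final state: development already holds both fixes without the release branch ever being formally closed.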
20210255855 | COMPUTER-BASED SYSTEMS CONFIGURED TO GENERATE AND/OR MAINTAIN RESILIENT VERSIONS OF APPLICATION DATA USABLE BY OPERATIONALLY DISTINCT CLUSTERS AND METHODS OF USE THEREOF - Systems and methods associated with generating and/or maintaining resilient versions of application data usable by operationally distinct clusters are disclosed. In one embodiment, an exemplary method may comprise operating plural instances of a software application in a first cluster and a second cluster, assessing requirements of streaming architecture of both clusters that impact the instances' ability to process application data, creating at least two versions of the application including a main version in one cluster and a replica version for an operationally distinct cluster, automatically mirroring replica versions of the application data from each cluster into a distinct cluster for access and use by the software instance in the distinct cluster, and storing indexes for main and replica versions and all data that the software application requires to provide consistent responses in all such operationally distinct clusters. | 2021-08-19 |
20210255856 | HYBRID QUANTUM-CLASSICAL COMPUTER FOR VARIATIONAL COUPLED CLUSTER METHOD - A hybrid quantum-classical (HQC) computer, which includes both a classical computer component and a quantum computer component, solves linear systems. The HQC decomposes the linear system to be solved into subsystems that are small enough to be solved by the quantum computer component, under control of the classical computer component. The classical computer component synthesizes the outputs of the quantum computer component to generate the complete solution to the linear system. | 2021-08-19 |
20210255857 | INTELLIGENT THREAD DISPATCH AND VECTORIZATION OF ATOMIC OPERATIONS - A mechanism is described for facilitating intelligent dispatching and vectorizing at autonomous machines. A method of embodiments, as described herein, includes detecting a plurality of threads corresponding to a plurality of workloads associated with tasks relating to a graphics processor. The method may further include determining a first set of threads of the plurality of threads that are similar to each other or have adjacent surfaces, and physically clustering the first set of threads close together using a first set of adjacent compute blocks. | 2021-08-19 |
20210255858 | COMPUTATION DEVICE - The purpose of the invention is to reduce the amount of copying of information elements generated when renaming a physical register. With respect to information elements stored in logic registers, this computation device stores, in a third logic register, a third element group that includes information elements representing computation results related to vector operations between portions of a first element group that have a predetermined vector length and portions of a second element group that have a predetermined vector length, the first element group and the second element group being groups of information elements stored in a first logic register and second logic register, respectively. The number of regions in which the information elements can be stored simultaneously and which are provided in the physical registers storing the first element group, the second element group, and the third element group is two or less. | 2021-08-19 |
20210255859 | MACRO-OP FUSION - Systems and methods are disclosed for macro-op fusion. Sequences of macro-ops that include a control-flow instruction are fused into single micro-ops for execution. The fused micro-ops may avoid the use of control-flow instructions, which may improve performance. A fusion predictor may be used to facilitate macro-op fusion. | 2021-08-19 |
20210255860 | ISA ENHANCEMENTS FOR ACCELERATED DEEP LEARNING - Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element is enabled to execute instructions in accordance with an ISA. The ISA is enhanced in accordance with improvements with respect to deep learning acceleration. | 2021-08-19 |
20210255861 | ARITHMETIC LOGIC UNIT - Systems, apparatuses, and methods related to arithmetic logic circuitry are described. A method utilizing such arithmetic logic circuitry can include performing, using a processing device, a first operation using one or more vectors formatted in a posit format. The one or more vectors can be provided to the processing device in a pipelined manner. The method can include performing, by executing instructions stored by a memory resource, a second operation using at least one of the one or more vectors and outputting, after a fixed quantity of time, a result of the first operation, the second operation, or both. | 2021-08-19 |
20210255862 | Initialization of Parameters for Machine-Learned Transformer Neural Network Architectures - An online system trains a transformer architecture by an initialization method which allows the transformer architecture to be trained without normalization layers or learning rate warmup, resulting in significant improvements in computational efficiency for transformer architectures. Specifically, an attention block included in an encoder or a decoder of the transformer architecture generates the set of attention representations by applying a key matrix to the input key, a query matrix to the input query, and a value matrix to the input value to generate an output, and applying an output matrix to the output to generate the set of attention representations. The initialization method may be performed by scaling the parameters of the value matrix and the output matrix with a factor that is inverse to a number of the set of encoders or a number of the set of decoders. | 2021-08-19 |
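The scaling step can be sketched directly: key and query projections get a standard initialization, while value and output projections are down-scaled by a factor inversely related to the layer count. The exact factor here (`1/num_layers`), the base standard deviation, and the list-of-lists matrix representation are assumptions for the sketch; the patent only states that the factor is inverse to the number of encoders or decoders.

```python
# Sketch of depth-dependent initialization for an attention block: value
# and output projection weights are scaled down relative to key/query.
import random

def init_matrix(rows, cols, scale=1.0):
    """Gaussian-initialized matrix, optionally down-scaled."""
    return [[random.gauss(0.0, 0.02) * scale for _ in range(cols)]
            for _ in range(rows)]

def init_attention_block(d_model, num_layers):
    """Key/query use the base scale; value/output use the inverse-depth scale."""
    scale = 1.0 / num_layers
    return {
        "key": init_matrix(d_model, d_model),
        "query": init_matrix(d_model, d_model),
        "value": init_matrix(d_model, d_model, scale=scale),
        "output": init_matrix(d_model, d_model, scale=scale),
    }

block = init_attention_block(d_model=4, num_layers=8)
```

The intuition is that shrinking the value and output projections keeps each residual update small at initialization, which is what removes the need for warmup.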
20210255863 | PROCESSING METHOD AND PROCESSING DEVICE WITH MATRIX MULTIPLICATION COMPUTATION - A processor-implemented method includes: determining a first multiplication matrix and a second multiplication matrix, based on an input multiplicand matrix and an input multiplier matrix that are generated from an input signal; determining a matrix to be restored, based on the first multiplication matrix and the second multiplication matrix; determining a matrix restoration constraint value, based on the matrix to be restored; determining a multiplication result of the input multiplicand matrix and the input multiplier matrix, based on the matrix restoration constraint value and the matrix to be restored; and analyzing the input signal based on the multiplication result. | 2021-08-19 |
20210255864 | Multiple Types of Thread Identifiers for a Multi-Threaded, Self-Scheduling Reconfigurable Computing Fabric - Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array. A representative configurable circuit includes a configurable computation circuit and a configuration memory having a first, instruction memory storing a plurality of data path configuration instructions to configure a data path of the configurable computation circuit; and a second, instruction and instruction index memory storing a plurality of spoke instructions and data path configuration instruction indices for selection of a master synchronous input, a current data path configuration instruction, and a next data path configuration instruction for a next configurable computation circuit. | 2021-08-19 |
20210255865 | APPARATUS AND METHOD FOR CONFIGURING SETS OF INTERRUPTS - An apparatus and method are described for efficiently processing and reassigning interrupts. For example, one embodiment of an apparatus comprises: a plurality of cores; and an interrupt controller to group interrupts into a plurality of interrupt domains, each interrupt domain to have a set of one or more interrupts assigned thereto and to map the interrupts in the set to one or more of the plurality of cores. | 2021-08-19 |
20210255866 | ACCELERATION UNIT, SYSTEM-ON-CHIP, SERVER, DATA CENTER, AND RELATED METHOD - An acceleration unit including a primary core and a secondary core. The primary core includes: a first on-chip memory; a primary core sequencer adapted to decode a received first cross-core copy instruction; and a primary core memory copy engine adapted to acquire a first operand from a first address in the first on-chip memory and copy the acquired first operand to a second address in a second on-chip memory of the secondary core. The secondary core includes: a second on-chip memory; a secondary core sequencer adapted to decode a received second cross-core copy instruction; and a secondary core memory copy engine adapted to acquire the first operand from the second address in the second on-chip memory and copy the acquired first operand back to the first address in the first on-chip memory. | 2021-08-19 |
20210255867 | FUNCTION VIRTUALIZATION FACILITY FOR BLOCKING INSTRUCTION FUNCTION OF A MULTI-FUNCTION INSTRUCTION OF A VIRTUAL PROCESSOR - In a processor supporting execution of a plurality of functions of an instruction, an instruction blocking value is set for blocking one or more of the plurality of functions, such that an attempt to execute one of the blocked functions, will result in a program exception and the instruction will not execute, however the same instruction will be able to execute any of the functions that are not blocked functions. | 2021-08-19 |
20210255868 | Scaling Performance Across a Large Number of Customer Nodes - Described are systems and methods for scaling performance across a large number of customer nodes by delegating management of execution of one or more tasks to the customer nodes. An example method may commence with ascertaining a set of the customer nodes eligible for delegation of the one or more tasks. The method may continue with deploying one or more control agents to the eligible set of the customer nodes. The one or more control agents may be configured to coordinate and execute the one or more tasks on the eligible set of customer nodes and selectively take one or more actions based on results of the execution of the one or more tasks. | 2021-08-19 |
20210255869 | METHOD FOR PERFORMING RANDOM READ ACCESS TO A BLOCK OF DATA USING PARALLEL LUT READ INSTRUCTION IN VECTOR PROCESSORS - This disclosure is directed to the problem of parallelizing random read access within a reasonably sized block of data for a vector SIMD processor. The invention sets up plural parallel look up tables, moves data from main memory to each parallel look up table and then employs a look up table read instruction to simultaneously move data from each parallel look up table to a corresponding part of a vector destination register. This enables data processing by vector single instruction multiple data (SIMD) operations. This vector destination register load can be repeated if the tables store more used data. New data can be loaded into the original tables if appropriate. A level one memory is preferably partitioned as part data cache and part directly addressable memory. The look up table memory is stored in the directly addressable memory. | 2021-08-19 |
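The mechanism can be sketched in scalar Python: the table is replicated once per SIMD lane, and a single "LUT read" then gathers one independently addressed element per lane into the destination vector. The four-lane width and table contents are illustrative assumptions.

```python
# Scalar model of a parallel LUT read: each lane owns a copy of the
# table, so one vector instruction performs one random read per lane.

NUM_LANES = 4

def setup_parallel_luts(table):
    """Replicate the table once per lane (models the per-lane LUT memory)."""
    return [list(table) for _ in range(NUM_LANES)]

def lut_read(luts, indices):
    """One vector LUT read: lane i fetches luts[i][indices[i]]."""
    return [luts[lane][indices[lane]] for lane in range(NUM_LANES)]

luts = setup_parallel_luts([10, 20, 30, 40, 50])
print(lut_read(luts, [0, 2, 4, 1]))  # [10, 30, 50, 20]
```

Replication trades memory for parallelism: without per-lane copies, the lanes' random indices would serialize on a single table.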
20210255870 | System and Method for Instruction Unwinding in an Out-of-Order Processor - A system and corresponding method unwind instructions in an out-of-order (OoO) processor. The system comprises a mapper. In response to a restart event causing at least one instruction to be unwound, the mapper restores a present integer mapper state and present floating-point (FP) mapper state, used for mapping instructions, to a former integer mapper state and former FP mapper state, respectively. The mapper stores integer snapshots and FP snapshots of the present integer and FP mapper state, respectively, to expedite restoration to the former integer and FP mapper state, respectively. Access to the FP snapshots is blocked, intermittently, as a function of at least one FP present indicator used by the mapper to record presence of FP registers used as destinations in the instructions. Blocking the access, intermittently, improves power efficiency of the OoO processor. | 2021-08-19 |
20210255871 | LOOK-AHEAD TELEPORTATION FOR RELIABLE COMPUTATION IN MULTI-SIMD QUANTUM PROCESSOR - A technique for processing qubits in a quantum computing device is provided. The technique includes determining that, in a first cycle, a first quantum processing region is to perform a first quantum operation that does not use a qubit that is stored in the first quantum processing region, identifying a second quantum processing region that is to perform a second quantum operation at a second cycle that is later than the first cycle, wherein the second quantum operation uses the qubit, determining that between the first cycle and the second cycle, no quantum operations are performed in the second quantum processing region, and moving the qubit from the first quantum processing region to the second quantum processing region. | 2021-08-19 |
20210255872 | SYSTEMS AND METHODS FOR MINIMIZING BOOT TIME AND MINIMIZING UNAUTHORIZED ACCESS AND ATTACK SURFACE IN BASIC INPUT/OUTPUT SYSTEM - An information handling system may include a processor and a basic input/output system communicatively coupled to the processor and comprising a plurality of firmware volumes embodied in non-transitory computer readable media, each firmware volume comprising executable code for a respective functionality of the basic input/output system, wherein the basic input/output system is configured to, based on the presence or absence of an action or event associated with the basic input/output system, select a boot path for execution from a plurality of boot paths, each of the plurality of boot paths comprising a respective trust chain of a subset of the plurality of firmware volumes and execute the boot path selected. | 2021-08-19 |
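The boot-path selection described in this abstract amounts to choosing a trust chain of firmware volumes based on detected actions or events. The path names, volume names, and trigger events below are invented for the sketch; the patent does not enumerate them.

```python
# Sketch of selecting a boot path (a trust chain of firmware volumes)
# from several predefined paths based on observed events.

BOOT_PATHS = {
    "full": ["SEC", "PEI", "DXE", "BDS", "Setup", "Network"],
    "minimal": ["SEC", "PEI", "DXE", "BDS"],
    "recovery": ["SEC", "PEI", "Recovery"],
}

def select_boot_path(events):
    """Pick the smallest trust chain consistent with the observed events."""
    if "corruption_detected" in events:
        return BOOT_PATHS["recovery"]
    if "setup_requested" in events or "pxe_boot" in events:
        return BOOT_PATHS["full"]
    return BOOT_PATHS["minimal"]

print(select_boot_path(set()))         # minimal chain on a normal boot
print(select_boot_path({"pxe_boot"}))  # full chain when network boot is needed
```

Executing only the volumes in the selected chain is what yields both the faster boot and the smaller attack surface the title claims.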
20210255873 | SYSTEMS AND METHODS FOR BINDING SECONDARY OPERATING SYSTEM TO PLATFORM BASIC INPUT/OUTPUT SYSTEM - An information handling system may include a processor, non-transitory computer readable media communicatively coupled to the processor and having stored thereon a primary operating system of the information handling system and a secondary operating system of the information handling system, and a basic input/output system communicatively coupled to the processor and having provisioned thereon a signed signature of the secondary operating system signed with a private key of a public-private key pair and a public key of the public-private key pair. The basic input/output system is configured to, responsive to a determination to boot to the secondary operating system in lieu of booting to the primary operating system of the information handling system verify the secondary operating system using the signed signature of the secondary operating system and the public key and responsive to verifying the secondary operating system, allow the information handling system to boot to the secondary operating system. | 2021-08-19 |
20210255874 | ACCELERATED SYSTEM BOOT - An information handling system may include at least one processor, and a computer-readable medium having instructions thereon that are executable by the at least one processor. The instructions may be executable for: in response to detection of a first trigger event, enabling an accelerated boot process; and in response to detection of a second, different trigger event, enabling a non-accelerated boot process. The non-accelerated boot process may include parsing an internal forms representation (IFR), and the accelerated boot process may include not parsing the IFR. | 2021-08-19 |
20210255875 | IDENTIFIER AUTOMATIC ASSIGNING DATA PROCESSING BOARD, DATA PROCESSING MODULE INCLUDING IDENTIFIER AUTOMATIC ASSIGNING DATA PROCESSING BOARD AND DATA PROCESSING SYSTEM INCLUDING DATA PROCESSING MODULE - An embodiment of the present disclosure includes a data processing module including at least one data processing board that automatically assigns an identifier according to a voltage value measured in its internal circuit and a communication board for transmitting and receiving signals to/from the data processing board, and a data processing system including the data processing module and a monitor collecting device for selecting and parallel-processing the data received from the data processing module. | 2021-08-19 |
20210255876 | AUTOMATIC FORMATION OF A VIRTUAL CHASSIS USING ZERO TOUCH PROVISIONING - A network device may obtain information concerning a virtual chassis that indicates that the network device and an additional network device are to be included in the virtual chassis. The network device may determine, based on the information concerning the virtual chassis, that the network device is connected to the additional network device, wherein the network device is connected to the additional network device via a link between a network interface of the network device and a network interface of the additional network device. The network device may cause the network interface of the network device to be converted to a virtual chassis interface and the network interface of the additional network device to be converted to a virtual chassis interface to enable the network device and the additional network device to be included in the virtual chassis to allow bootstrapping of the virtual chassis as a single logical device. | 2021-08-19 |
20210255877 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An information processing apparatus includes: a processor configured to acquire configuration information which is information on a module used in software, the software being configured with the module which realizes a predetermined function; acquire module information which is information on a module used in a device into which the software is introduced, from the device; specify the software introduced into the device, based on the configuration information and the module information; and perform control of displaying the introduced software and displaying a screen for changing a setting of the module in correspondence with the displayed software. | 2021-08-19 |
20210255878 | DOMESTIC APPLIANCE, DOMESTIC APPLIANCE SYSTEM, AND METHOD FOR OPERATING A DOMESTIC APPLIANCE - A network-compatible household appliance includes a communications facility for coupling via a data link to an external database. The household appliance is configured to receive operating parameters relevant for an operating sequence from the database via the communications facility, to store the operating parameters for implementing the operating sequence by the household appliance, and, when triggered by a user, to automatically transfer the operating parameters for the operating sequence to the database via the communications facility. | 2021-08-19 |
20210255879 | IMPROVED PROCESS OF PROGRAMMING FIELD PROGRAMMABLE GATE ARRAYS USING PARTIAL RECONFIGURATION - Programming field programmable gate array (FPGA) digital electronic integrated circuits (ICs) or other ICs that support partial reconfiguration, a particular FPGA having reconfigurable partitions and primitive variations configurable in each of the reconfigurable partitions, comprises: before writing configuration bitstreams to the FPGA, compiling and storing, in digital storage, primitive bitstreams for different primitive functions that can be implemented on the particular FPGA; receiving input in a graphical user interface to connect graphical blocks representing functional logic of an algorithm to implement on the particular FPGA, the graphical blocks relating to reconfigurable logic; automatically determining a subset of the primitive functions comprising particular primitive functions that correspond to the graphical blocks; obtaining, from the digital storage, a subset of the primitive bitstreams that corresponds to the subset of the primitive functions; and, using partial reconfiguration operations, writing the subset of the primitive bitstreams to the particular FPGA. | 2021-08-19 |
20210255880 | SYSTEMS AND METHODS FOR USER INTERFACE ADAPTATION FOR PER-USER METRICS - A computer system for transforming a user interface according to data store mining includes a data store configured to store a parameter related to a user and index event data of a set of events. A data processing circuit is configured to identify a first set of identifiers and train a machine learning model based on event data indexed by the data store. An interface circuit is configured to receive an indication of a selected identifier of the first set of identifiers, determine a first intake metric of the selected identifier using the machine learning model, and a second intake metric of the selected identifier and the parameter using the machine learning model. The interface circuit is configured to transform the user interface according to the first intake metric and the second intake metric. | 2021-08-19 |
20210255881 | CONTROL DEVICE AND CONTROL METHOD - The objective of the present invention is to prevent a conflict between variable names and consequently the unintentional overwriting of data when a plurality of programs that define a shared variable exist. A control device ( | 2021-08-19 |
20210255882 | DYNAMIC DEVICE VIRTUALIZATION FOR USE BY GUEST USER PROCESSES BASED ON OBSERVED BEHAVIORS OF NATIVE DEVICE DRIVERS - A system and method for providing dynamic device virtualization is herein disclosed. According to one embodiment, the computer-implemented method includes providing a hypervisor and one or more guest virtual machines (VMs). Each guest VM is disposed to run a guest user process and the hypervisor is split into a device hypervisor and a compute hypervisor. The computer-implemented method further includes providing an interface between the device hypervisor and the compute hypervisor. The compute hypervisor manages an efficient use of CPU and memory of a host and the device hypervisor manages a device connected to the host by exploiting hardware acceleration of the device. | 2021-08-19 |
20210255883 | INTEGRITY-PRESERVING COLD MIGRATION OF VIRTUAL MACHINES - A method includes identifying a source virtual machine to be migrated from a source domain to a target domain, extracting file-in-use metadata and shared asset metadata from virtual machine metadata of the source virtual machine, and copying one or more files identified in the file-in-use metadata to a target virtual machine in the target domain. For each of one or more shared assets identified in the shared asset metadata, the method further includes (a) determining whether or not the shared asset already exists in the target domain, (b) responsive to the shared asset already existing in the target domain, updating virtual machine metadata of the target virtual machine to specify the shared asset, and (c) responsive to the shared asset not already existing in the target domain, copying the shared asset to the target domain and updating virtual machine metadata of the target virtual machine to specify the shared asset. | 2021-08-19 |
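The shared-asset branch of this migration flow can be sketched as a short Python routine. The function and parameter names (`migrate`, `copy_file`, `copy_asset`) are hypothetical placeholders for illustration, not the patented implementation:

```python
def migrate(vm_metadata, target_domain_assets, copy_file, copy_asset):
    # Copy every file identified in the file-in-use metadata.
    for f in vm_metadata["files_in_use"]:
        copy_file(f)
    target_vm_metadata = {"files": list(vm_metadata["files_in_use"]),
                          "shared_assets": []}
    for asset in vm_metadata["shared_assets"]:
        if asset not in target_domain_assets:
            # Asset does not yet exist in the target domain: copy it first.
            copy_asset(asset)
            target_domain_assets.add(asset)
        # In either case, update the target VM metadata to specify the asset.
        target_vm_metadata["shared_assets"].append(asset)
    return target_vm_metadata

copied_files, copied_assets = [], []
meta = migrate({"files_in_use": ["disk.vmdk"],
                "shared_assets": ["iso-a", "iso-b"]},
               {"iso-a"}, copied_files.append, copied_assets.append)
```

Only the asset missing from the target domain (`iso-b`) is copied, yet both assets end up referenced in the target VM's metadata, preserving integrity without duplicating shared data.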
20210255884 | MIGRATION OF A DESKTOP WORKLOAD - A computer system includes a client device, geographically distributed data centers and a server. The client device remotely accesses a virtual desktop, with the virtual desktop configured to run and store a workload for an end-user of the client device. One of the data centers is assigned to host a virtual desktop for the client device based on a current location of the end-user. The server determines an indication of a future change in location of the end-user from the current location to a target location that is different from the current location. The server further determines which data center is to be reassigned to host the virtual desktop in response to the determined indication, and cooperates with the data centers to migrate the workload to the reassigned data center in response to travel of the end-user to the target location. | 2021-08-19 |
20210255885 | SYSTEM AND METHOD FOR MULTI-CLUSTER STORAGE - An illustrative embodiment disclosed herein is an apparatus including a processor having programmed instructions to maintain an object store including a primary cluster having one or more compute resources and one or more first storage resources, identify a secondary cluster having one or more second storage resources, select the secondary cluster to be added to the object store, allocate an available portion of the one or more second storage resources to the object store, and shard an object across the one or more second storage resources and the available portion of the one or more second storage resources. | 2021-08-19 |
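The sharding step at the end of this abstract can be sketched in Python. The round-robin placement and the 4-byte chunk size are illustrative assumptions, not details taken from the patent:

```python
def shard_object(data: bytes, resources: list) -> dict:
    # Shard the object across storage resources by round-robining
    # fixed-size chunks over every resource in the object store.
    chunk = 4  # chunk size in bytes (illustrative)
    placement = {r: [] for r in resources}
    for i in range(0, len(data), chunk):
        resource = resources[(i // chunk) % len(resources)]
        placement[resource].append(data[i:i + chunk])
    return placement

# Storage resources from the primary cluster plus a newly added
# secondary cluster both receive shards of the same object.
placement = shard_object(b"abcdefghij", ["primary", "secondary"])
```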
20210255886 | DISTRIBUTED MODEL EXECUTION - Distributed model execution, including: identifying, for each model of a plurality of models, based on one or more execution constraints for the plurality of models, a corresponding node of a plurality of nodes, wherein the plurality of nodes each comprise one or more computing devices or one or more virtual machines; deploying each model of the plurality of models to the identified corresponding node of the plurality of nodes; and wherein the plurality of models are configured to generate, based on data input to at least one model of the plurality of models, a prediction associated with the data. | 2021-08-19 |
20210255887 | Dedicated Distribution of Computing Resources in Virtualized Environments - Concepts and technologies directed to dedicated optical distribution of computing resources in virtualized environments are disclosed herein. In various aspects, a system can include a processor and memory storing instructions that, upon execution, cause performance of operations. The operations can include receiving a virtual machine creation request that includes a virtual processing requirement and a virtual memory requirement for a virtual machine. The operations can include accessing a physical host infrastructure map that identifies remainder resources from physical host servers within a datacenter. The operations can include creating a simulation test routine and assembling a candidate resource set from the remainder resources. The operations can include establishing a dedicated processing path and a dedicated memory path for the candidate resource set. The operations can include initiating the simulation test routine on the candidate resource set via the dedicated processing path and the dedicated memory path. | 2021-08-19 |
20210255888 | EXECUTING COMMANDS IN A VIRTUAL ENVIRONMENT - An apparatus for executing one or more commands, for use with a virtualization environment operable to execute one or more virtualization functions, the apparatus comprising: an interface operable to determine an identifier associated with a first virtualization function; a parser operable to determine one or more commands available for execution using the first virtualization function; a store for storing each determined command with the first virtualization function identifier; a searcher, responsive to input of a first command, for matching the first command with each determined command in order to determine one or more matching commands; and an executor, responsive to selection of a first matching command, for executing the associated first virtualization function and the first matching command. | 2021-08-19 |
20210255889 | Hardware Transactional Memory-Assisted Flat Combining - An HTM-assisted Combining Framework (HCF) may enable multiple (combiner and non-combiner) threads to access a shared data structure concurrently using hardware transactional memory (HTM). As long as a combiner executes in a hardware transaction and ensures that the lock associated with the data structure is available, it may execute concurrently with other threads operating on the data structure. HCF may include attempting to apply operations to a concurrent data structure utilizing HTM and if the HTM attempt fails, utilizing flat combining within HTM transactions. Publication lists may be used to announce operations to be applied to a concurrent data structure. A combiner thread may select a subset of the operations in the publication list and attempt to apply the selected operations using HTM. If the thread fails in these HTM attempts, it may acquire a lock associated with the data structure and apply the selected operations without HTM. | 2021-08-19 |
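The flat-combining fallback described above can be sketched in Python. This sketch deliberately omits the HTM fast path (Python has no hardware transactional memory) and shows only the lock-based combining: threads announce operations on a publication list, and whichever thread gets the lock becomes the combiner and applies everything announced:

```python
import threading

class FlatCombiningCounter:
    """Minimal flat-combining sketch (HTM fast path omitted): threads
    publish operations; the thread that acquires the lock becomes the
    combiner and applies every published operation."""
    def __init__(self):
        self._lock = threading.Lock()
        self._publication_list = []   # announced, not-yet-applied amounts
        self._value = 0

    def add(self, amount):
        self._publication_list.append(amount)   # announce the operation
        if self._lock.acquire(blocking=False):
            try:
                # Combiner role: drain and apply all announced operations.
                while self._publication_list:
                    self._value += self._publication_list.pop()
            finally:
                self._lock.release()
        # A thread that fails the non-blocking acquire relies on the
        # current combiner to apply its announced operation.

    def value(self):
        return self._value

counter = FlatCombiningCounter()
counter.add(1)
counter.add(2)
```

In the patented scheme the combiner would first attempt the selected operations inside a hardware transaction and fall back to this lock-based path only when the HTM attempts fail.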
20210255890 | SYSTEMS AND METHODS FOR STALLING HOST PROCESSOR - Systems and methods for stalling a host processor. In some embodiments, the host processor may be caused to initiate one or more selected transactions, wherein the one or more selected transactions comprise a bus transaction. The host processor may be prevented from completing the one or more selected transactions, to thereby stall the host processor. | 2021-08-19 |
20210255891 | SYSTEM AND METHOD FOR GENERATION OF EVENT DRIVEN, TUPLE-SPACE BASED PROGRAMS - In a system for automatic generation of event-driven, tuple-space based programs from a sequential specification, a hierarchical mapping solution can target different runtimes relying on event-driven tasks (EDTs). The solution uses loop types to encode short, transitive relations among EDTs that can be evaluated efficiently at runtime. Specifically, permutable loops translate immediately into conservative point-to-point synchronizations of distance one. A runtime-agnostic abstraction can be used to target the transformed code to different runtimes. | 2021-08-19 |
20210255892 | SYSTEM AND METHOD OF OBTAINING MULTIPLE FACTOR PERFORMANCE GAIN IN PROCESSING SYSTEM - A processing system including a memory, command sequencers, accelerators, and memory banks. The memory stores program code including instruction threads sequentially listed in the program code. The command sequencers include a master command sequencer and multiple slave command sequencers. The master command sequencer executes the program code including distributing the instruction threads for parallel execution among the slave command sequencers. The instruction threads may be provided inline or accessed via inline thread line pointers. Each accelerator is available to each command sequencer in which multiple command sequencers may access multiple accelerators for parallel execution. The memory banks are simultaneously available to multiple accelerators. The master command sequencer may perform implicit synchronization by waiting for completion of simultaneous execution of multiple instruction threads. A command sequencer arbiter may arbitrate among the command sequencers. A memory bank arbiter may arbitrate among the accelerators for accessing the memory banks. | 2021-08-19 |
20210255893 | METHOD AND SYSTEM FOR MANAGING CONTINUOUS EXECUTION OF AN ACTIVITY DURING A USER DEVICE SWITCHOVER - A method and activity continuation system for managing continuous execution of an activity during a user device switchover is disclosed. The method includes detecting a switchover from a first user device to a second user device, where one or more activities are being executed in the first user device during the switchover. On detecting the switchover, the method includes determining device data and user related data associated with the second user device and applications data associated with one or more activities operated at the first user device. Further, based on the device data, the user related data and the applications data, contextual information is generated for the one or more activities. Thereafter, the method includes managing continuous execution of the one or more activities in the second user device on switchover based on the contextual information. Thus, the present disclosure provides users with application session continuity while switching between user devices. | 2021-08-19 |
20210255894 | GARBAGE COLLECTION WORK STEALING MECHANISM - Systems and methods for processing hierarchical tasks in a garbage collection mechanism are provided. The method includes determining chunks in a task queue. Each chunk is a group of child tasks created after processing one task. The method includes popping, by an owner thread, tasks from a top side of the task queue pointed at by a chunk in a first in first out (FIFO) pop. The method also includes stealing, by a thief thread, tasks from a chunk in an opposite side of the task queue. | 2021-08-19 |
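The chunked queue discipline in this abstract can be sketched in Python: the owner pops FIFO from the chunk at the top of the queue, while a thief steals from a chunk at the opposite side. The class and method names are hypothetical, chosen to mirror the abstract:

```python
from collections import deque

class ChunkedTaskQueue:
    """Each chunk is the group of child tasks created after
    processing one task."""
    def __init__(self):
        self._queue = deque()          # deque of chunks (lists of tasks)

    def push_chunk(self, child_tasks):
        self._queue.append(list(child_tasks))

    def owner_pop(self):
        # Owner thread: take tasks FIFO from the chunk at the top
        # (newest) side of the task queue.
        while self._queue:
            chunk = self._queue[-1]
            if chunk:
                return chunk.pop(0)    # FIFO within the chunk
            self._queue.pop()          # discard exhausted chunk
        return None

    def steal(self):
        # Thief thread: take a task from a chunk at the opposite
        # (oldest) side of the task queue.
        while self._queue:
            chunk = self._queue[0]
            if chunk:
                return chunk.pop()
            self._queue.popleft()      # discard exhausted chunk
        return None

q = ChunkedTaskQueue()
q.push_chunk(["a", "b", "c"])   # children of one processed task
q.push_chunk(["d", "e"])        # children of another
```

Owner and thief thus work on different chunks at opposite ends, which is what keeps their interference low in the real GC mechanism.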
20210255895 | COMPUTER-BASED SYSTEMS CONFIGURED FOR PERSISTENT STATE MANAGEMENT AND CONFIGURABLE EXECUTION FLOW AND METHODS OF USE THEREOF - Embodiments of an activities-defined software object execution management platform include instantiation of a program based on a program configuration, including customizable scheduling configurations and execution steps of program stages. A current state of the program is received from a state persistence storage. A stage configuration of the current stage is configured. A program execution readiness is determined to identify when to execute the current stage of the program based on an execution configuration and program-specific parameterized values. The current stage is instantiated based on the program execution readiness. An execution status of the stage is determined based on a validation configuration. Based on the execution status and a rollback configuration, a previous stage may be determined for rollback before the current stage. The current state is updated in the persistent storage based on the execution of the current stage to form a subsequent state of the program. | 2021-08-19 |
20210255896 | METHOD FOR PROCESSING TASKS IN PARALLEL, DEVICE AND STORAGE MEDIUM - Embodiments of the present disclosure disclose a method for processing tasks in parallel, a device and a storage medium, and relate to the field of artificial intelligence technologies. The method includes: determining at least one parallel computing graph of a target task; determining a parallel computing graph and an operator scheduling scheme based on a hardware execution cost of each operator task of each of the at least one parallel computing graph in a cluster, in which the cluster includes a plurality of nodes for executing the plurality of operator tasks, and each parallel computing graph corresponds to at least one operator scheduling scheme; and scheduling and executing the plurality of operator tasks of the determined parallel computing graph in the cluster based on the determined parallel computing graph and the determined operator scheduling scheme. | 2021-08-19 |
20210255897 | TECHNOLOGIES FOR OPPORTUNISTIC ACCELERATION OVERPROVISIONING FOR DISAGGREGATED ARCHITECTURES - Technologies are disclosed for opportunistic acceleration overprovisioning for disaggregated architectures that include multiple processors on one or more compute devices. The disaggregated architecture also includes a compute device that includes at least one accelerator device and acceleration management circuitry. The acceleration management circuitry receives a plurality of job execution requests and overprovisions one or more accelerators by scheduling two or more job execution requests from among the plurality of job execution requests for execution by each accelerator device. | 2021-08-19 |
20210255898 | SYSTEM AND METHOD OF PREDICTING APPLICATION PERFORMANCE FOR ENHANCED USER EXPERIENCE - A system and method for optimizing the allocation of available system resources for smooth running of the system and enhanced user metrics. The system can monitor the applications including user interactions with the applications for determining the criticality of an application. The system can provide for historical and real-time analysis of the applications and classification of the applications as critical or non-critical. Based on the classification, the system can predictively provide for autonomous optimization and distribution of the system's resources between the critical and non-critical applications. | 2021-08-19 |
20210255899 | Method for Establishing System Resource Prediction and Resource Management Model Through Multi-layer Correlations - A method for establishing system resource prediction and resource management model through multi-layer correlations is provided. The method builds an estimation model by analyzing the relationship between a main application workload, resource usage of the main application, and resource usage of sub-application resources and prepares in advance the specific resources to meet future requirements. This multi-layer analysis, prediction, and management method is different from the prior art, which only focuses on single-level estimation and resource deployment. The present invention can utilize more interactive relationships at different layers to effectively perform predictions, thereby achieving the advantage of reducing hidden resource management costs when operating application services. | 2021-08-19 |
20210255900 | INITIALIZATION DATA MEMORY SPACE ALLOCATION SYSTEM - An initialization data memory space allocation system includes a memory system having a memory space that includes an initialization data bucket that reserves a contiguous subset of the memory space for initialization data. Each initialization engine that is coupled to the memory system is configured during initialization operations to allocate, for that initialization engine, a portion of the contiguous subset of the memory space reserved by the initialization data bucket, and then store initialization data in that portion of the contiguous subset of the memory space reserved by the initialization data bucket. A runtime engine that is coupled to the memory system is configured, during runtime operations, to claim the contiguous subset of the memory space reserved for initialization data by the initialization data bucket for runtime data, and store runtime data in at least a portion of the contiguous subset of the memory space. | 2021-08-19 |
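The bucket's lifecycle described above (bump allocation during init, wholesale reclamation at runtime) can be sketched in Python. The class name and the `bytearray` backing are illustrative assumptions standing in for real firmware memory:

```python
class InitDataBucket:
    """A contiguous region reserved for initialization data: each
    init engine bump-allocates a slice of it, and at runtime the
    entire region is reclaimed for runtime data."""
    def __init__(self, size):
        self._memory = bytearray(size)   # the reserved contiguous subset
        self._next_free = 0
        self._runtime_claimed = False

    def allocate(self, engine_data: bytes) -> int:
        # An initialization engine allocates a portion of the bucket
        # and stores its initialization data there.
        if self._runtime_claimed:
            raise RuntimeError("bucket already reclaimed for runtime data")
        offset = self._next_free
        end = offset + len(engine_data)
        if end > len(self._memory):
            raise MemoryError("bucket exhausted")
        self._memory[offset:end] = engine_data
        self._next_free = end
        return offset   # where this engine's init data landed

    def claim_for_runtime(self) -> bytearray:
        # Runtime engine claims the whole contiguous region.
        self._runtime_claimed = True
        return self._memory

bucket = InitDataBucket(16)
off_a = bucket.allocate(b"abcd")   # first init engine's portion
off_b = bucket.allocate(b"ef")     # second init engine's portion
runtime_buf = bucket.claim_for_runtime()
```

Because every allocation comes from one reserved contiguous range, the runtime engine can later reuse the whole range without tracking scattered init-data fragments.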
20210255901 | ENHANCED HEALING AND SCALABILITY OF CLOUD ENVIRONMENT APP INSTANCES THROUGH CONTINUOUS INSTANCE REGENERATION - Techniques for refreshing application instances periodically based on a refresh rate parameter, providing enhanced health and stability for instances actively executing workloads. When a workload is received requesting one or more application instance(s), a refresh rate is determined, and the instance(s) are monitored. Periodically, based on the refresh rate, the monitored application instance(s) are refreshed. One or more instance(s) are identified for refreshing, one or more new replacement instance(s) are generated, and the identified instances are removed from active service and decommissioned. Workloads continue execution upon the newly generated instances, which are in turn monitored and refreshed as dictated by the refresh rate. | 2021-08-19 |
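The refresh loop this abstract describes can be sketched as a single Python pass over the fleet. Representing instances as dicts with a `started` timestamp is an assumption for illustration:

```python
def refresh_cycle(instances, now, refresh_rate, spawn):
    # Periodically replace any instance whose age has reached the
    # refresh rate; the workload continues on the replacement while
    # the old instance is removed from service and decommissioned.
    result = []
    for inst in instances:
        if now - inst["started"] >= refresh_rate:
            result.append(spawn())   # freshly generated replacement
        else:
            result.append(inst)      # young instance stays in service
    return result

fleet = [{"id": 1, "started": 0}, {"id": 2, "started": 8}]
fleet = refresh_cycle(fleet, now=10, refresh_rate=5,
                      spawn=lambda: {"id": 99, "started": 10})
```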
20210255902 | Cloud Computing Burst Instance Management - An example cloud computing burst management system includes a first cloud computing resource including a first processor and a first memory, a second cloud computing resource including a second processor and a second memory, and one or more data networks connecting the first cloud computing resource and the second cloud computing resource. The first cloud computing resource is configured to perform at least one cloud computing task, and to monitor one or more leading indicator parameters associated with operation of the first cloud computing resource while performing the at least one cloud computing task. In response to the one or more leading indicator parameters satisfying a first burst criteria, the first cloud computing resource is configured to provision a task instance on the second cloud computing resource for performing at least one portion of the cloud computing task. | 2021-08-19 |
20210255903 | DYNAMIC CORE ALLOCATION - Some embodiments provide a method for updating a core allocation among processes of a gateway datapath executing on a gateway computing device having multiple cores. The gateway datapath processes include a first set of data message processing processes to which a first set of the cores are allocated and a second set of processes to which a second set of the cores are allocated in a first core allocation. Based on data regarding usage of the cores, the method determines a second core allocation that allocates a third set of the cores to the first set of processes and a fourth set of the cores to the second set of processes. The method updates a load balancing operation to load balance received data messages over the third set of cores rather than the first set of cores. The method reallocates the cores from the first allocation to the second allocation. | 2021-08-19 |
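The reallocation decision in this abstract can be sketched in Python: given observed per-group load, re-split the cores and hand the new data-message set to the load balancer. The proportional-split heuristic is an assumption; the patent does not specify how usage data maps to an allocation:

```python
def rebalance_cores(cores, datapath_load, other_load):
    # Re-split cores between data-message processing and the other
    # datapath processes in proportion to observed load, always
    # keeping at least one core for each set of processes.
    share = datapath_load / (datapath_load + other_load)
    n = min(max(round(len(cores) * share), 1), len(cores) - 1)
    return cores[:n], cores[n:]

datapath_cores, other_cores = rebalance_cores(list(range(8)), 3.0, 1.0)
# The load balancing operation would now spread received data
# messages over datapath_cores instead of the previous set.
```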
20210255904 | RELIABILITY DETERMINATION OF WORKLOAD MIGRATION ACTIVITIES - Techniques for determining reliability of a workload migration activity are disclosed. In one embodiment, sub-tasks associated with the workload migration activity may be determined. Further, statistical data associated with an execution of the sub-tasks corresponding to different instances of the workload migration activity may be retrieved. Furthermore, a reliability model may be trained through machine learning using the statistical data to determine reliability of the workload migration activity. Then, the reliability of a new workload migration activity may be determined using the trained reliability model. | 2021-08-19 |
20210255905 | SYNC GROUPINGS - A work accelerator is connected to a gateway. The gateway enables the transfer of data to the work accelerator from an external storage at pre-compiled data synchronisation points attained by the work accelerator. The work accelerator is configured to send to a register of the gateway an indication of a sync group comprising the gateway. The work accelerator then sends to the gateway, a synchronisation request for a synchronisation to be performed at an upcoming pre-compiled data exchange synchronisation point. The gateway's sync propagation circuits are each configured to receive at least one synchronisation request and propagate or acknowledge the synchronisation request in dependence upon the indication of the sync group received from the work accelerator. | 2021-08-19 |
20210255906 | DATA PROCESSING DEVICE, DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND PROGRAM - A data processing device ( | 2021-08-19 |
20210255907 | REAL-TIME SYNTHETICALLY GENERATED VIDEO FROM STILL FRAMES - Systems and methods for generating synthetic video are disclosed. For example, a system may include a memory unit and a processor configured to execute the instructions to perform operations. The operations may include receiving video data, normalizing image frames, generating difference images, and generating an image sequence generator model. The operations may include training an autoencoder model using difference images, the autoencoder comprising an encoder model and a decoder model. The operations may include identifying a seed image frame and generating a seed difference image from the seed image frame. The operations may include generating, by the image sequence generator model, synthetic difference images based on the seed difference image. In some aspects, the operations may include using the decoder model to generate synthetic normalized image frames from the synthetic difference images. The operations may include generating synthetic video by adding background to the synthetic normalized image frames. | 2021-08-19 |
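The difference-image representation at the core of this pipeline can be sketched in Python. Frames are reduced to flat lists of pixel values for illustration, and the learned generator/decoder models are replaced by plain arithmetic (an assumption; the patent uses trained models for both steps):

```python
def to_difference_images(frames):
    # Difference image i = frame[i+1] - frame[i], element-wise.
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]

def reconstruct(seed_frame, difference_images):
    # Decoder-style reconstruction: accumulate each difference image
    # onto the running frame, starting from the seed frame.
    frames, current = [list(seed_frame)], list(seed_frame)
    for diff in difference_images:
        current = [p + d for p, d in zip(current, diff)]
        frames.append(current)
    return frames

frames = [[0, 10], [5, 10], [5, 20]]   # toy 2-pixel "video"
diffs = to_difference_images(frames)
```

Because a seed frame plus a stream of difference images fully determines the frame sequence, a model that generates plausible difference images yields synthetic video once the differences are accumulated and background is added back.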
20210255908 | SYSTEMS AND METHODS FOR PROVIDING RESTOCK NOTIFICATIONS USING A BATCH FRAMEWORK - The embodiments of the present disclosure provide systems and methods for providing restock notification, comprising a memory storing instructions and at least one processor configured to execute the instructions. The processor may be configured to receive, from a user interface associated with a user, a first request for a restock notification associated with a product, and modify a database to assign a first status to the product. The processor may further be configured to receive a message indicating that the product is available for purchase, and modify the database to assign a second status to the product. The processor may configure a batch framework to periodically analyze the database to identify product with the second status assigned, and determine a notification schedule for sending the restock notification to the user. The processor may be configured to send the restock notification to the user based on the determined notification schedule. | 2021-08-19 |
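The two status transitions and the periodic batch scan in this abstract can be sketched in Python. The in-memory dict stands in for the database, and the notification-schedule step is omitted for brevity (both assumptions for illustration):

```python
AWAITING, RESTOCKED = "awaiting_restock", "restocked"

def request_notification(db, product_id, user):
    # First request: assign the first status to the product.
    db.setdefault(product_id, {"status": AWAITING, "users": []})
    db[product_id]["users"].append(user)

def mark_available(db, product_id):
    # Message received that the product is purchasable: second status.
    if product_id in db:
        db[product_id]["status"] = RESTOCKED

def batch_scan(db):
    # Periodic batch job: find products with the second status,
    # emit one notification per waiting user, then clear the entry.
    notifications = []
    for product_id in list(db):
        entry = db[product_id]
        if entry["status"] == RESTOCKED:
            notifications += [(u, product_id) for u in entry["users"]]
            del db[product_id]
    return notifications

db = {}
request_notification(db, "sku-1", "alice")
mark_available(db, "sku-1")
sent = batch_scan(db)
```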
20210255909 | SYSTEM AND METHOD FOR CREATING AND MANAGING AN INTERACTIVE NETWORK OF APPLICATIONS - A method of creating an interactive application network includes displaying, by a processor, a workspace and a toolbox on a display screen, instantiating, by the processor, a first interaction container to display a first output in response to a first user input, instantiating, by the processor, a second interaction container to display a second output based on the first output, in response to a second user input, the second user input being a user interaction with the first interaction container, and linking, by the processor, the first interaction container with the second interaction container via an interactive link to form the interactive application network. | 2021-08-19 |
20210255910 | ISOLATING COMMUNICATION STREAMS TO ACHIEVE HIGH PERFORMANCE MULTI-THREADED COMMUNICATION FOR GLOBAL ADDRESS SPACE PROGRAMS - Systems, apparatuses and methods may provide for detecting an outbound communication and identifying a context of the outbound communication. Additionally, a completion status of the outbound communication may be tracked relative to the context. In one example, tracking the completion status includes incrementing a sent messages counter associated with the context in response to the outbound communication, detecting an acknowledgement of the outbound communication based on a network response to the outbound communication, incrementing a received acknowledgements counter associated with the context in response to the acknowledgement, comparing the sent messages counter to the received acknowledgements counter, and triggering a per-context memory ordering operation if the sent messages counter and the received acknowledgements counter have matching values. | 2021-08-19 |
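The per-context counter comparison in this abstract can be sketched in Python. Returning a string from `on_ack` is a hypothetical stand-in for actually triggering the per-context memory ordering operation:

```python
from collections import defaultdict

class ContextTracker:
    """Per-context sent/acknowledged counters: when the two counters
    match, every outbound message in that context has completed."""
    def __init__(self):
        self.sent = defaultdict(int)    # sent messages counter per context
        self.acked = defaultdict(int)   # received acks counter per context

    def on_send(self, ctx):
        self.sent[ctx] += 1             # outbound communication detected

    def on_ack(self, ctx):
        self.acked[ctx] += 1            # network acknowledged a message
        if self.sent[ctx] == self.acked[ctx]:
            # Matching values: trigger the per-context ordering step.
            return "memory-ordering operation for " + ctx
        return None

t = ContextTracker()
t.on_send("ctx-1")
t.on_send("ctx-1")
first_ack = t.on_ack("ctx-1")    # one message still outstanding
second_ack = t.on_ack("ctx-1")   # counters now match
```

Keeping the counters per context is what isolates communication streams: completion in one context never forces an ordering operation in another.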
20210255911 | PROGRAMMABLE DEVICE, HIERARCHICAL PARALLEL MACHINES, AND METHODS FOR PROVIDING STATE INFORMATION - Programmable devices, hierarchical parallel machines and methods for providing state information are described. In one such programmable device, programmable elements are provided. The programmable elements are configured to implement one or more finite state machines. The programmable elements are configured to receive an N-digit input and provide a M-digit output as a function of the N-digit input. The M-digit output includes state information from less than all of the programmable elements. Other programmable devices, hierarchical parallel machines and methods are also disclosed. | 2021-08-19 |
20210255912 | DETERMINATION OF A RELIABILITY STATE OF AN ELECTRICAL NETWORK - Method for determining a reliability state of an electrical network, the electrical network comprising a plurality of interconnected electrical devices, the method including the following steps: | 2021-08-19 |
20210255913 | METHODS OF PREDICTING ELECTRONIC COMPONENT FAILURES IN AN EARTH-BORING TOOL AND RELATED SYSTEMS AND APPARATUS - A method or system for predicting failures in an earth-boring tool. Communication between one or more nodes in the earth-boring tool may be monitored. Metadata from the communication may be stored in a storage device. The metadata may be compared to historical communication metadata. The metadata may be fed into models built from historic metadata. Predictions from the models may be aggregated into a recommendation. A failure prediction for each of the one or more nodes may be generated from the comparison. | 2021-08-19 |
20210255914 | PROACTIVE LEARNING OF NETWORK SOFTWARE PROBLEMS - The present embodiments relate to proactive learning of network software problems. In an embodiment, a method includes receiving, by a detection system, webpage data from a computer system. The computer system can receive the webpage data from a plurality of client devices. The webpage data can be associated with user identifiers identifying each client device of the plurality of client devices. The detection system can then receive assistance data from a user assistance computer. The user assistance computer can receive the assistance data from a plurality of user devices. The assistance data can be associated with user identifiers identifying each user device of the plurality of user devices. The detection system can label the webpage data based on the assistance data including matching user identifiers and then determine at least a pattern based on the labeled webpage data. | 2021-08-19 |
20210255915 | CLOUD-BASED SCALE-UP SYSTEM COMPOSITION - Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node. | 2021-08-19 |
20210255916 | MEMORY DEVICE AND OPERATING METHOD OF THE SAME - A memory device includes a memory cell array including memory cells connected to word lines and bit lines. Each of the memory cells includes a switch element and a memory element, and has a first state or a second state in which a threshold voltage is within a first voltage range or a second voltage range, lower than the first voltage range. A memory controller is configured to execute a first read operation for the memory cells using a first read voltage, higher than a median value of the first voltage range, program first defect memory cells turned off during the first read operation to the first state, execute a second read operation for the memory cells using a second read voltage, lower than a median value of the second voltage range, and execute a repair operation for second defect memory cells turned on during the second read operation. | 2021-08-19 |
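The two-pass defect screen in 20210255916 can be sketched as follows, under the usual convention that a cell turns on when the read voltage exceeds its threshold voltage and stays off otherwise. The voltage ranges and cell values below are invented for illustration; this is not the patented circuit.

```python
FIRST_RANGE = (4.0, 6.0)    # first (higher) threshold-voltage range
SECOND_RANGE = (0.0, 2.0)   # second (lower) threshold-voltage range

read1 = sum(FIRST_RANGE) / 2 + 0.5   # 5.5 V, above the first-range median
read2 = sum(SECOND_RANGE) / 2 - 0.5  # 0.5 V, below the second-range median

def screen(cells):
    """cells: (cell_id, threshold_voltage) pairs.

    First read: cells still off at read1 have abnormally high thresholds
    and are reprogrammed to the first state. Second read: cells already
    on at read2 have abnormally low thresholds and go to repair.
    """
    first_defects = [c for c, vt in cells if vt > read1]   # off in read 1
    second_defects = [c for c, vt in cells if vt < read2]  # on in read 2
    return first_defects, second_defects

cells = [("a", 5.0), ("b", 6.2), ("c", 1.0), ("d", 0.2)]
reprogram, repair = screen(cells)   # "b" is reprogrammed, "d" is repaired
```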
20210255917 | Structured Software Delivery And Operation Automation - A system and method for the structured automation of the delivery and operation of software functionality is proposed. A declarative notation approach is used to describe automation sequences independently from the concrete automation tools and services that may be used to execute them. An automation service abstraction layer is introduced that hides concrete automation service characteristics from the declarative automation definition layer. The service abstraction layer enables a transparent change or update of a concrete automation service without affecting automation sequences that use the corresponding service abstractions. For operation automation, the automation execution may be combined with causation-capable monitoring systems that identify both critical changes of operating conditions for which automated remediation is indicated and the corresponding root cause changes for those critical changes. The remediation automation may, on notification of such a critical change, identify and apply remediation actions that counteract the identified root cause changes. | 2021-08-19 |
20210255918 | ERROR DETECTION CODE GENERATION TECHNIQUES - Methods, systems, and devices related to error detection code generation techniques are described. A memory device may identify a first set of bits for transmission to a host device and calculate an error detection code associated with the first set of bits. Prior to transmitting the first set of bits, the memory device may modify one or more bits of the first set of bits to generate a second set of bits for transmission from the memory device to the host device. The memory device may modify one or more bits of the first error detection code to generate a second error detection code based on a parity of the modified one or more bits of the first set of bits. The memory device may transmit the second set of bits and the second error detection code to the host device. | 2021-08-19 |
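The incremental update in 20210255918 relies on the error detection code being linear in the data, so the code can be patched rather than recomputed. A sketch using a simple XOR parity byte as the EDC (the patent covers error detection codes generally; this concrete choice is illustrative):

```python
def parity_byte(data):
    """XOR of all bytes: a toy linear error detection code."""
    p = 0
    for b in data:
        p ^= b
    return p

first = [0x12, 0x34, 0x56]
edc1 = parity_byte(first)            # EDC over the first set of bits

# Modify one byte before transmission (e.g. inverting a bus lane).
second = first.copy()
second[1] ^= 0xFF

# Instead of recomputing from scratch, fold in only the parity of the
# change between the original and modified bytes.
edc2 = edc1 ^ (first[1] ^ second[1])

assert edc2 == parity_byte(second)   # matches a full recomputation
```

By linearity, XOR-ing the EDC with the difference of the changed bytes yields the EDC of the modified data, avoiding a second full pass over the payload.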
20210255919 | REPLACEABLE UNIT AND APPARATUS TO WHICH THE REPLACEABLE UNIT IS ATTACHED - A replaceable unit includes: a communication unit configured to perform communication with a main body; a non-volatile memory storing code information indicating whether a configuration of an error detection code is a first configuration or a second configuration. The communication unit is further configured to store, in a volatile memory, the code information, execute the communication in accordance with the code information in the volatile memory, and, upon receiving a change command, update the code information stored in the volatile memory, the first configuration uses an error detection code of a first code length, the second configuration uses an error detection code of a second code length longer than the first code length, and the first configuration is used for the change command in order to change from the first configuration to the second configuration. | 2021-08-19 |
20210255920 | BUDGETING OPEN BLOCKS BASED ON POWER LOSS PROTECTION - A storage system has zones in solid-state storage memory, with power loss protection. The system identifies portions of data for processes that utilize power loss protection. The system determines to activate or deactivate power loss protection for the portions of data for the processes. The system tracks activation and deactivation of power loss protection in zones in the solid-state storage memory, in accordance with the portions of data having power loss protection activated or deactivated. | 2021-08-19 |
20210255921 | METHOD OF CONTROLLING VERIFICATION OPERATIONS FOR ERROR CORRECTION OF NON-VOLATILE MEMORY DEVICE, AND NON-VOLATILE MEMORY DEVICE - A method of controlling verification operations for error correction of a non-volatile memory device includes the following. A tolerated error bit (TEB) number for error correction of the non-volatile memory device is set to a first value to control verification operations in accordance with the TEB number. After at least one portion of the non-volatile memory device is programmed for a specific number of times, the TEB number is changed from the first value to a second value to control the verification operations in accordance with the TEB number, wherein the second value is greater than the first value and is less than or equal to the TEB threshold. The method may be performed while the at least one portion of the non-volatile memory device is programmed and verified. | 2021-08-19 |
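The verification policy in 20210255921 can be sketched as a simple schedule: the tolerated-error-bit (TEB) count starts at a first value and is raised to a second, larger value once the region has been programmed a set number of times, never exceeding a ceiling. All numeric thresholds below are invented for illustration.

```python
TEB_FIRST = 2        # initial tolerated error bits
TEB_SECOND = 6       # relaxed value after heavy use
TEB_CEILING = 8      # the TEB threshold the second value must not exceed
PROGRAM_LIMIT = 1000 # program count that triggers the change

def teb_for(program_count):
    """TEB number in effect for a region with this program count."""
    teb = TEB_FIRST if program_count < PROGRAM_LIMIT else TEB_SECOND
    return min(teb, TEB_CEILING)

def verify_passes(error_bits, program_count):
    # Verification succeeds when errors stay within the tolerated budget.
    return error_bits <= teb_for(program_count)
```

The motivation implied by the abstract is wear tolerance: as cells age, a fresh-silicon error budget would fail verification too aggressively, so the budget is loosened within the correctable limit.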
20210255922 | MEMORY DEVICE, MEMORY SYSTEM, AND METHOD OF OPERATING THE SAME - A memory device, a memory system, and a method of operating the same. The memory device includes a memory cell array including a plurality of memory cells and a write command determination unit (WCDU) that determines whether a write command input to the memory device is (to be) accompanied by a masking signal. The WCDU produces a first control signal if the input write command is (to be) accompanied by a masking signal. A data masking unit combines a portion of read data read from the memory cell array with a corresponding portion of input write data corresponding to the write command and generates modulation data in response to the first control signal. An error correction code (ECC) engine generates parity of the modulation data. | 2021-08-19 |
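The masked-write merge described above can be sketched directly: masked lanes keep the data already stored in the array, unmasked lanes take the incoming write data, and ECC parity is computed over the merged ("modulation") data. A simple XOR parity stands in for a real ECC engine here; the byte values are invented.

```python
def merge_masked_write(read_data, write_data, mask):
    """mask[i] True means the lane is masked: preserve the stored byte."""
    return [r if m else w for r, w, m in zip(read_data, write_data, mask)]

def ecc_parity(data):
    # Toy stand-in for the ECC engine's parity generation.
    p = 0
    for b in data:
        p ^= b
    return p

stored   = [0xAA, 0xBB, 0xCC, 0xDD]   # read data from the cell array
incoming = [0x11, 0x22, 0x33, 0x44]   # input write data
mask     = [False, True, False, True] # lanes 1 and 3 are masked

modulation = merge_masked_write(stored, incoming, mask)
parity = ecc_parity(modulation)       # parity over the merged data
```

The read-modify-write is needed because parity covers the whole codeword: masked lanes cannot simply be skipped, or the stored parity would no longer match the stored data.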
20210255923 | CONTROLLER AND MEMORY SYSTEM - A controller includes an Error Correction Code (ECC) encoder adding a first parity to data to generate a data set, and encoding the data set to generate a first parity data set, a buffer temporarily storing the first parity data set, an ECC decoder decoding the first parity data set received from the buffer to generate a decoded data set, a first checker performing a Low Density Parity Check (LDPC) encoding on the decoded data set to generate an LDPC data set to which a second parity is added, and a second checker performing a syndrome check operation on the LDPC data set including the first and second parities. | 2021-08-19 |
20210255924 | RAID STORAGE-DEVICE-ASSISTED DEFERRED PARITY DATA UPDATE SYSTEM - A RAID storage-device-assisted deferred parity data update system includes a RAID primary data drive that retrieves second primary data via a DMA operation from a host system, and XORs it with first primary data to produce first interim parity data, which causes a RAID storage controller device to provide an inconsistent parity stripe journal entry in the host system. The RAID primary data drive then retrieves third primary data via a DMA operation from the host system, and XORs it with the second primary data and the first interim parity data to produce second interim parity data. A RAID parity data drive retrieves the second interim parity data via a DMA operation, and XORs it with first parity data to produce second parity data that it uses to overwrite the first parity data, which causes the RAID storage controller device to remove the inconsistent parity stripe journal entry from the host system. | 2021-08-19 |
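The XOR arithmetic behind 20210255924 can be followed with plain byte lists standing in for DMA-transferred blocks (real RAID engines do this per-stripe in hardware; all values here are invented):

```python
def xor(a, b):
    """Bytewise XOR of two equal-length blocks."""
    return [x ^ y for x, y in zip(a, b)]

first_primary  = [0x0F, 0xF0]
second_primary = [0x33, 0xCC]
third_primary  = [0x55, 0xAA]
first_parity   = [0x00, 0xFF]

# Primary drive: accumulate interim parity across successive updates.
interim1 = xor(first_primary, second_primary)
interim2 = xor(xor(second_primary, third_primary), interim1)

# Parity drive: fold the interim parity into the stored parity.
second_parity = xor(first_parity, interim2)
```

Because the second-primary terms cancel under XOR, the interim chain collapses to `first_primary XOR third_primary`, so the parity drive ends up applying exactly the net change between the oldest and newest data, which is why the intermediate updates can be deferred.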
20210255925 | MULTI-LEVEL ERASURE SYSTEM WITH COOPERATIVE OPTIMIZATION - A data storage erasure system may have a host connected to a plurality of data storage devices via a network controller with each of the plurality of data storage devices and the network controller connected to a pods controller and each of the plurality of the data storage devices having a device controller. A rebuild strategy can be generated with a rebuild module connected to the plurality of data storage devices, the network controller, and the pods controller. The rebuild strategy may be directed to minimize data rebuild times in the event of a failure in the plurality of data storage devices by executing the rebuild strategy in response to a detected or predicted failure in at least one data storage device of the plurality of data storage devices. | 2021-08-19 |
20210255926 | Backup Agent Scaling with Evaluation of Prior Backup Jobs - A number of backup agents to be deployed to a system can be predicted by training one or more machine learning (ML) objects of a first prediction algorithm and training one or more ML objects of a second prediction algorithm. The training can be performed with archived backup job data. Both prediction algorithms can be applied to the backup job data to predict execution duration of the backup jobs. The prediction algorithm with a lower error can be used to predict a total execution duration of a current number of backup jobs. An optimal number of backup agents can be predicted based on the predicted total execution duration and the current number of backup jobs. | 2021-08-19 |
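The model-selection step in 20210255926 reduces to scoring two trained predictors on archived job data and letting the lower-error one forecast the total duration. The linear predictors, the error metric, and the data below are all invented for illustration; the patent does not specify them.

```python
def mean_abs_error(predict, jobs):
    """Average absolute error of a duration predictor over archived jobs."""
    return sum(abs(predict(size) - dur) for size, dur in jobs) / len(jobs)

# Archived (job_size_gb, duration_min) pairs.
archive = [(10, 22), (20, 41), (40, 79)]

model_a = lambda size: 2.0 * size + 1.0    # "first prediction algorithm"
model_b = lambda size: 1.5 * size + 10.0   # "second prediction algorithm"

# Apply both algorithms and keep the one with the lower error.
best = min((model_a, model_b), key=lambda m: mean_abs_error(m, archive))

# Predict the total execution duration of the current batch of jobs.
current_jobs = [15, 25]
total_duration = sum(best(size) for size in current_jobs)
```

The predicted total duration and the job count would then feed whatever sizing rule the deployment uses (e.g. enough agents to finish within a backup window) to choose the number of backup agents.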
20210255927 | Granular Voltage Tuning - A system and related method operate solid-state storage memory. The system performs a first tuning process that has a first set of tuning options, on a first portion of solid-state storage memory. The system identifies one or more second portions of solid-state storage memory, within the first portion of solid-state storage memory, that fail readability after the first tuning process. The system performs a second tuning process that has a differing second set of tuning options, on each of the one or more second portions of solid-state storage memory. | 2021-08-19 |
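The two-pass structure in 20210255927 can be sketched as a coarse pass over the whole region followed by a finer option set applied only to the sub-regions that still fail a readability check. The option values, the readability model, and the region data below are all invented; this is only the control flow, not the patented tuning mechanism.

```python
def tune(regions, options, readable):
    """Try each tuning option on every region; return regions that
    remain unreadable under every option in the set."""
    failed = []
    for region in regions:
        if not any(readable(region, opt) for opt in options):
            failed.append(region)
    return failed

coarse_options = [0.0, 0.2]          # first, coarse set of voltage offsets
fine_options = [0.05, 0.1, 0.15]     # differing second, finer set

# Toy model: a region reads back when some offset lands near its need.
readable = lambda region, opt: abs(region["needs"] - opt) < 0.03

regions = [{"id": 1, "needs": 0.2}, {"id": 2, "needs": 0.1}]
still_failing = tune(regions, coarse_options, readable)
recovered_failures = tune(still_failing, fine_options, readable)
```

Running the expensive fine-grained pass only on the residual failures is the "granular" part: most of the memory is settled cheaply by the coarse pass.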
20210255928 | RESTORING VIRTUAL NETWORK FUNCTION (VNF) PERFORMANCE VIA VNF RESET OF LIFECYCLE MANAGEMENT - Techniques for identifying and remedying performance issues of Virtualized Network Functions (VNFs) are discussed. An example system includes processor(s) configured to: process VNF Performance Measurement (PM) data received from a network Element Manager (EM) for a VNF; determine whether the VNF has a negative performance issue based on the VNF PM data; request that the EM create a Virtualization Resource (VR) PM job associated with a VR of the VNF when the VNF has the negative performance issue; process VR PM data received from the EM; determine whether to restart the VNF based on the VR PM data and the VNF PM data; and request a network function virtualization orchestrator (NFVO) to restart the VNF based on a determination to restart the VR. | 2021-08-19 |
20210255929 | UNINTERRUPTED BLOCK-BASED RESTORE OPERATION USING A READ-AHEAD BUFFER - Methods and systems for restoring data are described. According to some embodiments, the method, in response to receiving a first restore request, initiates a second restore request to a hybrid data buffer to route blocks of backup data to the hybrid data buffer. The method further invokes an interrupt service routine (ISR) that is initialized with reserved addresses. When the blocks of backup data are transmitted to the hybrid data buffer, the method further tags, by the ISR, the blocks of backup data to a specified location, where the specified location is one of the reserved addresses. | 2021-08-19 |
20210255930 | OPTIMIZING BACKUP PERFORMANCE WITH HEURISTIC CONFIGURATION SELECTION - Embodiments are described for a heuristic configuration selection process as part of, or accessible by, the backup management process. This processing component provides a method to automatically determine the configuration parameters needed to obtain optimal performance for a given backup/restore job. This process involves identifying key parameters that determine backup performance and suggesting means to derive and incorporate those configurable parameters into the backup software automatically. Embodiments can be applied to stream-based backups, or to other types of backup software as well. | 2021-08-19 |
20210255931 | VALIDATING METERING OF A DISASTER RECOVERY SERVICE USED IN CLOUDS - An aspect of the present disclosure facilitates validating metering of a disaster recovery service used in clouds. In one embodiment, a system receives a request to validate metering of usage of a disaster recovery service (DRS) in a first cloud. The system collects from a metering service of the DRS, measured values representing the actual usage of the DRS in a second cloud and then compares the measured values with corresponding expected values representing expected usage of the DRS in the second cloud. The system sends a response to the request based on a result of the comparing. In one embodiment, the request is received from a tenant (customer/owner) owning the first cloud. | 2021-08-19 |
20210255932 | PRIORITIZING VIRTUAL MACHINES FOR BACKUP PROTECTION AT A VIRTUAL MACHINE DISK LEVEL - According to one embodiment, a method identifies a plurality of parameters associated with one or more virtual machines to be backed up to a backup storage system and a number of available backup proxy sessions. The method further assigns each of the available backup proxy sessions to a virtual disk of the one or more virtual machines based on the plurality of parameters and the number of available backup proxy sessions. The method then initiates backup operations, wherein each assigned backup proxy session is to back up a corresponding virtual disk to which it is assigned. | 2021-08-19 |
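The session-assignment step in 20210255932 can be sketched as a simple prioritized mapping from a limited pool of backup proxy sessions to virtual disks. The largest-disk-first heuristic used here is an assumed stand-in for the plurality of parameters the claim leaves open; all names are illustrative.

```python
def assign_sessions(disks, available_sessions):
    """disks: (disk_name, size_gb) pairs.

    Returns {session_id: disk_name}, giving the available proxy
    sessions to the highest-priority (here: largest) disks first.
    """
    ranked = sorted(disks, key=lambda d: d[1], reverse=True)
    count = min(available_sessions, len(ranked))
    return {s: ranked[s][0] for s in range(count)}

disks = [("vm1-disk0", 40), ("vm1-disk1", 200), ("vm2-disk0", 120)]
plan = assign_sessions(disks, available_sessions=2)
# With two sessions, the two largest disks are backed up first.
```

Assigning at the virtual-disk rather than the virtual-machine level lets one VM's disks proceed in parallel across sessions, which is the granularity the title highlights.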
20210255933 | USING INODE ENTRIES TO MIRROR DATA OPERATIONS ACROSS DATA STORAGE SITES - A computer-implemented method, according to one approach, includes: receiving a data operation request which includes an activated compound operation flag. The data operation request is added to a queue in a gateway node, and the data operation request is eventually transmitted to a disaster recovery site. An inode entry which corresponds to the portion of data is locked, and metadata associated with the inode entry is updated to indicate that the data operation request has been performed at the disaster recovery site. Supplemental data operation requests which correspond to the portion of data are also identified by evaluating the metadata associated with the inode entry. These supplemental data operation requests are transmitted to the disaster recovery site, and the metadata associated with the inode entry is updated to indicate that the supplemental data operation requests have been performed at the disaster recovery site. Furthermore, the inode entry is unlocked. | 2021-08-19 |
20210255934 | AUTOMATED DISASTER RECOVERY SYSTEM AND METHOD - Methods and systems for recovering a host image of a client machine to a recovery machine comprise comparing a profile of a client machine of a first type to be recovered to a profile of a recovery machine of a second type different from the first type, to which the client machine is to be recovered, by a first processing device. The first and second profiles each comprise at least one property of the first type of client machine and the second type of recovery machine, respectively. At least one property of a host image of the client machine is conformed to at least one corresponding property of the recovery machine. The conformed host image is provided to the recovery machine, via a network. The recovery machine is configured with at least one conformed property of the host image by a second processing device of the recovery machine. | 2021-08-19 |
20210255935 | DATABASE PROTECTION USING BLOCK-LEVEL MAPPING - A system according to certain aspects may include a client computing device including: a database application configured to output a database file in a primary storage device(s), the database application outputting the database file as a series of application-level blocks; and a data agent configured to divide the database file into a plurality of first blocks having a first granularity larger than a second granularity of the application-level blocks such that each of the first blocks spans a plurality of the application-level blocks. The system may include a secondary storage controller computer(s) configured to: in response to instructions to create a secondary copy of the database file: copy the plurality of first blocks to a secondary storage device(s) to create a secondary copy of the database file; and create a table that provides a mapping between the copied plurality of first blocks and corresponding locations on the secondary storage device(s). | 2021-08-19 |