37th week of 2022 patent application highlights part 48 |
Patent application number | Title | Published |
20220291924 | CALCULATION ENGINE FOR PERFORMING CALCULATIONS BASED ON DEPENDENCIES IN A SELF-DESCRIBING DATA SYSTEM - A method includes receiving a request to modify a first value of a first field of a first item in a self-describing data system, and obtaining a domain comprising items in the self-describing data system. The first item and a second item are included in the items, and the second item comprises a second field having a second value. The method includes calculating, based on a rule of the second field, a dependency of the second value on the first value. The rule specifies how the second value is to be calculated using the first value. The method includes modifying, based on the request, the first value. The method includes receiving an event triggered by the modification to the first value. The method includes, responsive to the event, calculating the second value based on the rule, and storing the second value in the second field. | 2022-09-15 |
20220291925 | PARAMETRIC FILTER USING HASH FUNCTIONS WITH IMPROVED TIME AND MEMORY - A method for searching for an item using a parametric hash filter includes forming a first input vector from an input data stream; forming a hash matrix having a first portion and a second portion; multiplying the hash matrix with the first input vector to generate a second input vector including hash values of the first input vector; generating a perfect hash vector and a universal hash vector by applying a smooth periodic function to the second input vector; mapping onto a Markov random field the coordinates of locations of hash values in a search domain for which there is no possibility of collisions in the perfect hash vector to form an energy function; minimizing the energy function to generate a compressed hash table; fitting a band of acceptable locations in the compressed hash table based on a predetermined false positive rate; and searching for a new item in the band of acceptable locations. | 2022-09-15 |
20220291926 | SYSTEMS, METHODS, AND APPARATUSES FOR TILE STORE - Embodiments detailed herein relate to matrix operations. In particular, the storing of a matrix (tile) to memory. For example, support for a storing instruction is described in at least a form of decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and destination memory information, and execution circuitry to execute the decoded instruction to store each data element of configured rows of the identified source matrix operand to memory based on the destination memory information. | 2022-09-15 |
20220291927 | SYSTEMS, METHODS, AND APPARATUSES FOR TILE STORE - Embodiments detailed herein relate to matrix operations. In particular, the storing of a matrix (tile) to memory. For example, support for a storing instruction is described in at least a form of decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and destination memory information, and execution circuitry to execute the decoded instruction to store each data element of configured rows of the identified source matrix operand to memory based on the destination memory information. | 2022-09-15 |
20220291928 | EVENT CONTROLLER IN A DEVICE - Examples described herein relate to a device comprising circuitry to perform at least one action for at least one error or exception handling event based on a configuration specified by an instruction set consistent with a programmable packet processing language. | 2022-09-15 |
20220291929 | METHOD FOR MULTI-CORE COMMUNICATION, ELECTRONIC DEVICE AND STORAGE MEDIUM - The present disclosure relates to a method for multi-core communication, an electronic device and a storage medium. The method includes controlling a plurality of cores to run; establishing a communication connection between a publishing core and a receiving core in the plurality of cores based on a communication layer; performing, by the publishing core, an operation on a topic message through calling a preset interface of the communication layer via a publish-subscribe layer; and accessing the topic message in response to the receiving core calling a preset interface of the publish-subscribe layer. | 2022-09-15 |
20220291930 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO IMPROVE PERFORMANCE OF A COMPUTE DEVICE BY DETECTING A SCENE CHANGE - Methods, apparatus, systems, and articles of manufacture are disclosed to improve performance of a compute device by detecting a scene change. An example apparatus includes scene change detection circuitry and interrupt circuitry. The example scene change detection circuitry is to determine a first score value for a first metric of similarity between a first image of a field of view (FOV) of an image sensor and a second image of the FOV, determine a second score value for a second metric of similarity between the first image and the second image, and compute a composite score value based on the first score value and the second score value. The example interrupt circuitry is to generate an interrupt to processor circuitry of the compute device to cause the processor circuitry to adjust a computation condition of the compute device based on the composite score value. | 2022-09-15 |
20220291931 | GENERATING VIDEO SEQUENCES FROM USER INTERACTIONS WITH GRAPHICAL INTERFACES - A video sequence may be generated that animates user interactions across a number of different user interfaces for an application. Visual representations of the user interfaces can be combined together into an image that acts as a canvas or background for the video sequence. A record of user interactions with the user interfaces can be mapped to locations on the canvas, and the video sequence can be generated that incrementally animates user actions as they move between different containers or controls in the user interfaces. The animation may show individual users or aggregated user groups represented by graphics that move across the user interfaces to form a path represented by connectors and arcs. | 2022-09-15 |
20220291932 | Computer-Generated Macros and Voice Invocation Techniques - In examples, a set of actions performed by a user is identified as an action sequence. If user performance of the same action sequence or similar action sequences exceeds a predetermined threshold, a recommendation to create a macro may be generated. The macro may have one or more associated triggers, such that it may be invoked using voice input or via a user interface, among other examples. A macro may have an associated context in which it applies. In some instances, a trigger used to invoke the macro comprises an indication as to such a context. For example, the macro may be invoked in the context of a document, such that one or more document parts are processed accordingly. As another example, the macro may be invoked to process multiple documents, as may be related in subject matter or associated with the same application. | 2022-09-15 |
20220291933 | DYNAMIC MODELER - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for dynamically modeling a page using dynamic data. One of the methods includes receiving a first dynamic input request with corresponding contextual inputs comprising data characterizing a single dynamic first event of a main task; generating, in response to the first dynamic input request, a dynamic smart interface responding to the contextual inputs; generating, in response to the first dynamic input request, a model comprising a single shared dynamic control load and dynamic data load responding to the contextual inputs; receiving a second dynamic input request comprising data characterizing a single dynamic final event of the main task; triggering, in response to the second dynamic input request, a dynamic process comprising a rule monitor, a smart task generator, and a smart contract; and presenting, to a user in response to the second dynamic input request, dynamic rule options. | 2022-09-15 |
20220291934 | Method for Displaying Splash Screen Information of Application and Electronic Device - A method for displaying splash screen information of an application and an electronic device. The method includes receiving, by an electronic device, an operation, performed by a user, of opening a first application, determining, using an operating system of the electronic device, that the first application has a splash screen function, determining that a system splash screen capability is enabled for the first application, obtaining splash screen information of the first application, where the splash screen information includes at least one of a brand slogan or a splash screen advertisement of the first application, loading and rendering the splash screen information of the first application, displaying, by the electronic device, the splash screen information of the first application, and starting the first application after displaying the splash screen information, and displaying a page of the first application. | 2022-09-15 |
20220291935 | Real-Time Context Preserving Visual Guidance - A method, apparatus, system, and computer program code for real-time visual guidance. A set of actions for a user instance of an application is captured during an operation of the user instance of the application by a user. A visual guidance of a set of steps performed to use a feature in the user instance of the application is generated in response to a user input requesting assistance with the feature. The visual guidance takes into account the set of actions and includes a context of a graphical user interface present for the user instance of the application displayed on a display system when the user input requesting the assistance with the feature is received during the operation of the user instance of the application. The visual guidance of the set of steps performed to use the feature is displayed on the display system. | 2022-09-15 |
20220291936 | SYSTEMS AND METHODS OF GENERATING VIDEO MATERIAL - Systems and methods of automatically generating a video are described. Systems and methods include receiving a test script, generating a step action tree comprising a plurality of actions based on the test script, receiving a selection of a first action of the plurality of actions in the step action tree, and based on the selection of the first action, generating a video clip of a graphical user interface performing the first action and associating the video clip with the first action. | 2022-09-15 |
20220291937 | DATA MANAGEMENT IMPROVEMENTS INCLUDING DOCKING LIMITED-FEATURE DATA MANAGEMENT DEVICES TO A FULL-FEATURED DATA MANAGEMENT SYSTEM - Software, firmware, and systems are described herein that permit an organization to dock previously-utilized, limited-feature data management modules with a full-featured data management system. By docking limited-feature data management modules to a full-featured data management system, metadata and data from the various limited-feature data management modules can be integrated and utilized more efficiently and effectively. Moreover, additional data management features can be provided to users after a more seamless transition. | 2022-09-15 |
20220291938 | COMPILING A SPECIFIED INSTRUCTION FROM A FIRST VIRTUAL APPLICATION TO A SECOND VIRTUAL APPLICATION - Systems and methods are described for compiling a specified instruction from a first virtual application to a second virtual application. Each virtual application may be associated with a different programming language. In an example method, a computing device receives a request to execute the specified instruction in the second virtual application. A target data structure may be created, using a library of the second virtual application, where a template directory may be stored. First syntax features, each defining a respective variable, may be identified. An abstract syntax tree may be used to derive, for each first syntax feature, a modified definition for the respective variable. Second syntax features may be generated that define the respective variables more precisely than the first syntax features. The specified instruction may be rendered in the second virtual application and may be expressed via the second syntax features and their respective variables. | 2022-09-15 |
20220291939 | SYSTEMS AND METHODS FOR EXECUTING A PROCESS USING A LIGHTWEIGHT JAVA WORKFLOW ORCHESTRATION LIBRARY AND A GRAPH DATA STRUCTURE - A method for executing a process may include: receiving a process comprising data to execute from a sub-system; identifying a process graph for the process to execute, the process graph comprising a plurality of nodes connected by edges, wherein each of the plurality of nodes in the process graph represents a type of operation to perform, an identification of data input, and an address for a handler; retrieving the identified process graph from a process graph source; traversing the identified process graph to a first node; calling a first handler identified by the first node with the data; receiving a first result; selecting, based on the first result, one of a plurality of edges from the first node to a second node; calling a second handler identified by the second node with the first result and the data; receiving a second result; and outputting the second result to the sub-system. | 2022-09-15 |
20220291940 | METHOD FOR DEPLOYING PRODUCT APPLICATIONS WITHIN VIRTUAL MACHINES ONTO ON-PREMISES AND PUBLIC CLOUD INFRASTRUCTURES - A method for deploying product applications within virtual machines onto on-premises and public cloud infrastructures. Specifically, the disclosed method proposes a migration scheme of virtual machine images (configured at least with product applications and guest operating systems) from an on-premises infrastructure to a public cloud infrastructure. Further, the migration scheme considers two workflows—a normal workflow contingent on the public cloud infrastructure having up-to-date support for the guest operating systems; and an exception workflow contingent on the public cloud infrastructure lacking up-to-date support for the guest operating systems. | 2022-09-15 |
20220291941 | Compute Platform Recommendations for New Workloads in a Distributed Computing Environment - Techniques for an optimization service of a service provider network to help optimize the selection, configuration, and utilization, of virtual machine (VM) instance types to support workloads on behalf of users. The optimization service may implement the techniques described herein at various stages in a life cycle of a workload to help optimize the performance of the workload, and reduce underutilization of computing resources. For example, the optimization service may perform techniques to help new users select an optimized VM instance type on which to initially launch their workload. Further, the optimization service may monitor a workload for the life of the workload, and determine new VM instance types, and/or configuration modifications, that optimize the performance of the workload. The optimization service may provide recommendations to users that help improve performance of their workloads, and that also increase the aggregate utilization of computing resources of the service provider network. | 2022-09-15 |
20220291942 | COOPERATIVE CLOUD INFRASTRUCTURE USING BLOCKCHAINS FOR HARDWARE OWNERSHIP AND ACCESS - A system includes a memory, a processor in communication with the memory, a hypervisor executing on the processor, a pool of hypervisor resources, and a cloud-sharing module (CSM). The CSM runs in a kernel to assign a first anonymous identity to a hypervisor resource from the pool of hypervisor resources. The CSM broadcasts a transaction for the hypervisor resource and determines which provider owns the hypervisor resource. A first provider is associated with a second anonymous identity and a second provider is associated with a third anonymous identity. Additionally, the CSM receives mining information that includes a block associated with the transaction, where the block is part of a blockchain. The CSM completes the transaction for the first anonymous identity associated with the hypervisor resource between the second anonymous identity and the third anonymous identity. | 2022-09-15 |
20220291943 | LOGICAL PROCESSING FOR CONTAINERS - Some embodiments provide a local network controller that manages a first managed forwarding element (MFE) operating to forward traffic on a host machine for several logical networks and configures the first MFE to forward traffic for a set of containers operating within a container virtual machine (VM) that connects to the first MFE. The local network controller receives, from a centralized network controller, logical network configuration information for a logical network to which the set of containers logically connect. The local network controller receives, from the container VM, a mapping of a tag value used by a second MFE operating on the container VM to a logical forwarding element of the logical network to which the set of containers connect. The local network controller configures the first MFE to apply the logical network configuration information to data messages received from the container VM that are tagged with the tag value. | 2022-09-15 |
20220291944 | INFORMATION PROCESSING DEVICE, ANOMALY DETECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - In an ECU, virtualization software operates a first virtual machine (VM) and a second VM. A transfer unit of the second VM acknowledges communication data transmitted from the first VM and destined to the second VM. The transfer unit generates a parameter related to communication between the VMs based on the acknowledged communication data. A detection unit of the second VM detects abnormal communication based on the parameter generated by the transfer unit. | 2022-09-15 |
20220291945 | System For Live Migration of Virtual Machines With Assigned Peripheral Devices - Hardware transactions or other techniques, such as custom PCIe handling devices, are used to atomically move pages from one host's memory to another host's memory. The hosts are connected by one or two non-transparent bridges (NTBs), which make each host's memory and devices available to the other, while allowing each host to reboot independently. | 2022-09-15 |
20220291946 | SOFTWARE CONTAINER CONFIGURATION - A container specification is received. The container specification includes a definition of an image. The image definition specifies the running of one or more prestart runtime commands. The image definition is inspected to identify whether the image definition includes specifying the running of one or more prestart runtime commands. The image is started on a host system, wherein in response to identifying that the image definition includes running one or more prestart runtime commands, the starting of the image includes running the one or more prestart runtime commands prior to the container entering a running state. | 2022-09-15 |
20220291947 | APPARATUS, SYSTEMS, AND METHODS FOR FACILITATING EFFICIENT HARDWARE-FIRMWARE INTERACTIONS - A system for facilitating efficient hardware-firmware interactions may include (i) a plurality of memory registers, (ii) a hardware module that directly reads from and writes to the plurality of memory registers and is configured to interpret a special marker that distinguishes between register write operations and non-register-write operations, and (iii) a firmware module that directs the hardware module to perform operations at least in part by sending the special marker. Various other methods, systems, and computer-readable media are also disclosed. | 2022-09-15 |
20220291948 | PRIORITY BASED MANAGEMENT OF ACCESS TO SHARED RESOURCES - A system, computer readable medium and a method that may include performing multiple iterations of: determining, by each active initiator of the multiple initiators, a number of pending access requests generated by the active initiator, wherein each access request is a request to access a shared resource out of the shared resources; determining, by each active initiator, a priority level to be assigned to all pending access requests generated by the active initiator, wherein the determining is based on the number of pending access requests generated by the active initiator, a number of active initiators out of the multiple initiators, and a number of access requests serviceable by the shared resource; for each active initiator, informing an arbitration hardware of a network on chip about the priority level to be assigned to all pending access requests generated by the active initiator; and managing access to the shared resources, by the arbitration hardware, based on the priority level to be assigned to all pending access requests generated by each active initiator. | 2022-09-15 |
20220291949 | SOFTWARE SERVICE INTEGRATION IN A CENTRAL SOFTWARE PLATFORM - In an embodiment, an apparatus comprises one or more processors and one or more memories communicatively coupled to the one or more processors and storing instructions which, when processed by the one or more processors, cause: providing one or more first software services to a client computing device corresponding to a particular entity profile for a particular entity; identifying data in the particular entity profile identifying one or more second software services used by the particular entity; while executing a first workflow for the one or more first software services, identifying a trigger for the one or more second software services; in response to identifying the trigger, executing a second workflow for the one or more second software services using data extracted from the first workflow. | 2022-09-15 |
20220291950 | AUTOMATED SEMANTIC TAGGING - Methods and systems are disclosed for automated semantic tagging that include detecting a particular thread executed by a processor and identifying a root process of the particular thread. An object-process link may be generated by linking an object that executed code that called the particular thread to the root process. A thread list of thread definitions of the object may be identified. A particular thread definition that corresponds to the particular thread can be mapped. Resource types to be consumed upon executing an instance of the thread instantiated from the particular thread definition can be identified and the corresponding values of the resource types can be determined. A process specification can be generated that encapsulates the thread definition, resource types and values so as to reproduce a state of the root process at a point in which the particular thread executed. | 2022-09-15 |
20220291951 | AUTOMATED SEMANTIC TAGGING - Methods and systems are disclosed for automated semantic tagging that include detecting a particular thread executed by a processor and identifying a root process of the particular thread. An object-process link may be generated by linking an object that executed code that called the particular thread to the root process. A thread list of thread definitions of the object may be identified. A particular thread definition that corresponds to the particular thread can be mapped. Resource types to be consumed upon executing an instance of the thread instantiated from the particular thread definition can be identified and the corresponding values of the resource types can be determined. A process specification can be generated that encapsulates the thread definition, resource types and values so as to reproduce a state of the root process at a point in which the particular thread executed. | 2022-09-15 |
20220291952 | OPTIMAL DISPATCHING OF FUNCTION-AS-A-SERVICE IN HETEROGENEOUS ACCELERATOR ENVIRONMENTS - Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to "warm" resources and initiate a fault process if the resource is overloaded or a cache-miss is identified (e.g., by restarting or rebooting the resource). The identified warm instances or accelerators associated with the allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators. | 2022-09-15 |
20220291953 | DYNAMICALLY VALIDATING HOSTS USING AI BEFORE SCHEDULING A WORKLOAD IN A HYBRID CLOUD ENVIRONMENT - A method, computer system, and a computer program product for host validation is provided. The present invention may include receiving a job from a user. The present invention may include selecting, by a scheduler, a host in a hybrid cloud environment to run the received job. The present invention may include classifying, by a learning component, the selected host's subsystems. The present invention may include determining, based on the classification, that the selected host can run the received job. | 2022-09-15 |
20220291954 | Computer System and Field Operation Supporting Method - A field operation computer includes a field operation unit, a task management unit, and a data management unit that retains configuration information indicating a relationship between a sensor and a monitored area. The field operation unit collects acquired sensor information in order to monitor the monitored area by the sensor. The task management unit generates field status prediction information including prediction information of an activity indicating a degree to which a status in which the sensor information is not collectable from the sensor influences the monitoring of the monitored area, and controls a timing at which the task is executed based on the field status prediction information and the configuration information. | 2022-09-15 |
20220291955 | ASYNCHRONOUS INPUT DEPENDENCY RESOLUTION MECHANISM - Described herein is a graphics processor configured to perform asynchronous input dependency resolution among a group of interdependent workloads. The graphics processor can dynamically resolve input dependencies among the workloads according to a dependency relationship defined for the workloads. Dependency resolution can be performed via a deferred submission mode, which resolves input dependencies prior to thread dispatch to the processing resources, or via an immediate submission mode, which resolves input dependencies at the processing resources. | 2022-09-15 |
20220291956 | DISTRIBUTED CONTAINER SCHEDULING METHOD AND SYSTEM BASED ON SHARED GPUS - A distributed container scheduling method includes: monitoring a container creation event in a Kubernetes API-Server in real time, and validating the created container once a new container creation event is detected; updating a container scheduling queue with containers passing the validation; when the container scheduling queue is empty, performing no operation until the containers passing the validation are added to the queue; when the container scheduling queue is not empty, reading the containers to be scheduled from the container scheduling queue in sequence, and selecting, from a Kubernetes cluster, an optimal node corresponding to the containers to be scheduled to generate a container scheduling two-tuple; and scheduling, based on the container scheduling two-tuple, the containers to be scheduled to the optimal node to finish the distributed container scheduling operation. | 2022-09-15 |
20220291957 | PARALLEL PROCESSING ARCHITECTURE WITH DISTRIBUTED REGISTER FILES - Techniques for task processing based on a parallel processing architecture with distributed register files are disclosed. A two-dimensional array of compute elements is accessed. Each compute element is known to a compiler and is coupled to its neighboring compute elements. The array of compute elements is controlled on a cycle-by-cycle basis. The controlling is enabled by a stream of wide, variable length, control words generated by the compiler. Virtual registers are mapped to a plurality of physical register files distributed among one or more of the compute elements. Virtual registers are represented by the compiler. The mapping is performed by the compiler. A broadcast write operation is enabled to two or more of the physical register files. Operations contained in the control words are executed. Operations are enabled by at least one of the distributed physical register files. Implementation in separate compute elements enables parallel operation processing. | 2022-09-15 |
20220291958 | MOBILE PHONE OPERATING SYSTEM FOR MINORS AND ITS ARCHITECTURE AND ECOLOGICAL DEVELOPMENT METHOD - The present application proposes a mobile phone operating system architecture and a corresponding ecological development method. The mobile phone operating system includes: the operating system core layer, which manages hardware devices and task scheduling; the subsystem management layer, which runs on the operating system kernel and provides the management functions necessary for the operation of the subsystem; the subsystem, which runs above the subsystem management layer and provides the running environment for the application components; and the application programs and their components, which run in the subsystem and provide specific functional services to the users of the mobile phone operating system. The mobile phone operating system architecture provided by this application can accommodate various existing or future mobile phone application ecosystems, and realizes a full decoupling of mobile phone hardware and ecosystem software. The present application also provides a feasible ecological development strategy for the mobile phone operating system. | 2022-09-15 |
20220291959 | Activity scheduling method, system, terminal and storage medium based on high response ratio - An activity scheduling method based on a high response ratio includes steps of: setting basic software and hardware information, an activity queue, a user authority, and a user weight factor through an administrator terminal; wherein the user weight factor comprises: a user weight factor dynamic value, a user weight factor quantity value, and a user weight factor cooling time; manually choosing whether to adopt the user weight factor for an activity through a user terminal, and then submitting the activity; and entering an activity queuing stage of the user terminal, determining an activity priority of the user terminal according to a scheduling method, and running the activity in a descending order of the activity priority. The activity scheduling method allows the system to determine and run the current urgent and important activities without manual intervention in software and hardware resources. | 2022-09-15 |
20220291960 | AUTO-RECOVERY FRAMEWORK - The present disclosure relates to computer-implemented methods, software, and systems for an automatic recovery job execution through a scheduling framework in a cloud environment. One or more recovery jobs are scheduled to be performed periodically for one or more registered service components included in a service instance running on a cluster node of a cloud platform. Each recovery job is associated with a corresponding service component of the service instance. A health check operation is invoked at a service component based on executing a recovery job at the scheduling framework corresponding to the service component. In response to determining that the service component needs a recovery measure based on a result from the health check operation, a recovery operation is invoked as part of executing a set of scheduled routines of the recovery job. Implemented logic for the recovery operation is stored and executed at the service component. | 2022-09-15 |
20220291961 | OPTIMIZING RAN COMPUTE RESOURCES IN A VERTICALLY SCALED VRAN DEPLOYMENT - Configurations of a system and a method for optimizing an allocation of computing resources via a disaggregated architecture are described. In one aspect, the disaggregated architecture may include a Layer 2 (L2) controller that may be configured to optimize an allocation of computing resources in a virtualized radio access network (vRAN). The disaggregated architecture in a distributed unit may disaggregate an execution of the operations of the distributed unit by the computing resources deployed therein. Further, the disaggregated architecture may provision statistical multiplexing and provision a mechanism for allocating the computing resources based on real-time conditions in the network. The disaggregated architecture may provision a mechanism that may enable dynamic swapping, allocation, scaling up, management, and maintenance of the computing resources deployed in the distributed unit (DU). | 2022-09-15 |
20220291962 | STACK MEMORY ALLOCATION CONTROL BASED ON MONITORED ACTIVITIES - An integrated circuit includes: a processor; a memory coupled to the processor; and a stack memory allocation controller coupled to the processor and the memory. The stack memory allocation controller has: a stack use manager configured to monitor activities of the processor. The stack memory allocation controller also has a virtual memory translator configured to: obtain a first mapping of pointers to a first sub-set of memory blocks of the memory assigned to a memory stack for a thread executed by the processor; and determine a second mapping of pointers to a second sub-set of memory blocks of the memory assigned to the memory stack and different than the first sub-set of memory blocks responsive to the monitored activities. | 2022-09-15 |
20220291963 | INPUT-SHAPING METHOD AND INPUT-SHAPING UNIT FOR GROUP-MODULATED INPUT SCHEME IN COMPUTING-IN-MEMORY APPLICATIONS - An input-shaping method for a group-modulated input scheme in a plurality of computing-in-memory applications is configured to shape a plurality of multi-bit input signals. The input-shaping method for the group-modulated input scheme in the plurality of computing-in-memory applications includes performing an input splitting step, a threshold setting step and an input shaping step. The input splitting step includes splitting the multi-bit input signals into a plurality of input sub-groups via an input-shaping unit. The threshold setting step includes setting at least one shaping threshold via the input-shaping unit. The input shaping step includes shaping at least one of the input sub-groups according to the at least one shaping threshold via the input-shaping unit to form a plurality of shaped multi-bit input signals so as to increase a probability of a bit equal to 0 occurring in the at least one of the input sub-groups. | 2022-09-15 |
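The split-and-threshold scheme above can be sketched in a few lines. The group width, threshold value, and the rule of zeroing a small low sub-group are illustrative assumptions, not taken from the application:

```python
def shape_inputs(values, group_bits=4, threshold=3):
    """Split each multi-bit input into a high and a low sub-group and zero a
    low sub-group whose value is at or below the shaping threshold, raising
    the share of 0 bits fed to the computing-in-memory array."""
    shaped = []
    mask = (1 << group_bits) - 1
    for v in values:
        low, high = v & mask, v >> group_bits
        if low <= threshold:
            low = 0  # shaping step: force the small sub-group to all-zero bits
        shaped.append((high << group_bits) | low)
    return shaped
```

Zeroing near-zero sub-groups trades a small approximation error for more zero bits entering the array, which typically reduces switching energy.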
20220291964 | WORKFLOW MEMOIZATION - Workflow memoization can include generating an embedding associated with a node in a workflow. The embedding can be generated by encoding at least the node's executable and input data to the node. A matching embedding can be retrieved from a database of embeddings, which matches the generated embedding according to a match criterion. The database of embeddings can store embeddings associated with previously run nodes. Output data associated with the matching embedding can be retrieved. The output data can be used as the node's output without having to run the node in the workflow. | 2022-09-15 |
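The memoization idea can be illustrated with a toy cache in which a hash digest of the node's executable and input stands in for the patent's learned embedding, and exact-match lookup stands in for the match criterion; all names are hypothetical:

```python
import hashlib

class MemoCache:
    """Toy workflow-memoization store keyed by a digest of a node's
    executable source and its input data."""

    def __init__(self):
        self._store = {}

    def _key(self, executable_src: str, input_data: bytes) -> str:
        h = hashlib.sha256()
        h.update(executable_src.encode())
        h.update(input_data)
        return h.hexdigest()

    def run_node(self, executable_src, fn, input_data: bytes) -> bytes:
        key = self._key(executable_src, input_data)
        if key in self._store:       # a matching "embedding" exists:
            return self._store[key]  # reuse cached output, skip execution
        out = fn(input_data)         # cache miss: actually run the node
        self._store[key] = out
        return out
```

A second invocation with identical executable and input returns the stored output without re-running the node, which is the abstract's central claim.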
20220291965 | POLICY MANAGEMENT IN TARGET ENVIRONMENTS - Examples described herein relate to policy management in target environments. A workload attestation request including a workload specification of a workload is received. A workload profile is determined based on the workload specification. A policy stored in a policy database is identified based on the workload profile. An attestation identifier indicating the workload profile is provided in response to the workload attestation request. On receiving a policy request including the attestation identifier from a controller node at a target environment, policies are compiled from the policy database using the attestation identifier, and provided to the controller node, which applies the policy in the target environment. | 2022-09-15 |
20220291966 | SYSTEMS AND METHODS FOR PROCESS MINING USING UNSUPERVISED LEARNING AND FOR AUTOMATING ORCHESTRATION OF WORKFLOWS - A system for discovering business processes using unsupervised learning is configured to: (a) receive multimodal event data from a plurality of sources, the multimodal event data including a plurality of event instances; (b) associate the multimodal event data with a vector representation, such that the plurality of event instances is represented as a plurality of event vectors; (c) correlate the plurality of event vectors using unsupervised learning to identify one or more processes; and (d) generate a process model script for the one or more processes. A method for automated orchestration of a workflow is also disclosed. | 2022-09-15 |
20220291967 | METHOD FOR MANAGING RESOURCES AND ELECTRONIC DEVICE - Provided is a method for managing resources. The method includes: determining an author state of a first resource, wherein the first resource is a resource in an online state, and the author state is configured to indicate a state corresponding to author identification of an author to which the first resource belongs; taking the first resource offline in response to the author state being a deactivated state; and marking a management operation type of the first resource as automatically going offline. | 2022-09-15 |
20220291968 | DETECTION OF INSTANCE LIVENESS - According to a method, at a given instance of a cluster of instances of at least one service, at least one monitored instance is selected from the cluster of instances according to a selection criterion such that each instance of the cluster of instances is selected as a monitored instance by at least one other instance of the cluster of instances. The given instance is caused to detect an operational status of the at least one monitored instance. If the operational status indicates that one of the at least one monitored instance is failed, the operational status of the failed monitored instance is provided to a centralized controller for the cluster of instances. Through the solution, the detection of instance liveness can be executed by individual instances symmetrically in a distributed and self-management manner. | 2022-09-15 |
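One plausible selection criterion with the coverage property the abstract requires (every instance is monitored by at least one other) is ring-successor selection; the abstract does not fix a specific criterion, so this is only a sketch:

```python
def monitored_instances(instances, me, k=1):
    """Ring-successor selection: each instance monitors its k clockwise
    neighbours on a sorted ring, so every instance in the cluster is
    monitored by at least one other instance."""
    ring = sorted(instances)
    i = ring.index(me)
    return [ring[(i + j) % len(ring)] for j in range(1, k + 1)]
```

With k greater than 1 each instance is watched by several peers, trading extra health-check traffic for faster, more reliable failure detection.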
20220291969 | INTELLIGENT CLOUD MANAGEMENT BASED ON PROFILE - The present disclosure provides technical solutions related to intelligent cloud management based on profiles. Artificial intelligence is applied to cloud management so that cloud management suggestions may be proposed intelligently. In daily work, behaviors in using cloud resources may reflect characteristics of the cloud users or cloud tenants themselves. The technical solution of intelligent cloud management of the present disclosure generates a profile identifying cloud-usage characteristics by extracting cloud-usage behavior data, and intelligently proposes cloud management suggestions based on the profile. | 2022-09-15 |
20220291970 | CORE TO RESOURCE MAPPING AND RESOURCE TO CORE MAPPING - Core to resource and resource to core mapping is disclosed. In an embodiment, a method includes obtaining an input pattern including a plurality of resource identifiers corresponding to resources. The method further includes applying the input pattern to a guaranteed regular and uniform distribution process to obtain a distribution pattern that indicates a distribution of resources across cores or a distribution of the cores across the resources. The method further includes distributing the resources across the cores or distributing the cores across the resources according to the distribution pattern. | 2022-09-15 |
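A round-robin assignment is one simple example of a "guaranteed regular and uniform distribution process" in the sense of the abstract: per-core counts never differ by more than one. The abstract does not name a particular process, so treat this as an illustration:

```python
def distribute(resource_ids, num_cores):
    """Round-robin distribution pattern: resource i -> core i mod num_cores.
    The resulting per-core counts differ by at most one, i.e. the
    distribution is regular and uniform by construction."""
    pattern = {c: [] for c in range(num_cores)}
    for i, rid in enumerate(resource_ids):
        pattern[i % num_cores].append(rid)
    return pattern
```

The same function covers both directions in the abstract: pass resource identifiers to spread resources across cores, or core identifiers to spread cores across resources.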
20220291971 | SYNCHRONIZATION OBJECT HAVING A STAMP FOR FLOWS IN A STORAGE SYSTEM - In one aspect, an example methodology implementing the disclosed techniques includes, responsive to a determination, by a first thread attempting to start an operation, that a second thread has started the operation, obtaining a value of a stamp included in a synchronization object related to the operation. The method also includes determining, by the first thread, whether the value of the stamp obtained is the same as a current value of the stamp and, responsive to a determination that the obtained value of the stamp is not the same as the current value of the stamp, continuing execution of the first thread. The method may further include, responsive to a determination that the obtained value of the stamp is the same as the current value of the stamp, suspending execution of the first thread. | 2022-09-15 |
20220291972 | Methods And Systems For Application Program Interface Management - Provided are methods and systems for application program interface (API) management. An API management device may receive requests from client devices to submit an API and/or API update for implementation. The API management device may determine an operable status of the API and/or the API update by determining whether the API and/or the API update is configured and/or updated for implementation. The API and/or the API update may be determined to be configured and/or updated for implementation when the API and/or the API update does not violate one or more rules. The API management device, based on the operable status, may allow or deny the request for implementation. | 2022-09-15 |
20220291973 | DYNAMIC SERVICE MESH - One example method includes receiving, from a microservice, a service request that identifies a service needed by the microservice, and an API of an endpoint that provides the service, evaluating the service request to determine whether the service request conforms to a policy, when the service request has been determined to conform with the policy, evaluating the endpoint to determine if endpoint performance meets established guidelines, and when it is determined that the endpoint performance does not meet the established guidelines, identifying an alternative endpoint that meets the established guidelines and that provides the requested service. Next, the method includes transforming the API of the service identified in the service request to an alternative API of the service provided by the alternative endpoint, and sending the service request and the alternative API to the alternative endpoint. | 2022-09-15 |
20220291974 | Processing a query having calls to multiple data sources - A method, including receiving, from a client, a unified query, and extracting, from the unified query, an endpoint query for a first data source on a first server and an endpoint query for a second data source on a second server. The extracted endpoint query for the first data source is forwarded to the first server. Upon receiving a response to the endpoint query forwarded to the first server, one or more parameters are extracted from the response. The endpoint query for the second data source is updated so as to include the extracted one or more parameters, and the updated endpoint query for the second data source is forwarded to the second server. Upon receiving, from the second server, a response to the forwarded endpoint query, a result for the received unified query is generated based on the received responses, and the generated result is conveyed to the client. | 2022-09-15 |
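The chaining step described above — extracting parameters from the first response and injecting them into the second endpoint query — can be sketched as follows. The query/response shapes and the `fetch_*` callables standing in for the two servers are hypothetical:

```python
def run_unified_query(query_a, query_b, fetch_a, fetch_b):
    """Forward the first endpoint query, pull the parameters the second
    query declares it needs out of the first response, update the second
    query with them, forward it, and combine both responses."""
    resp_a = fetch_a(query_a)
    params = {k: resp_a[k] for k in query_b["needs"]}   # extract parameters
    updated_b = {**query_b,
                 "params": {**query_b.get("params", {}), **params}}
    resp_b = fetch_b(updated_b)
    return {"first": resp_a, "second": resp_b}          # unified result
```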
20220291975 | TECHNIQUES FOR MANAGING ACCESS TO FILE SYSTEMS - This application sets forth techniques for browsing and accessing files stored by a storage solution. The technique includes the steps of (1) prior to receiving a command to open a file, operating in a user space and engaging a first pathway by (a) instantiating, by an application, a preview application; (b) constructing a file path associated with the file stored in the volume; (c) providing the file path to the preview application; (d) generating, by the preview application, preview data of the file; and (e) receiving, by the preview application, a request to open the file; and (2) in response to receiving the request to open the file, engaging a second pathway to retrieve the file from the volume by: (a) generating, by the preview application, a system call to open the file; and (b) transmitting the system call to a kernel process executing within a kernel space. | 2022-09-15 |
20220291976 | MESSAGE COMMUNICATION BETWEEN INTEGRATED COMPUTING DEVICES - One example provides an integrated computing device, comprising one or more computing clusters, and one or more network controllers, each network controller comprising a local data notification queue to queue send message notifications originating from the computing clusters on the integrated computing device, a remote data notification queue to queue receive message notifications originating from network controllers on remote integrated computing devices, a local no-data notification queue to queue receive message notifications originating from computing clusters on the integrated computing device, and a connection scheduler configured to schedule sending of data from memory on the integrated computing device when a send message notification in the local data notification queue is matched with a receive message notification in the remote data notification queue, and to schedule sending of receive message notifications from the local no-data notification queue. | 2022-09-15 |
20220291977 | SINGLE FLOW EXECUTION - Methods, systems, and devices supporting data processing are described. In some systems, a user device may leverage a single flow execution (SFE) service for an application including a flow. A connector may retrieve one or more messages using a polling source, and a processing component may process a single message of the retrieved messages (e.g., to avoid processing complexity and error propagation associated with batch or periodic polling). The processing component may disable the connector upon retrieving at least one message and may execute the flow for the deployed application on the single message of the retrieved message, for example, based on an indication to run the SFE. Upon completion of executing the flow on the message, the processing component may store, at a collector, information related to the flow execution and may undeploy the application from a runtime engine instance based on completing the SFE for the application. | 2022-09-15 |
20220291978 | FLEXIBLE COMMUNICATION-DEVICE MANAGEMENT VIA MULTIPLE USER INTERFACES - A computer network device (such as an access point, a switch or a router) that has multiple user interfaces is described. During operation, the computer network device may execute program instructions for the user interfaces and a shared messaging module, where a given user interface includes an agent corresponding to an application. When a message associated with the application is received via a user interface in the user interfaces, the corresponding agent in the user interface may extract a command or operation from the message. Then, the shared messaging module may translate the command or operation into a common format of the application. Moreover, the shared messaging module may provide (or route) the translated command or operation addressed to the application via a single communication path associated with the application and the agents for the application in the user interfaces. | 2022-09-15 |
20220291979 | MOBILE APPLICATION INTEGRATION - Methods and systems are disclosed for enabling transaction processing utilizing a single device. A mobile device can store an application comprising first and second software modules. One module may execute acceptance processing while another software module may emulate a transaction device associated with a user. One or more identifiers (e.g., QR codes, bar codes) may be obtained corresponding to one or more physical items. Authorization may be requested via the mobile device. In response, data may be exchanged between the software modules and this data (e.g., transaction data, a payment token, a cryptogram, etc.) may be provided to a remote computer (e.g., a cloud based acceptance service) that can generate an authorization request message for the transaction. | 2022-09-15 |
20220291980 | MEMORY CRASH PREVENTION FOR A COMPUTING DEVICE - A computing device can monitor a set of memory usage metrics of the computing device. Based on the set of memory usage metrics, the computing device can determine whether memory usage will exceed a critical memory threshold within a future period of time. In response to determining that the memory usage will exceed the critical memory threshold within the future period of time, the computing device can degrade one or more application features of an application executing on the computing device. | 2022-09-15 |
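One simple way to decide "will usage exceed the critical threshold within a future period" is linear extrapolation over recent samples; the abstract does not specify the predictor, so the metric, units, and method here are assumptions:

```python
def predict_exceeds(samples, critical, horizon):
    """Linearly extrapolate recent memory-usage samples (one per tick) and
    report whether usage is projected to reach `critical` within `horizon`
    future ticks. Triggering feature degradation on True is left to the caller."""
    if not samples:
        return False
    if len(samples) < 2:
        return samples[-1] >= critical
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)  # avg growth per tick
    projected = samples[-1] + slope * horizon
    return projected >= critical
```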
20220291981 | DEDUCING A ROOT CAUSE ANALYSIS MODEL FROM AUGMENTED REALITY PEER ASSISTANCE SESSIONS - In an approach for deducing a root cause analysis model, a processor trains a classifier based on labeled data to identify entities. A processor trains the classifier with first taxonomy and ontology. A processor uses the classifier to classify each component from one or more augmented reality peer assistance sessions into a class. A processor generates a root cause analysis model based on the identified entities and the classified components. | 2022-09-15 |
20220291982 | METHODS AND SYSTEMS FOR INTELLIGENT SAMPLING OF NORMAL AND ERRONEOUS APPLICATION TRACES - Computer-implemented methods and systems described herein perform intelligent sampling of application traces generated by an application. Computer-implemented methods and systems determine different sampling rates based on frequency of occurrence of normal traces and erroneous traces of the application. The sampling rates for low frequency normal and erroneous traces are larger than the sampling rates for high frequency normal and erroneous traces. The relatively larger sampling rates for low frequency traces ensure that low frequency traces are sampled in sufficient numbers and are not passed over during sampling of the application traces. The sampled normal and erroneous traces are stored in a data storage device. | 2022-09-15 |
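An inverse-frequency rule gives the behavior described above — rare trace types sampled at a higher rate than frequent ones. The target count and exact formula are illustrative, not from the application:

```python
def sampling_rates(trace_counts, target_per_type=100):
    """Per-trace-type sampling rate inversely proportional to frequency:
    types rarer than `target_per_type` are always kept (rate 1.0), while
    frequent types are down-sampled so roughly `target_per_type` survive."""
    return {t: min(1.0, target_per_type / n) for t, n in trace_counts.items()}
```

The cap at 1.0 is what guarantees low-frequency (often erroneous) traces are never passed over entirely.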
20220291983 | ANALYSIS SYSTEM, METHOD OF PRESENTING RESULT OF INSPECTION IN ANALYSIS SYSTEM AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An analysis system includes a recorder that records a point in time at which an event occurs in an analysis system and event contents as a log, an extractor that extracts a related log relating to an error from the log acquired between a last point in time at which the analysis system passes an inspection and a point in time at which the error occurs, when the error occurs in the inspection performed on the analysis system, and a presenter that presents the related log extracted by the extractor to a user. | 2022-09-15 |
20220291984 | MONITORING STATUSES OF MONITORING MODULES OF A DISTRIBUTED COMPUTING SYSTEM - Systems and methods are disclosed for monitoring features of a computing device of a distributed computing system using a self-monitoring module. The self-monitoring module can include multiple feature-specific monitoring modules and one or more parent nodes for the feature-specific monitoring modules. A feature-specific monitoring module can identify or detect a fault status change, such as a fault condition or fault resolution, for one or more features. Based on the identified fault conditions or fault resolutions, the feature-specific monitoring module can determine an internal status and communicate an updated status to a parent node. | 2022-09-15 |
20220291985 | CONTROLLER THAT RECEIVES A CYCLIC REDUNDANCY CHECK (CRC) CODE FOR BOTH READ AND WRITE DATA TRANSMITTED VIA BIDIRECTIONAL DATA LINK - A controller includes a link interface that is to couple to a first link to communicate bi-directional data and a second link to transmit unidirectional error-detection information. An encoder is to dynamically add first error-detection information to at least a portion of write data. A transmitter, coupled to the link interface, is to transmit the write data. A delay element is coupled to an output from the encoder. A receiver, coupled to the link interface, is to receive second error-detection information corresponding to at least the portion of the write data. Error-detection logic is coupled to an output from the delay element and an output from the receiver. The error-detection logic is to determine errors in at least the portion of the write data by comparing the first error-detection information and the second error-detection information, and, if an error is detected, is to assert an error condition. | 2022-09-15 |
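The controller's error check reduces to comparing a locally computed CRC over the write data with the CRC received on the unidirectional link. A minimal sketch, using `zlib.crc32` as a stand-in for whatever CRC polynomial the link actually uses (an assumption):

```python
import zlib

def check_write(write_data: bytes, received_crc: int) -> bool:
    """Compare the controller's CRC over the write data against the CRC
    received on the unidirectional error-detection link. False means the
    controller should assert an error condition."""
    return zlib.crc32(write_data) == received_crc
```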
20220291986 | Cloud-Based Monitoring Of Hardware Components In A Fleet Of Storage Systems - Cloud-based monitoring of hardware components in a fleet of storage systems, including: collecting, for a plurality of hardware components that are included in a physical storage system, information describing the operation of each hardware component, wherein information is collected for the hardware components of multiple physical storage systems; predicting, based on the information describing the operation of each hardware component and historical information describing the operation of one or more other hardware components, the expected performance of each hardware component; and modifying, based on the expected performance of each hardware component, the utilization of at least one or more of the physical storage systems in the fleet. | 2022-09-15 |
20220291987 | MULTI-TENANT INTEGRATION ENVIRONMENT - A computer-implemented method for a multi-tenant integration environment includes, in response to an error occurring during a state of execution of an integration flow, generating error data for the error. The method further includes associating the generated error data with the error. The method further includes storing the generated error data in a data storage component. The generated error data includes (i) error state information corresponding to the state of execution of the integration flow and (ii) target state information corresponding to a target state of execution of the integration flow. | 2022-09-15 |
20220291988 | CRITICAL PROBLEM EXCEPTION HANDLING - Methods, apparatus, and computer program products for handling critical problem exceptions during an execution of an application are provided. The method comprises: detecting, by one or more processing units, an occurrence of a certain type of critical problem exception during an execution of an application, the critical problem exception resulting in a termination of the application; instructing, by one or more processing units, to call a Super Handling Routine (SHR) corresponding to the type of the critical problem exception at a pre-configured address based on a pre-determined context registered by the application, the SHR being configured to handle critical problem exceptions; and handing, by one or more processing units, control to the SHR to handle the type of the critical problem exception. | 2022-09-15 |
20220291989 | ADAPTIVE LOG DATA LEVEL IN A COMPUTING SYSTEM - Disclosed are embodiments for improving remote diagnostics of a computer system. Some embodiments obtain operational parameter values and log data from a plurality of network devices, and provide the operational parameter values and log data to a machine learning model. The model is trained to identify a root cause of a degradation of the computer system based on the operational parameter values and log data, and to provide recommendations of log data level settings for the network devices. If the model identifies a root cause of the degradation with sufficient confidence, a remedial action is identified and applied to the computer system. If the confidence level is insufficient, log data level settings of the network devices are modified based on the recommendations of the model. This process is performed iteratively, for example, such that the model receives log data based on its recommended log data levels, until a root cause is identified with sufficient confidence. | 2022-09-15 |
20220291990 | SYSTEM AND COMPUTER-IMPLEMENTED METHOD FOR MANAGING ROBOTIC PROCESS AUTOMATION (RPA) ROBOTS - A system for managing one or more robots is provided. The system is configured to resolve the one or more issues or faults that lead to failure of execution of one or more automation processes executed by the one or more robots. The system is configured to receive information of an issue associated with at least one robot of the one or more robots and further configured to obtain job log data, associated with the at least one robot, for the issue. The system is further configured to determine, using a trained machine learning model, a corrective action, and its associated confidence score for resolving the received issue, based on the job log data and an analysis performed by the trained machine learning model. Further, the system performs the corrective action based on the confidence score and the analysis, for managing the one or more robots. | 2022-09-15 |
20220291991 | METHOD AND DEVICE FOR POSITIONING FAULTY DISK - Disclosed are a method and device for positioning a faulty disk. The method comprises: in response to detecting that a first disk is faulty, determining positioning information of the first disk, the positioning information comprising a logic Enclosure Identity (EID) and a logic Slot Identity (SID); and positioning the first disk according to the EID and SID of the first disk. | 2022-09-15 |
20220291992 | Energy-Efficient Error-Correction-Detection Storage - A memory system employs an addressing scheme to logically divide rows of memory cells into separate contiguous regions, one for data storage and another for error detection and correction (EDC) codes corresponding to that data. Data and corresponding EDC codes are stored in the same row of the same bank. Accessing data and corresponding EDC code in the same row of the same bank advantageously saves power and avoids bank conflicts. The addressing scheme partitions the memory without requiring the requesting processor to have an understanding of the memory partition. | 2022-09-15 |
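The addressing scheme — data blocks at the front of each row, their EDC codes packed at the tail of the same row — can be sketched as pure address arithmetic. The row and block geometry below is an assumed example, not taken from the application:

```python
def edc_addresses(row_bytes, block_bytes, edc_bytes, addr):
    """For logical address `addr`, return (row, data_offset, edc_offset) so
    that a data block and its EDC code always land in the same row (and
    therefore the same bank), avoiding a second activation and bank conflicts."""
    blocks_per_row = row_bytes // (block_bytes + edc_bytes)
    data_region = blocks_per_row * block_bytes    # data occupies the row's front
    block = addr // block_bytes
    row, slot = divmod(block, blocks_per_row)
    data_off = slot * block_bytes
    edc_off = data_region + slot * edc_bytes      # EDC codes packed at the tail
    return row, data_off, edc_off
```

Because the mapping is computed in the memory controller, the requesting processor sees a flat address space and needs no knowledge of the partition, as the abstract notes.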
20220291993 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus including a memory and a memory controller writing data to the memory in response to a write instruction for writing the data to the memory, in which the memory executes error correction processing for each data of a predetermined data length, and the memory controller executes, in place of the memory, read modify write processing in a case where a data length of the data related to the write instruction is smaller than the predetermined data length. | 2022-09-15 |
20220291994 | INFORMATION PROCESSING SYSTEM, STORAGE DEVICE, AND HOST - In general, according to an embodiment, a storage device includes a non-volatile memory and a controller. The non-volatile memory includes a plurality of pages, each of the pages including a data area of a first size and a redundant area of a second size smaller than the first size. The controller is configured to receive, from a host, a write command, receive, from the host, transfer data associated with the write command. The transfer data includes write data of the first size appended with a first error detection code for the write data. The controller is further configured to store the write data into the data area of one of the pages and the first error detection code into the redundant area of the one of the pages. | 2022-09-15 |
20220291995 | MITIGATING READ DISTURB EFFECTS IN MEMORY DEVICES - A die read counter and a block read counter are maintained for a specified block of a memory device. An estimated number of read events associated with the specified block is determined based on a value of the block read counter and a value of the die read counter. Responsive to determining that the estimated number of read events satisfies a criterion, a media management operation of one or more pages associated with the specified block is performed. | 2022-09-15 |
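One way the die counter and block counter can combine into an estimate is sampling: the block counter is bumped only on every Nth die read, and the estimate scales the sampled count back up. The interval, threshold, and counter semantics here are assumptions for illustration:

```python
class ReadDisturbTracker:
    """Maintain a die-level read counter and sampled per-block counters, and
    trigger a media-management scan once a block's estimated read count
    crosses a threshold."""

    def __init__(self, interval=8, threshold=32):
        self.interval = interval      # sample 1 of every `interval` die reads
        self.threshold = threshold    # estimated reads that trigger a scan
        self.die_reads = 0
        self.block_reads = {}

    def record_read(self, block):
        self.die_reads += 1
        if self.die_reads % self.interval == 0:   # sampled read event
            self.block_reads[block] = self.block_reads.get(block, 0) + 1

    def estimate(self, block):
        # scale the sampled count back up by the sampling interval
        return self.block_reads.get(block, 0) * self.interval

    def needs_scan(self, block):
        return self.estimate(block) >= self.threshold
```

Sampling keeps per-block counter storage small while still bounding how many read disturbs a block can accumulate before its pages are checked.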
20220291996 | SYSTEMS, METHODS, AND DEVICES FOR FAULT RESILIENT STORAGE - A method of operating a storage device may include determining a fault condition of the storage device, selecting a fault resilient mode based on the fault condition of the storage device, and operating the storage device in the selected fault resilient mode. The selected fault resilient mode may include one of a power cycle mode, a reformat mode, a reduced capacity read-only mode, a reduced capacity mode, a reduced performance mode, a read-only mode, a partial read-only mode, a temporary read-only mode, a temporary partial read-only mode, or a vulnerable mode. The storage device may be configured to perform a namespace capacity management command received from the host. The namespace capacity management command may include a resize subcommand and/or a zero-size namespace subcommand. The storage device may report the selected fault resilient mode to a host. | 2022-09-15 |
20220291997 | CAPACITOR ENERGY MANAGEMENT FOR UNEXPECTED POWER LOSS IN DATACENTER SSD DEVICES - Various implementations described herein relate to systems and methods for a Solid State Drive (SSD) to manage data in response to a power loss event, including writing data received from a host to a volatile storage of the SSD, detecting the power loss event before the data is written to a non-volatile storage of the SSD, storing the write commands to a non-volatile storage of the SSD, marking at least one storage location of the SSD associated with the write commands as uncorrectable, for example, after the power is restored. | 2022-09-15 |
20220291998 | BACKING-UP APPLICATION DATA FROM CLOUD-NATIVE APPLICATIONS - Embodiments described herein are directed to backing up and recovering cloud-native applications. In some embodiments, the data engine maps a first set of data volumes to a data repository dedicated to store a backup of the data associated with the application. Furthermore, the data engine transmits, using a dynamically generated process, the data stored in the identified first set of data volumes to the data repository for backup based on the mapping. The data engine may also initiate a recovery of the application. The data engine may use a new dynamically generated process to identify and transmit a respective data set to a corresponding data volume for storage. Moreover, the data engine may use the new process to restore the components of the application using each respective identified data set. | 2022-09-15 |
20220291999 | ENCRYPTION KEY MANAGEMENT - Disclosed herein are system, method, and computer program product embodiments for encryption key management. An embodiment operates by executing an initial non-backup instance of an application and generating a primary key using a cryptographic algorithm. The embodiment requests a customer to create a passphrase configured to encrypt and decrypt the primary key. The embodiment generates a derived key using a cryptographic algorithm and the customer passphrase as input. The embodiment then encrypts the primary key using the generated derived key and stores the encrypted primary key in a catalog. | 2022-09-15 |
20220292000 | COMMUNICATION NETWORK DATA FAULT DETECTION AND MITIGATION - A processing system may apply a binary classifier to detect whether a first data pattern of a first data source associated with a communication network performance indicator is consistent with prior data patterns of the first data source that are labeled as correct data patterns, determine, via the binary classifier, that the first data pattern is not consistent, apply a clustering model to a first input data set comprising the first data pattern and invalid data patterns of the first data source to obtain a first plurality of clusters, verify that the first data pattern is an invalid data pattern when the first plurality of clusters is the same as a second plurality of clusters generated by applying the clustering model to a second input data set comprising the invalid data patterns, and replace the first data source with a replacement data source as an active data source in response. | 2022-09-15 |
20220292001 | PREDICTIVE OPTIMAL QUEUE LENGTH MANAGEMENT FOR BACKUP SESSIONS USING DISTRIBUTED PROXIES - Techniques described herein relate to methods and systems for managing backup operations. The method may include receiving a request to perform a first backup operation for a first virtual machine (VM); making a first determination, using a vProxy preference map, that a first vProxy is assigned to the first VM based on a backup capability associated with the first vProxy; making a second determination that the first vProxy is not available to perform the first backup operation; making a third determination, based on the second determination and using a vProxy information database, that a second vProxy is available to perform the first backup operation based on the second vProxy being associated with the same backup capability as the first vProxy; and performing the first backup operation using the second vProxy. | 2022-09-15 |
20220292002 | DATA RECOVERY IN VIRTUAL DESKTOP INFRASTRUCTURE ENVIRONMENTS - An apparatus comprises a processing device configured to receive from a virtual desktop infrastructure client a request to recover data, to identify virtual desktops associated with the virtual desktop infrastructure client that are hosted on virtual machines running on virtualization infrastructure of a virtual desktop infrastructure environment, and to push a token to at least one of the virtual desktops. The processing device is further configured to authenticate the request to recover data based at least in part on validating a proof of knowledge of the token that is received from the virtual desktop infrastructure client, to receive from the virtual desktop infrastructure client a selection of at least a given one of a set of copies of the data of the virtual desktops, and to mount the given copy in at least one of the virtual desktops hosted on at least one of the virtual machines. | 2022-09-15 |
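The "proof of knowledge of the token" step in 20220292002 can be modeled as a challenge-response exchange. The sketch below is an assumption about how such a proof might work (the abstract does not specify a mechanism); an HMAC over a server challenge lets the client prove it holds the pushed token without transmitting the token itself:

```python
import hashlib
import hmac
import os

def push_token() -> bytes:
    # Token pushed to a virtual desktop by the processing device.
    return os.urandom(32)

def proof_of_knowledge(token: bytes, challenge: bytes) -> bytes:
    # Client side: prove possession of the token for this challenge.
    return hmac.new(token, challenge, hashlib.sha256).digest()

def authenticate(token: bytes, challenge: bytes, proof: bytes) -> bool:
    # Server side: recompute and compare in constant time.
    expected = hmac.new(token, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)
```

Only after `authenticate` succeeds would the recovery flow proceed to copy selection and mounting.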
20220292003 | IMAGE DISPLAY SYSTEM, IMAGE PROCESSOR CIRCUIT, AND PANEL DRIVING METHOD - An image display system includes a display device, a second memory circuit, and an image processor circuit. The display device includes a panel and a first memory circuit, in which the first memory circuit is configured to store first predetermined data for controlling the panel. The second memory circuit is configured to store second predetermined data. The image processor circuit is configured to read first part data in the first predetermined data and second part data in the second predetermined data and compare the first part data with the second part data. If the first part data is identical to the second part data, the image processor circuit is further configured to output a driving signal according to the second predetermined data to control the panel to start displaying an image. | 2022-09-15 |
20220292004 | MEDIATOR ASSISTED SWITCHOVER BETWEEN CLUSTERS - Techniques are provided for metadata management for enabling automated switchover in accordance with a configuration of a storage solution that expresses a preference for either maintaining availability (e.g., a non-zero RPO mode) of the storage solution or avoiding data loss (e.g., a zero RPO mode). In one example, responsive to detecting a switchover trigger event, a node of a local cluster of a cross-site storage solution determines whether performance of an automated switchover from a failed cluster to a surviving cluster of the cross-site storage solution is enabled. Responsive to an affirmative determination, the node selectively proceeds with the automated switchover based on the configuration. | 2022-09-15 |
20220292005 | CROSS-PLATFORM REPLICATION - One or more techniques and/or computing devices are provided for cross-platform replication. For example, a replication relationship may be established between a first storage endpoint and a second storage endpoint, where at least one of the storage endpoints, such as the first storage endpoint, lacks or has incompatible functionality to perform and manage replication because the storage endpoints have different storage platforms that store data differently, use different control operations and interfaces, etc. Accordingly, replication destination workflow, replication source workflow, and/or a proxy representing the first storage endpoint may be implemented at the second storage endpoint comprising the replication functionality. In this way, replication, such as snapshot replication, may be implemented between the storage endpoints by the second storage endpoint using the replication destination workflow, the replication source workflow, and/or the proxy that either locally executes tasks or routes tasks to the first storage endpoint such as for data access. | 2022-09-15 |
20220292006 | System for Automatically Generating Insights by Analysing Telemetric Data - A system and method for analyzing telemetry of a technology system and generating automated insights are provided. The system receives telemetry from the technology system. The system identifies key metric types in the received telemetry and parses the telemetry. The system categorizes the parsed telemetry and applies domain specific context and rules to the categorized telemetry. The system performs on-demand operations on the categorized telemetry and generates a list of insights based on user preferences. The system generates insightful information comprising human readable text statements, proactive actionable suggestions for preventive measures, predictive forecast of upcoming events, and any combination thereof, for each of the insights. The system creates an output dashboard for the generated insightful information and displays the output dashboard on the user device. | 2022-09-15 |
20220292007 | INSERTING PROBABILISTIC MODELS IN DETERMINISTIC WORKFLOWS FOR ROBOTIC PROCESS AUTOMATION AND SUPERVISOR SYSTEM - Probabilistic models may be used in a deterministic workflow for robotic process automation (RPA). Machine learning (ML) introduces a probabilistic framework where the outcome is not deterministic, and therefore, the steps are not deterministic. Deterministic workflows may be mixed with probabilistic workflows, or probabilistic activities may be inserted into deterministic workflows, in order to create more dynamic workflows. A supervisor system may be used to monitor an ML model and raise an alarm, disable an RPA robot, bypass an RPA robot, or roll back to a previous version of the ML model when an error is detected by a data drift detector, a concept drift detector, or both. | 2022-09-15 |
20220292008 | METHOD AND SYSTEM FOR FAILURE PREDICTION IN CLOUD COMPUTING PLATFORMS - The present disclosure relates to systems and techniques for prediction of failures in resources deployed in a data plane of a cloud based infrastructure. The resources are selected from a plurality of cloud based resources arranged in a hierarchical manner and allocated to a client device. A predictor employs a first prediction model to obtain a first prediction of a failure of a resource, and a second prediction model to obtain a second prediction of the failure of the resource. Weights are assigned to the first prediction and second prediction based at least in part on a criterion. The predictor computes an overall prediction of the failure of the resource based at least in part on at least one of the first prediction, the second prediction or the respective weights assigned to the predictions. The overall prediction is utilized for remediating the failure of the resource. | 2022-09-15 |
20220292009 | COMPUTER-IMPLEMENTED METHOD FOR GENERATING A COMPONENT FAULT AND DEFICIENCY TREE OF A MULTI-COMPONENT SYSTEM COMPRISING A PLURALITY OF COMPONENTS - Provided is a computer-implemented method for generating a Component Fault and Deficiency Tree of a multi-component system, the method including: | 2022-09-15 |
20220292010 | DEFECT RESOLUTION - Systems, methods, and non-transitory computer readable media are provided for facilitating improved defect resolution. Defect information and defect criteria information may be obtained. The defect information may identify defects of software and/or hardware in development. The defect criteria information may define one or more criteria for measuring the defects. The defects may be measured based on the one or more criteria. A defect analysis interface may be provided. The defect analysis interface may list a limited number of the defects based on the measurements of the defects. The defect analysis interface may provide costs (e.g., computing resources, time, personnel) of solving the defects. | 2022-09-15 |
20220292011 | AUTOMATED APPLICATION TESTING OF MUTABLE INTERFACES - Applications under test (AUT) may be tested by automated testing systems utilizing machine vision to recognize visual elements presented by the AUT and apply inputs to graphical elements, just as a human would. By utilizing the smallest image patch available, processing demands of the testing system are minimized. However, the image patch used to identify a portion of an AUT must be identifiable to the automated system. Image patches are therefore selected that are the smallest size that can still be identified in an AUT by an automated system using machine vision, even as the AUT display is resized, reproportioned, noisy, or otherwise altered from the testing platform that was utilized for training. | 2022-09-15 |
20220292012 | SYSTEM AND METHOD FOR FULLY INTEGRATED REGRESSION & SYSTEM TESTING ANALYTICS - Various methods, apparatuses/systems, and media for automatically generating fully integrated regression and system testing (FIRST) analytics are disclosed. A processor accesses a production database to obtain production data associated with an application, and accesses a user acceptance testing (UAT) database to obtain UAT data associated with the application. The processor generates gap data on test coverage based on comparing the production data with the UAT data; analyzes generated gap data; automatically generates, in response to analyzing the generated gap data, executable full coverage of test scenarios for testing the application; and automatically executes testing of the application based on the generated test scenarios. | 2022-09-15 |
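The gap-data generation step in 20220292012 is essentially a set difference between what production exercises and what UAT covers. A minimal sketch with hypothetical inputs (the abstract does not specify the coverage representation):

```python
def coverage_gap(production_scenarios, uat_scenarios):
    # Scenarios observed in production data but absent from UAT
    # test coverage; these become candidates for newly generated
    # executable test scenarios.
    return sorted(set(production_scenarios) - set(uat_scenarios))
```

Each returned scenario would then be turned into an executable test and run against the application.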
20220292013 | FAST OPERATING SYSTEM CONFIGURATION OPTION SPACE EXPLORATION VIA CROSS-OS GRAFTING - A method searches and tests for performance optima in an operating system (OS) configuration space. The method includes generating a plurality of OS configurations. For at least a first OS configuration, of the generated OS configurations, the method further includes: fetching a plurality of OS modules based on the first OS configuration; building a first OS image from the fetched OS modules; and testing the first OS image to determine a first value of a performance metric. | 2022-09-15 |
20220292014 | ADDRESS VECTORS FOR DATA STORAGE ELEMENTS - In some examples, a device includes a set of data storage elements, wherein each data storage element of the set of data storage elements is associated with a respective valid address vector, and wherein a bit flip in any bit of any of the valid address vectors leads to one of a set of invalid address vectors not associated with any of the set of data storage elements. The device also includes a decoder configured to receive a first address vector as part of a request and to check whether the first address vector corresponds to one of the valid address vectors or to one of the invalid address vectors. The decoder is also configured to select an associated data storage element in response to receiving the request and in response to determining that the first address vector corresponds to one of the valid address vectors. | 2022-09-15 |
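The address-vector property in 20220292014 (any single bit flip of a valid vector yields an invalid vector) is exactly what a parity bit provides: valid vectors all have even parity, so every vector at Hamming distance 1 has odd parity and maps to no storage element. A sketch under that assumption, with hypothetical function names:

```python
def parity(bits):
    return sum(bits) % 2

def encode_address(value: int, width: int = 3):
    # Append an even-parity bit so all valid address vectors have parity 0.
    bits = [(value >> i) & 1 for i in range(width)]
    bits.append(parity(bits))
    return tuple(bits)

def decode(vector, storage):
    # Reject any vector whose parity check fails: a single bit flip
    # always lands on an invalid vector, never on another element.
    if parity(vector) != 0:
        return None
    return storage.get(vector)
```

Only half of all possible vectors are valid, which is the price paid for single-bit-flip detection.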
20220292015 | Cache Victim Selection Based on Completer Determined Cost in a Data Processing System - A data processing apparatus includes a requester, a completer and a cache. Data is transferred between the requester and the cache and between the cache and the completer. The cache implements a cache eviction policy. The completer determines an eviction cost associated with evicting the data from the cache and notifies the cache of the eviction cost. The cache eviction policy implemented by the cache is based, at least in part, on the cost of evicting the data from the cache. The eviction cost may be determined, for example, based on properties or usage of a memory system of the completer. | 2022-09-15 |
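The completer-informed eviction policy of 20220292015 can be sketched as choosing the victim with the lowest completer-reported eviction cost. The function below is an illustrative reduction of the idea, not the claimed policy, which may combine cost with other signals such as recency:

```python
def select_victim(cache_lines, eviction_cost):
    # cache_lines:   candidate lines for eviction
    # eviction_cost: line -> cost reported by the completer (e.g. based
    #                on memory-system properties or usage)
    return min(cache_lines, key=lambda line: eviction_cost.get(line, 0))
```

A real policy would likely break ties with LRU order or blend the cost into an existing replacement heuristic.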
20220292016 | COMPUTER INCLUDING CACHE USED IN PLURAL DIFFERENT DATA SIZES AND CONTROL METHOD OF COMPUTER - A computer includes a memory and a cache holding a part of data stored in the memory in any of a plurality of data regions. In a case of replacing first data of a first data size held in the cache with second data of a second data size larger than the first data size, allocation of data regions of the cache is changed in units of the second data size by referring to a first management list that includes a plurality of first entries that correspond to the plurality of data regions, respectively, for managing priorities of the data regions for each of the plurality of processes, and a second management list that includes a plurality of second entries corresponding to the first entries for a process that uses the first data size, for managing priorities of first data of the first data size held in the data regions. | 2022-09-15 |
20220292017 | ENHANCING CACHE DIRTY INFORMATION - A method performed by a controller comprising assigning a first status indicator to entries in a first address line in a volatile memory belonging to a first region of an LUT stored in a non-volatile memory, and a second status indicator to entries in the first address line in the volatile memory belonging to a second region of the LUT, setting either the first or second status indicator to a dirty status based on whether a cache updated entry at an address m in the volatile memory belongs to the first or second region of the LUT, and writing, based on the dirty status of the first and second status indicator at the address m, all entries in the volatile memory associated with the first region or the second region containing the updated entry to the non-volatile memory. | 2022-09-15 |
20220292018 | BIAS CONTROL FOR A MEMORY DEVICE - Methods, systems, and devices for bias control for a memory device are described. A memory system may store an indication of whether data is coherent. In some examples, the indication may be stored as metadata, where a first value indicates that the data is not coherent and a second value or a third value indicates that the data is coherent. When a processing unit or other component of the memory system processes a command to access data, the memory system may operate according to a device bias mode when the indication is the first value, and according to a host bias mode when the indication is the second value or the third value. | 2022-09-15 |
20220292019 | MEMORY REQUEST THROTTLING TO CONSTRAIN MEMORY BANDWIDTH UTILIZATION - A processing system includes an interconnect fabric coupleable to a local memory and at least one compute cluster coupled to the interconnect fabric. The compute cluster includes a processor core and a cache hierarchy. The cache hierarchy has a plurality of caches and a throttle controller configured to throttle a rate of memory requests issuable by the processor core based on at least one of an access latency metric and a prefetch accuracy metric. The access latency metric represents an average access latency for memory requests for the processor core and the prefetch accuracy metric represents an accuracy of a prefetcher of a cache of the cache hierarchy. | 2022-09-15 |
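The throttle controller in 20220292019 conditions the request rate on an access latency metric and a prefetch accuracy metric. The scaling formula below is purely illustrative (the abstract does not disclose one); it simply reduces the issuable rate when average latency exceeds a target or prefetch accuracy drops below a floor:

```python
def throttle_rate(base_rate, avg_latency, latency_target,
                  prefetch_accuracy, accuracy_floor=0.5):
    # Hypothetical throttling rule: scale the issuable memory-request
    # rate down proportionally when either metric degrades.
    rate = base_rate
    if avg_latency > latency_target:
        rate *= latency_target / avg_latency      # latency-driven throttle
    if prefetch_accuracy < accuracy_floor:
        rate *= prefetch_accuracy / accuracy_floor  # inaccurate prefetches
    return rate
```

Throttling on poor prefetch accuracy avoids spending scarce bandwidth on prefetches that are likely to be useless.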
20220292020 | Data Storage Device and Method for Application Identifier Handler Heads-Up for Faster Storage Response - A data storage device and method for application identifier handler heads-up for faster storage response are provided. In one embodiment, a data storage device is provided comprising a volatile memory, a non-volatile memory, and a controller. The controller is configured to: receive data and a logical address from a host, wherein the data is tagged with an identifier of an application on the host; store the data at a physical address in the non-volatile memory; maintain a logical-to-physical address table that comprises an entry associating the logical address, physical address, and identifier; determine that the application is subsequently reloaded on the host; and cache, in the volatile memory, a portion of the logical-to-physical address table that comprises the entry for the identifier. Other embodiments are provided. | 2022-09-15 |
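The "heads-up" behavior in 20220292020 (pre-caching the logical-to-physical entries tagged with a reloaded application's identifier) can be sketched with a plain list standing in for the L2P table. Class and attribute names are hypothetical:

```python
class L2PCache:
    """Sketch of application-aware logical-to-physical table caching."""

    def __init__(self, table):
        # table: list of (logical_addr, physical_addr, app_id) entries
        # kept in non-volatile memory.
        self.table = table
        self.volatile_cache = []

    def on_app_reload(self, app_id):
        # When the host reloads an application, pre-load into volatile
        # memory the L2P entries tagged with its identifier, so the
        # application's reads resolve without table fetches from flash.
        self.volatile_cache = [e for e in self.table if e[2] == app_id]
```

The tag is attached when the host first writes the data, so the device can later recognize which entries belong to which application.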
20220292021 | Cache Aware Searching Based on One or More Files in Remote Storage - Embodiments are disclosed for performing cache aware searching. In response to a search query, a first bucket and a second bucket in remote storage are identified for processing the search query. A determination is made that a first file in the first bucket is present in a cache when the search query is received. In response to the search query, a search is performed using the first file based on the determination that the first file is present in the cache when the search query is received, and the search is performed using a second file from the second bucket once the second file is stored in the cache. | 2022-09-15 |
20220292022 | DATA-RELATIONSHIP-BASED FAST CACHE SYSTEM - A data-relationship-based FAST cache system includes a storage controller that is coupled to first storage device(s) and second storage device(s). The storage controller identifies a relationship between first data stored in the first storage device(s) and second data stored in the first storage device (s), with the relationship based on a difference between a first number of accesses of the first data associated with a first time period and a second number of accesses of the second data associated with the first time period being within an access difference threshold range. Subsequent to identifying the relationship, the storage controller determines that the first data has been accessed in the first storage device(s) a number of times within a second time period that exceeds a FAST cache threshold and, in response, moves both the first data and the second data to the second storage device(s) based on the relationship. | 2022-09-15 |
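The relationship-based promotion in 20220292022 has two steps: declare two blocks related when their access counts over a window differ by no more than a threshold, then, when one of them crosses the FAST cache threshold, move both to the faster tier. A minimal sketch with hypothetical names and a set standing in for the second (faster) storage device:

```python
def related(access_counts, a, b, diff_threshold):
    # Two data blocks are related when their access counts over the
    # same time period differ by at most diff_threshold.
    return abs(access_counts[a] - access_counts[b]) <= diff_threshold

def maybe_promote(access_counts, item, relationships, fast_threshold, fast_tier):
    # If one block of a related pair becomes hot, promote the whole
    # group so the related (but individually colder) data comes along.
    if access_counts[item] > fast_threshold:
        fast_tier.add(item)
        for other in relationships.get(item, []):
            fast_tier.add(other)
```

The point of the relationship is that the second block rides along even though its own access count would not justify promotion.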
20220292023 | VICTIM CACHE WITH WRITE MISS MERGING - A caching system including a first sub-cache, a second sub-cache, coupled in parallel with the first sub-cache, for storing cache data evicted from the first sub-cache and write-memory commands that are not cached in the first sub-cache, and a cache controller configured to receive two or more cache commands, determine a conflict exists between the received two or more cache commands, determine a conflict resolution between the received two or more cache commands, and send the two or more cache commands to the first sub-cache and the second sub-cache. | 2022-09-15 |