52nd week of 2021 patent application highlights, part 53
Patent application number | Title | Published |
20210406073 | METHODS AND APPARATUS FOR TENANT AWARE RUNTIME FEATURE TOGGLING IN A CLOUD ENVIRONMENT - Methods, apparatus, systems, and articles of manufacture to provide tenant aware runtime feature toggling in a cloud or other virtualized computing environment are disclosed. An example method includes determining a feature toggle associated with a resource of a provisioning request; retrieving the feature toggle from a database using a first tenant identifier, the feature toggle to have a first value for the first tenant identifier and a second value for a second tenant identifier; processing the feature toggle to provision the resource according to the first value of the feature toggle; and facilitating provisioning of the resource according to the first value. | 2021-12-30 |
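The per-tenant toggle lookup this abstract describes (store a different value per tenant identifier, fetch it at provisioning time, and provision accordingly) can be sketched roughly as follows. All names here are illustrative assumptions, not the patent's implementation, and a plain dictionary stands in for the database:

```python
# Minimal sketch of tenant-aware feature toggling (assumed names).
# A dict keyed by (feature, tenant_id) plays the role of the toggle database.
class ToggleStore:
    def __init__(self):
        self._toggles = {}

    def set(self, feature, tenant_id, value):
        self._toggles[(feature, tenant_id)] = value

    def get(self, feature, tenant_id, default=False):
        return self._toggles.get((feature, tenant_id), default)


def provision(resource, tenant_id, store):
    """Provision a resource according to the tenant's toggle value."""
    if store.get(f"enable_{resource}", tenant_id):
        return f"{resource} provisioned for {tenant_id} (feature on)"
    return f"{resource} provisioned for {tenant_id} (feature off)"


store = ToggleStore()
store.set("enable_gpu", "tenant-a", True)   # first value, first tenant
store.set("enable_gpu", "tenant-b", False)  # second value, second tenant
```

The same feature key resolves to different values depending on which tenant issues the provisioning request, which is the core of the claimed method.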
20210406074 | DYNAMIC PRODUCT RESOURCE MAPPING OF CLOUD RESOURCES - Data characterizing a first address of a software service executing based on a first virtual resource that is within a remote computing environment is received. The executing includes transmitting a request for utilization of the first virtual resource. The received data further characterizes a log of the request for the first virtual resource. The log includes the first address and a second address of the first virtual resource. A mapping between the first address of the software service and the second address of the first virtual resource is determined using the received data. The mapping between the first address of the software service and the second address of the first virtual resource is provided. Related apparatus, systems, techniques and articles are also described. | 2021-12-30 |
20210406075 | APPARATUS AND METHOD FOR A RESOURCE ALLOCATION CONTROL FRAMEWORK USING PERFORMANCE MARKERS - An apparatus and method for dynamic resource allocation with mile/performance markers. For example, one embodiment of a processor comprises: resource allocation circuitry to allocate a plurality of hardware resources to a plurality of workloads including priority workloads associated with one or more guaranteed performance levels; and monitoring circuitry to evaluate execution progress of a workload across a plurality of nodes, each node to execute one or more processing stages of the workload, wherein the monitoring circuitry is to evaluate the execution progress of the workload, at least in part, by reading progress markers advertised by the workload at the specified processing stages, wherein the monitoring circuitry is to detect that the workload may not meet one of the guaranteed performance levels based on the progress markers, and wherein the resource allocation circuitry, responsive to the monitoring circuitry, is to reallocate one or more of the plurality of hardware resources to improve the performance level of the workload. | 2021-12-30 |
20210406076 | METHOD AND DEVICE FOR OPERATING INSTANCE RESOURCES - A method and device for operating instance resources. The method includes: receiving an operation request, the operation request including a type of an operation and a target resource; acquiring an instance resource associated with the target resource according to an instance arranging property; executing the operation on the instance resource associated with the target resource; and transmitting an operation response. | 2021-12-30 |
20210406077 | METHOD AND SYSTEM FOR PARALLEL COMPUTATION - The aim is to speed up parallel computation. A parallel computation method comprises a step for distributing respective first-level small pieces of data, that are formed by dividing data, to respective computation nodes in plural computation nodes; a step for further dividing, in at least one first computation node in the plural computation nodes, the first-level small piece of data into second-level small pieces of data; a step for transferring, in parallel, the respective second-level small pieces of data from the at least one first computation node to the plural computation nodes; a step for transferring, in parallel, the transferred second-level small pieces of data from the respective computation nodes in the plural computation nodes to at least one second computation node in the plural computation nodes; and a step for reconstructing, in the at least one second computation node, the first-level small piece of data by using the second-level small pieces of data transferred from the plural computation nodes. | 2021-12-30 |
20210406078 | CLIENT-DEFINED FIELD RESOLVERS FOR DATABASE QUERY LANGUAGE GATEWAY - A query gateway service for servicing API requests of software services, the query gateway service configured to monitor for, and execute, client-defined field resolvers so that client applications can define, at least in part, how data served to that client application in response to an API request is formatted, validated, mutated, or otherwise presented. | 2021-12-30 |
20210406079 | Persistent Non-Homogeneous Worker Pools - Function calls, such as function calls from a workflow, may be added to queues. Function calls are selected from the queue and executed by workers of a worker pool, each worker being a container. The workers may be of different types and function calls may require execution by a worker of a specific type. The workers of the worker pool may be created or deleted such that workers are of the type required by function calls in the queue. Creation and deletion of workers may be performed according to priority of function calls in the queue. Creation and deletion of workers may be scheduled according to a workflow including the plurality of function calls. | 2021-12-30 |
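The pool-reconciliation idea in this abstract (workers have types, queued calls demand types, and the pool creates or deletes workers so its makeup matches the queue) can be sketched as below. Names and the create/delete policy are assumptions for illustration; real workers would be containers rather than counters:

```python
# Sketch of a non-homogeneous worker pool (assumed names).
from collections import Counter, deque

class WorkerPool:
    def __init__(self):
        self.workers = Counter()   # worker type -> number of live workers
        self.queue = deque()       # queued (worker_type, fn, args) calls

    def submit(self, worker_type, fn, *args):
        self.queue.append((worker_type, fn, args))

    def reconcile(self):
        """Create/delete workers so the pool matches the types in the queue."""
        needed = Counter(wtype for wtype, _, _ in self.queue)
        for wtype, count in needed.items():
            if self.workers[wtype] < count:
                self.workers[wtype] = count   # create workers of this type
        for wtype in list(self.workers):
            if wtype not in needed:
                del self.workers[wtype]       # delete idle worker types

    def drain(self):
        results = []
        while self.queue:
            wtype, fn, args = self.queue.popleft()
            results.append(fn(*args))  # executed by a worker of matching type
        return results
```

A scheduler would call `reconcile()` whenever the queue changes, which mirrors the abstract's creation and deletion of workers driven by queued function calls.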
20210406080 | SYSTEM FOR ADAPTIVE MULTITHREADED RECALCULATION OPERATIONS - A method may include receiving an indication that a recalculation operation is to be completed for data stored in a data file; determining that a currently assigned number of threads for execution of the recalculation operation is lower than a target number of threads for the recalculation operation; requesting an additional thread for execution of the recalculation operation; beginning execution of the recalculation operation using the currently assigned number of threads; receiving an indication that the additional thread is available for execution of the recalculation operation; updating the currently assigned number of threads to include the additional thread; and continuing execution of the recalculation operation using the updated currently assigned number of threads. | 2021-12-30 |
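The thread-assignment flow of this abstract (begin with the threads you have, request more toward a target, and fold in each additional thread as it becomes available) can be captured in a small state machine. All names are assumptions made for the sketch:

```python
# Rough sketch (assumed names) of adaptive thread assignment for a
# recalculation operation.
class Recalculation:
    def __init__(self, assigned, target):
        self.assigned = assigned          # currently assigned threads
        self.target = target              # target number of threads
        self.pending_requests = 0

    def request_threads(self):
        """Request additional threads when below the target."""
        if self.assigned < self.target:
            self.pending_requests = self.target - self.assigned

    def grant_thread(self):
        """Called when an additional thread becomes available; execution
        continues with the updated thread count."""
        if self.pending_requests > 0:
            self.pending_requests -= 1
            self.assigned += 1
```

The key point the abstract makes is that execution begins immediately with the currently assigned threads rather than blocking until the target is reached.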
20210406081 | Method For Detecting System Problems In A Distributed Control System And A Method For Allocating Foglets In A Fog Network - A method for detecting system problems in a distributed control system including a plurality of computational devices is proposed. The method includes: deploying one or more software agents on one or more devices of the system; monitoring, via the one or more software agents, a system configuration and/or a system functionality; detecting a problem in the monitored system configuration and/or system functionality; adding one or more new software agents and deploying them on one or more devices of the system associated with the problem; and collecting data associated with the problem via the added software agents. | 2021-12-30 |
20210406082 | APPARATUS AND METHOD FOR MANAGING RESOURCE - An apparatus for managing a resource includes a buffer memory; and a processor configured to store, when target data for each of processes is acquired, the acquired target data in the buffer memory, the processes occurring asynchronously and periodically and being assigned degrees of priority, and assign a shared resource to some of the processes for which the target data is stored in the buffer memory in descending order of the priority at every predetermined timing, the shared resource being usable for each of the processes. | 2021-12-30 |
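The buffering-and-priority scheme this abstract describes (buffer target data as processes produce it asynchronously, then at each predetermined timing assign the shared resource in descending priority order) might look like the following sketch, with a heap standing in for the buffer memory and a slot count standing in for the shared resource (both assumptions):

```python
# Sketch (assumed names) of priority-ordered shared-resource assignment.
import heapq

class ResourceManager:
    def __init__(self, slots):
        self.slots = slots    # shared-resource capacity granted per timing
        self.buffer = []      # min-heap of (-priority, process_id, data)

    def store(self, process_id, priority, data):
        """Store target data for an asynchronously occurring process."""
        heapq.heappush(self.buffer, (-priority, process_id, data))

    def tick(self):
        """At the predetermined timing, assign the shared resource to the
        highest-priority buffered processes."""
        granted = []
        for _ in range(min(self.slots, len(self.buffer))):
            _, pid, _ = heapq.heappop(self.buffer)
            granted.append(pid)
        return granted
```

Negating the priority turns Python's min-heap into the descending-priority order the abstract calls for.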
20210406083 | CONFIGURABLE LOGIC PLATFORM WITH RECONFIGURABLE PROCESSING CIRCUITRY - A configurable logic platform may include a physical interconnect for connecting the platform to a processor, a reconfigurable logic region having logic blocks configured based on configuration data, a configuration port for applying configuration data to the reconfigurable logic region, a reconfiguration logic function accessible via transactions of the physical interconnect and in communication with the configuration port, the reconfiguration logic function providing restricted access to the configuration port from the physical interconnect, and an interface function accessible via transactions of the physical interconnect and providing an interface to the reconfigurable logic region which allows information to be transmitted over the physical interconnect and prevents the reconfigurable logic region from directly accessing the physical interconnect. The reconfiguration logic function may be implemented in the reconfigurable logic region. | 2021-12-30 |
20210406084 | METHOD AND SYSTEM FOR PRE-ALLOCATION OF COMPUTING RESOURCES PRIOR TO PREPARATION OF PHYSICAL ASSETS - A method for managing computing resources includes obtaining, by a resource use manager, a physical asset request from a client, in response to the physical asset request: initiating allocation of a landing area device to the client based on the physical asset request, determining a physical asset to be provided to the client, sending, to a manufacturer, a physical asset preparation request, obtaining a confirmation of deployment of the physical asset from the client, performing a restoration on the physical asset using a most recent landing area incremental backup, and after the initiating the restoration, initiating a transfer of operation from the landing area device to the physical asset. | 2021-12-30 |
20210406085 | METHODS AND APPARATUS FOR ALLOCATING A WORKLOAD TO AN ACCELERATOR USING MACHINE LEARNING - Methods, apparatus, systems, and articles of manufacture for allocating a workload to an accelerator using machine learning are disclosed. An example apparatus includes a workload attribute determiner to identify a first attribute of a first workload and a second attribute of a second workload. An accelerator selection processor causes at least a portion of the first workload to be executed by at least two accelerators, accesses respective performance metrics corresponding to execution of the first workload by the at least two accelerators, and selects a first accelerator of the at least two accelerators based on the performance metrics. A neural network trainer trains a machine learning model based on an association between the first accelerator and the first attribute of the first workload. A neural network processor processes, using the machine learning model, the second attribute to select one of the at least two accelerators to execute the second workload. | 2021-12-30 |
20210406086 | AUTO-SIZING FOR STREAM PROCESSING APPLICATIONS - Techniques are provided for automatically resizing applications. In one technique, policy data that indicates an order of multiple policies is stored. The policies include (1) a first policy that corresponds to a first computer resource and a first resizing action and (2) a second policy that is lower in priority than the first policy and that corresponds to a second resizing action and a second computer resource. Resource utilization data is received from at least one application executing in a cloud environment. Based on the order, the first policy is identified. Based on the resource utilization data, it is determined whether criteria associated with the first policy are satisfied with respect to the application. If satisfied, then the first resizing action is performed with respect to the application; otherwise, based on the computer resource utilization data, it is determined whether criteria associated with the second policy are satisfied. | 2021-12-30 |
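The ordered-policy evaluation in this abstract (check the first policy's criteria; perform its resizing action if satisfied, otherwise fall through to the lower-priority policy) reduces to a first-match scan. Resource names, thresholds, and action labels below are illustrative assumptions:

```python
# Sketch of priority-ordered resize policies (assumed names/thresholds).
def evaluate_policies(policies, utilization):
    """policies: list of (resource, threshold, action), highest priority
    first. Returns the first resizing action whose criteria are satisfied."""
    for resource, threshold, action in policies:
        if utilization.get(resource, 0.0) > threshold:
            return action
    return None   # no policy criteria satisfied; no resizing performed

policies = [
    ("memory", 0.9, "scale_up_memory"),  # first (higher-priority) policy
    ("cpu", 0.8, "scale_up_cpu"),        # second (lower-priority) policy
]
```

Because evaluation stops at the first satisfied policy, the memory policy shadows the CPU policy whenever both sets of criteria hold, which is what the stored policy order is for.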
20210406087 | MEMORY POOLING BETWEEN SELECTED MEMORY RESOURCES - Apparatuses, systems, and methods related to memory pooling between selected memory resources are described. A system using a memory pool formed as such may enable performance of functions, including automated functions critical for prevention of damage to a product, personnel safety, and/or reliable operation, based on increased access to data that may improve performance of a mission profile. For instance, one apparatus described herein includes a memory resource, a processing resource coupled to the memory resource, and a transceiver resource coupled to the processing resource. The memory resource, the processing resource, and the transceiver resource are configured to enable formation of a memory pool between the memory resource and another memory resource at another apparatus responsive to a request to access the other memory resource transmitted from the processing resource via the transceiver. | 2021-12-30 |
20210406088 | FEDERATED OPERATOR FOR EDGE COMPUTING NETWORK - Systems and methods for inter-cluster deployment of compute services using federated operator components are generally described. In some examples, a first request to deploy a compute service may be received by a federated operator component. In various examples, the federated operator component may send a second request to provision a first compute resource for the compute service to a first cluster of compute nodes. In various examples, the first cluster of compute nodes may be associated with a first hierarchical level of a computing network. In some examples, the federated operator component may send a third request to provision a second compute resource for the compute service to a second cluster of compute nodes. The second cluster of compute nodes may be associated with a second hierarchical level of the computing network that is different from the first hierarchical level. | 2021-12-30 |
20210406089 | Determination of Cloud Resource Utilization - Data characterizing a log of requests by a plurality of software services executing based on a virtual resource that is within a remote computing environment is received. The executing includes transmitting the requests for utilization of the virtual resource. A metric of utilization of the virtual resource by a first software service of the plurality of software services is determined based on the log. The metric of utilization characterizes a portion of total usage of the virtual resource that is attributable to the first software service. The metric of utilization is provided. Related apparatus, systems, techniques and articles are also described. | 2021-12-30 |
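The attribution metric this abstract describes (the portion of total usage of a virtual resource attributable to one software service, derived from a request log) is, at its simplest, a ratio over log records. The log shape below is an assumption for illustration:

```python
# Sketch (assumed log shape): share of total virtual-resource usage
# attributable to one software service.
def utilization_share(log, service):
    """log: list of (service_name, units_used) request records."""
    total = sum(units for _, units in log)
    if total == 0:
        return 0.0
    used = sum(units for name, units in log if name == service)
    return used / total

log = [("billing", 30), ("search", 50), ("billing", 20)]
```

Here `billing` accounts for 50 of 100 logged units, so its utilization metric is 0.5; a real log would carry addresses and timestamps as well, per the companion application above (20210406074).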
20210406090 | METHODS, SYSTEMS AND APPARATUS FOR GOVERNANCE OF VIRTUAL COMPUTING INFRASTRUCTURE RESOURCES - Methods, apparatus and articles of manufacture for governance of virtual computing infrastructure resources are disclosed. An example cloud management system includes a plurality of hosts. The hosts are to manage requests and allocate resources through one or more virtual machines. The example system also includes an administrator to configure the plurality of hosts to accommodate resource provisioning requests by assigning a constraint and a skill to the hosts to define a placement of the hosts. The placement of a respective host is to dictate an availability of the host for provisioning. | 2021-12-30 |
20210406091 | TECHNOLOGIES TO OFFLOAD WORKLOAD EXECUTION - Examples described herein relate to an apparatus comprising: at least one processor and an accelerator pool comprising at least one fixed function hardware offload engine and at least one programmable hardware offload engine, wherein in connection with migration or instantiation of a service to execute on the at least one processor and unavailability of the at least one fixed function hardware offload engine to perform an operation for the service, configure at least one of the at least one programmable hardware offload engine to perform the operation for the service. In some examples, the operation comprises an operation performed by a fixed function hardware offload engine on a source platform from which the service was migrated. | 2021-12-30 |
20210406092 | CORE SELECTION BASED ON USAGE POLICY AND CORE CONSTRAINTS - A processing unit of a processing system compiles a priority queue listing of a plurality of processor cores to run a workload based on a cost of running the workload on each of the processor cores. The cost is based on at least one of a system usage policy, characteristics of the workload, and one or more physical constraints of each processor core. The processing unit selects a processor core based on the cost to run the workload and communicates an identifier of the selected processor core to an operating system of the processing system. | 2021-12-30 |
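The cost-based priority queue in this abstract can be sketched as below. The cost model (a base cost plus a workload-weighted thermal penalty) is an invented stand-in for the usage policy, workload characteristics, and physical constraints the abstract lists:

```python
# Sketch of cost-based core selection (assumed cost model).
import heapq

def select_core(cores, workload_weight):
    """cores: list of (core_id, base_cost, thermal_penalty).
    Builds a priority queue ordered by cost and returns the cheapest core."""
    queue = [
        (base + thermal * workload_weight, core_id)
        for core_id, base, thermal in cores
    ]
    heapq.heapify(queue)
    _, core_id = heapq.heappop(queue)   # lowest-cost core for this workload
    return core_id

cores = [("big0", 4.0, 0.5), ("little0", 1.0, 2.0)]
```

Because cost depends on the workload's characteristics, light workloads land on the efficient core while heavy ones land on the big core, matching the abstract's workload-dependent selection.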
20210406093 | COMPUTING MACHINE, METHOD AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - A computing machine according to the present disclosure includes: control managing means for controlling communication between a plurality of computing machines; and management register means for managing communication setting information which sets communication between the computing machines and communication state information which indicates a state of the communication. The computing machine includes edge control means for, upon receiving a signal from one of the computing machines, sorting the signal into a control signal and data based on the communication setting information set in the management register means and in accordance with the number of clocks for processing the signal. | 2021-12-30 |
20210406094 | Mixed Reality Complementary Systems - Multiple sound systems are used to provide a realistic MR audio experience for one or more users. In one example, an MR space sound system has one or more speakers distributed within an MR space. MR device sound systems provide sound directly to the users wearing the MR devices. Audio signals representative of sound in the MR experience are mixed by each sound system to provide sounds that complement each other. Both sound systems provide sound to the users based on events occurring in the MR experience. | 2021-12-30 |
20210406095 | SYSTEM AND METHOD FOR CONVERSION ACHIEVEMENT - Methods, systems and computer storage media are disclosed for providing resources to a platform issue. Embodiments describe associating educational resources and an event resource to resolve the platform issue. | 2021-12-30 |
20210406096 | SYSTEM AND METHOD FOR SMART SEARCHING FOR CONVERSION ACHIEVEMENT - Methods, systems and computer storage media are disclosed for providing resources to a platform issue. Embodiments describe associating educational resources and an event resource to resolve the platform issue. | 2021-12-30 |
20210406097 | SYSTEM AND METHOD FOR ADOPTION TRACKING AND INTERVENTION FOR CONVERSION ACHIEVEMENT - Methods, systems and computer storage media are disclosed for providing resources to a platform issue. Embodiments describe associating educational resources and an event resource to resolve the platform issue. | 2021-12-30 |
20210406098 | SYSTEM AND METHOD FOR NEW ISSUE MANAGEMENT FOR CONVERSION ACHIEVEMENT - Methods, systems and computer storage media are disclosed for providing resources to a platform issue. Embodiments describe associating educational resources and an event resource to resolve the platform issue. | 2021-12-30 |
20210406099 | DETERMINING WHETHER AND/OR WHEN TO PROVIDE NOTIFICATIONS, BASED ON APPLICATION CONTENT, TO MITIGATE COMPUTATIONALLY WASTEFUL APPLICATION-LAUNCHING BEHAVIOR - Implementations set forth herein relate to intervening notifications provided by an application for mitigating computationally wasteful application launching behavior that is exhibited by some users. A state of a module of a target application can be identified by emulating user inputs previously provided by the user to the target application. In this way, the state of the module can be determined without visibly launching the target application. When the state of the module is determined to satisfy criteria for providing a notification to the user, the application can render a notification for the user. The application can provide intervening notifications for a variety of different target applications in order to reduce a frequency at which the user launches and closes applications to check for variations in target application content. | 2021-12-30 |
20210406100 | SEGMENTING MACHINE DATA INTO EVENTS BASED ON SOURCE SIGNATURES - Methods and apparatus consistent with the invention provide the ability to organize and build understandings of machine data generated by a variety of information-processing environments. Machine data is a product of information-processing systems (e.g., activity logs, configuration files, messages, database records) and represents the evidence of particular events that have taken place and been recorded in raw data format. In one embodiment, machine data is turned into a machine data web by organizing machine data into events and then linking events together. | 2021-12-30 |
20210406101 | Independent Datastore In A Network Routing Environment - Systems, methods, and devices for offloading network data to a datastore. A system includes a publisher device in a network computing environment. The system includes a subscriber device in the network computing environment. The system includes a datastore independent of the publisher device and the subscriber device, the datastore comprising one or more processors in a processing platform configurable to execute instructions stored in non-transitory computer readable storage media. The instructions include receiving data from the publisher device. The instructions include storing the data across one or more of a plurality of shared storage devices. The instructions include providing the data to the subscriber device. | 2021-12-30 |
20210406102 | METHOD AND APPARATUS FOR PROVIDING ASYNCHRONICITY TO MICROSERVICE APPLICATION PROGRAMMING INTERFACES - A method of handling an API call includes receiving a first API call from a job requestor, the first API call including a job to be executed by a microservice. The method also includes adding the job to a job queue, making a second, synchronous, API call including the job to the microservice, updating the job queue upon successful completion of the job by the microservice, and notifying the job requestor of the successful completion of the job. | 2021-12-30 |
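The queue-fronted flow in this abstract (accept the first API call, enqueue the job, make the second synchronous call to the microservice, update the queue on success, and notify the requestor) can be sketched in a few lines. All names are assumptions, and a callable stands in for the microservice:

```python
# Sketch (assumed names) of an asynchronous facade over a synchronous
# microservice API.
class AsyncJobGateway:
    def __init__(self, microservice):
        self.microservice = microservice   # the synchronous API, as a callable
        self.jobs = {}                     # job queue: job_id -> status
        self.notifications = []            # notifications to job requestors

    def submit(self, job_id, payload):
        """Handle the first API call from the job requestor."""
        self.jobs[job_id] = "queued"
        result = self.microservice(payload)      # second, synchronous call
        self.jobs[job_id] = "done"               # update the job queue
        self.notifications.append((job_id, result))  # notify the requestor
        return job_id
```

A production version would run `submit` off the request path (the whole point of the asynchronicity), but the ordering of steps is the one the abstract claims.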
20210406103 | CONTROLLING LOCATION-BASED FEATURES WITH USAGE MAPS - Systems, devices, and techniques are disclosed for controlling location-based features with usage maps. An application running on a device may receive a current location of the device. The application may determine a sector of a usage map that corresponds to the current location of the device. The usage map may be associated with the application, and the usage map may include a map of a geographic area divided into sectors. The application may modify the operation of a remote API call of the application based on the sector of the usage map that corresponds to the current location of the device by disabling or rate-limiting the remote API call. | 2021-12-30 |
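The sector lookup above can be sketched with a simple latitude/longitude grid. The one-degree cell size and the per-sector policies are assumptions for illustration, not values from the application:

```python
# Sketch of usage-map sector lookup (assumed grid size and policies).
def sector_for(lat, lon, cell_deg=1.0):
    """Map a device location to a grid-sector key (floor division)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def api_policy(usage_map, lat, lon):
    """Return 'allow', 'rate_limit', or 'disable' for the current sector."""
    return usage_map.get(sector_for(lat, lon), "allow")

usage_map = {
    (37, -123): "rate_limit",   # heavily used sector: throttle the API call
    (40, -75): "disable",       # sector where the remote call is turned off
}
```

The application would consult `api_policy` before each remote API call and either skip the call or route it through a rate limiter, per the returned policy.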
20210406104 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING API USE HISTORY DISPLAY PROGRAM - An information processing apparatus including: a memory; and a processor coupled to the memory, the processor being configured to perform processing, the processing including: executing a display form determination processing that includes extracting, from use history data of a computer service that uses a plurality of application programming interfaces (APIs), the numbers of times of execution of the APIs, and determining display forms for the APIs according to the numbers of times of execution of the APIs; and executing a display control processing that includes displaying, in a directed graph that represents a node that corresponds to each of the plurality of APIs, the nodes that correspond to the APIs in the determined display forms. | 2021-12-30 |
20210406105 | METHODS AND APPARATUS FOR IMPROVED FAILURE MODE AND EFFECTS ANALYSIS - A processor generates a first allocation matrix and accesses a representation of relationships between individual system elements, representing lower-level system elements to be considered at a current-level of design of a product, with each other. The processor creates an element entry in the first allocation matrix for at least a subset of the individual system elements and creates an element entry in the first allocation matrix for one or more directional relationships between the individual system elements as indicated in the representation. The processor creates function and requirements entries in the first allocation matrix based on pre-existing requirements for the product applicable to the current-level of design and generates additional function and requirements, as well as creates corresponding entries in the first allocation matrix, based on current-level architectural requirements, including at least functions and requirements for one or more of the directional relationships indicated in the representation. | 2021-12-30 |
20210406106 | ANOMALY RECOGNITION IN INFORMATION TECHNOLOGY ENVIRONMENTS - A method comprises obtaining a set of log files for a software system. The set of log files applies to an extended window. A periodic pattern in a first set of error-event surges in the set of log files is identified. The error-event surges in the first set are identified as event noise. A second set of log files for the software system is obtained. The second set of log files applies to a shortened window. Timeseries analysis on the second set of log files is performed. A particular error-event surge in a detection period in the second set of log files that is abnormal as compared to the shortened window is detected based on the timeseries analysis. It is determined that the particular error-event surge does not fit into the periodic pattern, and based on that determination, the particular error-event surge is characterized as an anomaly. | 2021-12-30 |
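The two-step check this abstract describes (flag a surge as abnormal relative to the shortened window, then call it an anomaly only if it does not fit the periodic noise pattern learned from the extended window) can be sketched as follows. The surge threshold and the modulo test for periodicity are simplifying assumptions:

```python
# Sketch (assumed threshold and periodicity test) of anomaly recognition
# that filters out periodic event noise.
def is_anomaly(counts, index, period, threshold=3.0):
    """counts: error-event counts per interval in the shortened window.
    period: interval spacing of the periodic noise pattern (0 = none)."""
    baseline = sum(counts) / len(counts)
    surge = counts[index] > threshold * baseline   # abnormal vs the window
    periodic = period > 0 and index % period == 0  # fits the noise pattern
    return surge and not periodic

counts = [1, 1, 1, 1, 20, 1, 1, 1]
```

With these counts, the spike at index 4 is an anomaly when the learned period is 3 (it falls off-cycle) but mere event noise when the period is 4.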
20210406107 | Write Abort Error Detection in Multi-Pass Programming - A storage device may detect errors during data transfer. Upon detection of one or more data transfer errors, for example, the storage device can begin to scan pages within a plurality of memory devices for uncorrectable error correction codes. Once scanned, a range of pages within the plurality of memory devices with uncorrectable error correction codes associated with a write abort error may be determined. The stage of multi-pass programming achieved on each page within that range is then established. Once established, the previously aborted multi-pass programming of each page within the range of pages can continue until completion. Upon completion, normal operations may continue without discarding the physical data locations. | 2021-12-30 |
20210406108 | Method and System for Data Transmission and Reception of Display Device - The present embodiment relates to a method and a system for data transmission and reception of a display device and, more specifically, to a method and a system for repeatedly checking whether an error has occurred in a data driving device configuration for high-speed communication when driving the display device, to prevent image quality degradation due to configuration errors. | 2021-12-30 |
20210406109 | SYSTEMS AND METHODS FOR DETECTING BEHAVIORAL ANOMALIES IN APPLICATIONS - Aspects of the disclosure relate to the field of detecting a behavioral anomaly in an application. In one exemplary aspect, a method may comprise retrieving and identifying at least one key metric from historical usage information for an application on a computing device. The method may comprise generating a regression model configured to predict usage behavior associated with the application and generating a statistical model configured to identify outliers in the data associated with the at least one key metric. The method may comprise receiving usage information in real-time for the application. The method may comprise predicting, using the regression model, a usage pattern for the application indicating expected values of the at least one key metric. In response to determining that the usage information does not correspond to the predicted usage pattern and does not comprise a known outlier, the method may comprise detecting the behavioral anomaly. | 2021-12-30 |
20210406110 | IDENTIFYING AND RANKING ANOMALOUS MEASUREMENTS TO IDENTIFY FAULTY DATA SOURCES IN A MULTI-SOURCE ENVIRONMENT - Techniques for identifying anomalous multi-source data points and ranking the contributions of measurement sources of the multi-source data points are disclosed. A system obtains a data point including a plurality of measurements from a plurality of sources. The system determines that the data point is an anomalous data point based on a deviation of the data point from a plurality of additional data points. The system determines a contribution of two or more measurements, from the plurality of measurements, to the deviation of the data point from the plurality of additional data points. The system ranks the at least the two or more measurements, from the plurality of measurements, based on the respective contribution of each of the two or more measurements to the deviation of the anomalous data point from the plurality of prior data points. | 2021-12-30 |
20210406111 | WATCHDOG CIRCUIT, CIRCUIT, SYSTEM-ON-CHIP, METHOD OF OPERATING A WATCHDOG CIRCUIT, METHOD OF OPERATING A CIRCUIT, AND METHOD OF OPERATING A SYSTEM-ON-CHIP - A watchdog circuit for monitoring a plurality of virtual machines provided by one core of a plurality of cores. The watchdog circuit may include a first memory portion, a second memory portion, and a control logic configured to count a number of pulses, to, when starting the watchdog circuit, store a global watchdog counter value in the first memory portion, and to store a local counter value for each virtual machine of the one or more virtual machines in the second memory portion, and, after a predefined number of pulses, to modify the global watchdog counter value and the local counter values, and, if the global watchdog counter value fulfills a predefined global watchdog reference criterion or any of the local watchdog counter values fulfills a predefined local watchdog reference criterion, to output an error signal. | 2021-12-30 |
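The counter scheme this abstract claims (a global watchdog counter in one memory portion, a local counter per virtual machine in another, all modified after a predefined number of pulses, with an error signal when a reference criterion is met) can be sketched in software. The reference criterion here is assumed to be "counter reaches zero", and the kick/reset value is invented for the example:

```python
# Sketch (assumed names and criteria) of a per-VM watchdog with a global
# counter and one local counter per virtual machine.
class Watchdog:
    def __init__(self, global_count, local_counts, pulses_per_tick=1):
        self.global_count = global_count        # first memory portion
        self.local_counts = dict(local_counts)  # second memory portion
        self.pulses_per_tick = pulses_per_tick  # predefined number of pulses
        self.pulses = 0

    def kick(self, vm):
        """A healthy virtual machine resets its local counter."""
        self.local_counts[vm] = 10

    def pulse(self):
        """Count pulses; after the predefined number, modify the counters
        and check the reference criteria."""
        self.pulses += 1
        if self.pulses % self.pulses_per_tick:
            return None
        self.global_count -= 1
        for vm in self.local_counts:
            self.local_counts[vm] -= 1
        if self.global_count <= 0 or any(
            c <= 0 for c in self.local_counts.values()
        ):
            return "error"                      # output the error signal
        return None
```

A VM that stops kicking its local counter trips the error signal even while the other VMs on the same core stay healthy, which is the point of keeping per-VM local counters alongside the global one.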
20210406112 | ANOMALY CLASSIFICATION IN INFORMATION TECHNOLOGY ENVIRONMENTS - A method comprises receiving a set of log files that correspond to a detected anomaly in a software system. The set of log files are input into a first classification algorithm. A set of classified log events is received from the first classification algorithm. The set of classified log events is input into a second classification algorithm. A classification of the detected anomaly is obtained from the second classification algorithm. | 2021-12-30 |
20210406113 | SYSTEMS AND METHODS FOR DYNAMICALLY RESOLVING HARDWARE FAILURES IN AN INFORMATION HANDLING SYSTEM - An information handling system may include a processor and a basic input/output system configured to, responsive to an occurrence of an exception error, triage among various hardware components of the information handling system to determine existence of any signatures of potential hardware failures, write a database structure to a non-volatile memory including the signatures of potential hardware failures, upon boot of the basic input/output system, enable one or more control methods for hardware failure mitigations associated with the signatures of potential hardware failures, and perform the mitigations during execution of an operating system of the information handling system. | 2021-12-30 |
20210406114 | METHOD AND SYSTEM FOR FACILITATING A SELF-HEALING NETWORK - An event analysis system is provided. During operation, the system can determine an event description associated with a switch from an event log of the switch. The event description can correspond to an entry in a table in a switch configuration database of the switch. A respective database in the switch can be a relational database. The system can then obtain an event log segment, which is a portion of the event log comprising the event description, based on a range of entries. Subsequently, the system can apply a pattern recognition technique to the event log segment, based on the entry in the switch configuration database, to determine one or more patterns corresponding to an event associated with the event description. The switch can then apply a machine learning technique using the one or more patterns to determine a recovery action for mitigating the event. | 2021-12-30 |
20210406115 | Processor Repair - A processor comprises a plurality of processing units, wherein there is a fixed transmission time for transmitting a message from a sending processing unit to a receiving processing unit, based on the physical positions of the sending and receiving processing units in the processor. The processing units are arranged in columns, and the fixed transmission time depends on the position of a processing unit in its column. An exchange fabric is provided for exchanging messages between sending and receiving processing units, the columns being arranged with respect to the exchange fabric such that the fixed transmission time depends on the distances of the processing units from the exchange fabric. | 2021-12-30 |
20210406116 | TECHNIQUES FOR SCHEDULED ANTI-ENTROPY REPAIR DESIGN - Various embodiments of the invention disclosed herein provide techniques for performing distributed anti-entropy repair procedures across a plurality of nodes in a distributed database network. A node included in a plurality of nodes within the distributed database network determines, before all other nodes included in the plurality of nodes, that a first anti-entropy repair procedure has ended. The node determines that a second anti-entropy repair procedure is ready to begin. The node generates a schedule for executing one or more operations associated with the second anti-entropy repair procedure. The node writes the schedule to a shared repair schedule data structure to initiate the second anti-entropy repair procedure across multiple nodes included in the plurality of nodes. Each of the nodes included in the plurality of nodes then performs a node repair based on the schedule. | 2021-12-30 |
20210406117 | ERROR HANDLING OPTIMIZATION IN MEMORY SUB-SYSTEM MAPPING - A system includes a memory device having blocks of memory cells. A processing device is operatively coupled to the memory device, the processing device to detect an error event triggered within a source block of the memory cells. In response to detection of the error event, the processing device is to read data from the source block; write the data into a mitigation block that is different than the source block; and replace, in a block set map data structure, a first identifier of the source block with a second identifier of the mitigation block. The block set map data structure includes block location metadata for a data group, of the memory device, that includes the data. | 2021-12-30 |
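In the abstract's terms, the mitigation flow is read, rewrite, remap. A minimal sketch, with plain dicts standing in for the block set map and the memory blocks, and every name hypothetical:

```python
def mitigate_error(block_set_map, data_group, source_id, blocks, free_blocks):
    """Sketch: on an error event, copy data from the source block to a fresh
    mitigation block, then swap identifiers in the block set map entry that
    holds block location metadata for the data group."""
    mitigation_id = free_blocks.pop()                # a block different from the source
    blocks[mitigation_id] = list(blocks[source_id])  # read the data, write it anew
    ids = block_set_map[data_group]
    ids[ids.index(source_id)] = mitigation_id        # first identifier -> second identifier
    return mitigation_id
```

After the swap, lookups for the data group resolve to the mitigation block, so the error-prone source block is no longer referenced.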
20210406118 | ENDURANCE MODULATION FOR FLASH STORAGE - A method for storing input data in a flash memory. The method comprising generating a codeword by encoding the input data with an error correcting code and generating a shaped codeword by applying a shaping function to at least a part of the codeword. The shaping function comprising logically inverting every n-th occurrence of a bit associated with a high-charge storage state in the part of the codeword. The method further comprising writing the shaped codeword to the flash memory, generating an estimated shaped codeword by reading the flash memory, generating soft decision information for the estimated shaped codeword, and retrieving the input data by decoding the soft decision information using an error correcting code soft decoder. | 2021-12-30 |
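The shaping pass is concrete enough to sketch: logically invert every n-th occurrence of the bit value tied to the high-charge state. Assuming, which the abstract does not state, that bit value 0 corresponds to the high-charge state:

```python
def shape(bits, n, high=0):
    """Sketch of the shaping function: invert every n-th occurrence of the
    bit value associated with the high-charge storage state (`high`,
    assumed here to be 0), reducing how many cells must be charged high."""
    out, seen = [], 0
    for b in bits:
        if b == high:
            seen += 1
            if seen % n == 0:
                b = 1 - b    # logically invert this occurrence
        out.append(b)
    return out
```

Every n-th high-charge bit is flipped, so the count of high-charge cells drops by roughly a factor of 1/n; per the abstract, the original data is recovered through the error-correcting-code soft decoder rather than by inverting the shaping directly.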
20210406119 | METHOD, MEMORY CONTROLLER, AND MEMORY SYSTEM FOR READING DATA STORED IN FLASH MEMORY - An exemplary method for reading data stored in a flash memory includes: selecting an initial gate voltage combination from a plurality of predetermined gate voltage combination options; controlling a plurality of memory units in the flash memory according to the initial gate voltage combination, and reading a plurality of bit sequences; performing a codeword error correction upon the plurality of bit sequences, and determining if the codeword error correction is successful; if the codeword error correction is not successful, determining an electric charge distribution parameter; determining a target gate voltage combination corresponding to the electric charge distribution parameter by using a look-up table; and controlling the plurality of memory units to read a plurality of updated bit sequences according to the target gate voltage combination. | 2021-12-30 |
20210406120 | MEMORY SUB-SYSTEM CODEWORD QUALITY METRICS STREAMING - Several embodiments of systems incorporating memory devices are disclosed herein. In one embodiment, a memory device can include a controller and a memory component operably coupled to the controller. The controller can include a memory manager, a quality metrics first in first out (FIFO) circuit, and an error correction code (ECC) decoder. In some embodiments, the ECC decoder can generate quality metrics relating to one or more codewords saved in the memory component and read into the controller. In these and other embodiments, the ECC decoder can stream the quality metrics to the quality metrics FIFO circuit, and the quality metrics FIFO circuit can stream the quality metrics to the memory manager. In some embodiments, the memory manager can save all or a subset of the quality metrics in the memory component and/or can use the quality metrics in post-processing, such as in error avoidance operations of the memory device. | 2021-12-30 |
20210406121 | Storage System and Method for Balanced Quad-Level Cell (QLC) Coding with Margin for an Internal Data Load (IDL) Read - A storage system and method for balanced quad-level cell (QLC) coding with margin for an internal data load (IDL) read are provided. In one example, an MLC-Fine programming approach uses a balanced 3-4-4-4 coding, where the data is encoded by assigning a unique binary sequence per state. The IDL read is supported by using a unique 3-4-4-4 coding that provides at least a three-state gap between the MLC states, while using the same ECC redundancy per page. This allows for a reduced write buffer by supporting the IDL read and provides a balanced bit error rate (BER) due to the balanced mapping. | 2021-12-30 |
20210406122 | Storage System and Method for Direct Quad-Level Cell (QLC) Programming - A storage system and method for direct quad-level cell (QLC) programming are provided. In one example, a controller of the storage system is configured to create codewords for lower, middle, and upper pages of data; program the codewords in the memory of the storage system using a triple-level cell programming operation; read the programming of the codewords for the lower, middle, and upper pages of data in the memory; create a codeword for a top page of data; and program the codeword in the memory. | 2021-12-30 |
20210406123 | APPARATUSES, SYSTEMS, AND METHODS FOR ERROR CORRECTION - Apparatuses, systems, and methods for error correction. A memory array may be coupled to an error correction code (ECC) circuit along a read bus and a write bus. The ECC circuit includes a read portion and a write portion. As part of a mask write operation, read data and read parity may be read out along the read bus to the read portion of the ECC circuit and write data may be received along data terminals by the write portion of the ECC circuit. The write portion of the ECC circuit may generate amended write data based on the write data and the read data, and may generate amended parity based on the read parity and the amended write data. The amended write data and amended parity may be written back to the memory array along the write bus. | 2021-12-30 |
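The mask-write read-modify-write path reduces to merging and re-encoding. In this sketch XOR stands in for the real ECC encoder, and the amended parity is regenerated from the amended data rather than derived incrementally from the read parity as the abstract describes; all names are illustrative.

```python
def mask_write(read_data, read_parity, write_data, mask, parity_fn):
    """Merge incoming write data into the data read from the array (only
    positions enabled by `mask`), then produce parity for the amended data."""
    amended = [w if m else r for r, w, m in zip(read_data, write_data, mask)]
    # a real ECC circuit would first use read_parity to correct read_data;
    # that correction step is omitted in this sketch
    amended_parity = parity_fn(amended)
    return amended, amended_parity
```

The amended data and amended parity are what would be driven back to the array along the write bus.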
20210406124 | 3-Dimensional NAND Flash Layer Variation Aware SSD Raid - An apparatus is disclosed having a parity buffer having a plurality of parity pages and one or more dies, each die having a plurality of layers in which data may be written. The apparatus also includes a storage controller configured to write a stripe of data across two or more layers of the one or more dies, the stripe having one or more data values and a parity value. When a first data value of the stripe is written, it is stored as a current value in a parity page of the parity buffer, the parity page corresponding to the stripe. For each subsequent data value that is written, an XOR operation is performed with the subsequent data value and the current value of the corresponding parity page and the result of the XOR operation is stored as the current value of the corresponding parity page. | 2021-12-30 |
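The rolling parity accumulation described above (seed the parity page with the first value, XOR in each later value) can be sketched directly; the dict standing in for the parity buffer and all names are illustrative.

```python
def write_stripe(values, parity_pages, stripe_id):
    """Sketch: accumulate stripe parity one data value at a time. The first
    value written seeds the stripe's parity page; each subsequent value is
    XORed with the current value of that page."""
    for i, v in enumerate(values):
        if i == 0:
            parity_pages[stripe_id] = v      # store first value as current value
        else:
            parity_pages[stripe_id] ^= v     # XOR subsequent value into the page
    return parity_pages[stripe_id]
```

Because the final page value equals the XOR of every data value in the stripe, any single lost value can be rebuilt by XORing the parity with the surviving values.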
20210406125 | MEMORY CONTROLLER, MEMORY SYSTEM INCLUDING THE SAME, AND METHOD OF OPERATING THE MEMORY CONTROLLER - A memory controller for controlling a memory operation of a memory device includes: an error correction code (ECC) circuit configured to detect an error of first read data read from the memory device and correct the error; an error type detection logic configured to write first write data to the memory device, compare second read data with the first write data, detect an error bit of the second read data based on a result of the comparing, and output information about an error type identified by the error bit; and a data patterning logic configured to change a bit pattern of input data to reduce an error of the second read data based on the information about the error type. | 2021-12-30 |
20210406126 | LOW LATENCY AVAILABILITY IN DEGRADED REDUNDANT ARRAY OF INDEPENDENT MEMORY - A computer-implemented method includes fetching, by a controller, data using a plurality of memory channels of a memory system. The method further includes detecting, by the controller, that a first memory channel of the plurality of memory channels has not returned data. The method further includes marking, by the controller, the first memory channel from the plurality of memory channels as unavailable. The method further includes, in response to a fetch, reconstructing, by the controller, fetch data based on data received from all memory channels other than the first memory channel. | 2021-12-30 |
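A sketch of the degraded-mode fetch: the abstract does not spell out the redundancy scheme, so XOR parity across channels is assumed here, with one entry of `channel_data` holding the parity.

```python
def reconstruct_fetch(channel_data, bad_channel):
    """Rebuild the marked-unavailable channel's data by XORing the data
    received from all memory channels other than that channel
    (XOR parity across channels is an assumption of this sketch)."""
    recovered = 0
    for channel, data in channel_data.items():
        if channel != bad_channel:
            recovered ^= data
    return recovered
```

Fetches keep completing with low latency because the controller never waits on the marked channel; it reconstructs instead.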
20210406127 | METHOD TO ORCHESTRATE A CONTAINER-BASED APPLICATION ON A TERMINAL DEVICE - Provided is a method for orchestrating a container-based application that is executed on a terminal device, in which implementation information is received in an orchestration slave unit on the terminal device via a communication connection from an orchestration master unit, and the application is configured and/or controlled by the orchestration slave unit based on the implementation information, wherein the received implementation information is additionally saved persistently in a memory unit in the terminal device, and if the communication connection to the orchestration master unit is interrupted, the most recently saved implementation information is retrieved from the orchestration slave unit and the application is configured and/or controlled based on the most recently saved implementation information. | 2021-12-30 |
20210406128 | CLOUD STORAGE SYSTEM - Systems and methods for storing, analyzing, and remotely accessing fishing related data collected by a fish finder device are described. A fishing data management system is wirelessly connected to the fish finder device by an electronic communication connection. A mobile electronic device may be used to facilitate the electronic communication connection. The communication may be directed to the system wirelessly through a cellular, Bluetooth, or satellite navigation connection. The fishing management system is configured to receive and store data collected by the fish finder, analyze the data, and provide feedback to a user. A method of using the system allows the user to access the stored and analyzed data remotely. | 2021-12-30 |
20210406129 | INCREMENTAL BACKUP TO OBJECT STORE - Techniques are provided for incremental backup to an object store. A request may be received from an application to perform a backup from a volume hosted by a node to a backup target within the object store. A set of changed files within the volume since a prior backup of the volume was performed to the backup target is identified, along with metadata associated with the set of changed files. The metadata is utilized to identify changed data blocks comprising data of the set of changed files that was modified since the prior backup. The changed data blocks are backed up to the object store. | 2021-12-30 |
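The two-level narrowing (changed files first, then changed blocks within them) might look like the following sketch, where the volume model (`mtime` plus per-block data) and all names are assumptions made for illustration.

```python
def incremental_backup(volume, prior_snapshot, object_store):
    """Sketch: identify files changed since the prior backup, then use their
    block metadata to push only the modified data blocks to the object store.
    `volume` maps path -> {"mtime": t, "blocks": {block_id: data}};
    `prior_snapshot` records what was seen at the prior backup."""
    uploaded = []
    for path, meta in volume.items():
        prior = prior_snapshot.get(path)
        if prior and prior["mtime"] == meta["mtime"]:
            continue                         # file unchanged since prior backup
        for block_id, data in meta["blocks"].items():
            if prior is None or prior["blocks"].get(block_id) != data:
                object_store[(path, block_id)] = data   # changed data block only
                uploaded.append((path, block_id))
    return uploaded
```

Unchanged files are skipped on the cheap mtime check, and within changed files only the blocks that actually differ are transferred.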
20210406130 | UPDATING A VIRTUAL MACHINE BACKUP - A virtual machine disk image file backup is selected among a plurality of virtual machine disk image file backups stored on a backup storage based on a backup update policy. A version of the selected virtual machine disk image file backup is mounted. Based on the backup update policy, an update to the mounted version of the selected virtual machine disk image file backup is applied without restoring the selected virtual machine disk image file backup. The updated version of the selected virtual machine disk image file backup is stored on the backup storage. | 2021-12-30 |
20210406131 | COORDINATED DATA PROTECTION FOR MULTIPLE NETWORKING DEVICES - Embodiments are described for a method and system of applying data protection software mechanisms to network devices to auto-discover the networking equipment, save changes from memory (TCAM) to local storage, backup changes to protection storage, provide auditing and tracking history of changes, and provide the ability to deploy test/development copies of changes using software defined networking techniques. A coordinator protects network devices organized into a plurality of partitions by creating a backup of each network device, pushing backup policies to individual data protection units for the network devices within each partition to provide a consistent-state backup of the network devices, and backing up the configuration changes of the network devices to a protection storage device. | 2021-12-30 |
20210406132 | COMPUTING AN UNBROKEN SNAPSHOT SEQUENCE - Methods, systems and computer program products for high-availability computing. In a computing configuration comprising a primary node, a first backup node, and a second backup node, a particular data state is restored to the primary node from a backup snapshot at the second backup node. Firstly, a snapshot coverage gap is identified between a primary node snapshot at the primary node and the backup snapshot at the second backup node. Next, intervening snapshots at the first backup node that fills the snapshot coverage gap are identified and located. Having both the backup snapshot from the second backup node and the intervening snapshots from the first backup node, the particular data state at the primary node is restored by performing differencing operations between the primary node snapshot, the backup snapshot from the second backup node, and the intervening snapshots of the first backup node. | 2021-12-30 |
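A sketch of filling the coverage gap: snapshots are modeled as (sequence number, key-to-value state) pairs, and "differencing" is reduced to applying newer values in order. Both modeling choices are assumptions, not the patented mechanism.

```python
def restore_state(primary_snap, backup_snap, first_backup_snaps):
    """Find the intervening snapshots on the first backup node that fall in
    the gap between the primary node's snapshot and the second backup
    node's snapshot, then apply the differences in sequence order."""
    lo, hi = primary_snap[0], backup_snap[0]
    gap = sorted(s for s in first_backup_snaps if lo < s[0] < hi)
    state = dict(primary_snap[1])
    for _, snap_state in gap + [backup_snap]:
        state.update(snap_state)             # apply newer values over older ones
    return state
```

The key property is that the gap snapshots make the sequence unbroken, so no intermediate change is lost between the primary snapshot and the backup snapshot.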
20210406133 | ON THE FLY PIT SELECTION IN CLOUD DISASTER RECOVERY - On-the-fly point-in-time (PiT) recovery operations are disclosed. During a recovery operation, the PiT being restored can be changed on the fly, during the existing recovery operation, without restarting the recovery process from the beginning. In one example, this improves the recovery time objective (RTO) and allows portions of the recovery operation to be avoided when changing to a different PiT. | 2021-12-30 |
20210406134 | LAN-FREE AND APPLICATION CONSISTENT BACKUP METHOD - Example implementations are directed to a local area network (LAN)-free and application-consistent backup with data copy offload from the backup server to primary storage. Primary storage mounts the secondary storage volume and directly transfers the differential data that was updated after the last backup. The example implementations reduce the LAN load and the load on the backup server, speeding up the backup process. | 2021-12-30 |
20210406135 | AUTOMATED DEVELOPMENT OF RECOVERY PLANS - An automated system monitors network traffic to determine dependencies between different machines. These dependencies can be used to automatically develop a recovery plan for the machines, for example restoring servers in a certain order. This approach can also automatically adjust the recovery plan for changes in system configuration, for example as different servers come online or are taken offline or change their roles. | 2021-12-30 |
20210406136 | DISASTER RECOVERY FOR DISTRIBUTED FILE SERVERS, INCLUDING METADATA FIXERS - Examples of systems described herein include virtualized file servers. Examples of virtualized file servers described herein may support disaster recovery of the virtualized file server. Accordingly, examples of virtualized file servers may support metadata fixing procedures to update metadata in a recovery setting. Examples of virtualized file servers may support hypervisor-agnostic disaster recovery. | 2021-12-30 |
20210406137 | SYSTEMS AND METHODS FOR CHECKING SAFETY PROPERTIES - In some embodiments, a system is provided, comprising enforcement hardware configured to execute, at run time, a state machine in parallel with application code. Executing the state machine may include maintaining metadata that corresponds to one or more state variables of the state machine; matching instructions in the application code to transitions in the state machine; and, in response to determining that an instruction in the application code does not match any transition from a current state of the state machine, causing an error handling routine to be executed. In some embodiments, a description of a state machine may be translated into at least one policy to be enforced at run time based on metadata labels associated with application code and/or data manipulated by the application code. | 2021-12-30 |
20210406138 | CAN TRANSCEIVER - A transceiver is disclosed. The transceiver includes a first receiver line, a first transmitter line, a second receiver line, and a second transmitter line, wherein the first receiver line and the second receiver line are coupled to a receiver line selector and the first transmitter line and the second transmitter line are coupled to a transmitter line selector. A system monitor is included that is configured to monitor a controller area network (CAN) bus and the first transmitter line and to select the second transmitter line and the second receiver line if an error condition is detected through the monitoring of the first transmitter line. A bias voltage generator is included to generate a bias voltage for a terminating capacitor of the CAN bus, wherein the bias voltage generator is activated by the system monitor when an error condition is detected in the CAN bus. | 2021-12-30 |
20210406139 | TRUE ZERO RTO: PLANNED FAILOVER - One example method includes performing, as part of a planned failover procedure, operations that include connecting a replica OS disk to a replica VM, powering up the replica VM, booting an OS of the replica VM, disconnecting a source VM from a network, and connecting replica data disks to the replica VM. IOs issued by an application at the source VM continue to be processed by the source VM while the replica OS disk is connected, the replica VM is powered up, and the OS of the replica VM is booted. | 2021-12-30 |
20210406140 | ARTIFICIAL INTELLIGENCE-BASED REDUNDANCY MANAGEMENT FRAMEWORK - Methods, apparatus, and processor-readable storage media for artificial intelligence-based redundancy management are provided herein. An example computer-implemented method includes obtaining telemetry data from one or more client devices within at least one system; predicting one or more hardware component failures in at least a portion of the one or more client devices within the at least one system by processing at least a portion of the telemetry data using a first set of one or more artificial intelligence techniques; determining, using a second set of one or more artificial intelligence techniques, one or more redundant hardware components for implementation in connection with the one or more predicted hardware component failures; and performing at least one automated action based at least in part on the one or more redundant hardware components. | 2021-12-30 |
20210406141 | COMPUTER CLUSTER WITH ADAPTIVE QUORUM RULES - The fail-over computer cluster enables multiple computing devices to operate using adaptive quorum rules to dictate which nodes are in the fail-over cluster at any given time. The adaptive quorum rules provide requirements for communications between nodes and connections with voting file systems. The adaptive quorum rules include particular recovery rules for unplanned changes in node configuration, such as due to a disruptive event. Such recovery quorum rules enable the fail-over cluster to continuing to operate with various changed configurations of its node members as a result of the disruptive event. In the changed configuration, access to voting file systems may not be required for a majority-group subset of nodes. If no majority-group subset remains, nodes may need direct or indirect access to voting file systems. | 2021-12-30 |
20210406142 | PROCESSOR HEALTH MONITORING WITH FAILBACK BASED ON TIMESTAMPS - Disclosed herein are system, method, and computer program product embodiments for a processor health monitoring system. An embodiment operates by determining that a plurality of messages are transmitted to both a primary processor and a recovery processor. A functional health of the primary processor is monitored based on one or more metrics, and a failure event is detected at the primary processor based on the one or more metrics. A timestamp of a first message of the plurality of messages is identified, the first message having been successfully processed by the primary processor prior to the failure event. The timestamp of the first message is provided to the recovery processor, wherein the recovery processor is configured to actively process the plurality of messages from the timestamp of the first message. The primary processor is deactivated. | 2021-12-30 |
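The monitor-detect-resume flow can be sketched in a few lines. The metric thresholds and the (timestamp, payload) message model are hypothetical, and both processors are assumed to receive the same message stream, as the abstract states.

```python
def detect_failure(metrics, thresholds):
    """Hypothetical health check: a failure event fires when any monitored
    metric exceeds its threshold."""
    return any(metrics[name] > limit for name, limit in thresholds.items())

def resume_on_recovery(messages, last_ok_timestamp):
    """The recovery processor actively processes the messages that follow
    the timestamp of the last message the primary handled successfully."""
    return [payload for ts, payload in messages if ts > last_ok_timestamp]
```

Handing over a timestamp instead of replaying the full stream means the recovery processor neither reprocesses completed work nor drops the message in flight at the failure.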
20210406143 | Information Handling Systems And Related Methods For Testing Memory During Boot And During Operating System (OS) Runtime - Embodiments of information handling systems (IHSs) and computer-implemented methods are provided herein for testing system memory (or another volatile memory component) of an IHS. In the disclosed embodiments, memory testing is performed automatically: (a) during the pre-boot phase each time a new page of memory is allocated for the first time after a system boot, and (b) during OS runtime each time a read command is received and/or an event is detected. By proactively testing each page of memory, as the page is allocated but before information is stored therein, the systems and methods disclosed herein prevent “bad” memory pages from being used. | 2021-12-30 |
20210406144 | TEST AND MEASUREMENT SYSTEM FOR ANALYZING DEVICES UNDER TEST - A test and measurement system for analyzing a device under test, including a database configured to store test results related to tests performed with one or more prior devices under test, a receiver to receive new test results about a new device under test, a data analyzer configured to analyze the new test results based on the stored test results, and a health score generator configured to generate a health score for the new device under test based on the analysis from the data analyzer. | 2021-12-30 |
20210406145 | Configuring Cache Policies for a Cache Based on Combined Cache Policy Testing - An electronic device includes a cache with a cache controller and a cache memory. The electronic device also includes a cache policy manager. The cache policy manager causes the cache controller to use two or more cache policies for cache operations in each of multiple test regions in the cache memory, with different configuration values for the two or more cache policies being used in each test region. The cache policy manager selects a selected configuration value for at least one cache policy of the two or more cache policies based on performance metrics for cache operations while using the different configuration values for the two or more cache policies in the test regions. The cache policy manager causes the cache controller to use the selected configuration value when using the at least one cache policy for cache operations in a main region of the cache memory. | 2021-12-30 |
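Selecting the main-region configuration from per-test-region metrics might look like this sketch, with cache hit rate as an assumed performance metric and all names invented for illustration:

```python
def choose_main_region_policy(region_metrics):
    """Each test region ran cache operations under a different configuration
    value; pick the value whose region scored the best hit rate for use in
    the main region of the cache memory."""
    best_config, best_score = None, float("-inf")
    for config_value, metrics in region_metrics.items():
        score = metrics["hits"] / (metrics["hits"] + metrics["misses"])
        if score > best_score:
            best_config, best_score = config_value, score
    return best_config, best_score
```

Running the candidate configurations side by side in small test regions lets the policy manager compare them on live traffic before committing the winner to the much larger main region.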
20210406146 | ANOMALY DETECTION AND TUNING RECOMMENDATION SYSTEM - Systems and methods are provided for detecting anomalies on multiple layers of a computer system, such as a compute server. For example, the system can detect anomalies from the lower firmware layer up to the upper application layer of the compute server. The system collects train data from the computer system that is under testing. The train data includes features that affect performance metrics, as defined by a selected benchmark. This train data is used in training machine learning (ML) models. The ML models create a train snapshot corresponding to the selected benchmark. Additionally with every new release, a test snapshot can be created corresponding to the selected benchmark or workload. The system can detect an anomaly based on the train snapshot and the test snapshot. Also, the system can recommend tunings for a best set of features based upon data collected over generations of compute server. | 2021-12-30 |
20210406147 | APPARATUS AND METHOD FOR A CLOSED-LOOP DYNAMIC RESOURCE ALLOCATION CONTROL FRAMEWORK - An apparatus and method for closed loop dynamic resource allocation. For example, one embodiment of a method comprises: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priority workloads associated with one or more guaranteed performance levels and best effort workloads not associated with guaranteed performance levels; analyzing the data to identify resource reallocations from one or more of the priority workloads to one or more of the best effort workloads in one or more subsequent time periods while still maintaining the guaranteed performance levels; reallocating the resources from the priority workloads to the best effort workloads for the subsequent time periods; monitoring execution of the priority workloads with respect to the guaranteed performance level during the subsequent time periods; and preemptively reallocating resources from the best effort workloads to the priority workloads during the subsequent time periods to ensure compliance with the guaranteed performance level and responsive to detecting that the guaranteed performance level is in danger of being breached. | 2021-12-30 |
20210406148 | ANOMALY DETECTION AND ROOT CAUSE ANALYSIS IN A MULTI-TENANT ENVIRONMENT - System and methods are described for anomaly detection and root cause analysis in database systems, such as multi-tenant environments. In one implementation, a method comprises receiving an activity signal representative of resource utilization within a multi-tenant environment; detecting a plurality of anomalies in the activity signal; computing a priority score for each of the plurality of anomalies; correlating at least a subset of the plurality of anomalies to one or more performance metrics of the multi-tenant environment; and transmitting a remediation signal to one or more devices in the multi-tenant environment based on the correlations and the priority scores. | 2021-12-30 |
20210406149 | Application Execution Path Tracing for Inline Performance Analysis - Techniques are provided for application tracing for inline performance analysis. One method comprises obtaining trace events generated by instructions executed in response to trace points in instrumented software; updating, for each trace event, a buffer entry of a sampling buffer that corresponds to a particular processing core and a time window, wherein the buffer entry is identified based on (a) a flow type identifier associated with the instructions, (b) an identifier of a respective trace event, and (c) an identifier of an adjacent trace event to the respective trace event, and wherein the updating comprises updating, for the time window: (i) a first counter indicating a cumulative number of events for the respective and adjacent trace events, and (ii) a second counter indicating a cumulative amount of time between the respective and adjacent trace events; and determining one or more performance metrics associated with the respective and adjacent trace events in the time window using the first and second counters. | 2021-12-30 |
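The buffer-entry bookkeeping is specific enough to sketch: entries are keyed by (flow identifier, event identifier, adjacent event identifier) and hold the two counters named in the abstract. Per-core and per-time-window partitioning is omitted, and the event model is hypothetical.

```python
from collections import defaultdict

def record_trace_events(events):
    """Sketch of the sampling buffer: for each pair of adjacent trace events
    in a flow, keep a count of occurrences and the cumulative time between
    them. `events` is a list of (flow_id, event_id, timestamp) tuples."""
    buffer = defaultdict(lambda: [0, 0.0])   # key -> [count, cumulative_time]
    prev = {}
    for flow_id, event_id, ts in events:
        if flow_id in prev:
            prev_event, prev_ts = prev[flow_id]
            entry = buffer[(flow_id, prev_event, event_id)]
            entry[0] += 1                    # first counter: number of event pairs
            entry[1] += ts - prev_ts         # second counter: time between them
        prev[flow_id] = (event_id, ts)
    # derived performance metric: mean latency between adjacent trace points
    return {k: (c, t, t / c) for k, (c, t) in buffer.items()}
```

Dividing the cumulative-time counter by the event counter yields the average latency of each code path segment, which is the kind of inline performance metric the abstract describes.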
20210406150 | APPLICATION INSTRUMENTATION AND EVENT TRACKING - Described are system and method embodiments for live application instrumentation and event tracking. User interaction history is recorded on-device in a buffer to create a dataset of signals. The buffer of signals is then passed through various filter functions to qualify them as valuable data points or events for the application developers. Once qualified, events are passed onto an analytics system for further processing. Filter functions may be written in a single language, e.g., JavaScript, and deployed across multiple target platforms including web browsers and native mobile applications. The deployment of filter functions may be done over-the-air in real-time such that application developers do not have to rebuild and publish their applications. The combination of signals, buffer and filter functions backed by the infrastructure to deploy these filter functions to client-side applications on multiple platforms without rebuilding or redeploying applications contributes to this analytics instrumentation solution. | 2021-12-30 |
20210406151 | QUANTUM COMPUTE ESTIMATOR AND INTELLIGENT INFRASTRUCTURE - One example method includes evaluating code of a quantum circuit, estimating one or more runtime statistics concerning the code, generating a recommendation based on the one or more runtime statistics, and the recommendation identifies one or more resources recommended to be used to execute the quantum circuit, checking availability of the resources for executing the quantum circuit, allocating resources, when available, sufficient to execute the quantum circuit, and using the allocated resources to execute the quantum circuit. | 2021-12-30 |
20210406152 | Cloud Application to Automatically Detect and Solve Issues in a Set of Code Base Changes Using Reinforcement Learning and Rule-Based Learning - A method, system, and computer program product provide automatic resolution of coding issues by applying code modifications to an application to generate a modified application, and then applying static analytic tools to the modified application to identify coding problems in the modified application related to at least one code modification, where the coding problems are evaluated using a first machine learning model to identify a subset of coding problems meeting a first project relevancy requirement. | 2021-12-30 |
20210406153 | COMPUTER DEVICES AND COMPUTER IMPLEMENTED METHODS - A computer device processes frame data provided by running of a computer app, the frame data comprising a plurality of frames and a plurality of events occurring in the computer app. A display displays information associated with one or more frames of the plurality of frames. At least one processor of the computer device determines a node graph, in response to input from a user, for one or more events associated with one or more frames from the frame data, and that node graph is displayed. | 2021-12-30 |
20210406154 | IDENTIFYING DATA INCONSISTENCIES AND DATA CONTENTION BASED ON HISTORIC DEBUGGING TRACES - Based on replay of a thread, one implementation observes an influx of a value of a memory cell comprising an interaction between the thread and the value of the memory cell at an execution time point in the replaying, and determines whether the value of the memory cell observed from the influx is inconsistent with a prior value of the memory cell as known by the thread at the execution time point. If so, this implementation initiates an indication of a data inconsistency. Based on replay of a plurality of threads, another implementation identifies a memory cell that was accessed by a first thread while a thread synchronization mechanism was active on the first thread. Then, if there was another access to the memory cell by a second thread without use of the thread synchronization mechanism, this implementation initiates an indication of a potential data contention. | 2021-12-30 |
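The second implementation above (contention detection) can be illustrated with a small sketch: flag any memory cell that one thread accessed under a synchronization mechanism while another thread accessed it without one. The trace record format here is an assumption for illustration.

```python
# Illustrative sketch of potential data-contention detection from a replayed
# trace: (thread_id, cell_address, lock_held) records are assumptions.
from typing import List, Set, Tuple

Access = Tuple[int, int, bool]  # (thread, cell, synchronization held?)

def find_potential_contention(accesses: List[Access]) -> Set[int]:
    locked: Set[Tuple[int, int]] = set()    # (thread, cell) seen under a lock
    unlocked: Set[Tuple[int, int]] = set()  # (thread, cell) seen without one
    for thread, cell, lock_held in accesses:
        (locked if lock_held else unlocked).add((thread, cell))
    # Contention candidate: same cell, different threads, one access locked
    # and the other unlocked.
    return {cell for (t1, cell) in locked
            for (t2, c2) in unlocked if c2 == cell and t1 != t2}

trace = [(1, 0x40, True), (2, 0x40, False), (1, 0x48, True), (1, 0x48, True)]
contended = find_potential_contention(trace)
```

Here cell `0x40` is flagged because thread 1 held a lock for it while thread 2 did not; cell `0x48` is not, since only one thread touched it.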
20210406155 | METHODS FOR CONFIGURING DEVICE DEBUGGING ENVIRONMENT AND CONFIGURATION SERVER - Embodiments of the present disclosure disclose methods for configuring a device debugging environment and a configuration server. The specific implementation is: obtaining a test device identifier; sending a configuration confirming message to a test device corresponding to the test device identifier, such that the test device, in response to a developer's confirming operation on the configuration confirming message, sends a request message for a debugging environment adapted to the test device to the configuration server; and, in response to the request message, sending the debugging environment to the test device for configuration by the test device. | 2021-12-30 |
20210406156 | GENERATING AND AGGREGATING TEST RESULT DATA OF A DISTRIBUTED SYSTEM OF DEVICES INTO A TEST CASE RESULT FOR FACILITATING ACCESS OF THE TEST CASE RESULT VIA A SINGLE ITERATOR - Generating and aggregating test result data of a distributed system of devices into a test case result for facilitating access of the test case result via a single iterator is presented herein. A coordinator component of the distributed system of devices creates respective context identifiers for each unique phase of a test case of the distributed system of devices, and sends messages including the respective context identifiers to a producer component of the distributed system of devices. In this regard, the producer component includes producers having respective services executing processes corresponding to an execution of the test case. The messages instruct the respective services to associate the respective context identifiers with events representing result data of the processes, and the respective context identifiers facilitate respective accesses of the events representing the result data of the processes. | 2021-12-30 |
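The coordinator/producer flow above can be sketched minimally: the coordinator mints a context identifier per test-case phase, producers tag result events with those identifiers, and a single iterator walks the aggregated result phase by phase. All class and method names here are assumptions.

```python
# Minimal sketch: per-phase context identifiers tie distributed result events
# together so one iterator can traverse the whole test-case result.
import itertools
import uuid
from collections import defaultdict

class Coordinator:
    def __init__(self):
        self.phase_contexts = {}  # phase name -> context identifier

    def create_context(self, phase: str) -> str:
        ctx = uuid.uuid4().hex
        self.phase_contexts[phase] = ctx
        return ctx

class Producer:
    def __init__(self):
        self.events = defaultdict(list)  # context identifier -> result events

    def emit(self, context_id: str, result: str) -> None:
        self.events[context_id].append(result)

coord = Coordinator()
prod = Producer()
setup_ctx = coord.create_context("setup")
run_ctx = coord.create_context("run")
prod.emit(setup_ctx, "device-a ready")
prod.emit(run_ctx, "device-a pass")
prod.emit(run_ctx, "device-b pass")

# Single iterator over the aggregated test-case result, in phase order.
all_results = list(itertools.chain.from_iterable(
    prod.events[ctx] for ctx in coord.phase_contexts.values()))
```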
20210406157 | SOFTWARE DEFECT CREATION - A system and a method for creating a defect identified during a test case run. A bug is detected during an execution of the test case on a functionality of a software. The bug is detected by comparing an actual output of the functionality with an expected output of the functionality. A setup, indicating actions performed on the software, associated with the bug is identified. Further, a video snippet is generated from a video recording of the test case being executed. The video snippet depicts an execution of the bug caused by the setup. Furthermore, the setup is analysed using AI and ML techniques to determine an exact location of the bug. Further, a screen, from the video snippet, indicating the exact location of the bug is automatically highlighted. In addition, a defect comprising a recommendation to resolve the bug is created. | 2021-12-30 |
20210406158 | SYSTEMS AND METHODS FOR AUTOMATED DEVICE TESTING - Systems and methods present practical applications to software design and testing by providing a driver or platform that implements a simplified testing process to automate test scripting and to create multiple environments to run software and device tests. The driver or platform may be modular such that a user may add more testing scripts to an environment without re-building the environment for every test. The platform may also allow the user to make changes to each script and perform tests with specific options (e.g., testing synchronously or asynchronously, defining the number of test executions, etc.). The platform may be configured to set up devices for each test and initiate specific drivers for each test. In some embodiments, the platform may set up each device involved in a test, start the related drivers, and create different threads for executing different aspects of the test. | 2021-12-30 |
20210406159 | CLOUD LOAD ORCHESTRATOR - Testing methods and systems are provided for testing a resource manager of an application management system. The testing systems include a load orchestrator configured to obtain an artificial metric that is determined based on a utilization model (e.g., CPU usage, memory allocation, disk usage, or number of webserver sessions). The load orchestrator transmits the artificial metric to applications in a cluster of computing nodes. The applications transmit the artificial metric to the resource manager. In response, the resource manager generates control output for managing applications in the cluster based on the artificial metric (e.g., scaling, load balancing, application placement, failover of applications, or defragmenting data). The utilization model may include executable code for generating artificial metric values. The model may be received as a result of an API call. The load orchestrator may be instantiated in an orchestration framework or in each node of the cluster. | 2021-12-30 |
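The abstract's core idea, driving a resource manager with synthetic metrics instead of real load, can be sketched as follows. The model function, threshold, and `ResourceManager` interface are all assumptions for illustration.

```python
# Hedged sketch: an executable utilization model generates artificial metric
# values; applications forward them to a resource manager, whose scaling
# reaction can then be tested without generating any real load.
from typing import Callable

def cpu_spike_model(tick: int) -> float:
    # Utilization model as executable code: CPU usage ramps up to a spike.
    return min(100.0, 10.0 * tick)

class ResourceManager:
    def __init__(self, scale_threshold: float = 80.0):
        self.scale_threshold = scale_threshold
        self.replicas = 1

    def report(self, metric: float) -> None:
        if metric > self.scale_threshold:
            self.replicas += 1  # control output: scale out

def orchestrate(model: Callable[[int], float],
                manager: ResourceManager, ticks: int) -> None:
    for tick in range(ticks):
        metric = model(tick)    # load orchestrator obtains the artificial metric
        manager.report(metric)  # applications forward it to the resource manager

mgr = ResourceManager()
orchestrate(cpu_spike_model, mgr, ticks=12)
```

With the ramp above, the metric exceeds the 80.0 threshold on three ticks, so the manager scales from 1 to 4 replicas, which a test could assert without any real CPU pressure.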
20210406160 | VERIFICATION DEVICE AND VERIFICATION METHOD - A system for accelerating testing of a software program includes a virtual computer and a test execution control computer. The virtual computer imitates a microcomputer equipped with a software program to be tested. The test execution control computer divides a plurality of test scenarios into common and non-common phases, and creates and stores a tree structure mapping out the plurality of test scenarios, in which each common phase is followed by, and branches out into, non-common phases. The virtual computer executes the common phase in accordance with the tree structure, and stores a state of the virtual computer as a snapshot. When the test execution control computer causes the virtual computer to execute a second test scenario, the virtual computer uses the snapshot to reproduce the state of the virtual computer that has executed the common and non-common phases. | 2021-12-30 |
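The snapshot-reuse idea can be shown with a toy model: run the phases shared by all scenarios once, snapshot the state, then restore the snapshot before each scenario's unique phase. The `VirtualComputer` class and phase names are illustrative assumptions.

```python
# Illustrative sketch: execute a common phase prefix once, snapshot it, and
# restore the snapshot for each scenario so only non-common phases re-run.
import copy

class VirtualComputer:
    def __init__(self):
        self.state = []
        self.executions = 0  # count phase executions to show the saving

    def execute(self, phase: str) -> None:
        self.executions += 1
        self.state.append(phase)

    def snapshot(self):
        return copy.deepcopy(self.state)

    def restore(self, snap) -> None:
        self.state = copy.deepcopy(snap)

# Two scenarios sharing the common prefix ["boot", "init"].
scenarios = [["boot", "init", "test_can_bus"],
             ["boot", "init", "test_flash"]]

vm = VirtualComputer()
for phase in ["boot", "init"]:   # common phases run exactly once
    vm.execute(phase)
snap = vm.snapshot()

results = []
for scenario in scenarios:
    vm.restore(snap)             # skip re-running the common phases
    vm.execute(scenario[-1])     # run only the non-common phase
    results.append(list(vm.state))
```

Only 4 phase executions occur instead of the 6 a naive run of both scenarios would need; the saving grows with the number of scenarios sharing a prefix.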
20210406161 | METHOD AND COMPUTER PROGRAM FOR TESTING A TECHNICAL SYSTEM - A method for testing a technical system, in particular a safety-relevant system encompassing software. The system is represented by a model encompassing two or more components. An assumption of a respective component regarding the safety-relevant system, and a guarantee of a respective component to the safety-relevant technical system, are specified by a safety contract. Executable program code is generated based on at least one assumption and at least one guarantee. The safety-relevant technical system is tested by executing the program code. | 2021-12-30 |
20210406162 | CODE TESTING METHOD, APPARATUS AND DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM - The invention discloses a code testing method. The method includes the following steps of: acquiring a code set to be tested; loading the code set to a corresponding operating chip, and executing the code set by using the operating chip; judging whether a target code subset which is not successfully executed exists in the code set; and if yes, performing an audit testing operation on the code set. The code testing method provided by the invention is simple and feasible to apply, improving testing reliability and reducing testing cost. The invention also discloses a code testing apparatus and device, and a storage medium, which have corresponding technical effects. | 2021-12-30 |
20210406163 | IDENTIFYING SOFTWARE INTERDEPENDENCIES USING LINE-OF-CODE BEHAVIOR AND RELATION MODELS - Disclosed herein are techniques for identifying software interdependencies based on functional line-of-code behavior and relation models. Techniques include identifying a first portion of executable code associated with a first controller, accessing a functional line-of-code behavior and relation model representing functionality of the first portion of executable code and a second portion of executable code; determining, based on the functional line-of-code behavior and relation model, that the second portion of executable code is interdependent with the first portion of executable code; and generating, based on the determined interdependency, a report identifying the interdependent first portion of executable code and second portion of executable code. | 2021-12-30 |
20210406164 | METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS - Methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory. | 2021-12-30 |
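The first compression step above, removing zero points under a sparsity map, can be sketched directly. Plain Python lists stand in for the hardware storage elements; the function names are assumptions.

```python
# Minimal sketch of sparse tensor storage: a sparsity map marks which points
# are nonzero, and the packed representation drops the zero points entirely.
from typing import List, Tuple

def compress(tensor: List[int]) -> Tuple[List[bool], List[int]]:
    sparsity_map = [x != 0 for x in tensor]  # True where the data is nonzero
    packed = [x for x in tensor if x != 0]   # zero points removed
    return sparsity_map, packed

def decompress(sparsity_map: List[bool], packed: List[int]) -> List[int]:
    it = iter(packed)
    # Re-expand: take the next packed value where the map says "nonzero",
    # otherwise emit a zero.
    return [next(it) if present else 0 for present in sparsity_map]

tensor = [0, 3, 0, 0, 7, 1, 0, 0]
smap, packed = compress(tensor)
```

For this 8-point tensor only 3 values need storing alongside an 8-bit map; the second compression described in the abstract would then lay such packed elements out contiguously in memory.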
20210406165 | Adaptive Context Metadata Message for Optimized Two-Chip Performance - Aspects of a storage device including a master chip controller and a slave chip processor and memory including a plurality of memory locations are provided which allow for simplified processing of descriptors associated with host commands in the slave chip based on an adaptive context metadata message from the master chip. When the controller receives a host command, the controller in the master chip provides to the processor in the slave chip a descriptor associated with a host command, an instruction to store the descriptor in the one of the memory locations, and the adaptive context metadata message mapping a type of the descriptor to the one of the memory locations. The processor may then process the descriptor stored in the one of the memory locations based on the message, for example, by refraining from identifying certain information indicated in the descriptor. Reduced latency in command execution may thereby result. | 2021-12-30 |
20210406166 | EXTENDED MEMORY ARCHITECTURE - Systems, apparatuses, and methods related to extended memory communication subsystems for performing extended memory operations are described. An example apparatus can include a plurality of computing devices. Each of the computing devices can include a processing unit configured to perform an operation on a block of data, and a memory array configured as a cache for each respective processing unit. The example apparatus can further include a first communication subsystem coupled to a host and to each of the plurality of communication subsystems. The example apparatus can further include a plurality of second communication subsystems coupled to each of the plurality of computing devices. Each of the plurality of computing devices can be configured to receive a request from the host, send a command to execute at least a portion of the operation, and receive a result of performing the operation from the at least one hardware accelerator. | 2021-12-30 |
20210406167 | NAMESPACE MANAGEMENT FOR MEMORY SUB-SYSTEMS - Methods, systems, and devices for clock domain crossing queue are described. A memory sub-system can generate a namespace map having a set of namespace blocks associated with a memory sub-system. The namespace blocks can include one or more logical block addresses associated with the memory sub-system. One namespace block of the set of namespace blocks can include an indication that can indicate that the namespace block and each namespace block following the namespace block are available for mapping. The memory sub-system can receive a request to create a namespace and sequentially map one or more available namespace blocks to the namespace according to the ordering of the namespace map, including the namespace block with the indication. | 2021-12-30 |
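The sequential-mapping scheme above can be modeled with a marker index standing in for the "this block and each following block are available" indication. The data structures and names are assumptions for illustration.

```python
# Hypothetical sketch: map available namespace blocks to a new namespace in
# namespace-map order, starting at a marker that says everything from here
# on is free, then advance the marker.
from typing import List, Optional

class NamespaceMap:
    def __init__(self, num_blocks: int):
        self.owner: List[Optional[str]] = [None] * num_blocks
        self.free_from = 0  # marker block: this block and all after are free

    def create_namespace(self, name: str, blocks_needed: int) -> List[int]:
        start = self.free_from
        if start + blocks_needed > len(self.owner):
            raise MemoryError("not enough namespace blocks")
        allocated = list(range(start, start + blocks_needed))
        for i in allocated:
            self.owner[i] = name   # sequentially map blocks to the namespace
        self.free_from = start + blocks_needed  # advance the marker
        return allocated

nsmap = NamespaceMap(num_blocks=8)
first = nsmap.create_namespace("ns1", 3)
second = nsmap.create_namespace("ns2", 2)
```

Keeping a single marker instead of per-block free flags means a create request only reads one indication to know where mappable blocks begin.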
20210406168 | SMART FACTORY RESET PROCEDURE - Methods, systems, techniques, and devices for smart factory reset procedures are described. In accordance with examples as disclosed herein, a memory system may receive one or more commands associated with a reset procedure. The memory system may identify, in response to the one or more commands, a first portion of one or more memory arrays of the memory system as storing user data and a second portion of the one or more memory arrays as storing data associated with an operating system. The memory system may update a mapping of the memory system based on identifying the first portion and the second portion. The memory system may transfer the data associated with the operating system to a third portion of the one or more memory arrays and perform an erase operation on a subset of physical addresses of the set of physical addresses. | 2021-12-30 |
20210406169 | SELF-ADAPTIVE WEAR LEVELING METHOD AND ALGORITHM - A memory device is provided. The memory device comprises: | 2021-12-30 |
20210406170 | Flash-Based Coprocessor - A processor corresponding to a core of a coprocessor, a cache used as a buffer of the processor, and a flash controller are connected to an interconnect network. The flash controller and a flash memory are connected to a flash network. The flash controller reads or writes target data of a memory request from or to the flash memory. | 2021-12-30 |
20210406171 | METHOD AND SYSTEM FOR IN-LINE ECC PROTECTION - A memory system having an interconnect configured to receive commands from a system to read data from and/or write data to a memory device. The memory system also has a bridge configured to receive the commands from the interconnect, to manage ECC data and to perform address translation between system addresses and physical memory device addresses by calculating a first ECC memory address for a first ECC data block that is after and adjacent to a first data block having a first data address, calculating a second ECC memory address that is after and adjacent to the first ECC block, and calculating a second data address that is after and adjacent to the second ECC block. The bridge may also check and calculate ECC data for a complete burst of data, and/or cache ECC data for a complete burst of data that includes read and/or write data. | 2021-12-30 |
20210406172 | ACTIVE INPUT/OUTPUT EXPANDER OF A MEMORY SUB-SYSTEM - A value setting associated with one or more parameters of a host-side interface and a memory-side interface of an input/output (I/O) expander is configured to enable Open NAND Flash Interface (ONFI)-compliant communications between a host system and a target memory die of a memory sub-system. The I/O expander processes one or more ONFI-compliant communications between the host system and the target memory die, wherein the one or more ONFI-compliant communications relate to execution of a memory access operation. | 2021-12-30 |