43rd week of 2021 patent application highlights part 44 |
Patent application number | Title | Published |
20210334128 | ASYNCHRONOUS QUANTUM INFORMATION PROCESSING - An asynchronous approach to implementing a quantum algorithm can reduce dead time of a quantum information processing unit (QIPU). Multiple parameter sets are determined for a quantum program by a controller and the QIPU is instructed to execute the quantum program for the parameter sets. Results from each program execution are returned to the controller. After one or more results are received, the controller determines an updated parameter set while the QIPU continues executing the quantum program for the remaining parameter sets. The QIPU is instructed to execute the quantum program for the updated parameter set (e.g., immediately, after a current program execution, or after the remaining parameter sets are processed). This asynchronous approach can result in the QIPU having little or no dead time, and thus can make more efficient use of the QIPU. | 2021-10-28 |
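A minimal sketch in Python's asyncio of the asynchronous pattern this abstract describes: the controller computes an updated parameter set while a QIPU run is in flight, so the device accumulates little or no dead time. The `run_on_qipu` coroutine and the parameter-update rule are hypothetical stand-ins, not anything specified in the filing.

```python
import asyncio
import random

async def run_on_qipu(params):
    """Stand-in for one quantum program execution on the QIPU."""
    await asyncio.sleep(0.05)                       # simulated QIPU run time
    return sum(p * p for p in params) + random.gauss(0.0, 0.01)

async def controller(initial_parameter_sets, max_runs=10):
    queue = list(initial_parameter_sets)
    results = []                                    # (params, cost) pairs
    runs = 0
    while queue and runs < max_runs:
        params = queue.pop(0)
        task = asyncio.create_task(run_on_qipu(params))   # QIPU starts right away
        # While the QIPU run is pending, the controller derives an updated
        # parameter set from results received so far and enqueues it, so the
        # QIPU always has work queued when the current run finishes.
        if results:
            best_params, _ = min(results, key=lambda r: r[1])
            queue.append([p * 0.9 for p in best_params])  # hypothetical update rule
        cost = await task
        results.append((params, cost))
        runs += 1
    return min(results, key=lambda r: r[1])

print(asyncio.run(controller([[1.0, 2.0], [0.5, 1.5], [2.0, 0.5]])))
```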
20210334129 | SYSTEMS AND METHODS FOR TASK PROCESSING IN A DISTRIBUTED ENVIRONMENT - Methods and apparatus for task processing in a distributed environment are disclosed and described. An example apparatus includes a task manager and a task dispatcher. The example task manager is to receive a task and create an execution context for the task, the execution context to associate the task with a routine for task execution. The example task dispatcher is to receive a report of task execution progress and provide an update regarding task execution progress, the task dispatcher, upon initiation of task execution, to facilitate blocking of interaction with a resource involved in the task execution. The example task dispatcher is to trigger an indication of task execution progress and, upon task finish, facilitate unblocking of the resource involved in the task execution. | 2021-10-28 |
20210334130 | NODE-LOCAL-UNSCHEDULER FOR SCHEDULING REMEDIATION - A system for scheduling remediation includes a memory, a processor in communication with the memory, a container scheduled on a first node, a scheduler executing on the processor, and a node-local-unscheduler (“NLU”). The scheduler has a watch module. The NLU executes on the processor to determine a status of the container as failing validation. The NLU has access to scheduling policies corresponding to the container and the first node. Responsive to determining the status of the container as failing validation, the NLU annotates the container and stops execution of the container. The watch module executes on the processor to detect the annotation associated with the container. Responsive to detecting the annotation, the container is rescheduled to a second node. | 2021-10-28 |
20210334131 | DEVICE AND PROCESSOR FOR IMPLEMENTING RESOURCE INDEX REPLACEMENT - Embodiments of the present disclosure provide a device for implementing resource index replacement, comprising an instruction scheduling unit configured to receive a first type resource index from a resource allocating unit and then issue an instruction to an instruction executing unit for execution; the instruction executing unit is configured to receive a second type resource index from the resource allocating unit, to execute the instruction from the instruction scheduling unit, and to issue a result of the instruction execution and the second type resource index to a result storing unit. The result storing unit comprises a plurality of resources for storing instruction execution results. The resource allocating unit is configured to allocate the first type resource index to an instruction entering the instruction scheduling unit and to allocate the second type resource index to an instruction entering the instruction execution unit. The present disclosure also provides a processor comprising the above device for implementing resource index replacement. In addition, the present disclosure provides a method for implementing resource index replacement. | 2021-10-28 |
20210334132 | DATA SET SUBSCRIPTION TRACKING AND TERMINATION SYSTEM - A data set subscription tracking and termination system may include a distribute module, a publisher and a plurality of subscribers. The distribute module may receive a publication registration to register a publication. The distribute module may receive a subscription registration to register a subscription to the publication. The publication registration and/or subscription registration may include metadata relating to the publication and/or subscription. A metadata store, included in the distribute module, may store the publication registration and/or the subscription registration. The publisher may change the publication. The change to the publication may include adding and/or deleting rows and/or columns to, or from, the publication. The publisher may notify the distribute module of the publication change. The distribute module may transmit an alert to all subscribers notifying them of the publication changes. The distribute module may also terminate subscriptions that only include data elements that are deleted from the publication. | 2021-10-28 |
20210334133 | ADJUSTING A DISPATCH RATIO FOR MULTIPLE QUEUES - Provided are techniques for adjusting a dispatch ratio for dispatching tasks from multiple queues. The dispatch ratio is set for each queue of a plurality of queues. A number of Central Processing Unit (CPU) cycles used by tasks from each of the plurality of queues during an interval is tracked. A CPU high percentage is determined that indicates a percentage of CPU cycles used by high priority tasks. In response to determining that the CPU high percentage is below a high threshold, a new dispatch ratio is calculated that indicates an increased number of high priority tasks are to be dispatched, and the new dispatch ratio is based on the CPU high percentage, the high threshold, and a current dispatches high value. The increased number of high priority tasks are dispatched from the high priority queue based on the new dispatch ratio during a new interval. | 2021-10-28 |
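A sketch of the kind of adjustment described: when high-priority tasks used less CPU than the high threshold allows, scale up the number of high-priority dispatches in proportion to the headroom. The exact formula below is an assumption for illustration, not the patent's.

```python
def new_dispatch_ratio(cpu_high_pct, high_threshold, dispatches_high, dispatches_low):
    """Recompute how many high-priority tasks to dispatch per interval.

    When high-priority tasks used less CPU than the threshold permits,
    scale high-priority dispatches up proportionally; otherwise keep the
    current ratio. The proportional formula is an illustrative assumption.
    """
    if cpu_high_pct >= high_threshold:
        return dispatches_high, dispatches_low   # no headroom: no adjustment
    # Scale high-priority dispatches by the headroom below the threshold.
    scaled_high = max(dispatches_high + 1,
                      round(dispatches_high * high_threshold / max(cpu_high_pct, 1)))
    return scaled_high, dispatches_low

# 4 high : 3 low today, but high-priority tasks used only 40% CPU vs. a 70% cap.
print(new_dispatch_ratio(cpu_high_pct=40, high_threshold=70,
                         dispatches_high=4, dispatches_low=3))   # -> (7, 3)
```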
20210334134 | Handling Multiple Graphs, Contexts and Programs in a Coarse-Grain Reconfigurable Array Processor - A processor includes a compute fabric and a controller. The compute fabric includes an array of compute nodes and interconnects that configurably connect the compute nodes. The controller is configured to receive a software program represented as a set of interconnected Data-Flow Graphs (DFGs), each DFG specifying code instructions that perform a respective portion of the software program, to schedule execution of the DFGs in time alternation, and, for each DFG being scheduled, to configure at least some of the compute nodes and interconnects in the compute fabric to execute the code instructions specified in the DFG, and send to the compute fabric multiple threads that each executes the code instructions specified in the DFG. | 2021-10-28 |
20210334135 | COMPUTING NODE JOB ASSIGNMENT USING MULTIPLE SCHEDULERS - A set of computing nodes may receive a corresponding set of heartbeat messages that originated at the set of computing nodes. The set of heartbeat messages may relate to selecting, among the set of computing nodes, a leader computing node to process a set of jobs. State information included in the heartbeat messages may be provided to a leader election algorithm that outputs information indicating one or more computing nodes that are most qualified to process the set of jobs based on processing capabilities of the computing nodes and processing constraints associated with the set of jobs. A computing node may select itself as the leader computing node to process the set of jobs based on determining, from the information output by the leader election algorithm, that the computing node is most qualified to process the set of jobs and no other computing nodes are processing the set of jobs. | 2021-10-28 |
20210334136 | PLATOONING OF COMPUTATIONAL RESOURCES IN AUTOMATED VEHICLES NETWORKS - Novel techniques are described for platooning of computational resources in automated vehicle networks. An on-board computational processor of an automated vehicle typically performs a large number of computational tasks, and some of those computational tasks can be computationally intensive. Some such tasks, referred to as platoonable tasks herein, are well-suited for parallel processing by multiple processors. Embodiments can detect one or more on-board computational processors in one or more automated vehicles that are likely, during the time window in which the platoonable task will be executed, to have available computational resources and to be traveling along respective paths that are close enough to each other to allow for ad hoc network communications to be established between the processors. In response to detecting such cases, embodiments can schedule and instruct shared execution of the platoonable tasks by the multiple processors via the ad hoc network. | 2021-10-28 |
20210334137 | DATA PROCESSING METHOD AND RELATED PRODUCTS - The present disclosure discloses a data processing method and related products, in which the data processing method includes: generating, by a general-purpose processor, a binary instruction according to device information of an AI processor, and generating an AI learning task according to the binary instruction; transmitting, by the general-purpose processor, the AI learning task to the cloud AI processor for running; receiving, by the general-purpose processor, a running result corresponding to the AI learning task; and determining, by the general-purpose processor, an offline running file according to the running result, where the offline running file is generated according to the device information of the AI processor and the binary instruction when the running result satisfies a preset requirement. By implementing the present disclosure, the debugging between the AI algorithm model and the AI processor can be achieved in advance. | 2021-10-28 |
20210334138 | TECHNOLOGIES FOR PRE-CONFIGURING ACCELERATORS BY PREDICTING BIT-STREAMS - Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job requested to be accelerated, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the plurality of accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator. | 2021-10-28 |
20210334139 | COMPRESSION TECHNIQUES FOR ENCODING STACK TRACE INFORMATION - Embodiments provide a thread classification method that represents stack traces in a compact form using classification signatures. Some embodiments can receive a stack trace that includes a sequence of stack frames. Some embodiments may generate, based on the sequence of stack frames, a trace signature that represents the sequence. Some embodiments may receive one or more subsequent stack traces. For each of the one or more subsequent stack traces, some embodiments may determine whether a subsequent trace signature has been generated to represent the sequence of stack frames included within the subsequent stack trace. If not, some embodiments may generate, based on the trace signature and other subsequent trace signatures that were generated based on the trace signature, the subsequent trace signature to represent the subsequent sequence of stack frames. | 2021-10-28 |
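The signature idea can be illustrated compactly: hash the frame sequence once, then reuse that signature for every identical trace seen later, so repeated traces are stored and compared in compact form. The hashing choice (truncated SHA-1) is an assumption; the abstract does not specify one.

```python
import hashlib

class TraceSignatureRegistry:
    """Sketch of compact stack-trace classification via signatures."""

    def __init__(self):
        self._signatures = {}                # frame tuple -> signature

    def signature_for(self, frames):
        key = tuple(frames)
        sig = self._signatures.get(key)
        if sig is None:                      # first time this sequence is seen
            sig = hashlib.sha1("\n".join(key).encode()).hexdigest()[:12]
            self._signatures[key] = sig
        return sig

reg = TraceSignatureRegistry()
trace = ["main", "serve_request", "parse_payload", "json.loads"]
print(reg.signature_for(trace))              # generated on first sight
print(reg.signature_for(trace))              # reused for identical later traces
```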
20210334140 | MEMORY ALLOCATION METHOD AND MULTI-CORE CONCURRENT MEMORY ALLOCATION METHOD - A memory allocation method and a multi-core concurrent memory allocation method, which are applied to an embedded system in which a kernel module and a plurality of application programs are provided. The memory allocation method comprises: acquiring first memory allocation requests of the plurality of application programs; the kernel module determining whether preset screening marks exist in the first memory allocation requests; and, when screening marks exist in the first memory allocation requests, prohibiting allocation, for the current application program, of memory managed by a contiguous memory allocator. By adopting the memory allocation method, application programs that occupy, for a long time, contiguous memory allocated by the contiguous memory allocator can be screened out and removed, contiguous memory allocation can then be provided for the drivers in a shorter time, and the corresponding contiguous memory can be allocated for the drivers through a plurality of processing units at the same time with higher efficiency. | 2021-10-28 |
20210334141 | CLASS-BASED DYNAMIC MEMORY SLOT ALLOCATION - A memory slot allocation request specifying a requested number of memory slots is received from a requestor assigned to a particular class among a plurality of classes. It is determined whether allocation of the requested number of memory slots to the requestor results in satisfaction of resource allocation constraints for the plurality of classes. The resource allocation constraints include a first and second threshold for each class that determine how many memory slots can be allocated to each class. Based on determining that allocation of the requested number of memory slots to the requestor results in satisfaction of the resource allocation constraints, the memory slot allocation request is granted. The granting of the memory slot allocation request includes allocating the requested number of memory slots to the requestor. | 2021-10-28 |
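A toy interpretation of the two-threshold constraint check, assuming the first threshold is a per-class reserved minimum and the second a per-class ceiling; the abstract does not define the thresholds further, so that reading is an assumption.

```python
class SlotAllocator:
    """Sketch of class-based slot allocation with two thresholds per class."""

    def __init__(self, total_slots, limits):
        self.free = total_slots
        self.limits = limits                  # class -> (min_reserved, max_allowed)
        self.used = {cls: 0 for cls in limits}

    def request(self, cls, n):
        floor, ceiling = self.limits[cls]
        # Constraint 1: the requesting class may not exceed its own ceiling.
        if self.used[cls] + n > ceiling:
            return False
        # Constraint 2: honor every other class's reserved minimum.
        reserved_elsewhere = sum(
            max(f - self.used[c], 0)
            for c, (f, _) in self.limits.items() if c != cls)
        if n > self.free - reserved_elsewhere:
            return False
        self.used[cls] += n                   # grant: allocate the requested slots
        self.free -= n
        return True

alloc = SlotAllocator(100, {"gold": (20, 80), "bronze": (10, 40)})
print(alloc.request("gold", 60))    # True: within ceiling, bronze's 10 still safe
print(alloc.request("bronze", 35))  # True: within bronze's ceiling of 40
print(alloc.request("gold", 10))    # False: only 5 slots remain in the pool
```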
20210334142 | SYSTOLIC ARRAY-FRIENDLY DATA PLACEMENT AND CONTROL - The present disclosure relates to an accelerator for systolic array-friendly data placement. The accelerator may include: a systolic array comprising a plurality of operation units, wherein the systolic array is configured to receive staged input data and perform operations using the staged input data to generate staged output data, the staged output data comprising a number of segments; a controller configured to execute one or more instructions to generate a pattern generation signal; a data mask generator; and a memory configured to store the staged output data using the generated masks. The data mask generator may include circuitry configured to: receive the pattern generation signal from the controller, and, based on the received signal, generate a mask corresponding to each segment of the staged output data. | 2021-10-28 |
20210334143 | SYSTEM FOR COOPERATION OF DISAGGREGATED COMPUTING RESOURCES INTERCONNECTED THROUGH OPTICAL CIRCUIT, AND METHOD FOR COOPERATION OF DISAGGREGATED RESOURCES - A system for cooperation of disaggregated computing resources interconnected through an optical circuit, and a method for cooperation of disaggregated resources are disclosed. Functional block devices such as a processor block, an accelerator block, and a memory fabric block exist at a remote location, and these three types of remote functional block devices are interconnected and interoperated in a specific program to perform a cooperative computation and processing process. Accordingly, the system shares data and information of a memory existing in each block through optical signal interconnection that provides low-latency, fast processing, and wide bandwidth, and maintains cooperation and memory coherency. | 2021-10-28 |
20210334144 | RESOURCE ALLOCATION - An apparatus may include first and second processors. A first user may be bound to the first processor such that processes of the first user execute on the first processor and do not execute on the second processor. A second user may be bound to the second processor such that processes of the second user execute on the second processor and do not execute on the first processor. | 2021-10-28 |
20210334145 | RESOURCE ALLOCATION DEVICE, RESOURCE MANAGEMENT SYSTEM, AND RESOURCE ALLOCATION PROGRAM - [Problem] To achieve resource allocation suitable for both a resource providing side and a using side. | 2021-10-28 |
20210334146 | PROVISIONING SET OF MULTI-TENANT CLOUD APPLICATIONS WITH UNIFIED SERVICE - According to some embodiments, methods and systems may be associated with a cloud computing environment. A unified provisioning service may include a plan configuration data store that contains information associated with a combined service representing a plurality of multi-tenant cloud applications. A cloud platform provisioning framework may receive an indication of a subscription request for the combined service from a consumer via a Software as a Service (“SaaS”) marketplace and access, responsive to the received indication, a dependent service framework of the unified provisioning service. The cloud platform provisioning framework may then receive dependency subscription data from the unified provisioning service, and, based on the dependency subscription data, arrange for the consumer to be subscribed to each of the plurality of multi-tenant cloud applications. | 2021-10-28 |
20210334147 | SYSTEM AND METHOD OF UPDATING TEMPORARY BUCKETS - An illustrative embodiment disclosed herein is an apparatus including a processor having programmed instructions that identify a temporary bucket linked to one or more objects of a main bucket, detect that an object is uploaded to the main bucket, determine whether the object has an object attribute satisfying an object attribute relationship, and responsive to determining that the object has the object attribute that satisfies the object attribute relationship, add, to the temporary bucket, a link to the object. | 2021-10-28 |
20210334148 | SYSTEM AND METHOD FOR PROVIDING A DECLARATIVE NON CODE SELF-LEARNING ADVISORY FRAMEWORK FOR ORCHESTRATION BASED APPLICATION INTEGRATION - In accordance with an embodiment, described herein are systems and methods for supporting a declarative non code self-learning advisory framework in an orchestration based application integration. The systems and methods can provide an advisory framework as a component of an integration platform which can allow declaratively defined recommendations, guidance, warnings etc. to be shown to the consumer of the platform on occurrence of certain events. The advisory framework can provide benefits such as: 1) allowing any entity to declaratively define/modify the rules and advices which will immediately get reflected across the customer fleet without dependency on the product's release cadence; 2) where such updates to declaratively defined rules and advices do not involve any code changes to the product; 3) comprises a structure which is generic and not component specific; and 4) can have self-learning capabilities from the generated product metrics. | 2021-10-28 |
20210334149 | API ADAPTER CREATION DEVICE, API ADAPTER CREATION METHOD, AND API ADAPTER CREATION PROGRAM - [Problem] An API adapter of a wholesale service provided in a coordination execution apparatus of a wholesale service. | 2021-10-28 |
20210334150 | COPYING AND PASTING METHOD, DATA PROCESSING APPARATUS, AND USER EQUIPMENT - A copying and pasting method includes: obtaining a first to-be-recognized fingerprint; selecting, if the first to-be-recognized fingerprint is a first preset fingerprint, copied content from a to-be-processed interface based on a touch operation acting on a touchscreen; obtaining a pasting instruction; and pasting the copied content into a target area according to the pasting instruction. This application further provides a data processing apparatus and user equipment that may implement the foregoing method. This application can improve copying and pasting efficiency. | 2021-10-28 |
20210334151 | DYNAMIC COMMUNICATIONS PATHS FOR SENSORS IN AN AUTONOMOUS VEHICLE - Dynamic communications paths for sensors in an autonomous vehicle, comprising: detecting a fault associated with a first sensor of a plurality of sensors associated with a same sensing space of the autonomous vehicle; severing, in response to detecting the fault, a first communications path in a switched fabric between a processing unit and the first sensor; and establishing, via the switched fabric, in response to detecting the fault, a second communications path between the processing unit and a second sensor of the plurality of sensors. | 2021-10-28 |
20210334152 | SEMICONDUCTOR DEVICE AND SYSTEM USING THE SAME - A semiconductor device has a timer unit and a processing unit. The timer unit includes a binary counter and a first converter that converts a first count value output from the binary counter to a gray code for output as first gray code data. The processing unit includes a first synchronizer that captures the first gray code data transferred from the timer unit in synchronization with the system clock signal and outputs the captured first gray code data as second gray code data, and a fault detection unit that generates data for fault detection based on the first gray code data transferred from the timer unit and compares a second count value based on the second gray code data with a third count value based on the data for fault detection. | 2021-10-28 |
20210334153 | REMOTE ERROR DETECTION METHOD ADAPTED FOR A REMOTE COMPUTER DEVICE TO DETECT ERRORS THAT OCCUR IN A SERVICE COMPUTER DEVICE - A remote error detection method is provided. A service computer device stores error log collection (ELC) data that are related to the service computer device, and generates and transmits an alert signal to a remote computer device when a baseboard management controller (BMC) thereof determines that a predetermined trigger event has occurred. The remote computer device receives the ELC data after the service computer device sends the alert signal to the remote computer device. | 2021-10-28 |
20210334154 | ENRICHED HIGH FIDELITY METRICS - A method including receiving events from different data sources for a service automatically executing in an enterprise system. A first event is enriched by providing the first event with first metadata that associates the first event with a first application used by the service. The first event is assigned to a time slice associated with the first application. A second event is enriched in a similar manner. A correlation graph of nodes and edges is built using the enriched events, with nodes representing the events and edges indicating relationships between the events. A third event indicating a fault in the first application associated with the first node is received. The source of the error for the third event is identified using the second updated correlation graph and the time slice. The source of the error is then mitigated. | 2021-10-28 |
20210334155 | AUTOMATED AGENT FOR PROACTIVELY ALERTING A USER OF L1 IT SUPPORT ISSUES THROUGH CHAT-BASED COMMUNICATION - An automated agent may communicate with a user via a chat channel to proactively alert the user of an L1 IT support issue. The L1 IT support issue may be determined based on monitoring indications of human-initiated activities maintained by a system of record, and may, prior to the automated agent's alert, be unknown to the user. In some instances, a natural language understanding (NLU) module may be used to identify an entity and intent from the indications of human-initiated activities, and the L1 IT support issue may be determined based on the determined entity and intent. After alerting the user of the L1 IT support issue, the automated agent may inform, via the chat channel, the user of a remediation step available to address the L1 IT support issue. Upon obtaining the user's permission, the automated agent may perform the remediation step to address the L1 IT support issue. | 2021-10-28 |
20210334156 | AUTOMATED AGENT FOR PROACTIVELY ALERTING A USER OF L1 IT SUPPORT ISSUES THROUGH CHAT-BASED COMMUNICATION - An automated agent may communicate with a user via a chat channel to proactively alert the user of an L1 IT support issue. The L1 IT support issue may be determined based on monitoring indications of human-initiated activities maintained by a system of record, and may, prior to the automated agent's alert, be unknown to the user. In some instances, a natural language understanding (NLU) module may be used to identify an entity and intent from the indications of human-initiated activities, and the L1 IT support issue may be determined based on the determined entity and intent. After alerting the user of the L1 IT support issue, the automated agent may inform, via the chat channel, the user of a remediation step available to address the L1 IT support issue. Upon obtaining the user's permission, the automated agent may perform the remediation step to address the L1 IT support issue. | 2021-10-28 |
20210334157 | RESILIENCY SCHEME TO ENHANCE STORAGE PERFORMANCE - A storage system has a resiliency scheme to enhance storage system performance. The storage system composes a RAID stripe. The storage system mixes an ordering of portions of the RAID stripe, based on reliability differences across portions of the solid-state memory. The storage system writes the mixed ordering RAID stripe across the solid-state memory. | 2021-10-28 |
20210334158 | METHOD AND APPARATUS FOR CACHING MTE AND/OR ECC DATA - A system and method for caching memory request verification data comprising a memory request generator configured to generate a memory request designating requested data and memory request verification data. A bus is configured to carry the memory request from the memory request generator to a cache memory that stores verification data and, upon receiving the memory request, is configured to: retrieve stored verification data from the cache memory, compare the stored verification data to the memory request verification data, and responsive to a match between the stored verification data and the memory request verification data, designate a memory request validation. Also part of the system are a memory controller configured to, responsive to a memory request validation, retrieve data specified in the memory request from a main memory and provide the data to the memory request generator over the bus, and a main memory configured to store the requested data. | 2021-10-28 |
20210334159 | METHODS FOR ERROR COUNT REPORTING WITH SCALED ERROR COUNT INFORMATION, AND MEMORY DEVICES EMPLOYING THE SAME - An apparatus comprising a memory array including a plurality of memory cells arranged in a plurality of columns and a plurality of rows is provided. The apparatus further comprises circuitry configured to perform an error detection operation on the memory array to determine a raw count of detected errors, to compare the raw count of detected errors to a threshold value to determine an over-threshold amount, to scale the over-threshold amount according to a scaling algorithm to determine a scaled error count, and to store the scaled error count in a user-accessible storage location. | 2021-10-28 |
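One plausible reading of the scaling step, sketched below: counts at or below the threshold are reported exactly, and only the over-threshold amount is scaled (here by a right-shift) so large counts fit a small user-accessible field. The shift-based scaling algorithm is an assumed example, not the patent's.

```python
def scaled_error_count(raw_count, threshold, scale_shift=4):
    """Sketch of scaled error-count reporting.

    Counts up to the threshold are exact; the over-threshold amount is
    right-shifted (divide-by-16 here) to trade resolution for range.
    """
    if raw_count <= threshold:
        return raw_count                        # below threshold: report exactly
    over = raw_count - threshold                # over-threshold amount
    return threshold + (over >> scale_shift)    # coarser resolution past threshold

print(scaled_error_count(100, threshold=128))   # 100  (below threshold: exact)
print(scaled_error_count(640, threshold=128))   # 160  (128 + 512/16)
```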
20210334160 | POOLING BLOCKS FOR ERASURE CODING WRITE GROUPS - A technique provides efficient data protection, such as erasure coding, for data blocks of volumes served by storage nodes of a cluster. Data blocks associated with write requests of unpredictable client workload patterns may be compressed. A set of the compressed data blocks may be selected to form a write group and an erasure code may be applied to the group to algorithmically generate one or more encoded blocks in addition to the data blocks. Due to the unpredictability of the data workload patterns, the compressed data blocks may have varying sizes. A pool of the various-sized compressed data blocks may be established and maintained from which the data blocks of the write group are selected. Establishment and maintenance of the pool enables selection of compressed data blocks that are substantially close to the same size and, thus, that require minimal padding. | 2021-10-28 |
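A sketch of the pooling idea: keep compressed blocks sorted by size and draw a write group from the window of consecutive blocks with the smallest size spread, so the group needs minimal padding before erasure coding. The window-based selection heuristic is an illustrative assumption.

```python
import bisect

class CompressedBlockPool:
    """Sketch of pooling variable-size compressed blocks for write groups."""

    def __init__(self):
        self.blocks = []                          # sorted list of (size, block_id)

    def add(self, size, block_id):
        bisect.insort(self.blocks, (size, block_id))

    def take_write_group(self, n):
        if len(self.blocks) < n:
            return None
        # Choose the window of n consecutive (size-sorted) blocks whose
        # size spread -- and therefore padding -- is smallest.
        best = min(range(len(self.blocks) - n + 1),
                   key=lambda i: self.blocks[i + n - 1][0] - self.blocks[i][0])
        group = self.blocks[best:best + n]
        del self.blocks[best:best + n]            # remove the group from the pool
        return group

pool = CompressedBlockPool()
for size, bid in [(4096, "a"), (1024, "b"), (1100, "c"), (1050, "d"), (8192, "e")]:
    pool.add(size, bid)
print(pool.take_write_group(3))  # [(1024,'b'), (1050,'d'), (1100,'c')]: ~no padding
```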
20210334161 | Storage Devices Hiding Parity Swapping Behavior - The present disclosure generally relates to methods of operating storage devices. The storage device comprises a controller comprising first random access memory (RAM). | 2021-10-28 |
20210334162 | FPGA ACCELERATION SYSTEM FOR MSR CODES - According to one general aspect, an apparatus may include a host interface circuit configured to receive offloading instructions from a host processing device, wherein the offloading instructions instruct the apparatus to compute an error correction code associated with a plurality of data elements. The apparatus may include a memory interface circuit configured to receive the plurality of data elements. The apparatus may include a plurality of memory buffer circuits configured to temporarily store the plurality of data elements. The apparatus may include a plurality of error code computation circuits configured to, at least in part, compute the error correction code without additional processing by the host processing device. | 2021-10-28 |
20210334163 | PROCESSING-IN-MEMORY (PIM) DEVICES - A method of performing a MAC arithmetic operation includes detecting error correction capability for first data when a command has a logic level combination for performing the MAC arithmetic operation; correcting an error, included in the first data, when the number of erroneous bits included in the first data is equal to or less than the error correction capability; and outputting, to a PIM controller, MAC calculation result data generated by performing the MAC arithmetic operation on the error-corrected first data. | 2021-10-28 |
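A toy version of the gated flow described (check the error count against the correction capability, correct, then run the MAC). The "ECC" below just compares words against known-good values, a stand-in for a real decoder, and the data are made up for illustration.

```python
def popcount(x):
    return bin(x).count("1")

def mac_with_ecc(first_data, golden_words, weights, correction_capability=1):
    """Sketch of the gated MAC flow: correct first, then multiply-accumulate."""
    corrected = []
    for word, expected in zip(first_data, golden_words):
        errors = popcount(word ^ expected)        # erroneous bits in this word
        if errors > correction_capability:
            raise ValueError("uncorrectable data; MAC result not produced")
        corrected.append(expected)                # 'correct' by restoring known bits
    # MAC arithmetic operation on the error-corrected data.
    return sum(c * w for c, w in zip(corrected, weights))

data   = [0b1011, 0b0101]     # one single-bit error in the first word
golden = [0b1010, 0b0101]
print(mac_with_ecc(data, golden, weights=[2, 3]))   # -> 35 (10*2 + 5*3)
```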
20210334164 | DEVICE RECOVERY MECHANISM - An apparatus and a method for recovering from a fault on a device, the method performed at the device comprising: initiating, with a bootloader, a recovery mechanism in response to detection of a fault with a first application, where the recovery mechanism comprises: obtaining, from storage on the device, location information identifying a first storage location for recovery software; obtaining, from the first storage location, the recovery software; obtaining, using the recovery software, a software update from a second storage location. | 2021-10-28 |
20210334165 | SNAPSHOT CAPABILITY-AWARE DISCOVERY OF TAGGED APPLICATION RESOURCES - Snapshot capability-aware discovery of tagged application resources is described. A backup server inputs an identifier of an application's resource from the application's host. If the backup server determines that the application resource identifier was input with a snapshot capable tag, and that the application's resource satisfies any of the snapshot policy rules, the backup server identifies the data protection policy for the satisfied snapshot policy rule. The backup server outputs a request to the application's host to use the identified data protection policy to create a snapshot of the application's resource that was input with any associated snapshot capable tag. | 2021-10-28 |
20210334166 | METHOD, DEVICE, AND COMPUTER STORAGE MEDIUM FOR MANAGING TRACE RECORD - Techniques manage tracking records in an application system which includes an active dump file and an inactive dump file. A set of tracking records indicating a state of the application system is received. The set of tracking records is added to the active dump file. A storage signal for storing the active dump file into a backup device associated with the application system is generated according to a determination that a size of the active dump file meets a predetermined size threshold and according to a determination that a state of the inactive dump file is a ready state. The ready state indicates that the inactive dump file is available for storing another set of tracking records to be received in the future. Accordingly, two dump files may alternately store tracking records, and copies of the dump files may be continuously stored into a backup device to improve reliability. | 2021-10-28 |
20210334167 | EFFICIENT MANAGEMENT OF POINT IN TIME COPIES OF DATA IN OBJECT STORAGE - A system, according to one embodiment, includes: a processor, as well as logic that is integrated with the processor, executable by the processor, or integrated with and executable by the processor. The logic is configured to: send, by the processor, point in time copies of data to an object storage system. A directive for manipulating the point in time copies of the data is also sent to the object storage system by the processor. Moreover, the point in time copies of the data are manipulated by a storlet on the object storage system according to the directive. | 2021-10-28 |
20210334168 | COORDINATING BACKUP CONFIGURATIONS FOR A DATA PROTECTION ENVIRONMENT IMPLEMENTING MULTIPLE TYPES OF REPLICATION - Described is a system for coordinating backup configurations for a data protection environment that implements multiple types of replication. The system may provide an efficient coordination tool that discovers the types of replication implemented by storage arrays within a data protection environment and then processes backup data according to the types of replication. For example, as part of a data protection policy, a backup facility may continuously create point-in-time copies (e.g. snapshots) of storage resources by leveraging consistency groups. Accordingly, the system may categorize such consistency groups based on the type of replication implemented by associated storage arrays. Based on the categorization, the system coordinates a backup process for all storage arrays associated with the backup facility without requiring specialized hardware. | 2021-10-28 |
20210334169 | VENDOR-NEUTRAL MODELS OF VENDORS' APPLICATION RESOURCES - Vendor-neutral models of vendors' application resources are described. A host outputs capabilities of data protection operations which are specified by a vendor of an application that is installed on the host. The host inputs a vendor-neutral version of a data protection operation, based on any of the capabilities, for a resource of the application. The host uses a vendor-neutral model of the resource of the application to perform the vendor-neutral version of the data protection operation on the application resource. | 2021-10-28 |
20210334170 | AUTOMATION OF DATA STORAGE ACTIVITIES - A system receives data storage workflow activities that include computer-executable instructions for carrying out data storage workflow in a network data storage system. Once the workflow is received, the system deploys the workflow to one or more workflow engines that can execute the various data storage activities related to the workflow. Prior to executing a data storage activity, the system can determine which workflow engine to use based on an allocation scheme. | 2021-10-28 |
20210334171 | DISTRIBUTED CONTENT INDEXING ARCHITECTURE WITH SEPARATELY STORED FILE PREVIEWS - An improved content indexing (CI) system is disclosed herein. For example, the improved CI system may include a distributed architecture of client computing devices, media agents, a single backup and CI database, and a pool of servers. After a file backup occurs, the backup and CI database may include file metadata indices and other information associated with backed up files. Servers in the pool of servers may, in parallel, query the backup and CI database for a list of files assigned to the respective server that have not been content indexed. The servers may then request a media agent to restore the assigned files from secondary storage and provide the restored files to the servers. The servers may then content index the received restored files. Once the content indexing is complete, the servers can send the content index information to the backup and CI database for storage. | 2021-10-28 |
20210334172 | SYSTEMS AND METHODS FOR CONTINUOUS DATA PROTECTION WITH NEAR ZERO RECOVERY POINT - Example embodiments relate generally to systems and methods for continuous data protection (CDP) and more specifically to an input and output (I/O) filtering framework and log management system to seek a near-zero recovery point objective (RPO). | 2021-10-28 |
20210334173 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - The present technology relates to an electronic device. A storage device according to the present technology includes a plurality of memory devices and a memory controller. Each of the plurality of memory devices includes a plurality of memory blocks. The memory controller detects a defective memory device among the plurality of memory devices and allocates normal blocks included in the defective memory device to an over-provisioning area used to perform a background operation on the plurality of memory devices. | 2021-10-28 |
20210334174 | DYNAMICALLY ALLOCATING STREAMS DURING RESTORATION OF DATA - The systems and methods described herein dynamically allocate streams when restoring data from databases. In some embodiments, the system and methods restore data from a database by determining a number of streams to allocate to the database for restoring files of data from the database. The determined number of streams may be based on a total amount of data within the database, and/or may be based, at least in part, on the previous number of streams used during backup operations, in order to balance the benefit of allocating streams to a restoration of data with any detriments associated with changing the number of streams from the number used during previous backup operations. | 2021-10-28 |
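A sketch of the balancing act in the last sentence: size the restore stream count from the data volume, but move only one step away from the backup-time count so the benefit of more streams is weighed against the cost of changing the count. The sizing constants and the one-step rule are illustrative assumptions.

```python
def streams_for_restore(total_bytes, backup_streams, max_streams=16,
                        bytes_per_stream=64 * 2**30):
    """Pick a restore stream count from data volume, biased toward the
    stream count used during the previous backup operations."""
    by_size = max(1, min(max_streams, total_bytes // bytes_per_stream))
    # Move at most one step away from the backup-time stream count.
    if by_size > backup_streams:
        return backup_streams + 1
    if by_size < backup_streams:
        return backup_streams - 1
    return backup_streams

print(streams_for_restore(512 * 2**30, backup_streams=4))  # 5: size suggests 8, step up once
print(streams_for_restore(32 * 2**30, backup_streams=4))   # 3: size suggests 1, step down once
```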
20210334175 | RECOVERY MANAGEMENT SYSTEM AND METHOD FOR RESTORING A COMPUTING RESOURCE - Examples described herein relate to a method, a system, and a non-transitory machine-readable medium for restoring a computing resource. The method may include determining whether the computing resource is required to be restored on a recovery node using a backup of the computing resource stored in a backup storage node. A resource restore operation may be triggered on the recovery node in response to determining that the computing resource is required to be restored. The resource restore operation includes copying a subset of the objects from the backup to the recovery node to form, from the subset of objects, a partial filesystem instance of the computing resource on the recovery node that is operable as a restored computing resource on the recovery node. | 2021-10-28 |
20210334176 | High performance distributed system of record with Unspent Transaction Output (UTXO) database snapshot integrity - A method operative in association with a set of transaction handling computing elements that comprise a network core that receive and process transaction requests into an append-only immutable chain of data blocks, wherein a data block is a collection of transactions, and wherein presence of a transaction recorded within a data block is verifiable via a cryptographic hash, and wherein Unspent Transaction Output (UTXO) data structures supporting the immutable chain of data blocks are maintained in a UTXO database, wherein a UTXO is an output from a finalized transaction that contains a value. The technique herein includes periodically snapshotting a given portion of the UTXO database to generate a hash. The hash of the snapshot is recorded within the immutable chain of data blocks, and preferably within a given block header. In response to receipt of a recovery request, and to facilitate recovery of the system to a provably-known state, a consensus algorithm is executed over the UTXO snapshot. | 2021-10-28 |
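The snapshot-hash step can be shown in a few lines: serialize the UTXO entries canonically, hash them, and record the digest in a block header so recovery can verify a provably-known state. The JSON/SHA-256 encoding is an assumption; the patent only requires a cryptographic hash.

```python
import hashlib
import json

def utxo_snapshot_hash(utxo_set):
    """Deterministic hash over a portion of the UTXO database.

    Serializing sorted entries makes the hash independent of insertion
    order, so any node can recompute and verify it during recovery.
    """
    canonical = json.dumps(sorted(utxo_set.items()), separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

utxos = {
    "tx9f3a:0": {"value": 50, "owner": "alice"},
    "tx11c2:1": {"value": 20, "owner": "bob"},
}
block_header = {"height": 7001, "utxo_snapshot_hash": utxo_snapshot_hash(utxos)}
print(block_header)   # recovery compares a recomputed hash against this field
```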
20210334177 | FLEXIBLE BYZANTINE FAULT TOLERANCE - A method and system for performing a flexible Byzantine fault tolerant (BFT) protocol. The method includes sending, from a client device, a proposed value to a plurality of replica devices and receiving, from at least one of the plurality of replica devices, a safe vote on the proposed value. The replica device sends the safe vote, based on a first quorum being reached, to the client device and each of the other replica devices of the plurality of replica devices. The method further includes determining that a number of received safe votes for the proposed value meets or exceeds a second quorum threshold, selecting the proposed value based on the determination, and setting a period of time within which to receive additional votes. The method further includes, based on the period of time elapsing without receiving the additional votes, committing the selected value for the single view. | 2021-10-28 |
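A sketch of the client-side commit rule as the abstract states it: select the value once the second quorum is met, then commit only if the wait window elapses without additional votes. Replicas, views, and networking are out of scope here; the late-vote callback is a test hook, not part of the protocol.

```python
import time

def try_commit(proposed_value, safe_votes, second_quorum, wait_seconds,
               additional_votes_arrived=lambda: False):
    """Client-side commit rule: second quorum, then a quiet wait window."""
    votes_for = [v for v in safe_votes if v == proposed_value]
    if len(votes_for) < second_quorum:
        return None                              # second quorum not reached
    deadline = time.monotonic() + wait_seconds   # window for additional votes
    while time.monotonic() < deadline:
        if additional_votes_arrived():
            return None                          # late votes: do not commit yet
        time.sleep(0.01)
    return proposed_value                        # quiet window elapsed: commit

print(try_commit("block-42", ["block-42"] * 5, second_quorum=4, wait_seconds=0.05))
```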
20210334178 | FILE SERVICE AUTO-REMEDIATION IN STORAGE SYSTEMS - System and method for automatic remediation for a distributed file system uses a file system (FS) remediation module running in a cluster management server and FS remediation agents running in a cluster of host computers. The FS remediation module monitors the cluster of host computers for related events. When a first file system service (FSS)-impacting event is detected, a cluster-level remediation action is executed at the cluster management server by the FS remediation module in response to the detected first FSS-impacting event. When a second FSS-impacting event is detected, a host-level remediation action is executed at one or more of the host computers in the cluster by the FS remediation agents in response to the detected second FSS-impacting event. | 2021-10-28 |
20210334179 | NETWORK STORAGE FAILOVER SYSTEMS AND ASSOCIATED METHODS - Failover methods and systems for a networked storage environment are provided. A filtering data structure and a metadata data structure are generated before starting a replay of a log stored in a non-volatile memory of a second storage node, during a failover operation initiated in response to a failure at a first storage node. The second storage node operates as a partner node of the first storage node to mirror at the log one or more write requests received by the first storage node prior to the failure, and data associated with the one or more write requests. The filtering data structure identifies each log entry and the metadata structure stores a metadata attribute of each log entry. The filtering data structure and the metadata structure are used for providing access to a logical storage object during the log replay from the second storage node. | 2021-10-28 |
20210334180 | NETWORK STORAGE FAILOVER SYSTEMS AND ASSOCIATED METHODS - Failover methods and systems for a networked storage environment are provided. In one aspect, a read request associated with a first storage object is received, during a replay of entries of a log stored in a non-volatile memory of a second storage node for a failover operation initiated in response to a failure at a first storage node. The second storage node operates as a partner node of the first storage node. The read request is processed using a filtering data structure that is generated from the log prior to the replay and identifies each log entry. The read request is processed when the log does not have an entry associated with the read request; when the filtering data structure includes an entry associated with the read request, the requested data is located at the non-volatile memory. | 2021-10-28 |
20210334181 | REMOTE COPY SYSTEM AND REMOTE COPY MANAGEMENT METHOD - A failure in a main site is recovered by operating in the same operational environment as a sub site. A remote copy system includes: a first storage system providing a main site; and a second storage system providing a sub site. A storage controller stores data and an operation processed in the main site as a main site journal, sends the main site journal to the sub site for sequential processing, stores data and an operation processed in the sub site as a sub site journal after a failover to the sub site is performed, and, when a failback to the main site is performed, cancels an unreflected operation that was stored in the main site journal prior to the failover but not processed in the sub site, and sequentially processes the sub site journal in the main site. | 2021-10-28 |
20210334182 | NETWORK STORAGE FAILOVER SYSTEMS AND ASSOCIATED METHODS - Failover methods and systems for a networked storage environment are provided. A metadata data structure is generated, before starting a replay of entries at a log stored in a non-volatile memory of a second storage node, during a failover operation initiated in response to a failure at a first storage node. The second storage node operates as a partner node of the first storage node, and the metadata structure stores a metadata attribute of each log entry. Furthermore, the metadata attribute of each log entry is persistently stored. The persistently stored metadata attribute is used to respond to a read request received during the replay by the second storage node, while a write request metadata attribute of a write request is used for executing the write request received by the second storage node during the replay. | 2021-10-28 |
20210334183 | METHOD, DEVICE, AND STORAGE MEDIUM FOR MANAGING STRIPE IN STORAGE SYSTEM - When managing stripes in a storage system, based on a determination that a failed storage device appears in first storage devices, a failed stripe involving the failed storage device is determined in a first redundant array of independent disks (RAID). An idle space that can be used to reconstruct the failed stripe is determined in the first storage devices. The failed stripe is reconstructed to second storage devices in the storage system based on a determination that the idle space is insufficient to reconstruct the failed stripe, the second storage devices being storage devices in a second RAID. An extent in the failed stripe is released in the first storage devices. Accordingly, it is possible to reconstruct a failed stripe as soon as possible to avoid data loss, and further to provide more idle spaces in the first storage devices for future reconstruction. | 2021-10-28 |
20210334184 | RELIABILITY CODING WITH REDUCED NETWORK TRAFFIC - This disclosure describes techniques that include implementing network-efficient data durability or data reliability coding on a network. In one example, this disclosure describes a method that includes generating a plurality of data fragments from a set of data to enable reconstruction of the set of data from a subset of the plurality of data fragments; storing, across a plurality of nodes in a network, the plurality of data fragments, wherein storing the plurality of data fragments includes storing a first fragment at a first node and a second fragment at a second node; and generating, by the first node, a plurality of secondary fragments derived from the first fragment to enable reconstruction of the first fragment from a subset of the plurality of secondary fragments; and storing the plurality of secondary fragments from the first fragment across a plurality of storage devices included within the first node. | 2021-10-28 |
20210334185 | TASK BASED SERVICE MANAGEMENT PLATFORM - A service management platform can implement functionality for one or more services, each of which can be independently used by a plurality of clients of the services. To activate the functionality of the one or more of the services, a hub server of the service management platform can assign a set of tasks to individual node servers for execution. The hub server can operate in a "supervisor environment" distinct from the processing environment used to execute the computationally intensive portions of the tasks. A task received at a node server can be managed by a supervisor process within the supervisor environment and executed by a native process within a native operating system environment, where the native process executes the computationally intensive calculations of the task and the supervisor process provides communications and data transfer between the native process and the rest of the service management platform. | 2021-10-28 |
20210334186 | DIAGNOSING AND MITIGATING MEMORY LEAK IN COMPUTING NODES - The present disclosure relates to systems, methods, and computer readable media for diagnosing and mitigating memory impact events, such as memory leaks, high memory usage, or other memory issues preventing a host node from performing as expected on a cloud computing system. The systems described herein involve receiving locally generated memory usage data from a plurality of host nodes. The systems described herein may aggregate the memory usage data and determine a memory impact diagnosis based on a subset of the aggregated memory usage data. The systems described herein may further apply a mitigation model for mitigating the memory impact event. The systems described herein provide an end-to-end solution for diagnosing and mitigating a variety of memory issues using a dynamic and scalable system that reduces a negative impact of memory leaks and other memory issues on a cloud computing system. | 2021-10-28 |
20210334187 | REAL-TIME POWER METER FOR OPTIMIZING PROCESSOR POWER MANAGEMENT - A scheme is provided for a processor to measure or estimate the dynamic capacitance (Cdyn) associated with an executing application and take a proportional throttling action. Proportional throttling has significantly less impact on performance and hence presents an opportunity to get back the lost bins and proportionally clip power if it exceeds a specification threshold. The ability to infer a magnitude of power excursion of a power virus event (and hence, the real Cdyn) above a set power threshold limit enables the processor to proportionally adjust the processor operating frequency to bring it back under the limit. With this scheme, the processor distinguishes a small power excursion versus a large one and reacts proportionally, yielding better performance. | 2021-10-28 |
20210334188 | Identification of Log Events for Computing Systems - Aspects of the disclosure relate to various systems and techniques that provide methods and systems for identifying log events for computing systems. For example, receiving a log event of an application, identifying at least one key word, and determining a number of instances in which the computing device has received the log event based on the at least one key word. Further, determining a value for the log event based on the determined number of instances, where the value is representative of an inverse relationship between the number of instances of receipt of the log event and a criticality of that log event, and initiating an action to address the event indicated by the log event based on a comparison between the determined value and a threshold. | 2021-10-28 |
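The inverse relationship reads naturally as score = 1 / instance count, sketched below; the scoring function and the threshold value are assumptions, since the abstract only fixes the direction of the relationship.

```python
from collections import Counter

seen = Counter()

def log_event_value(keywords):
    """Inverse-frequency scoring: the more often an event's keywords have
    been seen, the lower its criticality value."""
    key = frozenset(keywords)
    seen[key] += 1
    return 1.0 / seen[key]          # value falls as instances accumulate

THRESHOLD = 0.5

for event in (["disk", "error"], ["disk", "error"], ["kernel", "panic"]):
    value = log_event_value(event)
    if value > THRESHOLD:           # rare events clear the bar and trigger action
        print(f"ACT on {event} (value={value:.2f})")
    else:
        print(f"ignore {event} (value={value:.2f})")
```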
20210334189 | PROGRESSIVE ERROR HANDLING - Systems and methods herein describe receiving identification from a data pipeline, accessing first data offset information for a first data origin and second data offset information for a second data origin, bisecting the first data origin using the first data offset information, processing the data pipeline with the bisected first data offset information and the second data offset information, receiving a notification indicating a data pipeline status, and causing presentation of the notification on a graphical user interface of a computing device. | 2021-10-28 |
20210334190 | HIERARCHICAL ATTENTION TIME-SERIES (HAT) MODEL FOR BEHAVIOR PREDICTION - Aspects of the present disclosure provide techniques for behavior prediction. Embodiments include receiving activity data of a user, identifying user sessions comprising sets of time-stamped actions in the activity data, and segmenting the activity data into subsets corresponding to the user sessions. Embodiments include providing the subsets as inputs to a hierarchical attention time-series (HAT) model comprising: a first layer that determines attention scores for respective time-stamped actions in the subsets; and a second layer that determines attention scores for the subsets based on aggregations of the attention scores for the respective time-stamped actions. Embodiments include receiving, as outputs from the HAT model in response to the inputs: a prediction based on the subsets, the attention scores for the respective time-stamped actions, and the attention scores for the subsets; and explanatory information based on the attention scores for the respective time-stamped actions and the attention scores for the subsets. | 2021-10-28 |
20210334191 | PRESCRIPTIVE ANALYTICS BASED MULTI-PARAMETRIC DATABASE-COMPUTE TIER REQUISITION STACK FOR CLOUD COMPUTING - A multi-layer tier requisition stack may generate prescriptive tier requisition tokens for controlling requisition of database-compute resources at database-compute tiers. The input layer of the tier requisition stack may obtain historical data and database-compute tolerance data. The coefficient layer may be used to determine activity coefficients for each data type within the historical data. The activity coefficients may then be combined to determine an overall activity factor. The tolerance layer may be used to select an initial database-compute tier based on the activity factor. The tolerance layer may then increase from the initial database-compute tier to an adjusted database-compute tier while accommodating tolerances within the database-compute tolerance data. The requisition layer may generate a tier requisition token based on the adjusted database-compute tier and/or finalization directives obtained at the requisition layer. | 2021-10-28 |
20210334192 | METHOD FOR DETECTING MEMORY LEAK BASED ON LINUX KERNEL - A method for detecting a memory leak based on the Linux kernel, applied to the detection of memory leaks, comprises: reading a node, and acquiring the return addresses of the allocation functions of each of a plurality of memory pages and the number of the memory pages thereof; releasing the return addresses of the allocation functions and the number of the memory pages counted by the node; reading the node again, and acquiring the return address of each of the allocation functions and the number of the memory pages thereof; and comparing the numbers in each case to calculate a difference value. If the difference value is positive and monotonically increases, it is determined that a memory leak occurs in the memory pages correspondingly allocated by the allocation functions. During the detection of the memory leak, the detection method consumes less memory without affecting the efficiency of allocating and releasing memory. | 2021-10-28 |
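The comparison step lends itself to a short sketch: snapshot the node's per-return-address page counts several times and flag addresses whose counts strictly grow between every pair of reads. The snapshot dictionaries below are an assumed stand-in for what the kernel node would report.

```python
def find_leaks(snapshots):
    """Flag allocation sites whose page counts strictly increase across
    every successive snapshot pair (positive, monotonically increasing
    difference values)."""
    leaks = []
    for addr in snapshots[0]:
        counts = [snap.get(addr, 0) for snap in snapshots]
        diffs = [b - a for a, b in zip(counts, counts[1:])]
        if diffs and all(d > 0 for d in diffs):   # positive and growing
            leaks.append((addr, counts))
    return leaks

# Three successive reads of the node (return address -> pages allocated):
snaps = [
    {"0xffffffffc0a81000": 10, "0xffffffffc0a82000": 4},
    {"0xffffffffc0a81000": 14, "0xffffffffc0a82000": 4},
    {"0xffffffffc0a81000": 21, "0xffffffffc0a82000": 3},
]
print(find_leaks(snaps))   # [('0xffffffffc0a81000', [10, 14, 21])]: suspected leak
```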
20210334193 | TEST AUTOMATION FOR ROBOTIC PROCESS AUTOMATION - Test cases for existing workflows (or workflows under test) may be created and executed. A test case may be created for a workflow in production or one or more parts of the workflow, and the created test case for the workflow, or the one or more parts of the workflow, may be executed to identify environmental and/or automation issues for the workflow. A failed workflow test may be reported when the environmental and/or automation issues are identified. | 2021-10-28 |
20210334194 | GENERATION OF MICROSERVICES FROM A MONOLITHIC APPLICATION BASED ON RUNTIME TRACES - Systems, computer-implemented methods, and computer program products to facilitate generation of microservices from a monolithic application based on runtime traces are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a model component that learns cluster assignments of classes in a monolithic application based on runtime traces of executed test cases. The computer executable components can further comprise a cluster component that employs the model component to generate clusters of the classes based on the cluster assignments to identify one or more microservices of the monolithic application. | 2021-10-28 |
20210334195 | VERIFYING A SOFTWARE OR FIRMWARE UPDATE USING A CONTAINER BEFORE DEPLOYING TO A CLIENT - In some examples, a server receives configuration data from a device. The server receives a software or firmware update from a vendor and determines, based on the configuration data, that the update is installable on the device. The server creates and configures a container, based on the configuration data, to create a replica of the device. The server installs the update in the replica and performs multiple tests that generate logs. If the logs indicate that the update caused no issues, the server sends the update to the device. If the logs indicate that the update caused an issue, the server sends the update to the vendor. In response, the server receives, from the vendor, a modified update that addresses the issue, installs the modified update in the replica, performs the tests, determines that the modified update causes no issues, and sends the modified update to the device. | 2021-10-28 |
20210334196 | TEST CYCLE TIME REDUCTION AND OPTIMIZATION - A system that automatically reduces test cycle time to save resources and developer time. Rather than running the entire test plan, the system selects, from the full plan, the subset of tests that should run in a particular test cycle. The subset is intelligently selected using metrics such as association with changed code and whether tests are new or modified. | 2021-10-28 |
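A minimal sketch of that selection metric, assuming a test-to-files coverage mapping is available (the mapping and all names are invented):

```python
def select_tests(all_tests, coverage, changed_files, new_or_modified_tests):
    """Pick tests that touch changed code, plus tests that are themselves
    new or modified, instead of running the full plan."""
    selected = set(new_or_modified_tests)
    for test in all_tests:
        if coverage.get(test, set()) & changed_files:
            selected.add(test)
    return sorted(selected)

coverage = {"test_login": {"auth.py"},
            "test_cart": {"cart.py"},
            "test_report": {"report.py"}}
print(select_tests(coverage.keys(), coverage,
                   changed_files={"auth.py"},
                   new_or_modified_tests={"test_cart"}))
# ['test_cart', 'test_login']
```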
20210334197 | BROWSER-BASED TESTS FOR HYBRID APPLICATIONS USING A LAUNCHER PLUG-IN - The present disclosure is directed to systems and methods for testing a hybrid application. For example, a method may include: executing a plug-in on a computing device; in response to executing the plug-in, generating an emulator for testing a hybrid application, the emulator simulating an operating system of a client device such that, during testing, the hybrid application replicates operations of a browser operating on the client device; installing the hybrid application in the emulator; notifying a server that the hybrid application is ready for testing; executing instructions received from the server for testing the hybrid application; and providing results from testing the hybrid application to the server. | 2021-10-28 |
20210334198 | PROVING WHETHER SOFTWARE FUNCTIONALITY HAS CHANGED FOLLOWING A SOFTWARE CHANGE - Disclosed herein are techniques for using a line-of-code behavior and relation model to determine software functionality changes. Techniques include identifying a first portion of executable code and a second portion of executable code; accessing a first line-of-code behavior and relation model representing execution of functions of the first portion of executable code; constructing, based on the second portion of executable code, a second line-of-code behavior and relation model representing execution of functions of the second portion of executable code; performing a functional differential comparison of the first line-of-code behavior and relation model to the second line-of-code behavior and relation model; determining, based on the functional differential comparison, a status of functional equivalence between the first portion of executable code and the second portion of executable code; and generating, based on the determined status, a report identifying the status of functional equivalence. | 2021-10-28 |
20210334199 | GENERATING AND SIGNING A LINE-OF-CODE BEHAVIOR AND RELATION MODEL - Disclosed herein are techniques for generating and signing line-of-code behavior and relation models. Techniques include identifying executable code for a controller; performing a functional analysis of the executable code to determine a plurality of functions associated with the executable code and a plurality of relationships between the plurality of functions; generating, based on the determined plurality of functions and plurality of relationships, a line-of-code behavior and relation model for the executable code; performing a signature operation on the generated line-of-code behavior and relation model to produce a unique signature value associated with at least one of: the line-of-code behavior and relation model or a functional block of the line-of-code behavior and relation model; and linking the unique signature value to the line-of-code behavior and relation model. | 2021-10-28 |
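The application does not name a serialization or signature scheme, so the sketch below assumes JSON serialization and an HMAC standing in for the signature operation; the model contents are invented.

```python
import hashlib
import hmac
import json

def sign_model(model, key):
    """Produce a unique signature value over the serialized model and
    link it to the model by returning the two together."""
    payload = json.dumps(model, sort_keys=True).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"model": model, "signature": signature}

model = {
    "functions": ["read_sensor", "filter", "actuate"],
    "relations": [["read_sensor", "filter"], ["filter", "actuate"]],
}
signed = sign_model(model, key=b"controller-signing-key")
print(signed["signature"][:16], "...")  # the linked signature value
```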
20210334200 | STORING TRANSLATION LAYER METADATA IN HOST MEMORY BUFFER - An example method of storing translation layer metadata in a host memory buffer comprises: retrieving, from a first memory device of a memory system, translation layer metadata comprising one or more logical-to-physical (L2P) records, wherein an L2P record of the one or more L2P records maps a logical block address to a physical address identifying a memory block in the memory system; generating protection metadata for at least a portion of the translation layer metadata; and causing a host system connected to the memory system to store the portion of the translation layer metadata and the protection metadata in a host memory buffer residing on a second memory device of the host system. | 2021-10-28 |
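One plausible reading of the protection metadata is a checksum the device verifies before trusting data handed back from the host buffer. The sketch below assumes CRC32 and a fixed 16-byte record layout, neither of which is stated in the abstract.

```python
import struct
import zlib

def pack_l2p_portion(records):
    """records: list of (logical_block_address, physical_address) pairs.
    Returns the packed portion plus a CRC32 the device can later check
    before trusting what the host hands back."""
    blob = b"".join(struct.pack("<QQ", lba, pa) for lba, pa in records)
    return blob, zlib.crc32(blob)

def verify_l2p_portion(blob, crc):
    return zlib.crc32(blob) == crc

portion, crc = pack_l2p_portion([(0x100, 0xA000), (0x101, 0xA010)])
assert verify_l2p_portion(portion, crc)           # intact copy passes
corrupt = bytes([portion[0] ^ 1]) + portion[1:]
assert not verify_l2p_portion(corrupt, crc)       # a single-bit error is caught
```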
20210334201 | Storage Devices Having Minimum Write Sizes Of Data - The present disclosure generally relates to methods of operating storage devices. The storage device comprises a controller and a storage unit divided into a plurality of streams. The storage unit comprises a plurality of dies, where each die comprises two planes. One erase block from each plane of a die is selected for stream formation. Each erase block comprises a plurality of wordlines. A stream comprises one or two dies dedicated to storing parity data and a plurality of dies dedicated to storing user data. The stream further comprises space devoted to controller metadata. The storage device restricts a host device to sending write commands in a minimum write size to increase programming efficiency. The minimum write size equals one wordline from one erase block from each plane of each die in the stream dedicated to storing user data, minus the space dedicated to metadata. | 2021-10-28 |
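The minimum-write-size rule is simple arithmetic; with invented geometry (wordline size, die and plane counts, metadata reservation are placeholders, not figures from the application), it works out as follows:

```python
WORDLINE_BYTES = 16 * 1024  # capacity of one wordline in one erase block
PLANES_PER_DIE = 2          # each die contributes one erase block per plane
USER_DATA_DIES = 4          # dies in the stream storing user data (not parity)
METADATA_BYTES = 8 * 1024   # stream space reserved for controller metadata

min_write_size = (WORDLINE_BYTES * PLANES_PER_DIE * USER_DATA_DIES
                  - METADATA_BYTES)
print(min_write_size)  # 122880 bytes for this made-up geometry
```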
20210334202 | METHOD FOR ACCESSING FLASH MEMORY MODULE AND ASSOCIATED PACKAGE - The present invention provides a method for accessing a flash memory module is disclosed, wherein the flash memory module includes at least one flash memory chip, each flash memory chip includes a plurality of block, each block is implemented by a plurality of word lines, each word line corresponds to K pages, and each word line includes a plurality of memory cells supporting a plurality of states, and the method includes the steps of: receiving data from a host device; generating dummy data; and writing the data with the dummy data to a plurality of specific blocks, wherein for each of a portion of the word lines of the specific blocks, the dummy data is written into at least one of the K pages, and the data from the host device is written into the other page(s) of the K pages. | 2021-10-28 |
20210334203 | Condensing Logical to Physical Table Pointers in SSDs Utilizing Zoned Namespaces - The present disclosure generally relates to methods of operating storage devices. The storage device comprises a controller, random-access memory (RAM), and an NVM unit, wherein the NVM unit comprises a plurality of zones. The RAM comprises a logical-to-physical address (L2P) table for the plurality of zones. The L2P table comprises pointers that are associated with a logical block address (LBA) and the physical location of the data stored in the NVM, with one pointer per erase block or zone. When a command is received to read data within the NVM, the controller reads the L2P table to determine the LBA and associated pointer of the data. The controller can then determine which zone or erase block the data is stored in, and calculate various offsets of wordlines, pages, and page addresses to find the exact location of the data in the NVM. | 2021-10-28 |
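The offset calculation can be illustrated with invented geometry constants; only the one-pointer-per-zone structure comes from the abstract.

```python
PAGES_PER_WORDLINE = 4
WORDLINES_PER_ZONE = 256
LBAS_PER_PAGE = 8

def locate(lba, zone_table):
    """Resolve an LBA to (zone_start, wordline, page, page_offset) from a
    single per-zone pointer plus calculated offsets."""
    lbas_per_zone = LBAS_PER_PAGE * PAGES_PER_WORDLINE * WORDLINES_PER_ZONE
    zone = lba // lbas_per_zone
    offset = lba % lbas_per_zone
    wordline, rest = divmod(offset, LBAS_PER_PAGE * PAGES_PER_WORDLINE)
    page, page_offset = divmod(rest, LBAS_PER_PAGE)
    return zone_table[zone], wordline, page, page_offset

zone_table = {0: 0x0000, 1: 0x8000}   # one pointer per zone
print(locate(8200, zone_table))        # (32768, 0, 1, 0): lands in zone 1
```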
20210334204 | MEMORY SYSTEM, MEMORY CONTROLLER AND METHOD FOR OPERATING MEMORY SYSTEM - A memory system, a memory controller, and an operating method therefor. The memory system includes a first processor configured to determine a processor, among multiple processors including the first processor, to process read operations on logical addresses indicated by read commands, and to process a write operation on a logical address indicated by a write command; and a second processor, among the multiple processors, configured to process a read operation on a target logical address selected by the first processor among the logical addresses. The first processor searches for mapping information on a logical address corresponding to a read or write operation to be processed by the first processor by using a first map search engine, and the second processor searches for mapping information on the target logical address by using a second map search engine. This makes it possible to improve the performance of searching for mapping information in a read operation. | 2021-10-28 |
20210334205 | HOST-RESIDENT TRANSLATION LAYER WRITE COMMAND - A processing device in a memory system receives, from a host system, a host-resident translation layer read command comprising a physical address of data to be read from a memory device, wherein the physical address is indicated in at least a portion of a translation layer entry previously provided to the host system with a response to a host-resident translation layer write command and stored in a host-resident translation layer mapping table. The processing device further performs a read operation to read the data stored at the physical address from the memory device and sends, to the host system, the data from the physical address of the memory device. | 2021-10-28 |
20210334206 | Optimized Inline Deduplication - Methods, computer systems, and computer readable media are described. In a particular embodiment, a storage controller is configured to receive, from a host computing device, a request to perform a bulk array task and, in response to receiving the request, store an indication relating old keys of a mapping table to new keys, wherein both the old keys and the new keys correspond to the request. The storage controller is also configured to convey a response indicating completion of the request without prior access of user data and to update the mapping table to replace the old keys with the new keys. | 2021-10-28 |
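A toy model of the old-key/new-key indirection: acknowledge the bulk task immediately after recording the key relation, then rewrite the table lazily. The class layout and all names are assumptions.

```python
class MappingTable:
    def __init__(self, table):
        self.table = dict(table)   # key -> user data location
        self.pending = {}          # new_key -> old_key indications

    def bulk_copy(self, old_keys, new_keys):
        """Acknowledge the bulk task without touching user data."""
        self.pending.update(zip(new_keys, old_keys))
        return "done"              # response conveyed before any table update

    def lookup(self, key):
        key = self.pending.get(key, key)  # follow the indirection if present
        return self.table[key]

    def flush(self):
        """Later, replace old keys with new keys in the table itself."""
        for new, old in self.pending.items():
            self.table[new] = self.table[old]
        self.pending.clear()

m = MappingTable({"k1": "blockA"})
print(m.bulk_copy(["k1"], ["k2"]))  # 'done' immediately, no data moved
print(m.lookup("k2"))               # 'blockA' via the stored indication
m.flush()
print(m.lookup("k2"))               # 'blockA' directly from the table
```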
20210334207 | INFORMATION PROCESSING DEVICE, NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, AND INFORMATION PROCESSING SYSTEM - According to one embodiment, an information processing device includes a nonvolatile memory, an assignment unit, and a transmission unit. The assignment unit assigns logical address spaces to spaces. Each of the spaces is assigned to at least one write management area included in the nonvolatile memory. The write management area is a unit of area in which the number of writes is managed. The transmission unit transmits a command for the nonvolatile memory and identification data of a space assigned to a logical address space corresponding to the command. | 2021-10-28 |
20210334208 | ADJUSTMENT OF GARBAGE COLLECTION PARAMETERS IN A STORAGE SYSTEM - A system, method, and machine-readable storage medium for performing garbage collection in a distributed storage system are provided. In some embodiments, an efficiency level of a garbage collection process is monitored. The garbage collection process may include removal of one or more data blocks of a set of data blocks that is referenced by a set of content identifiers. A set of slice services and the set of data blocks may reside in a cluster, and a set of filters may indicate whether the set of data blocks is in-use. At least one parameter of a filter of the set of filters may be adjusted (e.g., increased or reduced) if the efficiency level is below an efficiency threshold. Garbage collection may be performed on the set of data blocks in accordance with the set of filters. | 2021-10-28 |
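The abstract does not say what the filter parameter is; assuming a bloom-style in-use filter whose bit budget trades memory for false "in-use" hits, the feedback loop might look like this sketch:

```python
def tune_filter(filter_bits, efficiency, threshold=0.7,
                step=1.25, max_bits=1 << 24):
    """Return a new bit budget for the in-use filter. More bits means
    fewer false 'in-use' hits, so more dead blocks get collected."""
    if efficiency < threshold:
        return min(int(filter_bits * step), max_bits)
    return filter_bits

bits = 1 << 20
for eff in [0.9, 0.6, 0.5]:   # efficiency observed per GC cycle
    bits = tune_filter(bits, eff)
    print(eff, bits)          # the budget grows only after inefficient cycles
```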
20210334209 | METHOD AND APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING DATA STORAGE - The invention relates to a method, a non-transitory computer program product, and an apparatus for managing data storage. The method performed by a flash controller includes: obtaining information indicating a subregion to be activated, where the subregion is associated with a logical block address (LBA) range; triggering a garbage collection (GC) process performed in the background to migrate user data of all or a portion of the LBA range associated with the subregion to continuous physical addresses in a flash device; and updating content of a plurality of entries associated with the subregion according to migration results, where each entry includes information indicating the physical address at which user data of a corresponding logical address is physically stored in the flash device. | 2021-10-28 |
20210334210 | METHOD AND NETWORK DEVICE FOR PROCESSING SERVICE DATA - The present invention discloses a method and a network device for processing service data and relates to the technical field of virtualization. The method includes: calling a Virtio to establish a communication connection with a Vhost deployed on a virtual switch when a first process starts; applying for a target storage space by the first process and dividing the target storage space into a plurality of sub-storage spaces; and determining a target sub-storage space among the plurality of sub-storage spaces and processing the service data based on the target sub-storage space when a second process starts. The present disclosure may save processing resources of a network device and ensure the efficiency of processing the service data. | 2021-10-28 |
20210334211 | SLC CACHE ALLOCATION - Disclosed in some examples are memory devices which feature intelligent adjustments to SLC cache configurations that balance memory cell lifetime with performance. The size of the SLC cache can be adjusted during usage of the memory device based upon a write amplification (WA) metric of the memory device. In some examples, the adjustment is additionally based upon a logical saturation metric of the memory device (the percentage of valid user data written relative to the total user size of the device). | 2021-10-28 |
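A minimal sketch of the balancing rule; the thresholds and sizing policy are invented, and only the two inputs (write amplification and logical saturation) come from the abstract.

```python
def slc_cache_size(base_blocks, write_amp, logical_saturation):
    """Shrink the SLC cache as write amplification rises (to spare cell
    lifetime) and as the drive fills up (less spare area to lend)."""
    if write_amp > 2.0 or logical_saturation > 0.9:
        return base_blocks // 4   # protect endurance
    if write_amp > 1.5 or logical_saturation > 0.7:
        return base_blocks // 2
    return base_blocks            # plenty of headroom: favor speed

print(slc_cache_size(1024, write_amp=1.2, logical_saturation=0.5))   # 1024
print(slc_cache_size(1024, write_amp=1.8, logical_saturation=0.5))   # 512
print(slc_cache_size(1024, write_amp=2.3, logical_saturation=0.95))  # 256
```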
20210334212 | PROVIDING DATA VALUES USING ASYNCHRONOUS OPERATIONS AND BASED ON TIMING OF OCCURRENCE OF REQUESTS FOR THE DATA VALUES - A processing system server and methods for performing asynchronous data store operations. The server includes a processor which maintains a cache of objects in memory of the server. The processor executes an asynchronous computation to determine the value of an object. In response to receiving a request for the object occurring before the asynchronous computation has determined the value of the object, a value of the object is returned from the cache. In response to receiving a request for the object occurring after the asynchronous computation has determined the value of the object, a value of the object determined by the asynchronous computation is returned. The asynchronous computation may comprise at least one future, such as a ListenableFuture, or a process or thread. The asynchronous computation may determine the value of the object by querying at least one additional server. | 2021-10-28 |
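The timing rule translates directly to code. The sketch below uses Python's asyncio, with an asyncio.Task loosely standing in for the ListenableFuture mentioned in the abstract; the cache contents and delays are invented.

```python
import asyncio

class AsyncRefreshingCache:
    def __init__(self, cached):
        self.cached = dict(cached)
        self.tasks = {}

    async def _compute(self, key):
        await asyncio.sleep(0.05)    # stands in for querying another server
        return f"fresh-{key}"

    def start_refresh(self, key):
        self.tasks[key] = asyncio.ensure_future(self._compute(key))

    async def get(self, key):
        task = self.tasks.get(key)
        if task is not None and task.done():
            return task.result()     # request arrived after the computation
        return self.cached[key]      # request arrived before: serve the cache

async def main():
    cache = AsyncRefreshingCache({"price": "stale-price"})
    cache.start_refresh("price")
    print(await cache.get("price"))  # 'stale-price' (too early)
    await asyncio.sleep(0.1)
    print(await cache.get("price"))  # 'fresh-price' (computation done)

asyncio.run(main())
```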
20210334213 | Remap Address Space Controller - A Remap Address Space Controller controls access to an address space by selectively remapping a physical address of a transaction received from a controller to form a remapped physical address according to a current execution context of the controller. The selective remapping is based on a determination of whether the current execution context of the controller allows the transaction to access the address space. Based on that determination, the Remap Address Space Controller selectively provides the transaction, with the remapped physical address, to a memory bus. | 2021-10-28 |
20210334214 | VIRTUAL CACHE SYNONYM DETECTION USING ALIAS TAGS - A system and method of handling data access demands in a processor virtual cache that includes: determining if a virtual cache data access demand missed because of a difference between the context tag of the data access demand and that of a corresponding entry in the virtual cache with the same virtual address as the data access demand; in response to the virtual cache miss, determining whether the alias tag valid bit is set in the corresponding entry of the virtual cache; in response to the alias tag valid bit not being set, determining whether the virtual cache data access demand is a synonym of the corresponding entry in the virtual cache; and in response to the data access demand being a synonym of the corresponding entry in the virtual cache with the same virtual address but a different context tag, updating information in a tagged entry in an alias table. | 2021-10-28 |
20210334215 | METHODS FOR MANAGING INPUT-OUTPUT OPERATIONS IN ZONE TRANSLATION LAYER ARCHITECTURE AND DEVICES THEREOF - The disclosed technology relates to determining physical zone data within a zoned namespace solid state drive (SSD), associated with logical zone data included in a first received input-output operation, based on a mapping data structure within a namespace of the zoned namespace SSD. A second input-output operation specific to the determined physical zone data is generated, wherein the second input-output operation and the received input-output operation are of the same type. The generated second input-output operation is completed using the determined physical zone data within the zoned namespace SSD. | 2021-10-28 |
20210334216 | METHODS FOR MANAGING STORAGE SYSTEMS WITH DUAL-PORT SOLID-STATE DISKS ACCESSIBLE BY MULTIPLE HOSTS AND DEVICES THEREOF - Methods, non-transitory machine readable media, and computing devices that manage resources between multiple hosts coupled to dual-port solid-state disks (SSDs) are disclosed. With this technology, in-core conventional namespace (CNS) and zoned namespace (ZNS) mapping tables are synchronized by a host flash translation layer with on-disk CNS and ZNS mapping tables, respectively. An entry in one of the in-core CNS or ZNS mapping tables is identified based on whether a received storage operation is directed to a CNS or a ZNS of the dual-port SSD. The entry is further identified based on a logical address extracted from the storage operation. The storage operation is serviced using a translation in the identified entry for the logical address, when the storage operation is directed to the CNS, or a zone identifier in the identified entry for a zone of the ZNS, when the storage operation is directed to the ZNS. | 2021-10-28 |
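A toy dispatch over the two in-core tables; the table layouts and the namespace split point are invented for the example.

```python
CNS_MAP = {0x10: 0xA000}       # in-core CNS table: logical -> translation
ZNS_MAP = {0x8000: "zone-3"}   # in-core ZNS table: logical -> zone id
ZNS_START = 0x8000             # assume logical addresses at/above this are zoned

def service(op):
    """Pick the in-core table by namespace, then service the op with either
    a translation (CNS) or a zone identifier (ZNS)."""
    lba = op["lba"]
    if lba >= ZNS_START:
        return {"kind": "zns", "zone": ZNS_MAP[lba], **op}
    return {"kind": "cns", "physical": CNS_MAP[lba], **op}

print(service({"type": "read", "lba": 0x10}))
print(service({"type": "write", "lba": 0x8000}))
```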
20210334217 | SSD Address Table Cache Management based on Probability Distribution - Aspects of a storage device including a cache having a logical-to-physical (L2P) mapping table, a scratchpad buffer, and a controller are provided to optimize cache storage of L2P mapping information. A controller receives a random pattern of logical addresses and identifies each logical address within one of multiple probability distributions. Based on a frequency of occurrence of each logical address, the controller stores a control page including the logical address within either a partition of the L2P mapping table which is associated with the corresponding probability distribution, or in the scratchpad buffer. The frequency of occurrence of each logical address is determined based on whether the logical address is within one or more standard deviations from a mean of each probability distribution. As a result, frequently occurring control pages are stored in cache, while infrequently occurring control pages are stored in the scratchpad buffer. | 2021-10-28 |
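The placement rule can be sketched with two invented (mean, stddev) distributions and a one-sigma cutoff; the abstract's "one or more standard deviations" is left as a parameter.

```python
def place(lba, distributions, sigmas=1.0):
    """distributions: list of (mean, stddev) for observed LBA clusters.
    Addresses near a cluster mean are frequent, so their control pages
    stay in the cached L2P partition; outliers go to the scratchpad."""
    for i, (mean, std) in enumerate(distributions):
        if abs(lba - mean) <= sigmas * std:
            return f"L2P partition {i}"   # frequent: keep in cache
    return "scratchpad buffer"            # infrequent: keep out of cache

dists = [(1000, 50), (9000, 200)]
for lba in [1020, 9150, 5000]:
    print(lba, "->", place(lba, dists))
# 1020 -> L2P partition 0
# 9150 -> L2P partition 1
# 5000 -> scratchpad buffer
```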
20210334218 | MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD OF MEMORY SYSTEM - Embodiments of the present disclosure relate to a memory system, a memory controller, and an operation method of a memory system. According to embodiments of the present disclosure, the memory system, before updating a mapping table which includes mapping information between logical addresses and physical addresses, may assign a portion of a map cache area for caching a plurality of map segments in the mapping table as a map update area for updating the mapping table, and may load a subset of the plurality of map segments to the map update area. Accordingly, it is possible to quickly update a mapping table and to optimize update performance for a mapping table within a limit that guarantees caching performance to a predetermined level or higher. | 2021-10-28 |
20210334219 | ACCELERATION CIRCUITRY FOR POSIT OPERATIONS - Systems, apparatuses, and methods related to acceleration circuitry for posit operations are described. A first operand formatted in a universal number or posit format can be received by a first buffer resident on acceleration circuitry. A second operand formatted in a universal number or posit format can be received by a second buffer resident on the acceleration circuitry. An arithmetic operation, a logical operation, or both can be performed using processing circuitry resident on the acceleration circuitry using the first operand and the second operand. A result of the arithmetic operation, the logical operation, or both can be received by a third buffer resident on the acceleration circuitry. | 2021-10-28 |
20210334220 | MEMORY ACCESS CONTROL - Apparatus comprises a multi-threaded processing element to execute processing threads as one or more process groups each of one or more processing threads, each process group having a process group identifier unique amongst the one or more process groups and being associated by capability data with a respective memory address range in a virtual memory address space; and memory address translation circuitry to translate a virtual memory address to a physical memory address by a processing thread of one of the process groups; the memory address translation circuitry being configured to associate, with a translation of a given virtual memory address to a corresponding physical memory address, permission data defining one or more process group identifiers representing respective process groups permitted to access the given virtual memory address, and to inhibit access to the given virtual memory address in dependence on the capability data associated with the process group of the processing thread requesting the memory access and a detection of whether the permission data defines the process group identifier of the process group of the processing thread requesting the memory access. | 2021-10-28 |
20210334221 | ETHERNET-ATTACHED SSD FOR AUTOMOTIVE APPLICATIONS - A data storage device includes: a housing integrating a control logic, a data protection logic, and a non-volatile storage; and a network interface connector integrated into the housing and configured to be directly inserted into a network switch. The control logic is configured to store vehicle data including a video stream in the non-volatile storage. The video stream is received from a video camera that is connected to the network switch. The data protection logic is configured to detect a vehicle event and change an operating mode of the data storage device to a read-only mode, prohibiting the vehicle data stored in the non-volatile storage from being erased or tampered with. | 2021-10-28 |
20210334222 | TRUSTED INTERMEDIARY REALM - Memory access circuitry controls access to memory based on ownership information defining, for a given memory region, an owner realm specified from among two or more realms, each realm corresponding to at least a portion of a software process running on processing circuitry. The owner realm has the right to exclude other realms from accessing data stored within the given memory region. When security configuration parameters for a given realm specify that the given realm is associated with a trusted intermediary realm identified by the security configuration parameters, the trusted intermediary realm may be allowed to perform at least one realm management function for the given realm, e.g. provisioning of secret keys and/or saving/restoring of security configuration parameters. This can enable use cases where multiple instances of the same realm with common parameters need to be established on the same system at different times or on different systems. | 2021-10-28 |
20210334223 | DATA PROCESSING SYSTEM, CENTRAL ARITHMETIC PROCESSING APPARATUS, AND DATA PROCESSING METHOD - An optical line terminal (OLT) ( | 2021-10-28 |
20210334224 | PACKET PROCESSING SYSTEM, METHOD AND DEVICE UTILIZING A PORT CLIENT CHAIN - A packet processing system having each of a plurality of hierarchical clients and a packet memory arbiter serially communicatively coupled together via a plurality of primary interfaces thereby forming a unidirectional client chain. This chain is then able to be utilized by all of the hierarchical clients to write the packet data to or read the packet data from the packet memory. | 2021-10-28 |
20210334225 | LINK STARTUP METHOD OF STORAGE DEVICE, AND STORAGE DEVICE, HOST AND SYSTEM IMPLEMENTING SAME - A storage device capable of performing high-speed link startup and a storage system including the storage device are disclosed. A link startup method of the storage device includes receiving a line-reset signal from a host through a line connected to an input signal pin of the storage device, comparing the length of the received line-reset signal with a first reference time, and performing a link startup operation between the storage device and the host in a high-speed mode or a low-speed mode according to the comparison result. | 2021-10-28 |
20210334226 | SYSTEMS, METHODS AND APPARATUS FOR A STORAGE CONTROLLER WITH MULTI-MODE PCIE FUNCTIONALITIES - A standalone Storage Controller with PCIe Multi-Mode capability that can be configured as a PCIe Root-Complex (RC), an End-Point (EP), or a bridge (BR). In EP mode, the Storage Controller acts like a regular PCIe slave controller connected via a PCIe port to a PCIe Root-Complex provided by a Host. In RC mode, the Storage Controller acts as the PCIe configuration and management entity, that is, as a Host providing a PCIe Root-Complex, to which an add-in card or chip can attach via a PCIe port provided by the Storage Controller, supporting any type of Network Device Interface without an external Root-Complex. In bridge mode, the Storage Controller can act as a transparent or non-transparent bridge with either a Root-Complex or End-Point port for the internal connection to the bridge. | 2021-10-28 |
20210334227 | SYSTEMS AND METHODS OF CONVEYING INFORMATION IN A VEHICLE ACCORDING TO A HIERARCHY OF PREFERENCES - Embodiments described herein relate to systems and methods for conveying vehicle information according to a hierarchy of output devices. In one embodiment, a vehicle includes one or more output devices and a computing device communicatively coupled thereto. The computing device includes a processor, a non-transitory computer-readable memory communicatively coupled to the processor, and computer-readable instructions stored in the non-transitory computer-readable memory that, when executed by the processor, cause the system to detect the one or more output devices for conveying information in the vehicle, determine the hierarchy of using the one or more output devices, and convey information through the one or more output devices according to the hierarchy. | 2021-10-28 |