44th week of 2021 patent application highlights part 46 |
Patent application number | Title | Published |
20210342219 | SEPARATING PARITY DATA FROM HOST DATA IN A MEMORY SUB-SYSTEM - A system includes a memory device including a first unit, and a processing device, operatively coupled to the memory device, to perform operations including identifying a set of parity data on a volatile memory, determining whether the set of parity data satisfies a condition pertaining to a size of the set of parity data, and responsive to determining that the set of parity data does not satisfy the condition, appending parity data to the set of parity data. The parity data is generated based on a set of host data written on the first unit. | 2021-11-04 |
20210342220 | GENERATING ERROR CHECKING DATA FOR ERROR DETECTION DURING MODIFICATION OF DATA IN A MEMORY SUB-SYSTEM - First and second data are identified, such that the second data is based on a modification operation performed on the first data. First error-checking data comprising a Cyclic Redundancy Check (CRC) value of the first data is identified. Incremental error-checking data is generated based on a difference between the first data and the second data. Updated first error-checking data is generated based on a combination of the first error-checking data and the incremental error-checking data. The updated first error-checking data is compared to second error-checking data generated from a CRC value of the second data to determine whether the second data contains an error. | 2021-11-04 |
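The incremental update described above works because CRC is a linear (strictly, affine) function over GF(2): for equal-length buffers, `crc(a ^ b) == crc(a) ^ crc(b) ^ crc(zeros)`. A minimal Python sketch of this idea follows; the patent does not specify its exact combination step, so the zero-buffer correction term and use of `zlib.crc32` are assumptions of this illustration:

```python
import zlib

def incremental_crc32(old_crc: int, old: bytes, new: bytes) -> int:
    """Update a CRC-32 from the XOR difference of two equal-length buffers.

    CRC-32 is affine over GF(2), so for equal-length inputs:
        crc(a ^ b) == crc(a) ^ crc(b) ^ crc(zeros)
    Rearranging yields the updated CRC from the old CRC and the diff alone,
    without rereading the unchanged data.
    """
    assert len(old) == len(new)
    diff = bytes(a ^ b for a, b in zip(old, new))  # incremental error-checking data
    zero_term = zlib.crc32(bytes(len(old)))        # affine offset for this length
    return old_crc ^ zlib.crc32(diff) ^ zero_term

first = b"hello world, version 1"
second = b"hello world, version 2"
updated = incremental_crc32(zlib.crc32(first), first, second)
assert updated == zlib.crc32(second)  # agrees with a full recomputation
```

Comparing the updated CRC against a CRC computed directly from the modified data, as the abstract describes, then detects whether the modification operation itself corrupted the data.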
20210342221 | OPTIMAL READ BIAS TRACKING SYSTEM AND METHOD THEREOF - A memory system includes a memory device including a plurality of cells associated with multiple pages and a controller. The controller selects a read bias set in response to a read address and modifies the read bias set to generate a modified read bias set using one of a plurality of modification arrays. The controller performs a read operation on the select page using the modified read bias set. For a select read bias of the read bias set, the controller accumulates a fail bit count corresponding to a read operation using a select modified read bias of the modified read bias set into a plurality of accumulators using subtraction or addition. When an absolute value of the fail bit count in a certain accumulator is greater than a threshold, the controller shifts the select read bias in a correction direction by a specific magnitude corresponding to the accumulator sign. | 2021-11-04 |
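The accumulate-and-shift loop above can be sketched in a few lines. This is an illustration only: the sign convention (fail counts from positively-offset reads added, negatively-offset reads subtracted, with the bias shifted away from the worse side) and the parameter names are assumptions, not the patent's actual scheme:

```python
def track_read_bias(bias, fail_counts, threshold=100, step=1):
    """Shift a read bias toward the direction with fewer fail bits.

    fail_counts: iterable of (offset_sign, fail_bit_count) pairs from read
    operations performed with modified biases (bias + signed offset).
    Counts from positive offsets are added and counts from negative
    offsets subtracted, so the accumulator's sign points at the direction
    producing more errors.
    """
    acc = 0
    for sign, fails in fail_counts:
        acc += sign * fails
    if abs(acc) > threshold:
        # more fail bits on the positive side -> move bias negative, and vice versa
        bias -= step if acc > 0 else -step
    return bias
```

Running many such read/accumulate cycles would let the bias drift toward the read level with the lowest fail-bit count, which is the tracking behavior the abstract describes.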
20210342222 | DIRECT-INPUT REDUNDANCY SCHEME WITH ADAPTIVE SYNDROME DECODER - Methods, systems, and devices for operating memory cell(s) using a direct-input column redundancy scheme are described. A device that has read data from data planes may replace data from one of the planes with redundancy data from a data plane storing redundancy data. The device may then provide the redundancy data to an error correction circuit coupled with the data plane that stored the redundancy data. An output of the error correction circuit may be used to generate syndrome bits, which may be decoded by a syndrome decoder. The syndrome decoder may indicate whether a bit of the data should be corrected by selectively reacting to inputs based on the type of data to be corrected. For example, the syndrome decoder may react to a first set of inputs if the data bit to be corrected is a regular data bit, and react to a second set of inputs if the data bit to be corrected is a redundant data bit. | 2021-11-04 |
20210342223 | SYSTEMS AND METHODS FOR ADAPTIVE ERROR-CORRECTION CODING - A storage module is configured to store data segments, such as error-correcting code (ECC) codewords, within an array comprising a plurality of columns. The ECC codewords may comprise ECC codeword symbols. The ECC symbols of a data segment may be arranged in a horizontal arrangement, a vertical arrangement, a hybrid channel arrangement, and/or vertical stripe arrangement within the array. The individual ECC symbols may be stored within respective columns of the array (e.g., may not cross column boundaries). Data of an unavailable ECC symbol may be reconstructed by use of other ECC symbols stored on other columns of the array. | 2021-11-04 |
20210342224 | Failure Abatement Approach For A Failed Storage Unit - A method for execution by a vault management device of a storage network includes determining a failure impact level to vaults of the storage network based on a failed storage unit within the vaults, where the vaults include a first vault that is associated with a first set of storage units and a first decode threshold number, and a second vault that is associated with a second set of storage units and a second decode threshold number, and where the failure impact level is based on the number of non-failed storage units within each of the vaults. The method continues with determining a failure abatement approach based on the failure impact level. The method continues with facilitating the failure abatement approach. | 2021-11-04 |
20210342225 | Data Reconstruction in Distributed Storage Systems - A method of operating a distributed storage system includes identifying missing chunks of a file. The file is divided into stripes that include data chunks and non-data chunks. The method also includes identifying non-missing chunks available for reconstructing the missing chunks and reconstructing missing data chunks before reconstructing missing non-data chunks using the available non-missing chunks. | 2021-11-04 |
20210342226 | ITERATIVE INTEGER PROGRAMMING WITH LOAD BALANCE FOR CYCLIC WORKLOADS - A backup orchestrator for providing backup services to entities includes storage for storing recovery point objectives for the entities and a backup manager. The backup manager selects an optimization periodicity based on a number of backups to be generated to meet a portion of the recovery point objectives; makes a determination that at least one of the portion of the recovery point objectives has a maximum allowable unbacked up period of time that is greater than the optimization periodicity; in response to the determination: load balances the number of backups across multiple optimization periods, based on the optimization periodicity, of a balanced backup schedule; selects a backup generation time for each of the to be generated backups in each of the optimization periods of the balanced backup schedule; and generates the number of backups using the balanced backup schedule. | 2021-11-04 |
20210342227 | STORING METADATA AT A CLOUD-BASED CENTER, AND RECOVERING BACKUP DATA STORED REMOTELY FROM THE CLOUD-BASED CENTER - A Remote Metadata Center provides Disaster Recovery (DR) testing and metadata backup services to multiple business organizations. Metadata associated with local data backups performed at business organizations is transmitted to the Remote Metadata Center. Corresponding backup data is stored in a data storage system that is either located at the business organization or at a data storage facility that is at a different location than the Remote Metadata Center and the business organization. DR testing can be staged from the Remote Metadata Center using the metadata received, optionally with assistance from an operator at the business organization and/or the data storage facility. | 2021-11-04 |
20210342228 | Automated Data Restore - A method, apparatus, system, and computer program product for restoring data. The restoring of the data to a storage system from a storage medium is initiated by a computer system. Changes to an amount of space available in the storage system to restore the data are identified by the computer system while the data is being restored to the storage system. A restoring of the data to the storage system is placed on hold by the computer system when an amount of space needed to complete restoring the data is greater than the amount of space available to restore the data. | 2021-11-04 |
20210342229 | PERSISTENT MEMORY IMAGE CAPTURE - A memory image can be captured by generating metadata indicative of a state of volatile memory and/or byte-addressable PMEM at a particular time during execution of a process by an application. This memory image can be persisted without copying the in-memory data into a separate persistent storage by storing the metadata and safekeeping the in-memory data in the volatile memory and/or PMEM. Metadata associated with multiple time-evolved memory images captured can be stored and managed using a linked index scheme. A linked index scheme can be configured in various ways including a full index and a difference-only index. The memory images can be used for various purposes including suspending and later resuming execution of the application process, restoring a failed application to a previous point in time, cloning an application, and recovering an application process to a most recent state in an application log. | 2021-11-04 |
20210342230 | MEMORY IMAGE CAPTURE - A memory image can be captured by generating metadata indicative of a state of volatile memory and/or byte-addressable PMEM at a particular time during execution of a process by an application. This memory image can be persisted without copying the in-memory data into a separate persistent storage by storing the metadata and safekeeping the in-memory data in the volatile memory and/or PMEM. Metadata associated with multiple time-evolved memory images captured can be stored and managed using a linked index scheme. A linked index scheme can be configured in various ways including a full index and a difference-only index. The memory images can be used for various purposes including suspending and later resuming execution of the application process, restoring a failed application to a previous point in time, cloning an application, and recovering an application process to a most recent state in an application log. | 2021-11-04 |
20210342231 | HIGH PERFORMANCE PERSISTENT MEMORY - The embodiments described herein describe technologies for non-volatile memory persistence in a multi-tiered memory system including two or more memory technologies for volatile memory and non-volatile memory. | 2021-11-04 |
20210342232 | RECOVERING A VIRTUAL MACHINE AFTER FAILURE OF POST-COPY LIVE MIGRATION - Post-copy is one of the two key techniques (besides pre-copy) for live migration of virtual machines in data centers. Post-copy provides deterministic total migration time and low downtime for write-intensive VMs. However, if post-copy migration fails for any reason, the migrating VM is lost because the VM's latest consistent state is split between the source and destination nodes during migration. PostCopyFT provides a new approach to recover a VM after a destination or network failure during post-copy live migration using an efficient reverse incremental checkpointing mechanism. PostCopyFT was implemented and evaluated in the KVM/QEMU platform. Experimental results show that the total migration time of post-copy remains unchanged while maintaining low failover time, downtime, and application performance overhead. | 2021-11-04 |
20210342233 | ERROR DETECTION CIRCUIT - A circuit and method for verifying the operation of error checking circuitry. In one example, a circuit includes a memory, a first error checking circuit, a second error checking circuit, and a comparison circuit. The memory includes a data output. The first error checking circuit includes an input and an output. The input of the first error checking circuit is coupled to the data output of the memory. The second error checking circuit includes an input and an output. The input of the second error checking circuit is coupled to the data output of the memory. The comparison circuit includes a first input and a second input. The first input is coupled to the output of the first error checking circuit. The second input is coupled to the output of the second error checking circuit. | 2021-11-04 |
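The verification idea in this abstract — feed one memory output into two independent checkers and compare their results, so a mismatch flags a fault in the checking circuitry itself — can be modeled in software. The sketch below uses two independently-implemented parity computations as stand-ins for the two error checking circuits; parity is an assumption of this illustration, since the abstract does not say what check the circuits perform:

```python
def parity(word: int) -> int:
    """Checker A: parity of a 32-bit word via bit counting."""
    return bin(word).count("1") & 1

def parity_xor_fold(word: int) -> int:
    """Checker B: parity of a 32-bit word via XOR folding
    (an independent implementation of the same check)."""
    word ^= word >> 16
    word ^= word >> 8
    word ^= word >> 4
    word ^= word >> 2
    word ^= word >> 1
    return word & 1

def checkers_agree(data_out: int) -> bool:
    """Comparison circuit: both checkers receive the same memory data
    output; a mismatch indicates a fault in one of the checkers."""
    return parity(data_out) == parity_xor_fold(data_out)
```

Because both functions compute the same mathematical check by different routes, any disagreement isolates a defect in the checking logic rather than in the stored data, which is the property the circuit exploits.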
20210342234 | ERROR RECOVERY METHOD AND APPARATUS - An error recovery method and apparatus, and a system are disclosed. At least two CPUs in a lockstep mode can exit the lockstep mode when an error occurs in at least one CPU, and the CPU in which the error occurs and a type of the error are determined. When the error can be recovered, the CPU in which the error occurs can be recovered according to a correctly running CPU. This helps the at least two CPUs run again at a position at which a service program is interrupted. | 2021-11-04 |
20210342235 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE DISK - Techniques for managing a storage disk involve monitoring a duration of a fault of a faulted storage disk, wherein the faulted storage disk includes a first disk slice configured to store metadata and a second disk slice configured to store user data. The techniques further involve, in response to the duration reaching a first threshold value, replacing the first disk slice with a first available disk slice in a first non-faulted storage disk. The techniques further involve, in response to the duration reaching a second threshold value greater than the first threshold value, replacing the second disk slice with a second available disk slice in a second non-faulted storage disk. Accordingly, fault monitoring windows with different lengths are applied to disk slices for different logical tiers in the faulted storage disk. In this way, the reliability of data of a metadata tier can be effectively improved. | 2021-11-04 |
20210342236 | DATA RECOVERY WITHIN A MEMORY SUB-SYSTEM - A command to transfer data in a portion of a memory component to a recovery portion of a different memory component is received from a host system, wherein the portion of the memory component is associated with a portion of the memory component that has failed, and the data in the portion of the memory component is recovered and transferred to the recovery portion of the different memory component without moving or processing the data through the host system responsive to receipt of the command. | 2021-11-04 |
20210342237 | SNAPSHOT-BASED DISASTER RECOVERY ORCHESTRATION OF VIRTUAL MACHINE FAILOVER AND FAILBACK OPERATIONS - Snapshot-based disaster recovery (DR) orchestration systems and methods for virtual machine (VM) failover and failback do not require that VMs or their corresponding datastores be actively operating at the DR site before a DR orchestration job is initiated, i.e., before failover. An illustrative data storage management system deploys proprietary components at source data center(s) and at DR site(s). The proprietary components (e.g., storage manager, data agents, media agents, backup nodes, etc.) interoperate with each other and with the source and DR components to ensure that VMs will successfully failover and/or failback. DR orchestration jobs are suitable for testing VM failover scenarios (“clone testing”), for conducting planned VM failovers, and for unplanned VM failovers. DR orchestration jobs also handle failback and integration of DR-generated data into the failback site, including restoring VMs that never failed over to fully re-populate the source/failback site. | 2021-11-04 |
20210342238 | METHOD AND DEVICE FOR TESTING A TECHNICAL SYSTEM - A method for testing a technical system. The method includes: tests are carried out with the aid of a simulation of the system, the tests are evaluated with respect to a fulfillment measure of a quantitative requirement on the system and an error measure of the simulation, on the basis of the fulfillment measure and error measure, a classification of the tests as either reliable or unreliable is carried out, and a test database is improved on the basis of the classification. | 2021-11-04 |
20210342239 | METHOD AND DEVICE FOR TESTING A TECHNICAL SYSTEM - A method for testing a technical system. The method includes: tests are carried out with the aid of a simulation of the system, the tests are evaluated with respect to a fulfillment measure of a quantitative requirement on the system and an error measure of the simulation, on the basis of the fulfillment measure and error measure, a classification of the tests as either reliable or unreliable is carried out. | 2021-11-04 |
20210342240 | METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MONITORING STORAGE SYSTEM - The present disclosure relates to a method, an electronic device, and a computer program product for monitoring a storage system. For example, a method of monitoring a storage system is provided. This method may include setting a quota type of a folder to be monitored in the storage system to a monitored type. This method may further include acquiring quota monitoring data of which the quota type is the monitored type from a quota monitoring report associated with the storage system. In addition, this method may further include generating storage information of the folder based on the quota monitoring data. In this way, the time spent on monitoring the storage system can be shortened, the system resources can be saved, and ultimately, the user experience can be improved. | 2021-11-04 |
20210342241 | METHOD AND APPARATUS FOR IN-MEMORY FAILURE PREDICTION - A method and apparatus for predicting and managing a device failure include, responsive to a predicted failure of a memory device, the predicted failure being based on sensor data associated with the memory device, determining a further action for the memory device. | 2021-11-04 |
20210342242 | MEMORY SYSTEM - A memory system includes a nonvolatile semiconductor memory, a controller that controls the nonvolatile semiconductor memory, and a temperature sensor that acquires an operating temperature of at least one of the nonvolatile semiconductor memory and the controller. The controller calculates a temperature parameter based on operating temperatures acquired by the temperature sensor over a period of time, and switches between a plurality of operation settings in which electric power consumptions of the memory system vary, based on the temperature parameter. | 2021-11-04 |
20210342243 | SYSTEMS AND METHODS FOR SYSTEM POWER CAPPING BASED ON COMPONENT TEMPERATURE MARGINS - A closed-loop control system may include an integrator configured to, based on an error between a setpoint temperature and a measured temperature, determine an integrated error indicative of a time-based integral of the error, a proportional-integral controller configured to, based on the integrated error and the error, generate a proportional-integral output driving signal, and control logic configured to control power consumption of a component based on the proportional-integral output driving signal. | 2021-11-04 |
20210342244 | DISTRIBUTED ARCHITECTURE FOR FAULT MONITORING - Systems and methods for detecting an anomaly in a power semiconductor device are disclosed. A system includes a server computing device and one or more local components communicatively coupled to the server computing device. Each local component includes sensors positioned adjacent to the power semiconductor device for sensing properties thereof. Each local component receives data corresponding to one or more sensed properties of the power semiconductor device from the sensors and transmits the data to the server computing device. The server computing device utilizes the data, via a machine learning algorithm, to generate a set of eigenvalues and associated eigenvectors and select a selected set of eigenvalues and associated eigenvectors. Each local component conducts a statistical analysis of the selected set of eigenvalues and associated eigenvectors to determine that the data is indicative of the anomaly. | 2021-11-04 |
20210342245 | Method and Apparatus for Adjusting Host QOS Metrics Based on Storage System Performance - A storage system has a QOS recommendation engine that monitors storage system operational parameters and generates recommended changes to host QOS metrics (throughput, bandwidth, and response time requirements) based on differences between the host QOS metrics and storage system operational parameters. The recommended host QOS metrics may be automatically implemented to adjust the host QOS metrics. By reducing host QOS metrics during times where the storage system is experiencing high volumes of workload, it is possible to throttle workload at the host computer rather than requiring the storage system to expend processing resources associated with queueing the workload prior to processing. This can enable the overall throughput of the storage system to increase. When the workload on the storage system is reduced, updated recommended host QOS metrics are provided to enable the host QOS metrics to increase. Historical analysis is also used to generate recommended host QOS metrics. | 2021-11-04 |
20210342246 | Software Conversion Downtime Prediction Tool - Downtime resulting from converting a software program from a source system to a target system is forecast, explored, and optimized. A benchmark for a practice conversion test run (including a downtime component) is received as a first input and displayed for exploration. A second input is received comprising a statistic stored in a conversion database and reflecting a prior actual software conversion process run. The benchmark and the statistic are processed with reference to an expert rule set, to generate an optimized result comprising an updated benchmark having a changed downtime component. The updated benchmark, including the changed downtime component, is displayed. Processing may occur in conjunction with further input comprising an acceptance of a generated recommendation, and/or changing a data volume of the practice conversion test run. Embodiments may feed back to the conversion database statistics resulting from a formal software conversion run conducted according to the optimized result. | 2021-11-04 |
20210342247 | MATHEMATICAL MODELS OF GRAPHICAL USER INTERFACES - A graph model of a graphical user interface (GUI) may be generated by processing usage data of the GUI where the usage data comprises sequences of GUI pages and actions between GUI pages. The nodes of the graph model may be determined by obtaining GUI pages from the usage data, identifying dynamic GUI elements in the GUI pages, generating canonical GUI pages by modifying the GUI pages using the dynamic GUI elements, and creating graph nodes using the canonical GUI pages. The edges of the graph may be determined by processing actions from the GUI data that were performed by users to transition from one GUI page to another GUI page. The graph model of the GUI may be used for any appropriate application, such as determining statistics relating to the GUI or statistics relating to individual users of the GUI. | 2021-11-04 |
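The graph-construction procedure in this abstract (canonicalize pages by masking dynamic elements, then count action edges between canonical pages) can be sketched briefly. The masking rule below — replacing numeric runs with a `{id}` placeholder — is an illustrative assumption; the patent's method of identifying dynamic GUI elements is not specified here:

```python
import re
from collections import defaultdict

def canonicalize(page: str) -> str:
    """Mask dynamic elements (here: numeric IDs) so variants of the same
    GUI page collapse to one canonical node."""
    return re.sub(r"\d+", "{id}", page)

def build_gui_graph(sessions):
    """sessions: per-user lists of (page, action, next_page) triples taken
    from usage data. Returns edge counts keyed by
    (canonical_page, action, canonical_next_page)."""
    edges = defaultdict(int)
    for session in sessions:
        for page, action, nxt in session:
            edges[(canonicalize(page), action, canonicalize(nxt))] += 1
    return dict(edges)
```

The edge counts make it straightforward to derive per-transition usage statistics, one of the applications the abstract mentions.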
20210342248 | AN APPARATUS AND METHOD FOR MONITORING EVENTS IN A DATA PROCESSING SYSTEM - An apparatus and method are provided for monitoring events in a data processing system. The apparatus has first event monitoring circuitry for monitoring occurrences of a first event within a data processing system, and for asserting a first signal to indicate every m-th occurrence of the first event, where m is an integer of 1 or more. In addition second event monitoring circuitry is used to monitor occurrences of a second event within the data processing system, and to assert a second signal to indicate every n-th occurrence of the second event, where n is an integer of 1 or more. History maintenance circuitry then maintains event history information which is updated in dependence on the asserted first and second signals. Further, history analysis circuitry is responsive to an analysis trigger to analyse the event history information in order to detect a reporting condition when the event history information indicates that a ratio between occurrences of the first event and the occurrences of the second event is outside an acceptable range. The history analysis circuitry is then responsive to detection of the reporting condition to assert a report signal. This provides a particularly efficient and effective mechanism for monitoring ratios of events within a data processing system. | 2021-11-04 |
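The sampled-counting and ratio-analysis mechanism above is simple to model: count only every m-th occurrence of the first event and every n-th of the second, then, on an analysis trigger, check whether the sampled ratio falls outside an acceptable range. The range bounds and the behavior when no second events have been seen are assumptions of this sketch:

```python
class RatioMonitor:
    """Maintain history for every m-th occurrence of event A and every
    n-th occurrence of event B; report when the A:B ratio of the sampled
    counts leaves the acceptable range [low, high]."""

    def __init__(self, m=1, n=1, low=0.5, high=2.0):
        self.m, self.n = m, n
        self.low, self.high = low, high
        self.raw_a = self.raw_b = 0
        self.count_a = self.count_b = 0  # event history information

    def event_a(self):
        self.raw_a += 1
        if self.raw_a % self.m == 0:  # assert "first signal" every m-th time
            self.count_a += 1

    def event_b(self):
        self.raw_b += 1
        if self.raw_b % self.n == 0:  # assert "second signal" every n-th time
            self.count_b += 1

    def analyse(self) -> bool:
        """Analysis trigger: True means the reporting condition is met."""
        if self.count_b == 0:
            return self.count_a > 0  # assumption: any A with no B is out of range
        ratio = self.count_a / self.count_b
        return not (self.low <= ratio <= self.high)
```

Setting m and n above 1 mirrors the hardware motivation: the counters advance less often, so the history can be kept in small registers while still exposing a skewed event ratio.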
20210342249 | METHOD FOR DETECTING SAFETY-RELEVANT DATA STREAMS - Various embodiments of the present disclosure are directed to a method for detecting safety-relevant data streams which occur in a hardware system during the execution of at least one data processing task. In one example embodiment, the method includes the steps of: defining critical data via an interface, mapping of the hardware system onto a simulation model capable of running in a simulation environment; executing the at least one data processing task as a simulation with the simulation model in the simulation environment, monitoring the creation, transmission and deletion of the critical data and instances of the critical data in the simulation model during the execution of the at least one data processing task, and identifying and logging the safety-relevant data streams. | 2021-11-04 |
20210342250 | METHOD AND APPARATUS FOR VERIFYING A SOFTWARE SYSTEM - A method and apparatus for verifying a software system are provided. A data processing apparatus includes a processing unit and a memory unit communicatively coupled to the processing unit. The memory unit includes a simulation module and a verification module. The simulation module is configured to perform simulation of the software system for a first set of steps based on a first set of input values. The verification module is configured to instantaneously determine a state of the software system in which verification of the software system is to be initiated. The verification module is configured to initiate verification of the software system at the determined state, perform verification of the software system for a second set of steps based on a second set of input values, and output results of the verification of the software system on a display unit. | 2021-11-04 |
20210342251 | REVIEW PROCESS FOR EVALUATING CHANGES TO TARGET CODE FOR A SOFTWARE-BASED PRODUCT - Systems and methods can implement a review process to evaluate changes to target code as part of development cycles for a continuous integration, continuous deployment pipeline for software-based products. The system can aggregate data and determine if the target code has been modified preliminarily and then intelligently determine where further review is needed before the changes are permanently implemented. To do this, a changeset including the preliminarily changed target code can be obtained from the aggregated data. The changeset can be tested with a prediction model based on feature data that characterizes aspects of a coding process carried out to generate the preliminary modification. The prediction model can provide an activation recommendation for the preliminary modification based on a plurality of risk factors determined from the testing. The prediction model can be trained, continuously, with training data that includes a plurality of data artifacts resulting from a code build process. | 2021-11-04 |
20210342252 | DEBUGGING A NATIVE COMPILED APPLICATION FROM AN INTEGRATED DEVELOPMENT ENVIRONMENT - A system includes a memory and processor in communication with the memory. The processor is configured to receive a connection request at an emulation layer from an integrated development environment (IDE). The emulation layer connects, via a socket connection, with the IDE. Using the socket connection, the emulation layer receives a command. The command is decoded to retrieve a parameter and a reference to a native application. The command is mapped to a native debugger command and then used to debug the native application using the native debugger. | 2021-11-04 |
20210342253 | DELTA STATE TRACKING FOR EVENT STREAM ANALYSIS - Systems and methods for delta state tracking for event stream analysis. Events at a device are tracked and stored locally or forwarded to a server. The events collectively form an event stream. When an event of interest occurs, the precise configuration of a device at the time of the event of interest can be determined by applying the event stream in chronological or reverse chronological order to a snapshot of the device's configuration. Thus, the snapshot can be taken at any time. Tracking the deltas to the device's configuration enables the precise configuration at the time of the event of interest to be determined. | 2021-11-04 |
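The key point above — a snapshot can be taken at any time, because the event stream can be applied forward or in reverse to reach any other time — can be sketched with a dictionary configuration and delta events. The `(timestamp, key, old, new)` event shape is an assumption of this illustration:

```python
def state_at(snapshot, snapshot_time, events, t):
    """Reconstruct a device's configuration at time t.

    snapshot: configuration dict captured at snapshot_time.
    events: chronological list of (timestamp, key, old_value, new_value)
    deltas. Events after the snapshot are replayed forward; events at or
    before it are undone in reverse chronological order.
    """
    state = dict(snapshot)
    if t >= snapshot_time:
        for ts, key, old, new in events:            # roll forward
            if snapshot_time < ts <= t:
                state[key] = new
    else:
        for ts, key, old, new in reversed(events):  # roll back
            if t < ts <= snapshot_time:
                state[key] = old
    return state
```

With this, an event of interest needs only its timestamp: the snapshot plus the tracked deltas recover the precise configuration at that moment, which is the abstract's central claim.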
20210342254 | Automated electronics to difficult sail lowering in quickly changing wind speed - In this continuation in part, the characteristics of the reverse debugger and its ability to expedite the debugging of the 32-bit microcontroller are set forth: the algorithms and computer coding which make it work, and why it is critical to the safe operational maintenance of the 32-bit microcontroller aboard the sailboat at sea. This advanced debugger allows the user to perform all the usual operations for stepping code, in reverse. This time-saving technology allows users to quickly home in on errors in one debugging session. Interactive reverse debugging is often coupled with remote reverse debugging. In one iteration of remote debugging, the reverse debugger is deployed on a different computer out at sea in a remote location. Thus the reverse debugger is available in virtually any location. | 2021-11-04 |
20210342255 | Performance Evaluation Based on Target Metrics - Persistent storage includes measurements relating to components of a managed network. One or more processors may be configured to: obtain, from the persistent storage, an attainment measurement, a years of data measurement, and a tenure measurement all relating to a particular component of the managed network; determine a normalized attainment based on the attainment measurement, a mean attainment over a set of the components, and a standard deviation of attainment over the set of the components; determine a normalized years of data based on the years of data measurement and a maximum years of data available for the set of the components; determine a normalized tenure based on the tenure measurement, the years of data measurement, and a function of the normalized attainment; and determine an output for the particular component based on a combination of the normalized attainment, the normalized years of data, and the normalized tenure. | 2021-11-04 |
20210342256 | SYSTEM AND METHOD FOR UNMODERATED REMOTE USER TESTING AND CARD SORTING - A system for performing remote usability testing of a website includes a module for generating particular tasks, a module for moderating a session with a number of participants, and a module for receiving usability data. The system further includes an analytics module for analyzing the received usability data. The module for generating the particular tasks includes a research server configured to interface with user experience researchers and storing multiple testing modules for selecting qualified participants from the number of participants and for generating the particular tasks having research metrics associated with a target web site. In an embodiment, the research server randomly assigns one or more of the multiple testing modules to one of the participants. The multiple testing modules may include card sorting studies for optimizing a web site's architecture or layout. | 2021-11-04 |
20210342257 | ENHANCED TESTING BY AUTOMATED REUSE OF TEST METRICS - Disclosed herein are system, apparatus, method, and computer program product embodiments for testing software in a continuous deployment pipeline. An embodiment operates by automatically deploying a second version of an application at an idle endpoint. The embodiment further operates by automatically testing the second version of the application by reusing test metrics associated with a first version of the application that is live at a live endpoint. The embodiment further operates by automatically determining whether the automatic testing of the second version of the application is successful and, if so, automatically setting live the second version of the application. For example, the embodiment can operate by automatically exchanging the live endpoint with the idle endpoint to set live the second version and set idle the first version, which then may be placed in termination. | 2021-11-04 |
20210342258 | AUTOMATED TESTING OF PROGRAM CODE UNDER DEVELOPMENT - Automated test failures that result from automated testing of program code under development are winnowed to include just the automated test failures occurring for a first time and that are due to automated test code defects or program code defects. The automated test failures that remain after winnowing are clustered into automated test failure clusters that each individually correspond to a different automated test code defect or a different program code defect. The automated test failure clusters are winnowed to include just the automated test failure clusters that each individually correspond to a different program code defect. The automated test failure clusters that remain after winnowing are output. | 2021-11-04 |
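The winnow-cluster-winnow flow described in this abstract can be sketched as below. The record layout (`first_time`, `defect`, `kind` keys) is an assumption for illustration; the patent does not specify how failures are represented or how clustering is performed.

```python
from collections import defaultdict

def winnow_and_cluster(failures):
    # First winnow: keep only failures occurring for the first time.
    first_time = [f for f in failures if f["first_time"]]
    # Cluster: one cluster per distinct underlying defect.
    clusters = defaultdict(list)
    for f in first_time:
        clusters[f["defect"]].append(f)
    # Second winnow: keep only clusters caused by program code defects,
    # discarding clusters caused by defects in the test code itself.
    return {d: fs for d, fs in clusters.items()
            if all(f["kind"] == "program" for f in fs)}
```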
20210342259 | KEY-VALUE STORES WITH OPTIMIZED MERGE POLICIES AND OPTIMIZED LSM-TREE STRUCTURES - Embodiments of the invention utilize an improved LSM-tree-based key-value approach to strike the optimal balance between the costs of updates and lookups and storage space. The improved approach involves use of a new merge policy that removes merge operations from all but the largest levels of LSM-tree. In addition, the improved approach may include an improved LSM-tree that allows separate control over the frequency of merge operations for the largest level and for all other levels. By adjusting various parameters, such as the storage capacity of the largest level, the storage capacity of the other smaller levels, and/or the size ratio between adjacent levels in the improved LSM-tree, the improved LSM-tree-based key-value approach may maximize throughput for a particular workload. | 2021-11-04 |
20210342260 | INDIRECT INTERFACE FOR VIRTUAL MACHINE FREE PAGE HINTING - A system includes a memory, a processor in communication with the memory, a hypervisor, and a guest OS. The guest OS is configured to store a plurality of hints in a list at a memory location. Each hint includes an address value and the memory location of the list is included in one of the respective address values associated with the plurality of hints. The guest OS is also configured to pass the list to the hypervisor. Each address value points to a respective memory page of a plurality of memory pages including a first memory page and a last memory page. The hypervisor is configured to free the first memory page pointed to by a first hint of the plurality of hints and free the last memory page pointed to by a second hint of the plurality of hints. Additionally, the last memory page includes the list. | 2021-11-04 |
20210342261 | MEMORY DEVICE WITH DYNAMIC CACHE MANAGEMENT - A memory system includes a memory array having a plurality of memory cells; and a controller coupled to the memory array, the controller configured to: designate a storage mode for a target set of memory cells based on valid data in a source block, wherein the target set of memory cells are configured with a capacity to store up to a maximum number of bits per cell, and the storage mode is for dynamically configuring the target set of memory cells as cache memory that stores fewer bits per cell than the corresponding maximum capacity. | 2021-11-04 |
20210342262 | PERIODIC FLUSH IN MEMORY COMPONENT THAT IS USING GREEDY GARBAGE COLLECTION - A method includes identifying a first block of a plurality of blocks stored at a first memory based on an amount of valid data of the first block, and writing the valid data of the first block from the first memory to a second memory. The first memory has a first memory type and the second memory has a second memory type different from the first memory type. The method further includes identifying a second block of the plurality of blocks stored at the first memory based on an age of valid data of the second block, determining that the age of the valid data of the second block satisfies a threshold condition, and in response to determining that the age of the valid data of the second block satisfies the threshold condition, writing the valid data of the second block from the first memory to the second memory. | 2021-11-04 |
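The block-selection policy in this abstract can be sketched in a few lines: greedy garbage collection picks the block with the least valid data, while the periodic flush additionally relocates any block whose valid data has aged past a threshold. The dict-based block representation is an assumption for illustration.

```python
def select_gc_blocks(blocks, age_threshold):
    # Greedy garbage collection: the block with the least valid data
    # is always selected for relocation to the second memory.
    greedy = min(blocks, key=lambda b: b["valid_bytes"])
    # Periodic flush: any block whose valid data has aged past the
    # threshold is relocated too, even if it holds much valid data.
    stale_ids = {b["id"] for b in blocks if b["age"] >= age_threshold}
    return sorted({greedy["id"]} | stale_ids)
```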
20210342263 | GARBAGE COLLECTION ADAPTED TO HOST WRITE ACTIVITY - Systems and methods for adapting garbage collection (GC) operations in a memory device to a host write activity are described. A host write progress can be represented by an actual host write count relative to a target host write count. The host write activity may be estimated in a unit time such as per day, or accumulated over a specified time period. A memory controller can adjust an amount of memory space to be freed by a GC operation according to the host write progress. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the host write progress. | 2021-11-04 |
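A sketch of adjusting GC effort to host write progress follows. The abstract only says the freed amount is adjusted "according to the host write progress"; the linear scaling and the 0.5x-2x clamp bounds here are assumptions.

```python
def gc_free_target(actual_writes, target_writes, base_bytes):
    # Host write progress: actual host write count relative to target.
    progress = actual_writes / target_writes
    # Scale the space a GC pass frees with progress, clamped so a
    # burst or lull does not swing GC effort to extremes.
    return int(base_bytes * max(0.5, min(2.0, progress)))
```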
20210342264 | SCALABLE GARBAGE COLLECTION FOR DEDUPLICATED STORAGE - Systems and methods for cleaning a storage system. A deduplicated storage system is cleaned by identifying structures that include dead or unreferenced segments. This includes processing recipes to identify the segments that are no longer part of a live object recipe. Then, the dead segments are removed. This is accomplished by copying forward the live segments and then deleting, as a whole, the structure that included the dead segments. | 2021-11-04 |
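The copy-forward step can be sketched as below, with containers modeled as dicts mapping segment ids to data (an assumed representation). Live segments are copied into a fresh container so the old containers, dead segments included, can be deleted as a whole.

```python
def copy_forward(containers, live_segment_ids):
    # Copy live segments into a fresh container; everything left
    # behind in the old containers is dead or unreferenced.
    new_container = {}
    dead_containers = []
    for idx, container in enumerate(containers):
        for seg_id, data in container.items():
            if seg_id in live_segment_ids:
                new_container[seg_id] = data
        dead_containers.append(idx)  # old container deleted wholesale
    return new_container, dead_containers
```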
20210342265 | DEVICE AND METHOD FOR ALLOCATING INTERMEDIATE DATA FROM AN ARTIFICIAL NEURAL NETWORK - According to one aspect, a method for determining, for a memory allocation, placements in a memory area of data blocks generated by a neural network, comprises a development of an initial sequence of placements of blocks, each placement being selected from several possible placements, the initial sequence being defined as a candidate sequence, a development of at least one modified sequence of placements from a replacement of a given placement of the initial sequence by a memorized unselected placement, and, if the planned size of the memory area obtained by this modified sequence is less than that of the memory area of the candidate sequence, then this modified sequence becomes the candidate sequence, the placements of the blocks for the allocation being those of the placement sequence defined as a candidate sequence once each modified sequence has been developed. | 2021-11-04 |
20210342266 | SYSTEM AND METHOD FOR TRACKING PERSISTENT FLUSHES - One embodiment can provide an apparatus. The apparatus can include a persistent flush (PF) cache and a PF-tracking logic coupled to the PF cache. The PF-tracking logic is to: in response to receiving, from a media controller, an acknowledgment to a write request, determine whether the PF cache includes an entry corresponding to the media controller; in response to the PF cache not including the entry corresponding to the media controller, allocate an entry in the PF cache for the media controller; in response to receiving a persistence checkpoint, identify a media controller from a plurality of media controllers based on entries stored in the PF cache; issue a persistent flush request to the identified media controller to persist write requests received by the identified media controller; and remove an entry corresponding to the identified media controller from the PF cache subsequent to issuing the persistent flush request. | 2021-11-04 |
20210342267 | HANDLING ASYNCHRONOUS POWER LOSS IN A MEMORY SUB-SYSTEM THAT PROGRAMS SEQUENTIALLY - A system includes a non-volatile memory (NVM), and a volatile memory to store: a zone map data structure (ZMDS) that maps a zone of a logical block address (LBA) space to a zone index; and a high frequency update table (HFUT). A processing device is to: write, within an entry of the HFUT, a value of a zone write pointer corresponding to the zone index for an active zone, wherein the zone write pointer includes a location in the LBA space for the active zone; write, within an entry of the ZMDS, a table index value that points to the entry of the HFUT; and journal metadata of the entry of one of the ZMDS or the HFUT affected by a flush transition between the ZMDS and the HFUT. | 2021-11-04 |
20210342268 | PREFETCH STORE PREALLOCATION IN AN EFFECTIVE ADDRESS-BASED CACHE DIRECTORY - In at least one embodiment, a processing unit includes a processor core and a vertical cache hierarchy including at least a store-through upper-level cache and a store-in lower-level cache. The upper-level cache includes a data array and an effective address (EA) directory. The processor core includes an execution unit, an address translation unit, and a prefetch unit configured to initiate allocation of a directory entry in the EA directory for a store target EA without prefetching a cache line of data into the corresponding data entry in the data array. The processor core caches, in the directory entry, EA-to-RA address translation information for the store target EA, such that a subsequent demand store access that hits in the directory entry can avoid a performance penalty associated with address translation by the translation unit. | 2021-11-04 |
20210342269 | LOW-POWER CACHED AMBIENT COMPUTING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a prefetch processing to prepare an ambient computing device to operate in a low-power state without waking a memory device. One of the methods includes performing, by an ambient computing device, a prefetch process that populates a cache with prefetched instructions and data required for the ambient computing device to process inputs to the system while in the low-power state, and entering the low-power state, and processing, by the ambient computing device in the low-power state, inputs to the system using the prefetched instructions and data stored in the cache. | 2021-11-04 |
20210342270 | VICTIM CACHE THAT SUPPORTS DRAINING WRITE-MISS ENTRIES - A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes a set of cache lines, line type bits configured to store an indication that a corresponding cache line of the set of cache lines is configured to store write-miss data, and an eviction controller configured to flush stored write-miss data based on the line type bits. | 2021-11-04 |
20210342271 | CACHE RETENTION FOR INLINE DEDUPLICATION BASED ON NUMBER OF PHYSICAL BLOCKS WITH COMMON FINGERPRINTS AMONG MULTIPLE CACHE ENTRIES - Techniques are provided for inline deduplication based on a number of physical blocks having common fingerprints among multiple entries of a buffer cache. One method comprises storing input/output operations in a first cache comprising a plurality of entries each corresponding to a physical storage entity comprising a plurality of physical blocks. A given entry is maintained in the first cache based on a first number of physical blocks of the given entry having a duplicate fingerprint with at least one physical block of another entry in the first cache. A second number is determined of the physical blocks of each entry having a fingerprint in a second cache, and a first ratio is determined for two entries in the first cache using the second number and the first number. A comparison of the first ratios can be performed to sort and possibly evict entries in the first cache based on the comparison. | 2021-11-04 |
20210342272 | MEMORY PAGE FAULT HANDLING FOR NETWORK INTERFACE DEVICES IN A VIRTUALIZED ENVIRONMENT - Systems and methods for supporting memory page fault handling for network devices are disclosed. In one implementation, a processing device may receive, at a network interface device of a host computer system, an incoming packet from a network. The processing device may also select a first buffer from a plurality of buffers associated with a receiving queue of the network interface device. The processing device may attempt to store the incoming packet at the first buffer of the plurality of buffers. Responsive to receiving a notification that attempting to store the incoming packet at the first buffer encountered a page fault, the processing device may assign the first buffer to a wait queue of the network interface device. The processing device may further store the incoming packet at a second buffer of the plurality of buffers associated with the receiving queue. | 2021-11-04 |
20210342273 | MAPPING VIRTUAL BLOCK ADDRESSES TO PORTIONS OF A LOGICAL ADDRESS SPACE THAT POINT TO THE VIRTUAL BLOCK ADDRESSES - An apparatus includes a processing device configured to generate log records each representing a pointer from a leaf page in a logical address space of a storage system to a virtual block address and comprising a leaf page address of the leaf page. The processing device is also configured to identify a subset of the log records representing pointers to a given virtual block address to determine a first reference count, and to determine whether the first reference count is different than a second reference count obtained from a given virtual entry of a given virtual block structure that corresponds to the given virtual block address. The processing device is further configured, responsive to determining that the first and second reference counts are different, to modify pointers to the given virtual block address in leaf pages with associated leaf page addresses in the identified subset of the log records. | 2021-11-04 |
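The reference-count reconciliation described here can be sketched as below. The `(leaf_page_address, vba)` tuple layout for log records is an assumption for illustration.

```python
def find_stale_leaves(log_records, target_vba, stored_refcount):
    # Each log record represents a pointer from a leaf page to a
    # virtual block address: (leaf_page_address, vba).
    subset = [leaf for leaf, vba in log_records if vba == target_vba]
    # Compare the observed reference count with the count stored in
    # the virtual block entry; on a mismatch, return the leaf pages
    # whose pointers need modification.
    return subset if len(subset) != stored_refcount else []
```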
20210342274 | Memory Management Unit (MMU) for Accessing Borrowed Memory - Systems, methods and apparatuses to accelerate accessing of borrowed memory over network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory. | 2021-11-04 |
20210342275 | INITIATING INTERCONNECT OPERATION WITHOUT WAITING ON LOWER LEVEL CACHE DIRECTORY LOOKUP - An upper level cache receives from an associated processor core a plurality of memory access requests including at least first and second memory access requests of differing first and second classes. Based on class histories associated with the first and second classes of memory access requests, the upper level cache initiates, on the system interconnect fabric, a first interconnect transaction corresponding to the first memory access request without first issuing the first memory access request to the lower level cache via a private communication channel between the upper level cache and the lower level cache. The upper level cache initiates, on the system interconnect fabric, a second interconnect transaction corresponding to the second memory access request only after first issuing the second memory access request to the lower level cache via the private communication channel between the upper level cache and the lower level cache and receiving a response to the second memory access request from the lower level cache. | 2021-11-04 |
20210342276 | DETECTING POTENTIALLY OVERLAPPING INPUT/OUTPUT QUEUES - A computer-implemented method, according to one embodiment, includes: receiving an I/O queue creation request, and identifying a first CPU core that can satisfy the I/O queue creation request. A determination is made as to whether the first CPU core already has an I/O queue formed thereon. In response to determining that the first CPU core already has an I/O queue formed thereon, a determination is made as to whether any CPU cores do not already have an I/O queue formed thereon. In response to determining that each CPU core already has an I/O queue formed thereon, the host is informed that satisfying the I/O queue creation request will cause an overlap with existing I/O queues. In response to receiving an indication from the host to satisfy the I/O queue creation request despite the overlap, instructions are sent to use the first CPU core to satisfy the I/O queue creation request. | 2021-11-04 |
20210342277 | CIRCUIT, CORRESPONDING DEVICE, SYSTEM AND METHOD - An embodiment circuit comprises a set of input terminals configured to receive input digital signals which carry input data, a set of output terminals configured to provide output digital signals which carry output data, and computing circuitry configured to produce the output data as a function of the input data. The computing circuitry comprises a set of multiplier circuits, a set of adder-subtractor circuits, a set of accumulator circuits, and a configurable interconnect network. The configurable interconnect network is configured to selectively couple the multiplier circuits, the adder-subtractor circuits, the accumulator circuits, the input terminals and the output terminals in at least two processing configurations. In a first configuration, the computing circuitry is configured to compute the output data according to a first set of functions, and, in a second configuration, the computing circuitry is configured to compute the output data according to a different set of functions. | 2021-11-04 |
20210342278 | MEMORY CARD FOR DATA TRANSFER SYSTEM, DATA STORAGE DEVICE, SYSTEM HOST, AND MEMORY CARD IDENTIFICATION METHOD - A memory card includes first and second interface units connected to a system host, a memory unit, and an additional information registration unit. The memory unit includes a first identifier storage unit that stores an identifier of the memory unit, a flash memory, and a memory controller that controls the first identifier storage unit and the flash memory via the first interface unit. The additional information registration unit includes a second identifier storage unit that stores an identifier same as the identifier of the memory unit, and an additional information notification unit that notifies the system host of the identifier in the second identifier storage unit and additional information via the second interface unit. When the memory card is connected to the system host, the memory unit and the additional information registration unit are associated with each other by the identifiers stored in the first and second identifier storage units. | 2021-11-04 |
20210342279 | NON-INTERRUPTING PORTABLE PAGE REQUEST INTERFACE - Systems and methods for memory management for virtual machines. An example method may include generating, by a Peripheral Component Interconnect (PCI) device comprising an input/output memory management unit (IOMMU), a first bit sequence and generating a second sequence by applying a predetermined transformation to the first bit sequence. The method may then write the second bit sequence to a memory buffer, read a first value from the memory buffer, write the first bit sequence to the memory buffer, and read a second value from the memory buffer. Responsive to determining that the second value does not match the first value, the method may associate a writable attribute with an IOMMU page table entry associated with the memory buffer. | 2021-11-04 |
20210342280 | PCIe LINK MANAGEMENT WITHOUT SIDEBAND SIGNALS - A system for controlling data communications, comprising an enclosure management processor configured to generate a peripheral component interconnect express reset command and a chip reset command. A re-timer configured to receive the peripheral component interconnect express reset command and the chip reset command and to control a communications port in response to the peripheral component interconnect express reset command and the chip reset command. The communications port configured to reset in response to a control signal from the re-timer. | 2021-11-04 |
20210342281 | SELF-CONFIGURING BASEBOARD MANAGEMENT CONTROLLER (BMC) - A Baseboard Management Controller (BMC) that may configure itself is disclosed. The BMC may include an access logic to determine a configuration of a chassis that includes the BMC. The BMC may also include a built-in self-configuration logic to configure the BMC responsive to the configuration of the chassis. The BMC may self-configure without using any BIOS, device drivers, or operating systems. | 2021-11-04 |
20210342282 | Systems and Methods for Arbitrating Traffic in a Bus - A system and method for efficiently arbitrating traffic on a bus. A computing system includes a fabric for routing traffic among one or more agents and one or more endpoints. The fabric includes multiple arbiters in an arbitration hierarchy. Arbiters store traffic in buffers with each buffer associated with a particular traffic type and a source of the traffic. Arbiters maintain a respective urgency counter for keeping track of a period of time traffic of a particular type is blocked by upstream arbiters. When the block is removed, the traffic of the particular type has priority for selection based on the urgency counter. When arbiters receive feedback from downstream arbiters or sources, the arbiters adjust selection priority accordingly. For example, changes in bandwidth requirement, low latency tolerance and active status cause adjustments in selection priority of stored requests. | 2021-11-04 |
20210342283 | SYSTEM ON CHIP HAVING SEMAPHORE FUNCTION AND METHOD FOR IMPLEMENTING SEMAPHORE FUNCTION - A system on chip, semiconductor device, and/or method are provided that include a plurality of masters, an interface, and a semaphore unit. The interface interfaces the plurality of masters with a slave device. The semaphore unit detects requests of the plurality of masters, controlling the slave device, about an access to the interface and assigns a semaphore to each of the plurality of masters by a specific operation unit according to the detection result. | 2021-11-04 |
20210342284 | Networked Computer With Multiple Embedded Rings - A network comprising interconnected first and second processors, each processor comprising one or more of: multiple processing units arranged on a chip configured to execute program code; an on-chip interconnect comprising groups of exchange paths connected to receive data from corresponding groups of the processing units; external interfaces configured to communicate data off-chip as packets, each having a destination address, external interfaces of the first and second processors being connected by an external link; multiple exchange blocks, each connected to groups of the exchange paths; a routing bus configured to route packets between the exchange blocks and the external interfaces. Processing units of the first processor generate off-chip packets such that the group of processing units serviced by the first exchange block on the first processor address off-chip packets to the group of processing units on the second processor serviced by the corresponding first exchange block of the second processor. | 2021-11-04 |
20210342285 | ENCODING OF SYMBOLS FOR A COMPUTER INTERCONNECT BASED ON FREQUENCY OF SYMBOL VALUES - Data are serially communicated over an interconnect between an encoder and a decoder. The encoder includes a first training unit to count a frequency of symbol values in symbol blocks of a set of N number of symbol blocks in an epoch. A circular shift unit of the encoder stores a set of most-recently-used (MRU) amplitude values. An XOR unit is coupled to the first training unit and the first circular shift unit as inputs and to the interconnect as output. A transmitter is coupled to the encoder XOR unit and the interconnect and thereby contemporaneously sends symbols and trains on the symbols. In a system, a device includes a receiver and decoder that receive, from the encoder, symbols over the interconnect. The decoder includes its own training unit for decoding the transmitted symbols. | 2021-11-04 |
20210342286 | CGRA ACCELERATOR FOR WEATHER/CLIMATE DYNAMICS SIMULATION - A coarse-grained reconfigurable array accelerator for solving partial differential equations for problems on a regular grid is provided. The regular grid comprises grid cells which are representative for a physical natural environment wherein a list of physical values is associated with each grid cell. The accelerator comprises configurable processing elements in an accelerator-internal grid connected by an accelerator-internal interconnect system and memory arrays comprising memory cells. The memory arrays are connected to the accelerator-internal interconnect system. Selected ones of the memory arrays are positioned within the accelerator corresponding to positions of the grid cells in the physical natural environment. Thereby, each group of the memory cells is adapted for storing the list of physical values of the corresponding grid cell of the physical natural environment. | 2021-11-04 |
20210342287 | BRIDGE CIRCUIT FOR PROVIDING CONVERSION BETWEEN PCIE-NVME PROTOCOL AND NVME-TCP PROTOCOL AND COMPUTER SYSTEM USING THE SAME - A bridge circuit includes an NVMe device controller, a network subsystem, and a data transfer circuit. The NVMe device controller is arranged to communicate with a host via a PCIe bus. The network subsystem is arranged to communicate with an NVMe-TCP device via a network. The data transfer circuit is coupled between the NVMe device controller and the network subsystem, and is arranged to deal with data transfer associated with the NVMe-TCP device without intervention of the host. | 2021-11-04 |
20210342288 | SERDES LINK TRAINING - Aspects of the embodiments are directed to systems and methods for performing link training using stored and retrieved equalization parameters obtained from a previous equalization procedure. As part of a link training sequence, links interconnecting an upstream port with a downstream port and with any intervening retimers, can undergo an equalization procedure. The equalization parameter values from each system component, including the upstream port, downstream port, and retimer(s) can be stored in a nonvolatile memory. During a subsequent link training process, the equalization parameter values stored in the nonvolatile memory can be written to registers associated with the upstream port, downstream port, and retimer(s) to be used to operate the interconnecting links. The equalization parameter values can be used instead of performing a new equalization procedure or can be used as a starting point to reduce latency associated with equalization procedures. | 2021-11-04 |
20210342289 | ANALOG PROCESSOR COMPRISING QUANTUM DEVICES - Analog processors for solving various computational problems are provided. Such analog processors comprise a plurality of quantum devices, arranged in a lattice, together with a plurality of coupling devices. The analog processors further comprise bias control systems each configured to apply a local effective bias on a corresponding quantum device. A set of coupling devices in the plurality of coupling devices is configured to couple nearest-neighbor quantum devices in the lattice. Another set of coupling devices is configured to couple next-nearest neighbor quantum devices. The analog processors further comprise a plurality of coupling control systems each configured to tune the coupling value of a corresponding coupling device in the plurality of coupling devices. The analog processors further comprise a set of readout devices each configured to measure the information from a corresponding quantum device in the plurality of quantum devices. | 2021-11-04 |
20210342290 | TECHNIQUE SELECTION FOR FILE SYSTEM UTILIZATION PREDICTION - A trained classification model is executed, causing a classification of a first set of file system usage data into a set of categories comprising a trend category and a periodicity category. Responsive to the first set of file system usage data being classified into the trend category, a time series of the first set of file system usage data is generated. Responsive to the first set of file system usage data being classified into the periodicity category, using an anomaly detection model, an anomaly within the first set of file system usage data is detected. Responsive to predicting that the time series will exceed a threshold, a first reconfiguring of a file system resource is caused, altering a capacity of the file system. Responsive to detecting the anomaly, a second reconfiguring of the file system resource is caused, altering a capacity of the file system. | 2021-11-04 |
20210342291 | DATA ARCHIVE - An example operation includes one or more of identifying, by an archiving server node, a unique archival policy for each of a plurality of blockchain nodes, executing, by the archiving server node, a consensus mechanism to determine at least one block from a plurality of blocks of the plurality of the blockchain nodes to be archived, and running the unique archival policy to archive the at least one block from the plurality of the blocks. | 2021-11-04 |
20210342292 | DATA ARCHIVE RELEASE IN CONTEXT OF DATA OBJECT - The present disclosure provides a method, system, and device for generating and managing archived data. To illustrate, an archive request including an indication of a first set of files is received from an entity device. Archive information is generated based on the first set of files and stored at a first storage location and the first set of files are transmitted to an archival storage location. After the storage at the archival storage location, the archive information is accessed from the first storage location based on a retrieval request from the entity device and a request is transmitted to the archival storage location based on the archive information. The first set of files are received from the archival storage location and stored at a second storage location. A notification is sent to the entity device indicating the first set of files are available at the second storage location. | 2021-11-04 |
20210342293 | POINTER-BASED DYNAMIC DATA STRUCTURES IN KEY-VALUE STORES - A computer-implemented method includes receiving data structures in memory space and creating micro-heaps on a per-data structure basis. Each data structure is associated with a micro-heap allocator. The method also includes storing the data structures in a key-value store. Values of the key-value store are associated with the data structures. A computer program product includes one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media. The program instructions include program instructions to perform the foregoing method. A system includes a processor and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor. The logic is configured to perform the foregoing method. | 2021-11-04 |
20210342294 | NON-DISRUPTIVE AND EFFICIENT MIGRATION OF DATA ACROSS CLOUD PROVIDERS - An index associates fingerprints of file segments to container numbers of containers within which the file segments are stored. At a start of migration, a boundary is created identifying a current container number. At least a subset of file segments at a source storage tier are packed into a new container to be written to a destination storage tier. A new container number is generated for the new container. The index is updated to associate fingerprints of the at least subset of file segments to the new container number. A request is received to read a file segment. The index is queried with a fingerprint of the file segment to determine whether the request should be directed to the source or destination storage tier based on a container number of a container within which the file segment is stored. | 2021-11-04 |
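The boundary-based read routing in this abstract can be sketched directly: the index maps fingerprints to container numbers, and any container numbered after the migration boundary was packed for the destination tier.

```python
def route_read(fingerprint, index, boundary):
    # The index maps fingerprints to container numbers; containers
    # numbered above the boundary created at the start of migration
    # were written to the destination storage tier.
    container = index[fingerprint]
    tier = "destination" if container > boundary else "source"
    return tier, container
```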
20210342295 | UNIBODY BYPASS PLUNGER AND VALVE CAGE - A bypass plunger combines a unitary, one-piece hollow body and valve cage, and retains a dart valve within the valve cage portion of the hollow body using a threaded retaining nut secured by crimple detents. A series of helical grooves surround the central portion of the outer surface of the hollow body of the plunger to control spin during descent. A canted-coil spring disposed within the retaining nut functions as a clutch. The valve cage includes ports that may be configured to control flow through the plunger during ascent. Other embodiments include clutch assemblies using canted-coil springs with split bobbins, and valve stems surfaced to achieve specific functions. Combinations of these features provide enhanced performance, durability, and reliability at reduced manufacturing cost, due primarily to the simplicity of the design. | 2021-11-04 |
20210342296 | RETENTION MANAGEMENT FOR DATA STREAMS - The described technology is generally directed towards managing data retention policy for stream data stored in a streaming storage system. When a request to truncate a data stream from a certain position (e.g., from a request-specified stream cut) is received, an evaluation is made to determine whether the requested position is within a data retention period as specified by data retention policy. If any data prior to the stream cut position (corresponding to a stream cut time) is within the data retention period, the truncation request is blocked. Otherwise, truncation from the stream cut position is allowed to proceed. Also described is handling automated (e.g., size-based) stream truncation requests with respect to data retention. | 2021-11-04 |
20210342297 | LIGHT-WEIGHT INDEX DEDUPLICATION AND HIERARCHICAL SNAPSHOT REPLICATION - A lightweight deduplication system can perform resource efficient data deduplication using an extent index and a content index. The extent index can store full fingerprints of data segments to be deduplicated and the content index can store shortened versions of the full fingerprints. The system can alternate between the extent and content indexes, and cache portions of the indices to perform lightweight data deduplication. Further, the system can be configured with an efficient heuristic approach for selecting content index data lookups for chains of volumes for deduplication, such as a long chain of snapshots. | 2021-11-04 |
20210342298 | FINDING STORAGE OBJECTS OF A SNAPSHOT GROUP POINTING TO A LOGICAL PAGE IN A LOGICAL ADDRESS SPACE OF A STORAGE SYSTEM - An apparatus comprises a processing device configured to generate a tree structure characterizing relationships between storage objects in a storage system represented as logical page nodes specifying respective logical page addresses, arrays of pointers to other logical page addresses, snapshot group identifiers, and logical extent offsets. The processing device is also configured to traverse the generated tree structure to identify (i) a given logical page node specifying a given logical page address, snapshot group identifier and logical extent offset from a query and (ii) other ones of the logical page nodes that specify the given snapshot group identifier and logical extent offset and comprise a pointer to the given logical page address in their associated arrays of pointers. The processing device is further configured to provide a response to the query specifying the given logical page node and the identified other ones of the logical page nodes. | 2021-11-04 |
20210342299 | VOLUME-LEVEL REPLICATION OF DATA BASED ON USING SNAPSHOTS AND A VOLUME-REPLICATING SERVER - Illustrative systems and methods use a special-purpose volume-replicating server(s) to offload client computing devices operating in a production environment. The production environment may remain relatively undisturbed while production data is replicated to a geographically distinct destination. Replication is based in part on hardware-based snapshots generated by a storage array that houses production data. The illustrative volume-replicating server efficiently moves data from snapshots on a source storage array to a destination storage array by transferring only changed blocks for each successive snapshot, i.e., transferring incremental block-level changes. Periodic restore jobs may be executed by destination clients to keep current with their corresponding source production clients. Accordingly, after the source data center goes offline, production data may be speedily restored at the destination data center after experiencing only minimal downtime of production resources. By employing block-level techniques, the disclosed solutions avoid the file-based data management approaches of the prior art. | 2021-11-04 |
20210342300 | DETERMINING A RELEVANT FILE SAVE LOCATION - For determining a relevant file save location, a processor acquires metadata for a new file. The processor further assigns content tags for the new file based on file content and the metadata. The processor calculates a location correlation to folders of a file system using a file system database. The processor further presents a ranked display of the folders based on the location correlation on a display. The processor moves the new file to a selected folder. | 2021-11-04 |
20210342301 | FILESYSTEM MANAGING METADATA OPERATIONS CORRESPONDING TO A FILE IN ANOTHER FILESYSTEM - Examples described herein relate to a computing system, a method, and a non-transitory machine-readable medium for handling a request directed to a file in a first filesystem having a filesystem instance comprising content-addressable storage objects. The computing system may also include a general-purpose second filesystem including its backing store within the filesystem instance of the first filesystem. Moreover, the computing system includes a first filesystem server to receive the request for an operation directed to the file in the first filesystem from an application. The first filesystem server may redirect the request to the second filesystem if the operation is a metadata operation; otherwise, it redirects the request to the first filesystem. | 2021-11-04 |
20210342302 | HOST AND STORAGE SYSTEM FOR SECURELY DELETING FILES AND OPERATING METHOD OF THE HOST - An operating method of a host includes receiving a request for secure deletion of a first file stored in a storage system, providing an invalidation command to the storage system for invalidating data of the first file, providing an erase command to the storage system for erasing invalidated data included in the storage system, and performing a deletion operation, which is executable on an operating system of the host, on the first file which is deleted by the erase command. | 2021-11-04 |
20210342303 | ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - An electronic apparatus is provided. The electronic apparatus includes a camera, a storage, and a processor configured to store an image photographed by the camera and metadata of the image in the storage. The processor is further configured to identify whether first information related to the image is obtainable and, based on the first information not being obtainable, generate metadata related to the first information based on second information and store the generated metadata as metadata of the image. | 2021-11-04 |
20210342304 | DYNAMICALLY CONFIGURING A PROXY SERVER USING CONTAINERIZATION FOR CONCURRENT AND/OR OVERLAPPING BACKUP, RESTORE, AND/OR TEST OPERATIONS - An illustrative data storage management system relies on a specially configured proxy server to operate software containers on a proxy server, maintain resources needed by the software containers, and interwork with other system components. Illustratively, a catalog service on the proxy server maintains a software cache according to maintenance rules and also maintains an associated cache catalog. The software containers are generally managed and operated by an illustrative container manager also hosted by the proxy server. The illustrative software cache comprises contents needed by the software containers, such as pre-configured container templates, DBMS software components, lightervisors representing target operating systems, and storage management software for performing test and storage operations. The maintenance rules govern when cache contents should be purged and moved into offline archive copies. | 2021-11-04 |
20210342305 | METHOD OF COMPRESSING AND DECOMPRESSING A FILE VIA CELLULAR AUTOMATA PRE-PROCESSING - A method for pre-processing files that can improve file compression rates of existing general-purpose lossless file compression algorithms, particularly for files on which traditional algorithms perform poorly. The elementary cellular automata (CA) pre-processing technique involves finding an optimal CA state that can be used to transform an original file into a format (i.e., an intermediary file) that is more amenable to compression than the original file format. This technique is applicable to multiple file types and may be used to enhance multiple compression algorithms. Evaluation on generated files, as well as samples selected from online text repositories, finds that the CA pre-processing technique improves compression rates by up to 4% and shows promising results for assisting in compressing data that typically induce worst-case behavior in standard compression algorithms. | 2021-11-04 |
20210342306 | Digital Image Suitability Determination to Generate AR/VR Digital Content - Techniques for digital image suitability determination to generate augmented reality/virtual reality (AR/VR) digital content are described. A two-dimensional digital image is received. Using machine learning, a determination is made as to whether an object captured by the two-dimensional digital image is suitable for generating AR/VR digital content for display in an AR/VR environment. If the object is suitable, an indication is provided along with an option to view the object in an AR/VR environment. If the object is not suitable, a suggestion is provided indicating why the object as captured is not suitable and/or how to correct the capture of the object in a subsequent digital image such that it is suitable for generating AR/VR digital content. | 2021-11-04 |
20210342307 | TOKEN-BASED OFFLOAD DATA TRANSFER WITH SYNCHRONOUS REPLICATION - A method is provided, comprising: receiving, at a source system, a first copy instruction, the first copy instruction being associated with a token that represents one or more data items, the first copy instruction instructing the source system to copy the one or more data items from a first volume to a second volume; in response to the first copy instruction, retrieving one or more hash digests from a snapshot that is associated with the token, each of the one or more hash digests being associated with a different one of the one or more data items; and transmitting, to a target system, a second copy instruction that is associated with the one or more hash digests, the second copy instruction instructing the target system to copy the one or more data items to a replica of the second volume that is stored at the target system. | 2021-11-04 |
20210342308 | SYSTEM AND METHOD FOR PERFORMING CONTEXT AWARE OPERATING FILE SYSTEM VIRTUALIZATION - A method of accessing content in a virtualized file system is disclosed. A virtualization API is called to enumerate a list of files within a target directory. The calling of the virtualization API includes bypassing a calling of an operating system API included on a device. The enumeration includes reading from a local data structure on the device. A content data item within a file of the enumerated list of files is determined to be required for an application to perform an operation. A content identifier is determined for the content data item. The content identifier is stored in the local data structure for subsequent access to the content data item. | 2021-11-04 |
20210342309 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIUMS FOR PERFORMING METADATA-DRIVEN DATA COLLECTION - Methods, systems, and computer readable media for performing metadata-driven data collection are disclosed. In some examples, a method includes receiving a request for system status data for components of a distributed computing system while the distributed computing system is in operation. The request includes metadata specifying a data collection sequence for collecting component-level system status data. The components include compute components, network components, and storage components. The method includes obtaining, using the metadata, the component-level system status data by querying protocol-based data collectors in an order, one after the other, as specified by the data collection sequence specified by the metadata. The method includes assembling the component-level system status data into assembled status data and storing the assembled status data in memory and/or a repository. | 2021-11-04 |
20210342310 | Cognitive Method to Perceive Storages for Hybrid Cloud Management - A mechanism is provided in a data processing system for hybrid cloud management. The mechanism generates hybrid cloud storage features and hybrid cloud environment factors. The mechanism performs a dynamic confidence method on the hybrid cloud storage features based on the hybrid cloud environment factors using a deep learning model to generate a hybrid cloud storage profile. The mechanism performs model optimization on the deep learning model and generates a files-storage matrix. The mechanism generates a hybrid cloud file profile based on the hybrid cloud storage profile and the files-storage matrix. The mechanism generates a target file matrix based on the hybrid cloud storage profile and the hybrid cloud file profile. The mechanism stores files based on the target file matrix. | 2021-11-04 |
20210342311 | HYBRID CLIENT TRANSACTION MODE FOR KEY VALUE STORE - The present disclosure relates to a system and techniques for enabling data to be updated within a data store through concurrent operations. Embodiments of the system enable multiple client applications (e.g., implemented on a cloud platform) to update data concurrently. In some embodiments, operations may be determined to be either client-managed operations or service-managed operations. Client-managed operations may be performed by a client application, whereas the client application may pass service-managed operations to a service application. The service application may put each of the service-managed operations into a commit queue, wherein each service-managed operation is committed only after the operation queued before it has been committed. | 2021-11-04 |
20210342312 | CONCURRENT ACCESS AND TRANSACTIONS IN A DISTRIBUTED FILE SYSTEM - According to one embodiment of the present disclosure, a first set of file system objects included in performing a requested file system operation is identified in response to a request to perform a file system operation. An update intent corresponding to the requested file system operation is inserted into a data structure associated with each identified file system object. Each file system object corresponding to the corresponding data structure is modified as specified by the update intent in that data structure. After modifying the file system object corresponding to the corresponding data structure, the update intent is removed from that data structure. | 2021-11-04 |
20210342313 | AUTOBUILD LOG ANOMALY DETECTION METHODS AND SYSTEMS - Computing systems, database systems, and related methods are provided for detecting anomalies within a log file. One method involves obtaining log data for test runs executed with respect to a compiled version of executable code for an application platform, filtering the log data based on one or more performance metrics to obtain reference log data, converting the reference log data to a corresponding numerical representation and generating a matrix of the numerical representation. For each line of test log data associated with an update to the executable code, the method converts the line into a numerical representation, determines a difference between the numerical representation and the matrix, and provides an indication of an anomaly when the difference is greater than a detection threshold. | 2021-11-04 |
20210342314 | APPARATUS, SYSTEMS, AND METHODS FOR ANALYZING MOVEMENTS OF TARGET ENTITIES - The present disclosure relates to apparatus, systems, and methods for providing a location information analytics mechanism. The location information analytics mechanism is configured to analyze location information to extract contextual information (e.g., profile) about a mobile device or a user of a mobile device, collectively referred to as a target entity. The location information analytics mechanism can include analyzing location data points associated with a target entity to determine features associated with the target entity, and using the features to predict attributes associated with the target entity. The set of predicted attributes can form a profile of the target entity. | 2021-11-04 |
20210342315 | METHOD AND APPARATUS FOR IMPLEMENTING A DATA BOOK APPLICATION MODULE - Various methods, apparatuses/systems, and media for implementing a data book application module are disclosed. The processor identifies an application that needs to be scanned through a data factory; receives inventories of all servers and databases associated with the data factory; scans the servers and databases for receiving inventories of schema, tables and columns associated with the application; and applies artificial intelligence (AI) and/or machine learning (ML) routines and matching algorithms for matching contents of columns to predefined logical terms. The processor also converts the contents of columns into taxonomies associated with the predefined logical terms; matches the taxonomies with the corresponding predefined logical terms; assigns a probability of accuracy value to the matched terms; and populates a data catalog with the matched terms when the assigned probability of accuracy value satisfies a predetermined threshold value. | 2021-11-04 |
20210342316 | SYSTEMS AND METHODS FOR EXTRACTING DATA IN COLUMN-BASED NOT ONLY STRUCTURED QUERY LANGUAGE (NOSQL) DATABASES - A method and/or system of extracting a table having data in a plurality of rows from a Not Only Structured Query Language (NoSQL) database to a different type of database that includes: scanning all the rows in a desired table in the NoSQL database and producing a list of column families and associated column names; creating a schema for a new table having a table catalog of new column names using a Java Script Object Notation (JSON) structure to extract the column names from the list of column families; reading and extracting at least a portion of the data from the desired table in the NoSQL database into the new table having the table catalog of new column names; associating a creation timestamp with the new table; and saving the new table having the table catalog of new column names to the different database. | 2021-11-04 |
20210342317 | TECHNIQUES FOR EFFICIENT MIGRATION OF KEY-VALUE DATA - The present disclosure relates to a system and techniques for enabling migration of data between data storage devices without disruption to an application that relies upon the data. In some embodiments, this may involve the insertion of a redirect command into a mutation log. Upon receiving a transaction that relates to a data value, a transactor host may access the mutation log. Upon detecting the redirect command, the transactor host may generate a new mutation log in a second memory location which includes a reference to the original mutation log. New mutations are then written to the new mutation log. | 2021-11-04 |
20210342318 | DEDUPLICATION OF ENCRYPTED DATA - A data storage system configured to deduplicate and store sets of data is presented. The system comprises a computer readable storage device configured to store a plurality of sets of data for a plurality of hosts, wherein each set of data of the plurality of sets of data corresponding to a host of the plurality of hosts is encrypted with one or more different encryption keys, and wherein at least one of the plurality of sets of data contains deduplicated data. The system also comprises a key translator configured to create at least one translation key based, at least in part, on the one or more different encryption keys and the deduplicated data, and wherein the at least one translation key is configured to translate from a first encryption key to a second encryption key of the one or more different encryption keys. | 2021-11-04 |
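Several of the storage-management abstracts above boil down to a simple policy check. As one illustration, the retention-aware truncation gating described in 20210342296 can be sketched as follows; this is a minimal sketch of the general idea only, and the function name, types, and clock handling are assumptions for illustration, not the patent's actual implementation.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def may_truncate(stream_cut_time: datetime,
                 retention_period: timedelta,
                 now: Optional[datetime] = None) -> bool:
    """Decide whether a truncation request at a given stream cut is
    permitted under the retention policy.

    All data before the stream cut is at least as old as the cut's
    timestamp, so the request is allowed only when the cut itself has
    already aged out of the retention period; otherwise it is blocked.
    """
    if now is None:
        now = datetime.now(timezone.utc)
    # Data written before (now - retention_period) is outside the
    # retention window; the cut must not be newer than that boundary.
    return stream_cut_time <= now - retention_period
```

In this sketch a size-based automated truncation request would call the same predicate before proceeding, so both manual and automated truncation are gated by one retention check.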