6th week of 2022 patent application highlights part 45 |
Patent application number | Title | Published |
20220043716 | METHOD AND SYSTEM FOR VIRTUAL MACHINE PROTECTION - A method and system for virtual machine protection. Specifically, the disclosed method and system dynamically protect virtual machine state from impactful events, such as accidental virtual machine deletions and shutdowns. The disclosed method and system work to stall the fulfillment of these impactful events while instigating the backup of protected virtual machine state, and thereafter, only permit these impactful events to proceed upon completion of the backup operation. | 2022-02-10 |
20220043717 | SYSTEM AND METHOD FOR A BACKUP DATA VERIFICATION FOR A FILE SYSTEM BASED BACKUP - A method for verifying data includes obtaining, by a backup agent, a backup verification trigger for a backup stored in a backup storage system, in response to the backup verification trigger, obtaining backup metadata associated with the backup, performing a hierarchical structure data mapping based on the backup metadata to obtain a hierarchical structure associated with the backup, performing, using the hierarchical structure, a backup verification to generate a backup health state of the backup, after the backup verification is generated: making a determination, based on the backup verification, that the backup health state is not in a healthy state, and in response to the determination, performing a remediation of the backup policies. | 2022-02-10 |
20220043718 | METHOD AND SYSTEM FOR GENERATING SYNTHETIC BACKUPS USING PSEUDO-ASSET BACKUPS - A method is performed for backing up data. The method includes obtaining an incremental backup request; and in response to the incremental backup request: obtaining an asset and an asset entry associated with the incremental backup request; dividing the asset into pseudo-assets based on the asset entry; storing the pseudo-assets across backup storages to generate incremental pseudo-asset backups; initiating the merging of the incremental pseudo-asset backups to generate an incremental asset backup; and initiating the merging of the incremental asset backup with a previously generated full asset backup associated with the incremental backup request to generate a first synthetic full asset backup. | 2022-02-10 |
20220043719 | SYSTEMS AND METHODS FOR MULTIPLE RECOVERY TYPES USING SINGLE BACKUP TYPE - Embodiments described herein relate to a technique for performing an enhanced backup and restore for a computing device. The method may include: receiving, at a backup management device, a request to perform an enhanced backup operation; creating, by the backup management device, an enhanced recovery asset including computing device data items and stored on a backup storage device; performing a backup operation of a volume of the computing device to obtain a first backup container; performing a backup operation of system state information associated with the computing device to obtain a second backup container; associating the first backup container and the second backup container with the enhanced recovery asset; receiving, by the backup management device, a second request to perform a recovery operation; and performing, based on the second request, the recovery operation using at least a portion of the enhanced recovery asset. | 2022-02-10 |
20220043720 | SYSTEM AND METHOD FOR BACKING UP DATA IN A LOAD-BALANCED CLUSTERED ENVIRONMENT - Disclosed herein are systems and methods for backing up data in a load-balanced clustered environment. A clustered resource to be backed up is selected, wherein the clustered resource is stored on a common storage system and operated on by a cluster-aware application executing on at least a first node and a second node of a computing cluster. A load-balanced application may migrate the clustered resource from the first node with a high-load consumption to the second node with low-load consumption. A list of changes made by both nodes is received and merged. A backup agent then generates a consistent incremental backup using data retrieved from the common storage system according to the merged list of changes to the clustered resource. | 2022-02-10 |
20220043721 | DYNAMICALLY SELECTING OPTIMAL INSTANCE TYPE FOR DISASTER RECOVERY IN THE CLOUD - The selection of an optimal restore instance type based on a customer's speed/cost tradeoff resolution is disclosed. An automated restore activity may be performed on a baseline test VM of a predefined size using different restore instance types. The number of calibration runs or evaluations needed to identify an optimal restore instance type in terms of performance and price, with respect to bandwidth or other constraining factor, is performed on less than all of the restore instance types. | 2022-02-10 |
20220043722 | METHOD AND SYSTEM FOR GENERATING BACKUPS USING PSEUDO-ASSET BACKUPS - A method is performed for backing up data. The method includes obtaining an asset backup request; and in response to the asset backup request: obtaining an asset and an asset entry associated with the asset backup request; dividing the asset into pseudo-assets using the asset entry; storing the pseudo-assets across backup storages to generate pseudo-asset backups; initiating the merging of the pseudo-asset backups to generate an asset backup; and updating asset backup metadata based on the asset backup. | 2022-02-10 |
20220043723 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR STORAGE MANAGEMENT - Embodiments of the present disclosure relate to a method for storage management, an electronic device, and a computer program product. According to an example implementation of the present disclosure, a method for storage management is provided, which comprises receiving an access request for target metadata from a user at a node among a plurality of nodes included in a data protection system, wherein the access request includes an identification of the target metadata; based on the identification, acquiring target access information corresponding to the identification from a set of access information for the user, wherein the target access information records information related to access to the target metadata; and if the target access information is acquired, determining the target metadata based on the target access information. | 2022-02-10 |
20220043724 | Memory Device Providing Fast Data Recovery - A memory device includes a non-volatile memory chip, a connector and a memory controller. The non-volatile memory chip includes an access partition and a hidden partition. The memory controller is used to set first logical blocks mapping to mapping physical blocks in the access partition. The memory controller is used to maintain a first mapping table recording the first logical blocks and the mapping physical blocks. During backup, the memory controller is used to duplicate data in the mapping physical blocks to the hidden partition according to the first mapping table to form backup physical blocks, and establish a second mapping table setting second logical blocks to map to the backup physical blocks. During recovery, the memory controller is used to map the second logical blocks to the backup physical blocks according to the second mapping table for the host system to recover an environment set at the backup operation. | 2022-02-10 |
20220043725 | SMALL DATABASE PAGE RECOVERY - Recovery of an in-memory database is initiated. Thereafter, pages for recovery having a size equal to or below a pre-defined threshold are copied to a superblock. For each copied page, encryption information is added to a superblock control block for the superblock. The copied pages are encrypted within the superblock using the corresponding encryption information added to the superblock control block. The superblock is then flushed from memory (e.g., main memory) of the database to physical persistence. | 2022-02-10 |
20220043726 | Method and System for Processing Email During an Unplanned Outage - The method and system of the present invention provide an improved technique for processing email during an unplanned outage. Email messages are redirected from the primary server to a secondary server during an unplanned outage such as, for example, a natural disaster. A notification message is sent to users alerting them that their email messages are available on the secondary server by, for example, Internet access. After the termination of the unplanned outage, email messages received during the unplanned outage are synchronized into the users' standard email application. | 2022-02-10 |
20220043727 | ASSIGNING BACKUP RESOURCES IN A DATA STORAGE MANAGEMENT SYSTEM BASED ON FAILOVER OF PARTNERED DATA STORAGE RESOURCES - An illustrative data storage management system is aware that certain data storage resources for storing/serving primary data operate in a partnered configuration. Illustrative components of the data storage management system analyze the failover status of the partnered primary data storage resources to determine which is currently serving/storing primary data and/or snapshots targeted for backup. When detecting that a first partnered primary data storage resource has failed over to a second primary data storage resource, the example storage manager changes the assignment of backup resources that are pre-administered for the targeted data. Accordingly, the example storage manager assigns backup resources, including at least one media agent, that are associated with the second primary data storage resource, and which are “closer” thereto from a geography and/or network topology perspective, even if the pre-administered backup resources are available for backup. | 2022-02-10 |
20220043728 | METHOD, APPARATUS, DEVICE AND SYSTEM FOR CAPTURING TRACE OF NVME HARD DISC - A system for capturing a trace of an NVMe hard disc can include a BMC, a BIOS, a protocol analysis instrument, and a fixture plate comprising a processor and a dial switch. The BIOS is configured to acquire register error information of the PCIe link when an error occurs on the PCIe link where the NVMe hard disc is located, and send the register error information to the BMC; the BMC is configured to send the received information to the fixture plate; and the fixture plate is configured to trigger the protocol analysis instrument to capture a PCIe trace of the NVMe hard disc when a current error type corresponding to the dial switch is consistent with the error type of the register error information parsed by a processor of the fixture plate. | 2022-02-10 |
20220043729 | ELECTRONIC DEVICE HAVING INFRARED LIGHT-EMITTING DIODE FOR DATA TRANSMISSION - An electronic device may be provisioned with an infrared (IR) light-emitting diode (LED) configured to externally transmit identifying information that particularly identifies the device, such as the device serial number, to outside of the device. A companion portable IR LED reader may be used to systematically scan a row or shelf or rack of electronic devices to read the respective communication signals transmitted from each of the respective devices, thereby enabling quick and accurate physical identification of the devices in a system/datacenter and inhibiting the unnecessary removal of an incorrect or misidentified device for replacement. | 2022-02-10 |
20220043730 | PIPELINE MODELER SUPPORTING ATTRIBUTION ANALYSIS - Techniques are disclosed for attribution analysis in analytical workflows. A data processing system (DPS) obtains an overall model comprising one or more sub-models. The DPS selects an output variable of the overall model for which attribution of changes is to be performed, and a plurality of input variables against which changes are to be attributed to. The overall model is initially executed with respect to a data set of values for the plurality of input variables to generate a base result for the output variable. The overall model is iteratively executed based on a condition associated with the plurality of input variables to obtain a new result for the output variable. In each iteration, a value of an input variable is changed with respect to the data set of values and a change in the output variable with respect to the base result is attributed to the corresponding input variable. | 2022-02-10 |
20220043731 | PERFORMANCE ANALYSIS - Apparatuses, systems, and techniques to identify a cause of a performance regression in a web-based service. In at least one embodiment, a cause of a performance regression is identified by comparing performance metrics associated with a first group of user interactions with a web-based service to performance metrics associated with a second group of user interactions with the web-based service. | 2022-02-10 |
20220043732 | METHOD, DEVICE, AND PROGRAM PRODUCT FOR MANAGING COMPUTING RESOURCE IN STORAGE SYSTEM - The present disclosure relates to a method, a device, and a program product for managing a computing resource in a storage system. In one method, a processing request for processing a task using a computing resource is received. A length of time required for processing the task is acquired based on a usage state of the computing resource. A workload of the computing resource for processing a future data access request for the storage system within a future time period is determined based on a load model of the computing resource and a current workload of the computing resource. The load model describes an association relationship between a previous load and a subsequent load of the computing resource for processing a historical data access request for the storage system. A target time period matching the length of time is selected from the future time period based on the workload for processing the task. A corresponding device and a corresponding computer program product are provided. Available computing resources in the storage system can be fully utilized. By choosing a target time period with a relatively low workload, a task can be processed in a more efficient manner. | 2022-02-10 |
20220043733 | METHOD AND APPARATUS FOR ESTIMATING A TIME TO PERFORM AN OPERATION ON A PROSPECTIVE DATA SET IN A CLOUD BASED COMPUTING ENVIRONMENT - Estimating a time to perform an operation on a prospective data set of a selected size that includes a plurality of data entities and relationships between the data entities. A number of data sets of different size each comprising a number of like data entities and like relationships between the like data entities are received as input. A number of actions performed on a subset of the number of like data entities and like relationships between the like data entities that substantially comprise the operation are provided as output. For each of the number of data sets of different size, an elapsed time to perform a batch process for each of the number of actions on the subset of the number of like data entities and like relationships between the like data entities that comprise the operation is calculated. Finally, an elapsed time to perform the operation on the prospective data set based on its selected size and the elapsed times to perform, for each of the number of data sets of different size, the batch process for each of the number of actions on the subset of the number of like data entities and like relationships between the like data entities that comprise the operation is estimated, and provided as output. | 2022-02-10 |
20220043734 | USAGE PATTERN VIRTUAL MACHINE IDLE DETECTION - The detection of utilized virtual machines through usage pattern analysis is described. In one example, a computing device can collect utilization metrics from a virtual machine over time. The utilization metrics can be related to one or more processing usage, disk usage, network usage, and memory usage metrics, among others. The utilization metrics can be used to determine a number of clusters, and the clusters can be used to organize the utilization metrics into groups. Depending upon the number or overall percentage of the utilization metrics assigned to individual ones of the plurality of clusters, it is possible to determine whether or not the virtual machine is a utilized or an idle virtual machine. Once identified, utilized virtual machines can be migrated in some cases. Idle virtual machines can be shut down to conserve processing resources and costs in some cases. | 2022-02-10 |
20220043735 | DYNAMIC RISK BASED ANALYSIS MODEL - Embodiments of the present invention provide a computer system, a computer program product, and a method that comprises receiving and storing input data from at least two users; calculating a risk score for each identified risk in the received data based on priority risk factors affecting respectively identified risks; dynamically optimizing a risk analysis of the received input for multiple users within a user interface of a computing device by recalculating risk scores based on the received data and identified risks; and generating a notification for the user interface of the computing device based on the dynamic optimization of the risk analysis of the received input. | 2022-02-10 |
20220043736 | DYNAMICALLY ENHANCING THE PERFORMANCE OF A FOREGROUND APPLICATION - A performance enhancing solution can be executed on a computing device to detect changes in the foreground application. When the foreground application changes, the performance enhancing solution can adjust the allocation of system resources to running applications to thereby enhance the performance of the foreground application. | 2022-02-10 |
20220043737 | METHOD AND SYSTEM FOR MANAGING PERFORMANCE FOR USE CASES IN SOFTWARE APPLICATIONS - A method for managing a performance for at least one use case in a software application. The method includes: executing, for a first instance, a plurality of statements pertaining to a given use case on a target database, the plurality of statements being a part of the software application; collecting first performance metrics pertaining to the first instance of execution of the given use case; executing, for a second instance, the plurality of statements on the target database; collecting second performance metrics pertaining to the second instance of execution of the given use case; comparing the first performance metrics and the second performance metrics to determine difference therebetween; and executing at least one alarm action when the difference is greater than a predefined threshold. | 2022-02-10 |
20220043738 | AUTOMATED IDENTIFICATION OF POSTS RELATED TO SOFTWARE PATCHES - Operations may include obtaining a buggy code snippet of source code of a software program in which the buggy code snippet includes a particular error. The operations may also include determining a respective first similarity between the buggy code snippet and a plurality of bug patterns of previously identified bug scenarios. In addition, the operations may include selecting a particular bug pattern based on a determined particular first similarity between the particular bug pattern and the buggy code snippet. Moreover, the operations may include determining a respective second similarity between the particular bug pattern and example code snippets obtained from a plurality of posts. The operations may also include selecting a particular post as providing a potential solution to correct the particular error based on a determined particular second similarity between the particular bug pattern and a particular example code snippet of the particular post. | 2022-02-10 |
20220043739 | CLOUD APPLICATION ARCHITECTURE USING EDGE COMPUTING - Systems, methods, and computer program products are described for edge computing for cloud application development. Data having at least one image of a continuous integration system is received. The at least one image can be locally instantiated within a local container. Developmental code associated with an application can be retrieved from a code repository. The application is compiled, built, and tested within the local container based on the developmental code. The application is deployed to a production environment. | 2022-02-10 |
20220043740 | Test data generation for automatic software testing - An embodiment features a method of generating test data. An application-level schema corresponding to a source relational database is received. The schema defines constraints comprising one or more of inter-field, inter-record, and inter-object constraints between related data in the source relational database. A random walk is performed on a graph of nodes representing data in the source relational database. At respective ones of the nodes, corresponding ones of the data in the source relational database are selected along a path ordered in accordance with the constraints defined in the schema. Synthetic test data is generated based on one or more statistical models of the data selected from the source relational database. Data values are generated for respective fields of an object defined in the schema, and data values are generated for records related to the object based on one or more of the constraints defined in the schema. | 2022-02-10 |
20220043741 | COMPUTERIZED SYSTEMS AND METHODS FOR GENERATING AND MODIFYING DATA FOR MODULE IMPLEMENTATION - The present disclosure may be directed to a system for generating and modifying data for modules. The system may include receiving, from a user via a proxy server, a request and user information associated with the user; based on the determination that the request comprises a test, calling a mobile application programming interface. The mobile application programming interface may be configured to perform steps including retrieving data; performing the test on the module using the retrieved data; performing a verification on responses from the test to predetermined responses; and sending results of the performed verification to the user. The system may include implementing the module based on the performed verification. | 2022-02-10 |
20220043742 | Method and System for Digital Webpage Testing - A system and method for digitally testing a webpage are disclosed herein. The system receives, via one or more application programming interface (API) endpoints, user data. The user data includes one or more indications of one or more users interacting with the one or more variants of the webpage. The system inputs the one or more indications into a machine learning model. The machine learning model includes a Bayesian multi-arm bandit algorithm. The system generates, using the machine learning model, one or more results comprising causal performance estimates and a set of decision rules to adaptively design further testing experiments based on the causal performance estimates. The system generates a portal accessible to one or more end users. The portal includes the one or more results. | 2022-02-10 |
20220043743 | High-Capacity Storage of Digital Information in DNA - A method for storage of an item of information ( | 2022-02-10 |
20220043744 | High-Capacity Storage of Digital Information in DNA - A method for storage of an item of information (210) is disclosed. The method comprises encoding bytes (720) in the item of information (210), and representing using a schema the encoded bytes by a DNA nucleotide to produce a DNA sequence (230). The DNA sequence (230) is broken into a plurality of overlapping DNA segments (240) and indexing information (250) added to the plurality of DNA segments. Finally, the plurality of DNA segments (240) is synthesized (790) and stored (795). | 2022-02-10 |
20220043745 | DYNAMIC CONFIGURING OF RELIABILITY AND DENSITY OF NON-VOLATILE MEMORIES - Systems, methods, and devices dynamically configure non-volatile memories. Devices include non-volatile memories comprising a plurality of memory regions, each of the plurality of memory regions having a configurable bit density. Devices also include control circuitry configured to retrieve user partition configuration data identifying a plurality of bit densities for the plurality of memory regions, convert a received user address to a plurality of physical addresses based, at least in part, on the plurality of bit densities, compare the user address with the user partition configuration data, and select one of the plurality of physical addresses based, at least in part, on the comparison. | 2022-02-10 |
20220043746 | ASYNCHRONOUS POWER LOSS RECOVERY FOR MEMORY DEVICES - An example memory sub-system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is configured to maintain a logical-to-physical (L2P) table, wherein a region of the L2P table is cached in a volatile memory; maintain a write count reflecting a number of bytes written to the memory device; maintain a cache miss count reflecting a number of cache misses with respect to a cache of the L2P table; responsive to determining that a value of a predetermined function of the write count and the cache miss count exceeds a threshold value, copy the region of the L2P table to a non-volatile memory. | 2022-02-10 |
20220043747 | REMAPPING TECHNIQUES FOR A RANGE OF LOGICAL BLOCK ADDRESSES IN A LOGICAL TO PHYSICAL TABLE OF NAND STORAGE - Devices and techniques are disclosed herein for remapping data of flash memory indexed by logical block addresses (LBAs) of a host device in response to re-map requests received at a flash memory system from the host device or in response to re-map requests generated at the flash memory system. | 2022-02-10 |
20220043748 | METHOD, APPARATUS, AND SYSTEM FOR RUN-TIME CHECKING OF MEMORY TAGS IN A PROCESSOR-BASED SYSTEM - A data processing system includes a store datapath configured to perform tag checking in a store operation to a store address associated with a cache line in a memory. The store datapath includes a cache lookup circuit configured to pre-load a store cache line that is to be updated in the store operation, wherein the store cache line comprises the cache line in the memory to be updated in the store operation. The store datapath also includes a tag check circuit configured to compare a store address tag associated with the store address to a store operation tag associated with the store operation. The data processing system may include a load datapath configured to perform tag checking in a load operation from a load cache line in the memory by comparing a load address tag associated with the load address to a load operation tag associated with the load operation. | 2022-02-10 |
20220043749 | SYSTEM AND METHOD FOR BROADCAST CACHE INVALIDATION - One embodiment of a cache invalidation method includes storing an invalidation status usable by a computing node to identify, from a broadcast cache invalidation queue, a last processed invalidation that was processed with respect to an object cache used by the node. The method further comprises the node determining a set of unprocessed invalidations from the broadcast cache invalidation queue that are subsequent to the last processed invalidation determined from the invalidation status. The node processes the set of unprocessed invalidations to clear cached objects from the object cache. Based on processing the set of unprocessed invalidations to clear cached objects from the object cache, the invalidation status is updated with an identifier corresponding to a last invalidation from the set of previously unprocessed invalidations. | 2022-02-10 |
20220043750 | OBTAINING CACHE RESOURCES FOR EXPECTED WRITES TO TRACKS IN A WRITE SET AFTER THE CACHE RESOURCES WERE RELEASED FOR THE TRACKS IN THE WRITE SET - Provided are a computer program product, system, and method for prefetching cache resources for a write request from a host to tracks in storage cached in a cache. Cache resources held for a plurality of tracks in a write set are released before expected writes are received for the tracks in the write set. Cache resources for tracks in the write set are obtained, following the release of the cache resources, to use for expected write requests to the tracks in the write set. | 2022-02-10 |
20220043751 | PROVIDING TRACK ACCESS REASONS FOR TRACK ACCESSES RESULTING IN THE RELEASE OF PREFETCHED CACHE RESOURCES FOR THE TRACK - Provided are a computer program product, system, and method for providing track access reasons for track accesses resulting in the release of prefetched cache resources for the track. A first request for a track is received from a process for which prefetched cache resources to a cache are held for a second request for the track that is expected. A track access reason is provided for the first request specifying a reason for the first request. The prefetched cache resources are released before the second request to the track is received. Indication is made in an unexpected released track list of the track and the track access reason for the first request. | 2022-02-10 |
20220043752 | INTELLIGENT CACHE WARM-UP ON DATA PROTECTION SYSTEMS - The system identifies multiple data blocks in a workload stored in slow-access persistent storage, the data blocks copied to fast-access persistent storage, and, after the speed of accessing the workload satisfies a threshold, the copied data blocks that remained in fast-access persistent storage. The system annotates some remaining data blocks with a cache label and derives features for some data blocks in the workload based on corresponding bits set and/or time stamps. The system uses the cache labels and features for some data blocks in the workload to train a machine-learning model to predict which data blocks in the workload will remain in fast-access persistent storage after workload access satisfies the threshold. The system derives features for a data block requested from a production workload, and copies the requested data block to production fast-access persistent storage if the trained machine-learning model uses the features for the requested data block to predict that the requested data block will remain in production fast-access persistent storage after production workload access satisfies the threshold. | 2022-02-10 |
20220043753 | DYNAMIC ALLOCATION OF CACHE RESOURCES - Examples described herein include a cache controller and a cache device. In some examples, the cache controller is configured, when operational, to: during processor operation, dynamically adjust a maximum number of allocated pinned regions in the cache device based on usage of pinned regions. In some examples, the cache controller is to store an entry into a tag memory based on a number of pinned entries in the cache device not being exceeded. In some examples, the entry includes meta-data information indicative of whether the data is stored in the cache device. | 2022-02-10 |
20220043754 | EXECUTABLE MEMORY PAGE VALIDATION SYSTEM AND METHOD - An executable memory page validation system for validating one or more executable memory pages on a given endpoint, the executable memory page validation system comprising at least one processing resource configured to: obtain a plurality of vectors, each vector of the vectors being a bitmask indicative of valid hash values calculated for a plurality of executable memory pages available on the endpoint, the valid hash values being calculated using a respective distinct hash function; calculate one or more validation hash values for a given executable memory page to be loaded to a computerized memory of the endpoint for execution thereof, using one or more selected hash functions of the distinct hash functions; and determine that the given executable memory page is invalid, upon one or more of the validation hash values not being indicated as valid in the corresponding one or more vectors. | 2022-02-10 |
20220043755 | DELAYING SEGMENT GENERATION IN DATABASE SYSTEMS - A method for execution by a record processing and storage system includes receiving a plurality of records and generating a plurality of pages that include the plurality of records in accordance with a row-based format. The plurality of pages is stored via a page storage system. Segment generation determination data is generated based on storage utilization data of the page storage system. A plurality of segments is generated from the plurality of pages that include the plurality of records in a column-based format based on the segment generation determination data indicating segments be generated. The plurality of segments is stored via a segment storage system. | 2022-02-10 |
20220043756 | Flow Table Aging Optimized For DRAM Access - A flow table management system can include a hardware memory module communicatively coupled to a network interface card. The hardware memory module is configured to store a flow table including a plurality of network flow entries. The network interface card further includes a flow table age cache configured to store a set of recently active network flows and a flow table management module configured to manage a duration for which respective network flow entries in the flow table stored in the hardware memory module remain in the flow table using the flow table age cache. In some implementations, age information about each respective flow in the flow table is stored in the hardware memory module in an age state table that is separate from the flow table. | 2022-02-10 |
20220043757 | System, Apparatus And Method For Page Granular, Software Controlled Multiple Key Memory Encryption - In one embodiment, an apparatus comprises a processor to read a data line from memory in response to a read request from a VM. The data line comprises encrypted memory data. The apparatus also comprises a memory encryption circuit in the processor. The memory encryption circuit is to use an address of the read request to select an entry from a P2K table; obtain a key identifier from the selected entry of the P2K table; use the key identifier to select a key for the read request; and use the selected key to decrypt the encrypted memory data into decrypted memory data. The processor is further to make the decrypted memory data available to the VM. The P2K table comprises multiple entries, each comprising (a) a key identifier for a page of memory and (b) an encrypted address for that page of memory. Other embodiments are described and claimed. | 2022-02-10 |
20220043758 | LOW LATENCY MEMORY ACCESS - A memory device includes receivers that use CMOS signaling levels (or other relatively large signal swing levels) on its command/address and data interfaces. The memory device also includes an asynchronous timing input that causes the reception of command and address information from the CMOS level receivers to be decoded and forwarded to the memory core (which is self-timed) without the need for a clock signal on the memory device's primary clock input. Thus, an activate row command can be received and initiated by the memory core before the memory device has finished exiting the low power state. Because the row operation is begun before the exit wait time has elapsed, the latency of one or more accesses (or other operations) following the exit from the low power state is reduced. | 2022-02-10 |
20220043759 | TECHNIQUES FOR AN EFFICIENT FABRIC ATTACHED MEMORY - Fabric Attached Memory (FAM) provides a pool of memory that can be accessed by one or more processors, such as graphics processing units (GPUs), over a network fabric. In one instance, a technique is disclosed for using imperfect processors as memory controllers to allow memory, which is local to the imperfect processors, to be accessed by other processors as fabric attached memory. In another instance, memory address compaction is used within the fabric elements to fully utilize the available memory space. | 2022-02-10 |
20220043760 | METHOD FOR TUNING AN EXTERNAL MEMORY INTERFACE - A device and method are presented. Largest and smallest successful values of a receive clock delay and a transmit clock delay are determined. A first set of parameters for an SPI coupled to a DDR flash memory are set, including the largest successful values of the transmit clock delay and the receive clock delay, and a first value of a RD cycle. A second set of parameters for the SPI are set, including the smallest successful value of the transmit clock delay and receive clock delay, and a second value of the RD cycle. One of the first and second sets of parameters is selected based on whether the first or second set of parameters results in successfully reading from the DDR flash memory over a larger range of operating temperatures. The SPI is programmed using the selected one of the first and second sets of parameters. | 2022-02-10 |
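The parameter-selection step in the abstract above can be sketched as follows. This is an illustrative reading, not the patented implementation; the names (`ParamSet`, `passing_range`, `select_params`) and the pass/fail-per-temperature data model are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ParamSet:
    tx_delay: int   # transmit clock delay
    rx_delay: int   # receive clock delay
    rd_cycle: int   # RD cycle value

def passing_range(results):
    """Width of the temperature range (max minus min passing temperature,
    in degrees) over which reads from the DDR flash succeeded.
    `results` maps temperature -> bool (read succeeded)."""
    temps = sorted(t for t, ok in results.items() if ok)
    return (temps[-1] - temps[0]) if temps else 0

def select_params(set_a, results_a, set_b, results_b):
    """Select whichever parameter set reads successfully over the larger
    range of operating temperatures; the SPI is then programmed with it."""
    return set_a if passing_range(results_a) >= passing_range(results_b) else set_b
```

A set built from the largest successful delays might pass from -40 to 85 degrees while the other fails at the cold end, in which case the first set is the one programmed into the SPI.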
20220043761 | COMMAND PACKETS FOR THE DIRECT CONTROL OF NON-VOLATILE MEMORY CHANNELS WITHIN A SOLID STATE DRIVE - Apparatuses and methods for providing and interpreting command packets for direct control of memory channels are disclosed herein. An example apparatus includes flash memories configured into channels and a controller coupled to the flash memories. The controller receives packets, interprets the packets based at least on a first protocol, and determines whether any packets are linked based on a link identifier included in a block of each packet. The controller arranges the subset of linked packets in order based on an index included in the block of each packet of the subset. A target flash memory and a target channel are determined by the controller based on flash memory and channel identifiers included in the block of each packet of the subset. | 2022-02-10 |
20220043762 | High-Performance, High-Capacity Memory Systems and Modules - Described are motherboards with memory-module sockets that accept legacy memory modules for backward compatibility or accept a greater number of configurable modules in support of increased memory capacity. The configurable modules can be backward compatible with legacy motherboards. Equipped with the configurable modules, the motherboards support memory systems with high signaling rates and capacities. | 2022-02-10 |
20220043763 | Method for data transmission and circuit arrangement thereof - The present invention describes a method for data transmission between an integrated circuit and an evaluation unit connected to an interrupt pin of the integrated circuit, characterized in that the data transmission is carried out by selectively triggering an atypical interrupt signal or a plurality of interrupt signals composed of regular and/or atypical interrupt signals. | 2022-02-10 |
20220043764 | MINIMIZING DELAY WHILE MIGRATING DIRECT MEMORY ACCESS (DMA) MAPPED PAGES - During a memory reallocation process, it is determined that a set of memory pages being reallocated are each enabled for a Direct Memory Access (DMA) operation. Prior to writing initial data to the set of memory pages, a pre-access delay is performed concurrently for each memory page in the set of memory pages. | 2022-02-10 |
20220043765 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING DATA TRANSFER - Embodiments of the present disclosure relate to a method, a device, and a computer program product for managing data transfer. A method for managing data transfer is provided, including: if determining that a request to transfer a data block between a memory and a persistent memory of a data storage system is received, obtaining a utilization rate of a central processing unit of the data storage system; and determining, from a first transfer technology and a second transfer technology and at least based on the utilization rate of the central processing unit, a target transfer technology for transferring a data block between the memory and the persistent memory, the first transfer technology transferring data through direct access to the memory, and the second transfer technology transferring data through the central processing unit. Therefore, the embodiments of the present disclosure can improve the data transfer performance of the storage system. | 2022-02-10 |
20220043766 | EXTENSION MODULE FOR INDEPENDENTLY STORING CALIBRATION DATA, COMPONENT, AND COMPONENT CALIBRATION METHOD - Provided is an extension module for independently storing calibration data, including: a first interface, adapted to receive a first external input signal; a second interface, adapted to output the first output signal of the extension module; a signal processing circuit, connected between the first interface and the second interface; and a first memory, the first memory storing first calibration data, and the first calibration data being associated with the extension module. Also provided are a component using the above extension module and a component calibration method. On the one hand, the extension module of an embodiment may share an ADC sampling circuit on a main module, so that the manufacturing cost of the extension module is reduced. On the other hand, an embodiment can facilitate replacing different extension modules on the main module without repeated calibration. | 2022-02-10 |
20220043767 | MULTI-PORT MAC WITH FLEXIBLE DATA-PATH WIDTH - Multi-port Media Access Control (MAC) with flexible data-path width. A multi-port receive (RX) MAC block includes multiple RX ports and a plurality of RX circuit blocks comprising an RX MAC pipeline for performing MAC Layer operations on RX data received at the RX ports. The RX circuit blocks are connected with variable-width datapath segments, and the RX MAC block is configured to implement a multi-port arbitration scheme such as a TDM (Time-Division Multiplexed) scheme under which RX data received at a given RX port are forwarded over the variable-width datapath segments using datapath widths associated with that RX port. A multi-port transmit (TX) MAC block implementing a TX MAC pipeline comprising TX circuit blocks connected with variable-width datapath segments is also provided. The RX and TX MAC blocks include CRC modules configured to calculate CRC values on input data received over datapaths having different widths. | 2022-02-10 |
20220043768 | METHOD AND DEVICE FOR DETERMINING INFORMATION OF A BUS SYSTEM - A method, particularly a computer-implemented method, for determining information of a bus system that has a transmission medium via which signals are transmittable. The method includes: determining a first variable which characterizes a time difference between a first point in time and a second point in time, a signal output by a transmitter onto the transmission medium of the bus system reaching a first position relative to the transmission medium at the first point in time, and the signal output by the transmitter onto the transmission medium of the bus system reaching a second position relative to the transmission medium at the second point in time; evaluating the first variable, at least one time-to-digital converter device being used for determining the first variable. | 2022-02-10 |
20220043769 | Toroidal Systolic Array Processor for General Matrix Multiplication (GEMM) With Local Dot Product Output Accumulation - A toroidal systolic array processor for GEMM with local dot-product output accumulation comprises an array of processing elements (PEs) arranged in rows and columns. User input circuitry provides input arrays A and B (and optionally G) as initial first values and second values before the array operation begins. Then, for each step of the array operation, first values and second values are received from other PEs in the array in a toroidal fashion. Each PE performs a fused multiply-add (FMA) operation based upon the first values and second values received, whether from the input circuitry or from other PEs. At the end of the array operation, each PE provides an output, for example a locally accumulated dot product. | 2022-02-10 |
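The toroidal data movement described above resembles a Cannon-style schedule for a square PE grid, where each PE accumulates its dot product locally via repeated FMA steps. A minimal simulation of that schedule (assuming square n×n inputs and ignoring the optional G array; this is an illustrative reading, not the patented design):

```python
def toroidal_gemm(A, B):
    """Simulate an n x n toroidal systolic array computing C = A @ B.
    Each (i, j) grid cell stands for one PE; every step it performs a
    local FMA and then passes its A value left and its B value up,
    with wrap-around (toroidal) connections."""
    n = len(A)
    # Initial skew: row i of A rotated left by i, column j of B rotated up by j,
    # so that matching operands meet in the same PE at every step.
    a = [[A[i][(j + i) % n] for j in range(n)] for i in range(n)]
    b = [[B[(i + j) % n][j] for j in range(n)] for i in range(n)]
    C = [[0.0] * n for _ in range(n)]
    for _ in range(n):
        for i in range(n):
            for j in range(n):
                C[i][j] += a[i][j] * b[i][j]   # local FMA, accumulated in the PE
        a = [row[1:] + row[:1] for row in a]   # A values shift left, toroidally
        b = b[1:] + b[:1]                      # B values shift up, toroidally
    return C
```

After n steps, each PE holds one element of the product, i.e. its locally accumulated dot product.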
20220043770 | NEURAL NETWORK PROCESSOR, CHIP AND ELECTRONIC DEVICE - The embodiments of the present disclosure provide a neural network processor, a chip and an electronic device. The neural network processor includes a scalar processing unit, a general register and a data migration engine. The scalar processing unit includes a plurality of scalar registers. The data migration engine is coupled to the general register and at least one of the scalar registers. The data migration engine is configured to cause data interaction between the scalar processing unit and the general register. | 2022-02-10 |
20220043771 | GENERATING HEXADECIMAL TREES TO COMPARE FILE SETS - First and second trees having leaves identified by hexadecimal values are generated. First files from a first file set are allocated across the first tree based on hashes of the first files. The hashes of the first files are translated into first leaf index values. Second files from a second file set are allocated across the second tree based on hashes of the second files. The hashes of the second files are translated into second leaf index values. The first and second leaf index values are compared to identify leaves that are the same between the first and second trees. A similarity index indicating a degree of similarity between the first and second sets of files is created based on the comparison. | 2022-02-10 |
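A minimal sketch of the comparison scheme described above, assuming SHA-256 as the hash and the leading hex digits of a digest as the leaf index (both assumptions; the application fixes neither the hash nor the tree depth):

```python
import hashlib

def allocate(file_contents, depth=2):
    """Allocate files across a hexadecimal tree: each file's hash is
    translated into a leaf index (its first `depth` hex digits, giving
    16**depth possible leaves), and each leaf stores its set of hashes."""
    tree = {}
    for content in file_contents:
        h = hashlib.sha256(content).hexdigest()
        tree.setdefault(h[:depth], set()).add(h)
    return tree

def similarity_index(tree_a, tree_b):
    """Fraction of occupied leaves whose contents are identical in both
    trees; 1.0 means the two file sets hash identically leaf-by-leaf."""
    leaves = set(tree_a) | set(tree_b)
    if not leaves:
        return 1.0
    same = sum(1 for leaf in leaves if tree_a.get(leaf) == tree_b.get(leaf))
    return same / len(leaves)
```

Because only leaf indices and per-leaf hash sets are compared, two large file sets can be checked for similarity without comparing file contents pairwise.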
20220043772 | SYSTEM FOR ORGANIZING DOCUMENT DATA - Provided is a system having a mechanism for viewing a plurality of electronic documents and adding notes at high speed, and for preventing a plurality of users from accessing database files at the same time. The system for organizing document data includes a database program for managing one set of database files provided for each user, a display program for generating data to visualize a part or all of a table in which the one set of database files is described, and a viewer program for displaying the data generated by the display program on the screen of each user terminal. The database program has a function for loading a part or all of the one set of database files into a memory of the user terminal to maintain a virtual database in the memory. | 2022-02-10 |
20220043773 | INFORMATION PROCESSING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM - An information processing method, electronic device, and storage medium are provided, and relate to the technical field of big data. The method includes: acquiring meta information; wherein the meta information includes fields, corresponding to original network data, in a storage table, and is used to summarize a process of computing the original network data by an information processing job; the storage table is used to store results of the computing, of the information processing job, corresponding to respective fields; acquiring, according to the meta information, an association relationship between a data source of the original network data and the results of the computing, of the information processing job, corresponding to the respective fields; and returning the association relationship to a specified receiving address. | 2022-02-10 |
20220043774 | SYSTEMS, METHODS, AND STORAGE MEDIA FOR TRANSFERRING DATA FILES - Systems, methods, and storage media for transferring data files are disclosed. Exemplary implementations may: provide a location for saving transferred data files; identify a data file operating system type; identify a database system associated with the transferred data files; enter credential information into a credential user interface provided by a data file vendor; navigate to a data file repository within a data file system operated by the data file vendor; identify a first data file directory structure for the data file repository within the data file system operated by the data file vendor; create a second data file directory structure on a customer data file system; and copy the data files from the first data file directory structure within the data file system operated by the data file vendor to the second data file directory structure in the customer data file system. | 2022-02-10 |
20220043775 | METHOD AND SYSTEM FOR PARALLELIZING BACKUP GENERATION OPERATIONS USING PSEUDO-ASSET BACKUPS - A method that is performed for backing up data. The method includes obtaining a backup request; and in response to the backup request: obtaining an asset and an asset entry from a file system metadata repository associated with the backup request; identifying asset components of the asset using the asset entry; assigning asset components to backup threads to be backed up as pseudo-assets based on the asset entry; executing the backup threads to generate pseudo-asset backups; storing the pseudo-asset backups on backup storages; and updating asset backup metadata based on the pseudo-asset backups. | 2022-02-10 |
20220043776 | METADATA MANAGEMENT PROGRAM AND INFORMATION PROCESSING APPARATUS - A computer-readable storage medium stores a metadata management program that controls a distributed file system of metadata of a directory and a file. The program causes a computer to execute a process including, managing metadata of a first directory by a first metadata management device, and copying the metadata of the first directory from the first metadata management device to a second metadata management device in a case where the second metadata management device creates and manages metadata of a second directory or a first file under the first directory. | 2022-02-10 |
20220043777 | INOFILE MANAGEMENT AND ACCESS CONTROL LIST FILE HANDLE PARITY - Techniques are provided for inofile management and access control list file handle parity. For example, operations targeting a first storage object of a first node are replicated to a second storage object of a second node. A size of an inofile maintained by the second node is increased if an inode number to be allocated by the replication operation is greater than a current size of the inofile. Access control list file handle parity is achieved by maintaining parity between inode number and generation number pairings of the first node and the second node. | 2022-02-10 |
20220043778 | SYSTEM AND METHOD FOR DATA COMPACTION AND SECURITY WITH EXTENDED FUNCTIONALITY - A system and method for highly efficient encoding of data that includes extended functionality for asymmetric encoding/decoding and network policy enforcement. In the case of asymmetric encoding/decoding, the original data is encoded by an encoder according to a codebook and sent to a decoder, but the output of the decoder depends on data manipulation rules applied at the decoding stage to transform the decoded data into a different data set from the original data. In the case of network policy enforcement, a behavior appendix is incorporated into the codebook, such that the encoder and/or decoder at each node of the network comply with network behavioral rules, limits, and policies during encoding and decoding. | 2022-02-10 |
20220043779 | Concurrent Edit Detection - A heuristics-based concurrent edit detector (“ConE”) can notify collaborators about potential conflicts that may be caused by edits made by other collaborators. ConE may compare concurrent edits submitted by collaborators, calculate the extent of overlap between two sets of edits, apply one or more filters to balance recall versus precision, and decide whether to alert the collaborators about candidate potential conflicts. ConE may be light-weight and easily scalable to work in a very large environment with numerous collaborators. | 2022-02-10 |
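The overlap calculation and threshold filter at the heart of the ConE abstract can be illustrated as follows; the `(file, line)` edit representation, the function names, and the threshold value are assumptions made for the sake of the sketch:

```python
def overlap_ratio(edits_a, edits_b):
    """Extent of overlap between two sets of concurrent edits, each given
    as a set of (file, line_number) pairs touched by a pending change."""
    if not edits_a or not edits_b:
        return 0.0
    shared = edits_a & edits_b
    return len(shared) / min(len(edits_a), len(edits_b))

def should_notify(edits_a, edits_b, threshold=0.5):
    """Filter step trading recall for precision: alert the collaborators
    only when the overlap exceeds a tunable threshold."""
    return overlap_ratio(edits_a, edits_b) >= threshold
```

Raising the threshold suppresses alerts for incidental one-line overlaps (precision), while lowering it surfaces more candidate conflicts (recall).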
20220043780 | INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a processor configured to receive an operation performed by a user on an image associated with a designated file and, when the operation is an instruction related to execution of a coordinated function, display a list of candidates for the coordinated function which is executable by using the designated file, and display a list of files, different from the designated file, that are used for executing the coordinated function along with the designated file. | 2022-02-10 |
20220043781 | SYSTEMS AND METHODS FOR GENERATING AND ASSIGNING METADATA TAGS - An information management system including at least one data storage device and at least one processor coupled to the at least one data storage device. The at least one processor is configured to receive at least one object, the object having a location in a hierarchical file organizational structure. The processor also generates at least one prospective keyword for the at least one object based upon the location of the object in the hierarchical organization structure, and associates the at least one object with at least one of the prospective keywords. | 2022-02-10 |
20220043782 | DISTRIBUTED WORK DATA MANAGEMENT - A device may receive, from a user device, a transaction request associated with a first entity and identify a distributed ledger associated with the first entity, the distributed ledger including a set of blocks recording work data associated with the first entity. The set of blocks may include: a first subset of blocks including data specifying work performed by the first entity, and a second subset of blocks including data verifying a portion of the work performed by the first entity and specified by the data included in the first subset of blocks. The device may determine that a transaction, associated with the transaction request, is associated with the first subset of blocks and the second subset of blocks. Based on predetermined instructions that correspond to the transaction and the distributed ledger, the device may perform the transaction. | 2022-02-10 |
20220043783 | METHOD FOR MANAGING VIRTUAL FILE, APPARATUS FOR THE SAME, COMPUTER PROGRAM FOR THE SAME, AND RECORDING MEDIUM STORING COMPUTER PROGRAM THEREOF - The present disclosure provides a virtual file management method and apparatus, and a computer-readable recording medium thereof, which may comprise obtaining an object identifier for distinguishing objects, wherein the object includes at least one of a virtual file or a virtual folder, and obtaining the object based on the obtained object identifier. | 2022-02-10 |
20220043784 | AUTOMATIC GENERATIVE PROCESS BRIDGE BETWEEN ANALYTICS MODELS - Disclosed herein are system, method, and computer program product embodiments for generating a bridge between analytical models. In an embodiment, a server can extract a first variable dependency schema from a first model (e.g., predictive model or business intelligence report) and a second variable dependency schema from a second model (e.g., predictive model or business intelligence report). The first variable dependency schema includes a first definition of a relationship between a first variable and a second variable. The server can compare the first variable dependency schema and the second variable dependency schema. Furthermore, the server can generate a modification to be made in the second variable dependency schema based on the first definition of the relationship between the first and second variables, and output the modification to be made to the second variable dependency schema. | 2022-02-10 |
20220043785 | DATA STORE TRANSITION USING A DATA MIGRATION SERVER - Techniques are disclosed relating to transitioning between data stores using a data migration server. In some embodiments, the data migration server may be used to access data stored on a preexisting data store to service requests from a plurality of services. A dual-write operation mode for the data migration server may then be enabled such that, in response to a given write request, the data migration server writes a given data entry to both the preexisting data store and a replacement data store. Further, a dual-read operation mode may be enabled such that, in response to a given read request, the data migration server reads a corresponding data entry from both the preexisting and replacement data stores. Configuration settings for the data migration server may then be adjusted to designate the replacement data store as the primary data store to service requests from the services. | 2022-02-10 |
20220043786 | System and Method for Achieving Increased Accuracy of Extrapolated Vehicle Data - The present invention is a system and method for increasing the accuracy of insights derived from vehicle location data to the broader population using motor vehicle registration data. The process aligns at least two different data sets and normalizes the information by removing the bias from over-indexation within the data sets. In a particular implementation motor vehicle registration data is collected and the number of vehicles from a particular manufacturer is indexed against ZIP™ Code and third party data for each vehicle. The registration data is then increased or discounted to remove biases against greater numbers of vehicles from a single manufacturer over the average number registered in a ZIP™ Code, against commercial vehicles, and against newer vehicles over older vehicles. | 2022-02-10 |
20220043787 | RECORD DEDUPLICATION IN DATABASE SYSTEMS - A method for execution by a record processing and storage system includes receiving a plurality of records and corresponding row numbers. Pages are generated from the received records. Page metadata is generated for each page that includes row number span data based on the row numbers of the records included in the page. Pairs of pages whose row number span data overlap are identified in the plurality of pages. For each pair of pages, the row number span data of the first page in the pair is updated by removing the row number span overlap with the second page in the pair. Reads of pages are performed based on their row number span data: only records of each first page of each pair having row numbers within the updated row number span data are read. | 2022-02-10 |
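The span-trimming step described above can be sketched as follows, with spans modeled as inclusive `(start, end)` row-number pairs (an assumption; the application does not specify a representation). Trimming the overlap out of the first page's span means its duplicated rows are simply never read:

```python
def trim_overlap(span_a, span_b):
    """Remove from span_a (the first page of a pair) the row numbers that
    also fall inside span_b (the second page). Returns a list of remaining
    spans: empty if fully covered, two spans if the overlap is interior."""
    a_start, a_end = span_a
    b_start, b_end = span_b
    if b_end < a_start or b_start > a_end:
        return [span_a]                      # no overlap: span unchanged
    remaining = []
    if a_start < b_start:
        remaining.append((a_start, b_start - 1))  # rows below the overlap
    if a_end > b_end:
        remaining.append((b_end + 1, a_end))      # rows above the overlap
    return remaining

def read_page(records, spans):
    """Read only records (row_number, payload) whose row numbers fall
    inside the page's updated row number spans."""
    return [r for r in records
            if any(lo <= r[0] <= hi for lo, hi in spans)]
```

Duplicated rows remain physically present in the first page but are excluded from reads, which is how deduplication is achieved without rewriting pages.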
20220043788 | PREDICTING TYPES OF RECORDS BASED ON AMOUNT VALUES OF RECORDS - Some embodiments provide a non-transitory machine-readable medium that stores a program. The program queries a database for a subset of a plurality of records in the database. Each record in the plurality of records includes a value for a first field and a second value for a second field. The program further normalizes the first value of the first field of each record in the subset of the plurality of records. The program also divides the subset of the plurality of records into a plurality of groups of records based on the second values of the second field. The program further generates a function for predicting a type of a particular record based on the value of the field of the particular record. | 2022-02-10 |
20220043789 | DATA DEDUPLICATION IN DATA PLATFORMS - One embodiment of the invention provides a method for data deduplication storage management in a data platform including a plurality of data stores. The method comprises, for each data store of the plurality of data stores, determining a corresponding multi-level signature mapping data content of the data store into an ordered logical form comprising a plurality of data abstraction levels, determining a data similarity between the data store and each other data store of the plurality of data stores based on the multi-level signature corresponding to the data store and another multi-level signature corresponding to the other data store, and determining data usage of the data content of the data store. The method further comprises improving storage in the data platform by detecting duplicate data across the plurality of data stores based on each data similarity determined and each data usage determined. | 2022-02-10 |
20220043790 | Event-Driven Computer Modeling System for Time Series Data - A system stores instructions including, in response to receiving user input, identifying a first event type and a first security identifier and obtaining a first set of event dates from the event database and, for each event date of the first set of event dates, obtaining a corresponding event value on the corresponding event date of the first security identifier. The instructions include, for a first day related to each event date of the first set of event dates: obtaining a corresponding value on the first day of the first security identifier, determining a corresponding difference value between the corresponding event value and the corresponding value, and storing the corresponding difference value in a set of difference values. The instructions include calculating an average difference on the first day using the set of difference values and displaying the average difference and an event indicator corresponding to the first event type. | 2022-02-10 |
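A minimal sketch of the difference-value computation in the abstract above, assuming values indexed by integer day and a fixed day offset standing in for the "first day related to each event date" (both illustrative simplifications, not the patented model):

```python
def average_event_difference(values, event_dates, offset):
    """For each event date, take the security's value on the event date and
    its value `offset` days later, record the difference, and average the
    set of differences. `values` maps day index -> value; event dates
    without data on both days are skipped."""
    diffs = []
    for d in event_dates:
        if d in values and (d + offset) in values:
            diffs.append(values[d + offset] - values[d])
    return sum(diffs) / len(diffs) if diffs else None
```

The averaged difference is what would be displayed alongside the event indicator for the selected event type and security identifier.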
20220043791 | AUTOMATED LOG-BASED REMEDIATION OF AN INFORMATION MANAGEMENT SYSTEM - Systems and processes disclosed herein perform an automatic remediation process. The automatic remediation process may be a log-based remediation process. Systems disclosed herein may obtain log files from an information management system and determine the occurrence of errors at the information management system based on error codes included in the logs. Further, the systems may access a knowledgebase to determine whether solutions for the errors have been previously generated. The solutions may include patches or hotfixes that can be applied to the information management system without removing user-access or stopping execution of the information management system. The systems may automatically update the information management system to address the errors. Alternatively, or in addition, the systems may alert a user, such as an administrator, of the existence of a solution to the error, and whether the solution may be applied without interrupting service or access to the information management system. | 2022-02-10 |
20220043792 | SYSTEMS AND METHODS FOR STORING, UPDATING, SEARCHING, AND FILTERING TIME-SERIES DATASETS - A method includes generating from a time-series dataset multiple corresponding time-slice datasets. Each time-slice dataset has a corresponding time-slice time index and includes field-value data strings and associated field-value-time-index data strings, or pointers indicating the corresponding strings in an earlier time-slice dataset, that are the latest in the time-series dataset that are also earlier than the corresponding time-slice time index. A query of the time-series dataset for latest data records earlier than a given query time index is performed by using the time-slice datasets to reduce or eliminate the need to directly access or interrogate the time-series dataset. | 2022-02-10 |
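A toy version of the time-slice idea described above, omitting the pointer indirection mentioned in the abstract (each slice here stores the latest pair directly rather than pointing into an earlier slice); all names are illustrative:

```python
import bisect

def build_time_slices(series, slice_times):
    """For each slice time T, record the latest (time_index, value) pair in
    the time-series that is strictly earlier than T. `series` is a list of
    (time_index, value) tuples sorted by time_index."""
    times = [t for t, _ in series]
    slices = {}
    for T in slice_times:
        i = bisect.bisect_left(times, T)        # count of entries with time < T
        slices[T] = series[i - 1] if i > 0 else None
    return slices

def query_latest_before(slices, series, query_time):
    """Answer a 'latest record earlier than query_time' query, starting from
    the nearest slice at or below query_time. (The residual scan here walks
    the whole series for simplicity; a real system would only scan records
    between the slice time and the query time.)"""
    slice_times = sorted(t for t in slices if t <= query_time)
    best = slices[slice_times[-1]] if slice_times else None
    for t, v in series:
        if (best is None or t > best[0]) and t < query_time:
            best = (t, v)
    return best
```

Precomputing the slices is what lets most queries avoid interrogating the full time-series dataset.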
20220043793 | MANIPULATION AND/OR ANALYSIS OF HIERARCHICAL DATA - Embodiments of methods, apparatuses, devices, and/or systems for manipulating hierarchical sets of data are disclosed. In particular, methods, apparatuses, devices, and/or systems for analyzing hierarchical data are disclosed. | 2022-02-10 |
20220043794 | MULTIMODAL TABLE ENCODING FOR INFORMATION RETRIEVAL SYSTEMS - Multimodal table encoding, including: Receiving an electronic document that contains a table. The table includes multiple rows, multiple columns, and a schema comprising column labels or row labels. The electronic document includes a description of the table which is located externally to the table. Next, operating separate machine learning encoders to separately encode the description, schema, each of the rows, and each of the columns of the table, respectively. The schema, the rows, and the columns are encoded together with end-of-column tokens and end-of-row tokens that mark an end of each column and row, respectively. Then, applying a machine learning gating mechanism to the encoded description, encoded schema, encoded rows, and encoded columns, to produce a fused encoding of the table, wherein the fused encoding is representative of both a structure of the table and a content of the table. | 2022-02-10 |
20220043795 | INFORMATION PROCESSING APPARATUS AND DATA PROCESSING METHOD - A non-transitory computer-readable recording medium has stored therein a program that causes a computer to execute a process, the process including generating token information that indicates a characteristic of an associating relation between input data and output data obtained by converting the input data according to a predetermined rule, and when searching for one or more intermediate programs used to generate a conversion program for converting the input data into the output data among multiple intermediate programs stored in a storage, excluding an intermediate program that does not correspond to the token information from candidates of the one or more intermediate programs. | 2022-02-10 |
20220043796 | DISTRIBUTED PESSIMISTIC LOCK BASED ON HBASE STORAGE AND THE IMPLEMENTATION METHOD THEREOF - A distributed pessimistic lock based on HBase storage and a method for implementing a database pessimistic lock; the distributed pessimistic lock includes a lock manager configured to be installed on a Region of a RegionServer node of an HBase system, and the lock manager exposes lock and unlock interfaces. The distributed pessimistic lock, an operation transaction, and a lock holder form a cross-linked list: the horizontal dimension carries information about the current data row, the vertical dimension carries information about the operation transaction, and the intersection point between the two dimensions is the lock holder. By installing the lock manager on a Region node of an HBase storage system, the lock manager locks and unlocks data operations of the HBase system with the distributed pessimistic lock. | 2022-02-10 |
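The cross-linked structure described in the abstract can be sketched with two indexes over shared lock-holder objects: one keyed by data row (the horizontal dimension) and one keyed by transaction (the vertical dimension). This is a minimal in-process sketch; the class and method names are hypothetical and this is not the HBase coprocessor API.

```python
from collections import defaultdict

class LockHolder:
    """Intersection point of a data row (horizontal) and a transaction (vertical)."""
    def __init__(self, row_key: str, txn_id: str):
        self.row_key = row_key
        self.txn_id = txn_id

class LockManager:
    """Toy per-Region lock manager exposing lock and unlock interfaces."""
    def __init__(self):
        self.by_row = defaultdict(list)  # horizontal dimension: row -> holders
        self.by_txn = defaultdict(list)  # vertical dimension: txn -> holders

    def lock(self, row_key: str, txn_id: str) -> bool:
        # Pessimistic: refuse immediately if another holder owns the row.
        if self.by_row[row_key]:
            return False
        holder = LockHolder(row_key, txn_id)
        self.by_row[row_key].append(holder)
        self.by_txn[txn_id].append(holder)
        return True

    def unlock_txn(self, txn_id: str) -> None:
        # Release every row held by the transaction via the vertical index.
        for holder in self.by_txn.pop(txn_id, []):
            self.by_row[holder.row_key].remove(holder)
```

Keeping both indexes over the same holder objects lets a conflicting lock request fail fast by row, while a commit or abort releases all of a transaction's rows in one pass.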
20220043797 | VIRTUAL DATASET MANAGEMENT DATABASE SYSTEM - A virtual dataset may be created in a database system. The virtual dataset may include data items stored in a storage system that are each associated with a respective label. The virtual dataset may include a first changeset identifying the data items, and may be updated to include a second changeset identifying different data items later stored in the storage system and included in the virtual dataset. Access to a learning dataset that includes either the first changeset, the second changeset, or both, may be provided upon request. | 2022-02-10 |
20220043798 | SYSTEM AND METHOD FOR IMPROVING DATA VALIDATION AND SYNCHRONIZATION ACROSS DISPARATE PARTIES - Systems and methods allow for a variety of partners to store information in a database utilizing connected services to securely allow retrieval of such data by the partners. A collection of data points that make up a record allows for positive record matching. Individual data elements are generally stored for each partner connected to the record. Partners can only store data elements associated with a unique, known record. Numerous partners may contribute their data in the form of record components and each retains access rights to their own private data which is not shared within the platform. This allows for different data about the same record and data point to be stored by each party (partner). Partners can retrieve their own values should the need arise and also have access to the sureEcosystem Value for fields where the partner has contributed qualifying data. The sureEcosystem Value comes from an algorithm utilizing value frequency, submission dates, partner rankings, record owner input and other validation components in its analysis of contributed information to determine the value most likely accurate at any given time. | 2022-02-10 |
20220043799 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR METADATA COMPARISON - A method, a device, and a computer program product for metadata comparison are provided in embodiments of the present disclosure. A method for metadata comparison includes setting a source pointer to point to a first node in a first metadata tree corresponding to source data; if it is determined that the first node has at least one child node in the first metadata tree, reading a first child node set of the first node from a first storage system; if it is determined that a target pointer points to a second node in a second metadata tree corresponding to target data, determining a second child node set of the second node, wherein the target data is a replicated version of the source data, and the second node is the same as the first node; and determining a differential metadata tree of the first metadata tree with respect to the second metadata tree at least in part by determining a difference between the first child node set and the second child node set. | 2022-02-10 |
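The child-node-set comparison at the heart of the abstract above can be sketched as a recursive set difference over two metadata trees. This is a simplified sketch under the assumption that nodes are modeled as nested name-to-children dictionaries; the function name and representation are illustrative, not the disclosed implementation.

```python
def diff_children(source_children: dict, target_children: dict) -> dict:
    """Compare the child-node sets of matching nodes in two metadata trees.

    Returns the differential subtree: nodes present in the source tree
    (the source data) but absent from the target tree (the replicated copy).
    """
    delta = {}
    for name, src_subtree in source_children.items():
        if name not in target_children:
            delta[name] = src_subtree  # whole subtree missing from target
        else:
            sub_delta = diff_children(src_subtree, target_children[name])
            if sub_delta:
                delta[name] = sub_delta  # descend only where sets differ
    return delta

source_tree = {"root": {"a": {}, "b": {"c": {}}}}
target_tree = {"root": {"a": {}, "b": {}}}
delta = diff_children(source_tree, target_tree)
# delta records that "c" under "b" exists only in the source tree
```

Because identical subtrees produce an empty delta and are never descended into further, the comparison only reads the child-node sets it actually needs, which matches the abstract's pointer-driven traversal of the two trees.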
20220043800 | METHODS, DEVICES AND SYSTEMS FOR REAL-TIME CHECKING OF DATA CONSISTENCY IN A DISTRIBUTED HETEROGENOUS STORAGE SYSTEM - A computer-implemented method may comprise executing, by a first plurality of replicated state machines, a sequence of ordered agreements to make mutations to a data stored in a first data storage service of a first type and executing, by a second plurality of replicated state machines, the sequence of ordered agreements to make mutations to the data stored in a second data storage service of a second type. First metadata of the mutated data stored in the first data storage service may then be received and stored, as may second metadata of the mutated data stored in the second data storage service. A comparison of the stored first and second metadata may then be carried out when the data stored in the first data storage service that corresponds to the first metadata and the data stored in the second data storage service that corresponds to the second metadata have been determined to have settled according to a predetermined one of the sequence of ordered agreements. A selected action may then be carried out depending upon a result of the comparison. | 2022-02-10 |
20220043801 | INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY STORAGE MEDIUM - The present disclosure is aimed at increasing security of a system that provides mediation in a procedure for updating personal information that is registered. An information processing device includes a storage unit configured to store personal information data including pieces of personal information that are collected on a per-user basis. Furthermore, the information processing device includes a controller configured to: update at least a piece of the personal information that is stored, based on a request acquired from a user, receive, from the user, selection of a service provider to which the personal information after update is to be transmitted, receive, from the user, selection of the personal information that is to be transmitted to the service provider that is selected, and transmit the personal information that is selected to the service provider that is selected, to update the personal information registered with the service provider that is selected. | 2022-02-10 |
20220043802 | METHOD AND SYSTEM FOR PERFORMING COMPUTATIONS IN A DISTRIBUTED SYSTEM - A data management system includes a first consistency zone, a second consistency zone, and a repository manager. The repository manager identifies a calculation event for a derived object of the second consistency zone, the derived object includes a cross-zone reference to the first consistency zone; and in response to identifying the calculation event: identifies an object in the first consistency zone associated with the cross-zone reference; sends a remote object request, to the first consistency zone, for the object with reference to an event of the first consistency zone specified by the cross-zone reference; obtains the object after sending the remote object request; and obtains a derived object instance based, at least in part, on a computation specification of the derived object and the object. | 2022-02-10 |
20220043803 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR DIGITAL CONTENT AUDITING AND RETENTION IN A GROUP BASED COMMUNICATION REPOSITORY - Embodiments of the present disclosure provide methods, systems, apparatuses, and computer program products for digital content auditing in a group based communication repository, where the group based communication repository comprises a plurality of enterprise-based digital content objects organized among a plurality of group-based communication channels. In one embodiment, a computing entity or apparatus is configured to receive an enterprise audit request, where the enterprise audit request comprises an audit credential and digital content object retrieval parameters. The apparatus is further configured to determine if the audit credential satisfies an enterprise authentication protocol. In circumstances where the audit credential satisfies the enterprise authentication protocol, the apparatus is configured to retrieve and output digital content objects based on the digital content object retrieval parameters, receive a violating digital content object identifier, and replace a violating digital content object with a temporary digital content object based on the violating digital content object identifier. | 2022-02-10 |
20220043804 | Evaluating Driving Data with A Modular and Configurable Evaluation Framework - In one embodiment, a method includes receiving a request for evaluating driving data included in a data log, accessing, based on the request, an evaluation configuration file that includes a metric-calculation configuration specifying one or more metric calculators configured to generate one or more output metrics from the driving data and a validation configuration configured to validate the one or more output metrics, instantiating the one or more metric calculators specified by the metric-calculation configuration included in the evaluation configuration file, determining particular driving data from the driving data included in the data log based on the one or more instantiated metric calculators, and generating the one or more output metrics for the particular driving data by using the instantiated one or more metric calculators. | 2022-02-10 |
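The configuration-driven flow above can be sketched as a registry of metric calculators that an evaluation configuration file selects and a validation section then checks. The calculator names (`max_speed`, `hard_brake_count`), the config keys, and the data-log shape are all hypothetical stand-ins for whatever the framework actually defines.

```python
import json

class MaxSpeed:
    """Illustrative metric calculator: peak speed over the data log."""
    def calculate(self, log):
        return max(frame["speed"] for frame in log)

class HardBrakeCount:
    """Illustrative metric calculator: count of sharp frame-to-frame drops."""
    def calculate(self, log):
        return sum(1 for a, b in zip(log, log[1:]) if a["speed"] - b["speed"] > 5)

# Registry from which calculators are instantiated by name.
CALCULATORS = {"max_speed": MaxSpeed, "hard_brake_count": HardBrakeCount}

# Evaluation configuration file: metric-calculation section plus
# a validation section for the output metrics.
config = json.loads(
    '{"metrics": ["max_speed", "hard_brake_count"],'
    ' "validation": {"max_speed": {"max": 120}}}'
)

data_log = [{"speed": 30}, {"speed": 42}, {"speed": 35}]

# Instantiate only the calculators the configuration names.
metrics = {name: CALCULATORS[name]().calculate(data_log)
           for name in config["metrics"]}

# Validate the generated output metrics against the validation configuration.
valid = all(metrics[m] <= rule["max"]
            for m, rule in config["validation"].items())
```

Keeping the calculator set behind a name registry is what makes the framework modular: swapping metrics or validation thresholds means editing the configuration file, not the evaluation code.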
20220043805 | DISTRIBUTED DATABASE ARCHITECTURE BASED ON SHARED MEMORY AND MULTI-PROCESS AND IMPLEMENTATION METHOD THEREOF - A distributed database architecture based on shared memory and multi-process includes a distributed database node. A system shared memory unit and a system process unit are built in a distributed database. The system shared memory unit includes a task stack information module and a shared cache module. A plurality of process tasks are built in the task stack information module. The process tasks carry system information for various purposes within the system process task information, and each piece of system information corresponds to one process task. By using a system shared memory unit at a distributed database node, the number of user connections in the distributed database architecture does not have a corresponding relationship with the number of processes or threads. The number of processes or threads of the entire node does not increase as the number of user connections increases. | 2022-02-10 |
20220043806 | PARALLEL DECOMPOSITION AND RESTORATION OF DATA CHUNKS - A system for parallel decomposition and restoration of data chunks is provided. A decomposable transformer service module analyzes and decomposes data into data chunks together with transformations that restore the original data from those chunks, enabling efficient storage, modification, and restoration of program code across a number of target devices using a central repository. | 2022-02-10 |
20220043807 | EFFICIENT UPDATING OF JOURNEY INSTANCES DETECTED WITHIN UNSTRUCTURED EVENT DATA - Systems and methods are disclosed for efficiently storing information identifying journey instances within unstructured event data of a data intake and processing system. Each journey instance is illustratively associated with a series of events within the unstructured event data occurring over a journey duration. Because the unstructured event data may be constantly updated, any given inspection of the event data may yield both complete and incomplete instances. Storage of instance data over time can require updating of prior incomplete journey instances with complete versions of such instances detected at a later point in time. However, a data store of the unstructured event data may be unsuited for such updating, as the store may maintain version information for deleted data to reduce the possibility of data loss. To address this issue, a separate structured data store, such as a columnar time series data store, is provided to efficiently store instance information. | 2022-02-10 |
20220043808 | ACCELERATION OF DATA QUERIES IN MEMORY - The present disclosure includes apparatuses, methods, and systems for acceleration of data queries in memory. An example host apparatus includes a controller configured to generate a search key, generate a query for particular data stored in an array of memory cells in a memory device, and send the query to the memory device. The query includes a command to search for the particular data. The query also includes a number of data fields for the particular data including a logical block address (LBA) for the particular data, an LBA offset for the particular data, and a parameter for an amount of bits in data stored in the memory device that do not match corresponding bits in the search key that would result in data not being sent to the host. | 2022-02-10 |
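The mismatch-threshold parameter described above can be sketched as a bitwise comparison between stored data and the search key: data is returned only if the number of disagreeing bit positions stays within the query's parameter. This is a host-side software sketch of the matching rule, not the in-memory implementation; the function name and integer encoding are illustrative.

```python
def query_matches(stored: int, search_key: int,
                  max_mismatches: int, width: int = 32) -> bool:
    """Count bit positions where the stored data disagrees with the
    search key; the record qualifies for return to the host only if
    the mismatch count does not exceed the query's parameter."""
    mask = (1 << width) - 1
    mismatches = bin((stored ^ search_key) & mask).count("1")
    return mismatches <= max_mismatches

# A stored record differing from the key in exactly 2 bit positions
# matches a query that tolerates 2 mismatches but not one that tolerates 1.
hit = query_matches(0b1011_0110, 0b1011_0011, max_mismatches=2)
miss = query_matches(0b1011_0110, 0b1011_0011, max_mismatches=1)
```

XOR followed by a population count is the natural primitive here, which is part of why this style of fuzzy matching is attractive to push down into the memory device rather than stream every candidate block back to the host.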
20220043809 | Method And System For Preview Of Search Engine Processing - Aspects of the disclosed technology include a method including receiving, from a user device, an identification of content; receiving, by a computing device, the identified content; accessing a subset of search engine processing logic; processing the received content using the subset of search engine processing logic, without indexing the received content to be accessed for responding to search queries from the search engine; generating a representation of a predicted search result of the received content based on the processing; and transmitting, to the user device, the representation of the predicted search result. | 2022-02-10 |
20220043810 | REINFORCEMENT LEARNING TECHNIQUES TO IMPROVE SEARCHING AND/OR TO CONSERVE COMPUTATIONAL AND NETWORK RESOURCES - Implementations are related to observing user interactions in association with searching for various files, and modifying a model and/or index based on such observations in order to improve the search process. In some implementations, a reinforcement learning model is utilized to adapt one or more search actions of the search process. Such search action(s) can include, for example, updating an index, reweighting terms in an index, modifying a search query, and/or modifying one or more ranking signal(s) utilized in ranking search results. A policy of the reinforcement learning model can be utilized to generate action parameters that dictate performance of search action(s) for a search query, dependent on an observed state that is based on the search query. The policy can be iteratively updated in view of a reward function, and observed user interactions across multiple search sessions, to generate a learned policy that reduces duration of search sessions. | 2022-02-10 |
20220043811 | METHODS AND SYSTEMS FOR DETECTING ANOMALIES IN CLOUD SERVICES BASED ON MINING TIME-EVOLVING GRAPHS - A method for anomaly detection of cloud services based on mining time-evolving graphs includes steps of receiving tracing data for a plurality of micro-services of the deployed cloud service, wherein the tracing data defines relationships between the plurality of micro-services of the deployed cloud service at a plurality of different time intervals, computing a functional graph based on the tracing data for each of the plurality of different time intervals, wherein nodes of each functional graph include the plurality of micro-services and wherein links between the nodes represent relationships between the plurality of micro-services, comparing the functional graphs for each of the plurality of time intervals to determine an anomaly score for each of the functional graphs, and detecting a presence of one or more anomalies based on the anomaly scores. | 2022-02-10 |
20220043812 | METHOD AND DEVICE OF DETECTING FAULT IN PRODUCTION - According to the embodiments of the present disclosure, there is provided a method and device for detecting faults in production, and a computer-readable storage medium. The method includes: determining whether a plurality of production paths in a production line are faultless in one or more production batches, based on production record data; and determining at least one of the plurality of production paths to be faulty, at least partially based on whether the plurality of production paths are faultless in the one or more production batches. | 2022-02-10 |
20220043813 | METHOD AND SYSTEM FOR ONTOLOGY DRIVEN DATA COLLECTION AND PROCESSING - Systems and methods to aid in the collection, representation, and mining of data are disclosed. More particularly, embodiments as disclosed may utilize a unifying format to represent data obtained or utilized by a system to facilitate linking between data from different sources and the commensurate ability to mine such data. Specifically, embodiments may represent data as graphs that comprise the concepts and relationships between those concepts. In this manner, concepts in graphs that represent distinct groupings of data may be mapped and knowledge mining with respect to these graphs facilitated. | 2022-02-10 |
20220043814 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM - An information processing device includes: a memory; and a processor coupled to the memory and configured to: manage, in association with a task, a metatask that creates new metadata for the new data obtained by executing the task, on the basis of the metadata set on the data to be processed; execute the metatask managed in association with the task when the task is executed on one or more pieces of data, creating new metadata on the basis of the metadata set on each piece of data; and set the new metadata on the new data obtained by executing the task on the one or more pieces of data. | 2022-02-10 |
20220043815 | DYNAMIC PRESENTATION OF SEARCHABLE CONTEXTUAL ACTIONS AND DATA - Disclosed methods and systems allow a central server to monitor electronic units of work accessible to a group of computers and generate a nodal data structure representing the units of work. The server then uses various protocols, such as hashing algorithms and/or executing artificial intelligence and machine learning models to identify similar and/or related units of work. The server then merges/links the nodes corresponding to the similar/related units of work. The server also monitors all user activities. When a user or a software system/service accesses electronic content on his, her, or its electronic device, the server identifies a node corresponding to the accessed electronic content and associated unit(s) of work and presents searchable data and actions related to the identified node and any related/linked nodes. | 2022-02-10 |