10th week of 2022 patent application highlights part 47
Patent application number | Title | Published |
20220075690 | ERROR HANDLING OPTIMIZATION IN MEMORY SUB-SYSTEM MAPPING - A system including a memory device having blocks of memory cells and a processing device operatively coupled to the memory device. The processing device to perform operations comprising: detecting an error event triggered within a source block of the memory cells; reading data from the source block; writing the data into a mitigation block that is different than the source block; and replacing, in a mapping data structure, a first identifier of the source block with a second identifier of the mitigation block. | 2022-03-10 |
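The error-handling flow in this abstract can be sketched in a few lines. This is a hypothetical Python model, not the claimed implementation: `MemoryDevice`, the string block identifiers, and the dict-based mapping table are all illustrative stand-ins for the hardware structures.

```python
# Sketch of the claimed flow: on an error event, data is read from the
# source block, written into a different mitigation block, and the
# mapping entry is repointed from the source to the mitigation block.

class MemoryDevice:
    def __init__(self):
        self.blocks = {}    # block identifier -> list of stored values
        self.mapping = {}   # logical unit -> physical block identifier

    def write_block(self, block_id, data):
        self.blocks[block_id] = list(data)

    def handle_error_event(self, logical_unit, mitigation_id):
        """Migrate data away from the failing block and remap."""
        source_id = self.mapping[logical_unit]
        data = self.blocks[source_id]               # read from source block
        self.blocks[mitigation_id] = list(data)     # write into mitigation block
        self.mapping[logical_unit] = mitigation_id  # replace first identifier
                                                    # with second identifier

dev = MemoryDevice()
dev.write_block("blk0", [1, 2, 3])
dev.mapping["lu0"] = "blk0"
dev.handle_error_event("lu0", "blk9")
```

After the call, reads through the mapping transparently land on the mitigation block.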
20220075691 | POOLING BLOCKS FOR ERASURE CODING WRITE GROUPS - A technique provides efficient data protection, such as erasure coding, for data blocks of volumes served by storage nodes of a cluster. Data blocks associated with write requests of unpredictable client workload patterns may be compressed. A set of the compressed data blocks may be selected to form a write group and an erasure code may be applied to the group to algorithmically generate one or more encoded blocks in addition to the data blocks. Due to the unpredictability of the data workload patterns, the compressed data blocks may have varying sizes. A pool of the various-sized compressed data blocks may be established and maintained from which the data blocks of the write group are selected. Establishment and maintenance of the pool enables selection of compressed data blocks that are substantially close to the same size and, thus, that require minimal padding. | 2022-03-10 |
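The pooling idea above — keep variable-size compressed blocks sorted, then pick a size-adjacent write group so padding to the largest member is minimal — can be illustrated with a small sketch. The sorted-window search below is one plausible selection policy, not the patented method, and `BlockPool` is a hypothetical name.

```python
# Maintain a pool of (size, id) entries sorted by size; a write group of k
# size-adjacent entries minimizes the padding needed to equalize members.
from bisect import insort

class BlockPool:
    def __init__(self):
        self.pool = []  # (size, block_id), kept sorted by size

    def add(self, block_id, size):
        insort(self.pool, (size, block_id))

    def select_write_group(self, k):
        """Pick k size-adjacent blocks minimizing total padding."""
        best, best_pad = None, None
        for i in range(len(self.pool) - k + 1):
            window = self.pool[i:i + k]
            # pad each member up to the largest size in the window
            pad = sum(window[-1][0] - size for size, _ in window)
            if best_pad is None or pad < best_pad:
                best, best_pad = window, pad
        return [block_id for _, block_id in best], best_pad

pool = BlockPool()
for bid, size in [("a", 100), ("b", 101), ("c", 140), ("d", 99), ("e", 300)]:
    pool.add(bid, size)
group, padding = pool.select_write_group(3)
```

Here the 99/100/101-byte blocks form the group, needing only 3 bytes of padding in total.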
20220075692 | Data Validation and Master Network Techniques - Disclosed herein are techniques and tools for verifying data for semantic correctness and/or verifying data for network correctness. In one respect, a method includes receiving input defining a validation point, the validation point comprising two or more validation functions applicable to (i) raw data and (ii) other data stored within a semantic network comprising nodes and links; importing source data; applying one or more transformations to the source data; populating the source data into one or more of the nodes and links comprising the semantic network; executing the validation point with respect to the source data; based on the executing, determining that one or more rules associated with the validation point are not satisfied; and based on the determining, revising either the source data or the other data stored within the semantic network. | 2022-03-10 |
20220075693 | ESTIMATION APPARATUS, ESTIMATION METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An estimation apparatus | 2022-03-10 |
20220075694 | AUTOMATIC RECLAMATION OF RESERVED RESOURCES IN A CLUSTER WITH FAILURES - When a failure occurs at a host in a cluster of hosts in a virtualized computing environment, virtualized computing instances that were running on the failed host are restarted on the active host(s) in the cluster. Resources to enable the restart of the virtualized computing instances are made available by powering off virtualized computing instances that are running on the active hosts. Determination of which virtualized computing instances to power off and to power on can be performed based on power off settings and restart priority levels that are configured for the virtualized computing instances. | 2022-03-10 |
20220075695 | BACKUP AND RECOVERY OF PRIVATE INFORMATION ON EDGE DEVICES ONTO SURROGATE EDGE DEVICES - A system and method for backing up critical data of edge devices includes originator, surrogate, and target edge devices as well as a vault-broker server. The critical data, encrypted, is transmitted to and stored by a surrogate. The association of originator and surrogate is managed by the vault-broker server. Encryption protects the data from recovery by unauthorized parties while allowing surrogate edge devices to determine if recovery attempts are made by authorized parties. | 2022-03-10 |
20220075696 | Application Exception Recovery - An application exception recovery method, an electronic device, a storage medium storing the recovery method, and a recovery apparatus. The method includes: storing page information of an exception page in response to an exception occurring in at least one application installed on an electronic device, wherein the exception page is a page displayed by the at least one application when the exception occurs; displaying a mask, wherein the mask is a picture displayed on at least a window of the at least one application during restart of the at least one application; restarting the at least one application, wherein restarting the at least one application comprises creating the exception page; and removing the mask. | 2022-03-10 |
20220075697 | VEHICULAR APPARATUS - A vehicular apparatus is provided in which a plurality of operating systems each perform a display on a display device. The vehicular apparatus includes a controller unit. The controller unit is configured to implement a virtual environment to operate the plurality of operating systems. The controller unit is further configured to monitor and detect a malfunction in the display performed on the display device in the virtual environment, and to shield a display area where an incorrect display may be performed in response to the malfunction being detected. | 2022-03-10 |
20220075698 | Method and Apparatus for Redundancy in Active-Active Cluster System - A method is applied to a system including a host cluster and at least one pair of storage arrays. The host cluster includes a quorum host, which includes a quorum unit. The quorum host is an application host having a quorum function. A pair of storage arrays includes a first storage array and a second storage array. The quorum host receives a quorum request, temporarily stops delivering a service to the first storage array and the second storage array, determines, from the first storage array and the second storage array, which is a quorum winning storage array and which is a quorum losing storage array according to logic judgment, stops the service with the quorum losing storage array, sends quorum winning information to the quorum winning storage array, and resumes the delivered service between the host cluster and the quorum winning storage array. | 2022-03-10 |
20220075699 | Deallocation within a storage system - Failure information associated with a plurality of blocks of a solid-state storage device of a plurality of solid-state storage devices is received. One or more blocks of the plurality of blocks storing uncorrectable data are identified based on the received failure information. A partial deallocation of the one or more blocks of the plurality of blocks is issued, the partial deallocation indicating that the one or more blocks store uncorrectable data. A remedial action associated with the one or more blocks of the plurality of blocks is performed. | 2022-03-10 |
20220075700 | SYSTEMS AND METHODS FOR CLOUD-BASED TESTING OF POS DEVICES - A computer-implemented method for cloud-based testing of a payment network may include receiving a test configuration for testing a payment processing network, configuring a simulated worker generator for generating a plurality of simulated workers according to the received test configuration, reading commands to be executed by each simulated worker among the plurality of simulated workers from a command bank according to the received test configuration, configuring the plurality of simulated workers according to the commands and the received test configuration, starting a swarm test of the payment processing network by the plurality of simulated workers, reading results of the swarm test from the plurality of simulated workers, and saving the results to storage. | 2022-03-10 |
20220075701 | TERMINAL MONITORING METHOD, PROGRAM, AND TERMINAL MONITORING SYSTEM - A terminal monitoring system monitors a plurality of terminals connected to a network. The terminal monitoring system includes an acceptance processor, a setting processor, an acquisition processor, and a display processor. The acceptance processor accepts setting of a priority for each of a plurality of types of information to be acquired from each of the plurality of terminals. The setting processor sets, based on the priority, an acquisition schedule indicating a schedule for acquiring each of the plurality of types of information. The acquisition processor acquires each of the plurality of types of information from each of the plurality of terminals in accordance with the acquisition schedule. The display processor causes a display to display the plurality of types of information acquired by the acquisition processor. | 2022-03-10 |
20220075702 | PROCESSING DEVICE, COMMUNICATION SYSTEM, AND NON-TRANSITORY STORAGE MEDIUM - A processing device includes: a first processor configured to execute a determination process; and a second processor configured to communicate with the first processor via an internal bus, wherein the determination process includes processes of determining that the abnormality occurs inside the processing device when first reference data transmitted to the second processor and first diagnostic data that is response data to the first reference data do not correspond to each other, and determining that the abnormality occurs in at least one of an external bus or an external device when the first reference data and the first diagnostic data correspond to each other and second reference data transmitted to the external device and second diagnostic data that is response data to the second reference data do not correspond to each other. | 2022-03-10 |
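The fault-localization logic in the abstract above reduces to a two-step comparison: an internal-bus loopback check first, then an external one. A minimal sketch of that decision procedure (function name and return values are illustrative):

```python
def diagnose(first_ref, first_diag, second_ref, second_diag):
    """Localize a fault per the two-step check described above.

    Returns "internal" if the internal-bus reference/diagnostic pair
    mismatches, "external" if the internal check passes but the external
    pair mismatches, and "ok" if both pairs correspond.
    """
    if first_ref != first_diag:
        return "internal"   # abnormality inside the processing device
    if second_ref != second_diag:
        return "external"   # abnormality on the external bus or device
    return "ok"
```

The ordering matters: an external mismatch is only meaningful once the internal path is known to be healthy.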
20220075703 | METHOD AND APPARATUS FOR GENERATING AN ARCHITECTURE WEIGHTED SCORING MODEL - Various methods, apparatuses/systems, and media for generating an architecture weighted scoring model are disclosed. A receiver receives a request from a user computing device to develop an application. A GUI displays a set of questions designed to gather meaningful information about the application to be developed by the user. A processor receives user input data on each answer to the set of questions; selects an architecture type by utilizing a first decision tree running on the backend of the GUI; generates a second decision tree to select architecture layers; generates a third decision tree to select product offerings; calculates a score for the selected architecture type based on analyzing aggregated information collected from data generated in response to the user's answers and the corresponding selections in each decision tree; checks that the architecture works; and generates an architecture weighted scoring model (AWSM) based on the calculated score. | 2022-03-10 |
20220075704 | PERFORM PREEMPTIVE IDENTIFICATION AND REDUCTION OF RISK OF FAILURE IN COMPUTATIONAL SYSTEMS BY TRAINING A MACHINE LEARNING MODULE - A machine learning module is trained by receiving inputs comprising attributes of a computing environment, where the attributes affect a likelihood of failure in the computing environment. In response to an event occurring in the computing environment, a risk score that indicates a predicted likelihood of failure in the computing environment is generated via forward propagation through a plurality of layers of the machine learning module. A margin of error is calculated based on comparing the generated risk score to an expected risk score, where the expected risk score indicates an expected likelihood of failure in the computing environment corresponding to the event. The weights of links that interconnect nodes of the plurality of layers are adjusted via back propagation to reduce the margin of error and improve the prediction of the likelihood of failure in the computing environment. | 2022-03-10 |
20220075705 | PROCESS TREE DISCOVERY USING A PROBABILISTIC INDUCTIVE MINER - Systems and methods for generating a process tree of a process are provided. An event log of the process is received. It is determined whether a base case applies to the event log and, in response to determining that the base case applies to the event log, one or more nodes are added to the process tree. In response to determining that the base case does not apply to the event log, the event log is split into sub-event logs based on a frequency of directly follows relations and a frequency of strictly indirectly follows relations for pairs of activities in the event log and one or more nodes are added to the process tree. The steps of determining whether a base case applies and splitting the event log are repeatedly performed for each respective sub-event log using the respective sub-event log as the event log until it is determined that the base case applies to the event log. The process tree is output. The process may be a robotic process automation process. | 2022-03-10 |
20220075706 | PROCESS TREE DISCOVERY USING A PROBABILISTIC INDUCTIVE MINER - Systems and methods for generating a process tree of a process are provided. An event log of the process is received. It is determined whether a base case applies to the event log and, in response to determining that the base case applies to the event log, one or more nodes are added to the process tree. In response to determining that the base case does not apply to the event log, the event log is split into sub-event logs and one or more nodes are added to the process tree. The steps of determining whether a base case applies and splitting the event log are repeatedly performed for each respective sub-event log using the respective sub-event log as the event log until it is determined that the base case applies to the event log. The process tree is output. The process may be a robotic process automation process. | 2022-03-10 |
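The recursive structure shared by the two applications above — test a base case, otherwise split the event log and recurse on each sub-log — can be sketched compactly. The split rule below (partition on one pivot activity into a sequence) is a deliberately naive placeholder, not the probabilistic inductive miner's frequency-based cut detection.

```python
# Recursive process-tree discovery skeleton: base case adds a leaf node,
# otherwise the log is split into sub-logs and each is mined recursively.

def discover(log):
    """log: list of traces (tuples of activity names) -> nested tree."""
    activities = {a for trace in log for a in trace}
    if len(activities) <= 1:                       # base case applies:
        return activities.pop() if activities else "tau"   # leaf node
    pivot = sorted(t[0] for t in log if t)[0]      # placeholder split rule
    left = [tuple(a for a in t if a == pivot) for t in log]
    right = [tuple(a for a in t if a != pivot) for t in log]
    # repeat for each sub-log until the base case applies everywhere
    return ("seq", discover(left), discover(right))

tree = discover([("a", "b"), ("a", "b", "b")])
```

Each recursion strictly shrinks the activity set, so the loop of "check base case, split, recurse" always terminates.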
20220075707 | MANAGING DATA FROM INTERNET OF THINGS (IOT) DEVICES IN A VEHICLE - A method and system for communicating with IoT devices connected to a vehicle to gather information related to device operation or performance is disclosed. The system makes a copy of at least a portion of the device's non-volatile memory and/or receives IoT device data (e.g., sensor data and/or log files) from an IoT device that recently failed. The system determines which log files and/or sensor data the IoT device created before and/or after a failure. After gathering this information, the system stores it or sends it to a storage destination for further analysis and diagnostics, so the failure can be troubleshot and a fix or software update sent to the IoT device. The information can also be placed into secondary storage to comply with regulatory, insurance, or legal requirements. | 2022-03-10 |
20220075708 | DEVICE VIRTUALIZATION AND SIMULATION OF A SYSTEM OF THINGS - Described herein is a system for device virtualization and simulation of the behaviour of a system of things for testing IoT applications. The system comprises a modelling engine, a test suite designer, a simulation engine, and a reporting engine. The modelling engine defines the attributes, data transfer protocols and normal data behaviour of any physical device, and defines virtual devices that can co-exist with actual devices and can be used to simulate both the things themselves and the gateway and the network. The test suite designer defines the device data behaviour under various test conditions. The simulation engine generates test data streams for the various test scenarios required by the software tester. The reporting engine generates various types of test reports. | 2022-03-10 |
20220075709 | METHOD FOR TESTING MOBILE APPLICATION AND ASSOCIATED APPARATUS AND SYSTEM - The application testing system and method provide an efficient and effective way to test multiple application variants of an application on at least one mobile device. The application testing system may cause a first application variant selection indication to be transmitted to at least one mobile device having the application. The first application variant selection indication may be configured to cause the mobile device to interact with the application according to a first application variant of the plurality of application variants. The application testing system may analyze data corresponding to the usage of the first application variant by the at least one mobile device, and cause a second application variant selection indication to be transmitted to the mobile device, wherein the second application variant selection indication is configured to cause the mobile device to interact with the application according to a second application variant of the plurality of application variants. | 2022-03-10 |
20220075710 | SYSTEM AND METHOD FOR IMPROVED UNIT TEST CREATION - System and method for creating unit tests include: constructing a control flow graph (CFG) representation of a computer program; utilizing the CFG to identify different potential execution paths and the different formulas corresponding to those paths; parsing the source code to generate an abstract syntax tree; analyzing the computer program, by utilizing the abstract syntax tree, to determine whether it provides the capability to set each of the associated variables in each formula; translating variables, fields, and expressions of the source code represented in each formula into decision variables; computing a solution to the list of pre-conditions from each formula, i.e., one of the potential solutions that specifies values for the decision variables; selecting the formula, from the plurality of formulas, with the fewest associated variables; and creating a unit test based on the data and the list of pre-conditions collected and solved. | 2022-03-10 |
20220075711 | HORIZONTALLY SCALABLE DISTRIBUTED SYSTEM FOR AUTOMATED FIRMWARE TESTING AND METHOD THEREOF - A system and method for automated firmware testing. The system includes test stations for testing firmware products. The test stations are split into pools, each pool including multiple test stations. The system also includes multiple execution instances, each executing tests for its associated pool. Each of the competing test stations delivers a test start event to the corresponding execution instance. The corresponding execution instance receives test start events from the competing test stations and executes a run test command on a test station selected from among them, such that the selected test station performs test execution based on a test sequence. | 2022-03-10 |
20220075712 | ALLOCATION OF MEMORY WITHIN A DATA TYPE-SPECIFIC MEMORY HEAP - One embodiment provides for a non-transitory machine-readable medium storing instructions to cause one or more processors to perform operations comprising receiving an instruction to dynamically allocate memory for an object of a data type and dynamically allocating memory for the object from a heap instance that is specific to the data type for the object, the heap instance including a memory allocator for the data type, the memory allocator generated at compile time for the instruction based on a specification of the data type for the heap instance. | 2022-03-10 |
20220075713 | PROCESSING-IN-MEMORY AND METHOD AND APPARATUS WITH MEMORY ACCESS - A processing-in-memory includes: a memory; a register configured to store offset information; and an internal processor configured to: receive an instruction and a reference physical address of the memory from a memory controller, determine an offset physical address of the memory based on the offset information, determine a target physical address of the memory based on the reference physical address and the offset physical address, and perform the instruction by accessing the target physical address. | 2022-03-10 |
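The address computation claimed above is small enough to state exactly: the internal processor combines a controller-supplied reference address with an offset held in a register to form the target address. The plain addition below is one plausible combination (the abstract does not fix the arithmetic), and the class name is illustrative.

```python
# Minimal model of the PIM addressing step: target = reference + offset,
# with the offset taken from the processing-in-memory's own register.

class PIMAddressUnit:
    def __init__(self, offset):
        self.offset_register = offset   # offset information in the register

    def target_address(self, reference_pa):
        """Derive the target physical address from the reference address."""
        return reference_pa + self.offset_register

pim = PIMAddressUnit(offset=0x40)
```

An instruction arriving with reference address `0x1000` would then be performed against `0x1040`.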
20220075714 | DATA MERGE METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROL CIRCUIT UNIT - A data merge method for a rewritable non-volatile memory module including a plurality of physical units is provided. The method includes: selecting at least one first physical unit and at least one second physical unit from the physical units; reading first mapping information from the rewritable non-volatile memory module, and the first mapping information includes mapping information of the first physical unit and mapping information of the second physical unit; copying valid data collected from the first physical unit and valid data collected from the second physical unit to at least one third physical unit of the physical units according to the first mapping information; and when a data volume of valid data copied from the second physical unit to the third physical unit reaches a data volume threshold, stopping collecting valid data from the second physical unit, and continuing collecting valid data from the first physical unit. | 2022-03-10 |
20220075715 | MEMORY MANAGEMENT METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROL CIRCUIT UNIT - A data management method, a memory storage device and a memory control circuit unit. The method includes: executing one or more read commands, and recording a physical unit having a first variation of a read count greater than a read disturb threshold as a risk physical unit; and when a data merging process is performed, dividing valid data stored in the risk physical unit into a plurality of copies and copying the copies into a plurality of recycling units. | 2022-03-10 |
20220075716 | Zoned Namespace Limitation Mitigation Using Sub Block Mode - Aspects of a storage device including a memory and a controller are provided which reduce or eliminate garbage collection in zoned namespace (ZNS) architectures by mapping zones to sub-blocks of blocks of the memory. Each zone includes a plurality of logical addresses. The controller determines a number of open zones, and maps the open zones to the sub-blocks in response to the number of open zones meeting a threshold. Thus, larger numbers of open blocks typically present in ZNS may be reduced, and increased block sizes due to scaling may be accommodated in ZNS. In some aspects, the controller receives a request from a host device to write data associated with the zones in sub-blocks, and maps each of the zones to at least one of the sub-blocks in response to the request. The request may indicate zones are partially unused. Thus, out of zone conditions may also be avoided. | 2022-03-10 |
20220075717 | Wear Leveling in Non-Volatile Memory - A method, circuit, and system for managing wear leveling in non-volatile memory. First, an original physical block address (PBA) for a logical block address (LBA) of a write operation may be received. The original PBA is one of a set of PBAs for data blocks of a non-volatile memory array. Each of these PBAs may be uniquely mapped to a particular LBA using a multistage interconnection network (MIN). A swap PBA may next be determined for the LBA. The swap PBA may be selected from the set of PBAs uniquely mapped using the MIN. Then, the MIN may be configured to map the LBA to the swap PBA. Finally, data of a first data block stored at the original PBA may be swapped with data of a second data block stored at the swap PBA. | 2022-03-10 |
20220075718 | Keeping Zones Open With Intermediate Padding - The present disclosure generally relates to methods of operating storage devices. The storage device comprises a controller and a media unit divided into a plurality of zones. Data associated with one or more first commands is written to a first portion of a first zone. Upon a predetermined amount of time passing, dummy data is written to a second portion of the first zone to fill the first zone to a zone capacity. Upon receiving one or more second commands to write data, a second zone is allocated and opened, and the data associated with the one or more second commands is written to a first portion of the second zone. The data associated with the one or more first commands is then optionally re-written to a second portion of the second zone to fill the second zone to a zone capacity, and the first zone is erased. | 2022-03-10 |
20220075719 | SYNCHRONIZING GARBAGE COLLECTION AND INCOMING DATA TRAFFIC - The technology describes performing garbage collection while data writes are occurring, which can lead to a conflict in that a new reference to an otherwise non-referenced candidate object for garbage collection is written after the non-referenced candidate object is detected. In one example implementation, orphaned binary large objects (BLOBs) that are not referenced by a descriptor file and are beyond a certain age are detected and deleted via an object references table traversal as part of garbage collection. Before reclaiming a deleted BLOB's capacity, a background process operates to restore the deleted BLOB if a new descriptor file reference to the BLOB was written during the object references table traversal. Capacity is only reclaimed after the object references table traversal and the background processing completes, for those BLOBs that were deleted and had not been restored. | 2022-03-10 |
20220075720 | TRI-COLOR BITMAP ARRAY FOR GARBAGE COLLECTION - A first object at a memory address is identified. A first index location in a bitmap that corresponds to that memory address is calculated. A bit is set at the first index location. A pointer to a child object within the first object is detected. A memory address of that child object is identified using the pointer. A second index location in the bitmap that corresponds to that memory address is calculated. A bit is set at the second index location. A bit is also set at a third index location, which is adjacent to the first index location. | 2022-03-10 |
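The indexing scheme above maps heap addresses to bitmap slots, then sets three kinds of bits: one for the object, one for each child reached through a pointer, and one at the slot adjacent to the object's own. A hedged sketch, assuming a word-aligned heap (the base address, alignment, and flat-list bitmap are illustrative):

```python
# Map an object's address to its bitmap index, then set the object's bit,
# the adjacent bit, and a bit for each child found through a pointer.

HEAP_BASE = 0x1000   # assumed start of the managed heap
ALIGN = 8            # assumed object alignment in bytes

def mark(bitmap, obj_addr, child_addrs):
    idx = (obj_addr - HEAP_BASE) // ALIGN      # first index location
    bitmap[idx] = 1                            # bit for the object itself
    bitmap[idx + 1] = 1                        # adjacent third index location
    for child in child_addrs:                  # follow pointers to children
        bitmap[(child - HEAP_BASE) // ALIGN] = 1   # second index location
    return bitmap

bm = mark([0] * 16, 0x1010, [0x1040])
```

With these parameters the object at `0x1010` lands at index 2, its companion bit at index 3, and the child at `0x1040` at index 8.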
20220075721 | CONCURRENT MARKING GARBAGE COLLECTION - A computer-implemented method is provided for reducing Compare And Swap (CAS) operations in a concurrent marking Garbage Collection (GC) process that operates on objects corresponding to a bit map of multiple blocks. The method includes finding, from among the objects, live objects that belong to a same block in the bit map from among the multiple blocks when traversing object trees of the objects for GC marking. The method further includes loading a latest value of the same block from the bitmap, updating the latest value by setting corresponding marking bits in the bit map, and updating the same block in the bit map with a single CAS operation. | 2022-03-10 |
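The CAS-reduction idea above is that mark bits for live objects falling in the same bitmap block are OR-ed together locally and committed with one compare-and-swap on that block, instead of one CAS per object. In the sketch below the CAS is simulated in plain Python (a real collector would use a hardware atomic), and all names are illustrative.

```python
# Batch several mark bits for one bitmap block into a single CAS.

def cas(words, i, expected, new):
    """Simulated compare-and-swap on words[i]."""
    if words[i] == expected:
        words[i] = new
        return True
    return False

def mark_block(words, block_idx, bit_positions):
    """Set all mark bits for one block with one (retried) CAS."""
    while True:
        old = words[block_idx]                 # load latest value of block
        new = old
        for b in bit_positions:
            new |= 1 << b                      # set corresponding marking bits
        if cas(words, block_idx, old, new):    # single CAS commits them all
            return new

words = [0, 0]
mark_block(words, 0, [1, 3, 5])
```

Three live objects in the same block cost one CAS here rather than three; under contention the loop reloads the latest block value and retries.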
20220075722 | GARBAGE COLLECTION ADAPTED TO MEMORY DEVICE LIFE EXPECTANCY - Systems and methods for adapting garbage collection (GC) operations in a memory device to an estimated device age are discussed. An exemplary memory device includes a memory controller to track an actual device age, determine a device wear metric using a physical write count and total writes over an expected lifetime of the memory device, estimate a wear-indicated device age, and adjust an amount of memory space to be freed by a GC operation according to the wear-indicated device age relative to the actual device age. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the wear-indicated device age relative to the actual device age. | 2022-03-10 |
20220075723 | TILE BASED INTERLEAVING AND DE-INTERLEAVING FOR DIGITAL SIGNAL PROCESSING - Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses. | 2022-03-10 |
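As background for the tile-based scheme above, a plain row-column interleaver writes data row-by-row into an R x C grid and reads it out column-by-column; de-interleaving inverts that permutation. The sketch below shows only the permutation itself, not the two-stage DRAM tiling or the burst-friendly address sequences the application describes.

```python
# Row-column (block) interleaving and its inverse over a flat list.

def interleave(data, rows, cols):
    """Write row-by-row, read column-by-column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Invert the row-column permutation."""
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

block = list(range(6))
scrambled = interleave(block, 2, 3)
```

A 2x3 block `[0, 1, 2, 3, 4, 5]` interleaves to `[0, 3, 1, 4, 2, 5]`, and de-interleaving restores the original order.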
20220075724 | MEMORY CONTROLLERS INCLUDING EXAMPLES OF CALCULATING HAMMING DISTANCES FOR NEURAL NETWORK AND DATA CENTER APPLICATIONS - Examples of systems and methods described herein provide for the processing of image codes (e.g., binary embeddings) at a memory controller with various memory devices. Such image codes may be generated by various endpoint computing devices, such as Internet of Things (IoT) computing devices. Such devices can generate a Hamming processing request, having an image code of the image, to compare that representation of the image to other images (e.g., in an image dataset) to identify a match or a set of neural network results. Advantageously, examples described herein may be used in neural networks to facilitate the processing of datasets, so as to increase the rate and amount of processing of such datasets. For example, comparisons of image codes can be performed “closer” to the memory devices, e.g., at the memory controller coupled to memory devices. | 2022-03-10 |
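The core comparison such a controller performs is simple to state: the Hamming distance between two binary image codes is the popcount of their XOR, and matching is a nearest-neighbor scan over the dataset. A software sketch of that computation (the controller would do this in hardware near the memory devices):

```python
# Hamming distance between binary codes, and nearest-code matching.

def hamming(a, b):
    """Number of differing bits between two integer bit-codes."""
    return bin(a ^ b).count("1")

def nearest(query, codes):
    """Return the stored code with the smallest Hamming distance."""
    return min(codes, key=lambda c: hamming(query, c))

match = nearest(0b1011, [0b0000, 0b1110, 0b1010])
```

For the query `0b1011`, the distances to the three stored codes are 3, 2, and 1, so `0b1010` is the match.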
20220075725 | TRANSMITTERS FOR GENERATING MULTI-LEVEL SIGNALS AND MEMORY SYSTEM INCLUDING THE SAME - A multi-level signal transmitter includes a voltage selection circuit, which is configured to select one amongst a plurality of driving voltages, which have different voltage levels, in response to input data including at least two bits of data therein. A driver circuit is also provided, which is configured to generate an output data signal as a multi-level signal, in response to the selected one of the plurality of driving voltages. This selected signal is provided as a body bias voltage to at least one transistor within the driver circuit. This driver circuit may include a totem-pole arrangement of first and second MOS transistors having respective first and second body bias regions therein, and at least one of the first and second body bias regions may be responsive to the selected one of the plurality of driving voltages. | 2022-03-10 |
20220075726 | TRACKING REPEATED READS TO GUIDE DYNAMIC SELECTION OF CACHE COHERENCE PROTOCOLS IN PROCESSOR-BASED DEVICES - Tracking repeated reads to guide dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) and a central ordering point circuit (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation. The COP's selection is based on protocol preference indicators generated by the PEs using repeat-read indicators that each PE maintains to track whether a coherence granule was repeatedly read by the PE (e.g., as a result of polling reads, or as a result of re-reading the coherence granule after it was evicted from a cache due to an invalidating snoop). After selecting the cache coherence protocol, the COP sends a response message to the PEs indicating the selected cache coherence protocol. | 2022-03-10 |
20220075727 | PRE-FETCH FOR MEMORY SUB-SYSTEM WITH CACHE - Various embodiments described herein provide for a pre-fetch operation on a memory sub-system, which can help avoid a cache miss when the memory sub-system subsequently processes a read command from a host system. | 2022-03-10 |
20220075728 | ELECTRONIC DEVICE AND MAGNETIC DISK DEVICE - According to one embodiment, an electronic device includes an interface configured to carry out communication according to a predetermined protocol, and a control section configured to add a response frame to a response to a command to be received through the interface, and transmit the response to which the response frame is added through the interface. The control section includes a setting section configured to set an arbitrarily settable field included in the response frame to a plurality of sections. | 2022-03-10 |
20220075729 | HYBRID STORAGE DEVICE WITH THREE-LEVEL MEMORY MAPPING - A hybrid storage device with three-level memory mapping is provided. An illustrative device comprises a primary storage device comprising a plurality of primary sub-blocks; a cache memory device comprising a plurality of cache sub-blocks implemented as a cache for the primary storage device; and a controller configured to map at least one portion of one or more primary sub-blocks of the primary storage device stored in the cache to a physical location in the cache memory device using at least one table identifying portions of the primary storage device that are cached in one or more of the cache sub-blocks of the cache memory device, wherein a size of the at least one table is independent of a capacity of the primary storage device. | 2022-03-10 |
20220075730 | Concurrent Cache Lookups Using Partial Identifiers - To perform a lookup for a group of plural portions of data in a cache together, a first part of an identifier for a first one of the portions of data in the group is compared with corresponding first parts of the identifiers for cache lines in the cache, the first part of the identifier for the first one of the portions of data in the group is compared with the corresponding first parts of the identifiers for the remaining portions of data in the group of plural portions of data, and a remaining part of the identifier for each portion of data is compared with the corresponding remaining parts of identifiers for cache lines in the cache. It is then determined whether a cache line for any of the portions of data in the group is present in the cache, based on the results of the comparisons. | 2022-03-10 |
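The comparison-sharing idea in 20220075730 can be sketched as follows: each identifier is split into a high part and a low part, the high part of the first group member is compared once against all cache tags, and any group member sharing that high part then needs only a low-part comparison. The bit split and the plain tag list are illustrative assumptions, not the patented layout.

```python
def group_lookup(group_ids, cache_tags, split_bit=8):
    """Group cache lookup that shares the high-part comparison.

    One wide comparison finds the cache tags matching the first member's
    high part; members sharing that high part reuse it and compare only
    their low parts. The 8-bit split is an arbitrary assumption."""
    mask = (1 << split_bit) - 1
    first_hi = group_ids[0] >> split_bit
    # Single shared comparison: low parts of tags whose high part matches.
    hi_matches = [tag & mask for tag in cache_tags if tag >> split_bit == first_hi]
    hits = []
    for ident in group_ids:
        if ident >> split_bit != first_hi:
            hits.append(ident in cache_tags)  # cannot reuse; full compare
        else:
            hits.append(ident & mask in hi_matches)  # low-part compare only
    return hits
```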
20220075731 | SYSTEM AND METHOD FOR DETERMINING CACHE ACTIVITY AND OPTIMIZING CACHE RECLAMATION - Methods for determining cache activity and for optimizing cache reclamation are performed by systems and devices. A cache entry access is determined at an access time, and a data object of the cache entry for a current time window is identified that includes a time stamp for a previous access and a counter index. A conditional counter operation is then performed on the counter associated with the index to increment the counter when the time stamp is outside the time window or to maintain the counter when the time stamp is within the time window. A counter index that identifies another counter for a previous time window where the other counter value was incremented for the previous cache entry access causes the other counter to be decremented. A cache configuration command to reclaim, or additionally allocate space to, the cache is generated based on the values of the counters. | 2022-03-10 |
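The timestamp-plus-counter-index scheme in 20220075731 can be approximated in a few lines of Python: each cache entry carries its last-access time and the index of the counter it last incremented; an access outside the current window retires the old count and increments the current window's counter, while an access inside the window leaves the counters unchanged. The class name, window handling, and two-counter layout are assumptions for illustration.

```python
class WindowedCacheActivity:
    """Per-window count of distinct cache entries accessed (sketch)."""

    def __init__(self, window_seconds, num_counters=2):
        self.window = window_seconds
        self.counters = [0] * num_counters
        self.meta = {}  # entry -> (last_access_time, counter_index)

    def on_access(self, entry, now):
        win = int(now // self.window)
        win_idx = win % len(self.counters)
        prev = self.meta.get(entry)
        if prev is not None:
            prev_time, prev_idx = prev
            if int(prev_time // self.window) == win:
                self.meta[entry] = (now, win_idx)
                return  # same window: conditional op keeps counter unchanged
            self.counters[prev_idx] -= 1  # retire the previous-window count
        self.counters[win_idx] += 1  # outside the window: increment
        self.meta[entry] = (now, win_idx)
```

A reclamation decision would then compare counter values across windows to judge whether the cache is under- or over-provisioned.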
20220075732 | DATA ALIGNMENT FOR LOGICAL TO PHYSICAL TABLE COMPRESSION - Methods, systems, and devices for data alignment for logical to physical table compression are described. A controller coupled with the memory array may receive a command to access a logical block address associated with a memory device. In some cases, a first portion of a physical address of the memory device associated with the logical block address may be identified. The controller may perform an operation on the logical block address included in the command and identify a second portion of the physical address based on performing the operation. The physical address of the memory device may be accessed based on identifying the first portion and the second portion. | 2022-03-10 |
20220075733 | MEMORY ARRAY PAGE TABLE WALK - An example memory array page table walk can include using an array of memory cells configured to store a page table. The page table walk can include using sensing circuitry coupled to the array. The page table walk can include using a controller coupled to the array. The controller can be configured to operate the sensing circuitry to determine a physical address of a portion of data by accessing the page table in the array of memory cells. The controller can be configured to operate the sensing circuitry to cause storing of the portion of data in a buffer. | 2022-03-10 |
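The page-table walk that 20220075733 performs in sensing circuitry is, logically, the standard multi-level translation; a minimal software model, with an assumed two-level address geometry, looks like this:

```python
def walk(root, vaddr, index_bits=4, page_bits=8, levels=2):
    """Two-level page-table walk with an illustrative address geometry:
    [level-1 index | level-2 index | page offset]. Real walks differ in
    widths and level counts; the leaf entry holds the physical frame base."""
    offset = vaddr & ((1 << page_bits) - 1)
    node = root
    shift = page_bits + index_bits * (levels - 1)
    for _ in range(levels):
        idx = (vaddr >> shift) & ((1 << index_bits) - 1)
        node = node[idx]
        if node is None:
            return None  # unmapped: would raise a page fault
        shift -= index_bits
    return node + offset  # last node reached is the physical frame base
```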
20220075734 | Reducing Translation Lookaside Buffer Searches for Splintered Pages - Systems, apparatuses, and methods for performing efficient translation lookaside buffer (TLB) invalidation operations for splintered pages are described. When a TLB receives an invalidation request for a specified translation context, and the invalidation request maps to an entry with a relatively large page size, the TLB does not know if there are multiple translation entries stored in the TLB for smaller splintered pages of the relatively large page. The TLB tracks, using a per-context sticky bit, whether or not splintered pages for each translation context have been installed. If a TLB invalidate (TLBI) request is received, and splintered pages have not been installed, no searches are needed for splintered pages. To refresh the sticky bits, whenever a full TLB search is performed, the TLB rescans for splintered pages for other translation contexts. If no splintered pages are found, the sticky bit can be cleared and the number of full TLBI searches is reduced. | 2022-03-10 |
20220075735 | Limiting Translation Lookaside Buffer Searches Using Active Page Size - Systems, apparatuses, and methods for limiting translation lookaside buffer (TLB) searches using active page size are described. A TLB stores virtual-to-physical address translations for a plurality of different page sizes. When the TLB receives a command to invalidate a TLB entry corresponding to a specified virtual address, the TLB performs, for the plurality of different page sizes, multiple different lookups of the indices corresponding to the specified virtual address. In order to reduce the number of lookups that are performed, the TLB relies on a page size presence vector and an age matrix to determine which page sizes to search for and in which order. The page size presence vector indicates which page sizes may be stored for the specified virtual address. The age matrix stores a preferred search order with the most probable page size first and the least probable page size last. | 2022-03-10 |
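The lookup-limiting logic of 20220075735 reduces to filtering a preference-ordered list of page sizes through a presence vector; only sizes whose presence bit is set are searched, most probable first. The page-size names and dict encoding below are illustrative assumptions.

```python
def tlb_search_order(presence_vector, age_order):
    """Limit TLB invalidation lookups: search only the page sizes whose
    presence bit is set, in the age-matrix order (most probable first)."""
    return [size for size in age_order if presence_vector.get(size, False)]
```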
20220075736 | DYNAMIC APPLICATION OF SOFTWARE DATA CACHING HINTS BASED ON CACHE TEST REGIONS - A processor applies a software hint policy to a portion of a cache based on access metrics for different test regions of the cache, wherein each test region applies a different software hint policy for data associated with cache entries in each region of the cache. One test region applies a software hint policy under which software hints are followed. The other test region applies a software hint policy under which software hints are ignored. One of the software hint policies is selected for application to a non-test region of the cache. | 2022-03-10 |
20220075737 | Secure Flash Controller - A computing device includes a non-volatile memory (NVM) interface and a processor. The NVM interface is configured to communicate with an NVM. The processor is configured to store in the NVM Type-Length-Value (TLV) records, each TLV record including one or more encrypted fields and one or more non-encrypted fields, the non-encrypted fields including at least respective validity indicators of the TLV records, to read the TLV records that include the encrypted fields and the non-encrypted fields from the NVM, and to invalidate selected TLV records by modifying the respective validity indicators of the selected TLV records that are stored in the non-encrypted fields. | 2022-03-10 |
20220075738 | SYSTEMS, METHODS AND APPARATUS FOR LOW LATENCY MEMORY INTEGRITY MAC FOR TRUST DOMAIN EXTENSIONS - The disclosed embodiments generally relate to methods, systems and apparatuses to authenticate instructions on a memory circuitry. In an exemplary embodiment, the disclosure relates to a computing device (e.g., a memory protection engine) to protect integrity of one or more memory circuitry. The computing device may include: a key-hash operator configured to provide a Message Authentication Code (MAC) for a Secure Hash Algorithm (SHA) as a function of a hash-key, MAC-key, metadata and data; a multi-round (MR) circuitry configured to receive the MAC from the key-hash operator and to compute substantially all SHA round-functions during each clock cycle, the multi-round circuitry further comprising combination logic to process all sub-round functions of the SHA function substantially simultaneously; and a Memory Integrity Pipeline (MIP) engine to compute a hash digest, the hash digest further comprising a MAC key, a metadata and the cache line data; the MIP further comprising an input prep logic, an SHA pipeline logic and a MAC validation logic. | 2022-03-10 |
20220075739 | MINIMIZING ENERGY CONSUMPTION BY PERIPHERAL MACHINES - A method of applying feedback control on peripheral units supplying energy to manufacturing units in a manufacturing facility uses a trained model, and includes: | 2022-03-10 |
20220075740 | PARALLEL PROCESSING ARCHITECTURE WITH BACKGROUND LOADS - Techniques for task processing using a parallel processing architecture with background loads are disclosed. A two-dimensional array of compute elements is accessed. Each compute element is known to a compiler and is coupled to its neighboring compute elements. Operation of the array is paused. The pausing occurs while a memory system continues operation. A bus coupling the array is repurposed. The repurposing couples one or more compute elements in the array to the memory system. A memory system operation is enabled during the pausing. Data is transferred from the memory system to the array of compute elements using the bus that was repurposed. The data from the memory system is transferred to scratchpad memory in the one or more compute elements within the two-dimensional array. The scratchpad memory provides operand storage. The data is tagged. The tagging guides the transferring to a particular compute element. | 2022-03-10 |
20220075741 | MEMORY SUB-SYSTEM MANUFACTURING MODE - A method includes enabling a manufacturing mode at least partially based on a first signal provided via one of a number of reserved pins of an interface connector. The method can further include providing, in response to enabling the manufacturing mode, a second signal to a memory component coupled to the interface connector via a number of other pins of the interface connector. | 2022-03-10 |
20220075742 | PCIE CONTROLLER AND LOOPBACK DATA PATH USING PCIE CONTROLLER - A PCIe controller and a loopback path using the PCIe controller. The PCIe controller includes: a transport layer transmission module, a transport layer reception module, a memory access module, and a memory, wherein the transport layer transmission module includes a first loopback control module, the transport layer reception module includes a second loopback control module, and the first loopback control module is coupled to the second loopback control module; the memory access module is coupled to the transport layer transmission module and the transport layer reception module, and the memory access module is also coupled to the memory. | 2022-03-10 |
20220075743 | DATA TRANSMISSION METHOD AND EQUIPMENT BASED ON SINGLE LINE - Disclosed is a data transmission method. The method includes: sending instruction information of data transmission to a slave node in a preset first cycle; judging, by the slave node, whether data should be sent according to the instruction information of data transmission; sending the data to the master node if the slave node judges that the data needs to be sent; and sending, by other slave nodes, the data sequentially to the master node according to a preset slave node priority, an electric potential condition and a data state of the other slave nodes in a preset second cycle. In a preset first cycle, data is actively requested from a slave node, and in a preset second cycle, other slave nodes can actively send the data to a master node according to a preset slave node priority, an electric potential condition and a data state of the other slave nodes. | 2022-03-10 |
20220075744 | Computer System Communication Via Sideband Processor - Techniques are disclosed relating to a method that includes monitoring, by a sideband processor, a plurality of operating conditions of a computer system using a first set of commands. This first set of commands is sent utilizing a particular command protocol over a particular communication bus. In addition, the sideband processor may be modified to support a second set of commands. The sideband processor may receive data for a particular device in the computer system. The sideband processor may modify a first command of the first set of commands to include a second command of the second set of commands. This second command may include an address associated with the particular device and at least a portion of the data. The sideband processor may then send the modified first command to a controller hub using the particular command protocol over the particular communication bus. | 2022-03-10 |
20220075745 | DETECTION OF DISPLAYPORT ALTERNATE MODE COMMUNICATION AND CONNECTOR PLUG ORIENTATION WITHOUT USE OF A POWER DISTRIBUTION CONTROLLER - This disclosure generally relates to USB TYPE-C, and, in particular, DISPLAYPORT Alternate Mode communication in a USB TYPE-C environment. In one embodiment, a device determines a DISPLAYPORT mode and determines an orientation of a USB TYPE-C connector plug. A multiplexer multiplexes a DISPLAYPORT transmission based in part on the determined orientation of the USB TYPE-C connector plug. | 2022-03-10 |
20220075746 | INTERCONNECTED SYSTEMS FENCE MECHANISM - An apparatus to facilitate memory barriers is disclosed. The apparatus comprises an interconnect, a device memory, a plurality of processing resources, coupled to the device memory, to execute a plurality of execution threads as memory data producers and memory data consumers to a device memory and a system memory and fence hardware to generate fence operations to enforce data ordering on memory operations issued to the device memory and a system memory coupled via the interconnect. | 2022-03-10 |
20220075747 | SUPPORT FOR MULTIPLE HOT PLUGGABLE DEVICES VIA EMULATED SWITCH - A networking device, system, and method of operating a networking device are provided. The illustrative networking device is disclosed to include one or more physical ports, an emulated switch positioned between the one or more physical ports and a host device, and one or more emulated devices positioned between the emulated switch and the one or more physical ports. The one or more emulated devices may be configured to populate the one or more physical ports. | 2022-03-10 |
20220075748 | BASE MODULE OF A NETWORK ASSEMBLY AND METHOD FOR CONFIGURING AN EXTENSION MODULE OF THE NETWORK ASSEMBLY - A base module of a network assembly comprises a logic unit configured to be connected to a communication bus for providing communication between the logic unit and one or several extension modules, in particular one or several functional devices and/or communication modules, for function extension or function provision of the network assembly. A network assembly comprising the base module and methods for configuring an extension module of the network assembly are further provided. | 2022-03-10 |
20220075749 | SYSTEMS AND METHODS FOR DYNAMIC CONFIGURATION OF EXTERNAL DEVICES - A computer-implemented method includes connecting a target system to a development system via an application and connecting an external device to an interface of the target system. The method also includes obtaining information about the target system and transmitting the information to the development system and instructing initiation of a driver associated with the interface based on a command received from the development system. The method further includes receiving a command to define and store a communications bus associated with the driver from the development system, receiving information about the external device from the development system, and associating the information about the external device with the communications bus. | 2022-03-10 |
20220075750 | ARTICLE, DEVICE, AND TECHNIQUES FOR SERVERLESS STREAMING MESSAGE PROCESSING - A non-transitory computer-readable storage medium may be executable by a processor to receive a designation of a message bus producer, a set of business logic to be stored in a set of containers, a designation of a message bus consumer, and a designation of a set of message-handling functions. The non-transitory computer-readable storage medium may generate a serverless application stack, based upon the message bus producer, the set of business logic, the message bus consumer, and the set of message-handling functions. The non-transitory computer-readable storage medium may cause the serverless application stack to receive a message stream from the message bus producer as streaming data, process the message stream according to at least one function, stored in the set of containers, perform at least one message-handling function of the set of message-handling functions on the message stream, and transport the set of messages to the message bus consumer. | 2022-03-10 |
20220075751 | MULTI-CHIP MODULE WITH CONFIGURABLE MULTI-MODE SERIAL LINK INTERFACES - A configurable serial link interface circuit includes a first transceiver for coupling to a first serial link. The first transceiver includes a first transmit circuit to selectively drive first transmit data along the first serial link and a first receive circuit. The first receive circuit selectively receives first receive data along the first serial link. The interface includes a second transceiver for coupling to a second serial link. The second transceiver includes a second transmit circuit to selectively drive second transmit data along the second serial link, a second receive circuit to selectively receive second receive data along the second serial link, and control circuitry to control the selectivity of the first transmit circuit, the second transmit circuit, the first receive circuit and the second receive circuit. For a first mode of operation, the control circuitry configures the first and second transceivers to define a dual-duplex architecture. For a second mode of operation, the control circuitry configures the first and second transceivers to define a single-duplex architecture. | 2022-03-10 |
20220075752 | PRECODING MECHANISM IN PCI-EXPRESS - In embodiments, an apparatus for serial communication includes a transceiver, to receive a precoding request from a downlink receiver across a serial communication link, and to transmit data bits to the downlink receiver over the serial communication link. In embodiments, the apparatus further includes a precoder, coupled to the transceiver, to: receive scrambled data bits of a subset of the data bits to be transmitted, from a coupled scrambler, and, in response to the request from the downlink receiver, precode the scrambled data bits, and output the precoded scrambled data bits to the transceiver, for transmission to the downlink receiver across the serial communication link together with other unscrambled data bits. | 2022-03-10 |
20220075753 | SINGLE-WIRE TWO-WAY COMMUNICATION CIRCUIT AND SINGLE-WIRE TWO-WAY COMMUNICATION METHOD - A single-wire two-way communication circuit includes two chips and a data transmission line coupled between the two chips. Each chip includes a random access memory, a data control module, a data line control module, and a data line monitoring module. The random access memory stores data. The data control module obtains data of a first address from the random access memory and stores data of a second address received from the other chip into a second address of the random access memory. The data line control module sends the obtained data of the first address to the other chip through the data transmission line to perform a write operation. The data line monitoring module receives the data of the second address sent by the other chip through the data transmission line to perform a read operation. | 2022-03-10 |
20220075754 | SYSTEMS AND METHODS FOR MONITORING SERIAL COMMUNICATION BETWEEN DEVICES - A system for monitoring inter-integrated circuit (I2C) communication includes a power supply, a battery backup unit, an I2C serial clock line (SCL) coupled between the power supply and the battery backup unit, an I2C serial data line (SDA) coupled between the power supply and the battery backup unit, and a controller. A first monitor line is coupled between the controller and the I2C serial clock line, and a second monitor line is coupled between the controller and the I2C serial data line. The controller is configured to monitor a digital communication transmitted on the I2C serial clock and data lines between the power supply and the battery backup unit, interpret a message included in the monitored digital communication, and perform a control function according to the interpreted message. | 2022-03-10 |
20220075755 | METHOD FOR FORMING DATABASE FOR MEMORY TEST AND METHOD FOR TESTING MEMORY - The present application provides a method for forming a database for memory test and a method for testing a memory. The method includes: testing the memory multiple times at a set timestep taking a preset time value as a starting point under a memory parameter, a result including passed or failed; taking, as a subgroup, the testing results in which consecutive tests are passed under the memory parameter, the testing results being capable of forming at least one subgroup; taking the subgroup with the largest number of testing results as a calibration group, and acquiring a selected time value within a testing time range of the calibration group; taking a difference between the selected time value and the preset time value as a deviation value between a data strobe signal and a clock signal, which corresponds to the memory parameter; and changing the memory parameter and repeating the above steps. | 2022-03-10 |
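The calibration step in 20220075755 can be sketched as a scan for the longest run of consecutive passes, taking a time value inside that run (the midpoint, an assumed choice) and reporting its offset from the preset value as the strobe-to-clock deviation:

```python
def dqs_deviation(results, preset, timestep):
    """Pick the midpoint of the longest consecutive-pass run as the
    selected time value and return its offset from the preset value.
    results[i] is the pass/fail outcome at time preset + i*timestep."""
    best_start = best_len = run_start = run_len = 0
    for i, passed in enumerate(results):
        if passed:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    if best_len == 0:
        return None  # no passing window found for this memory parameter
    mid_index = best_start + (best_len - 1) / 2
    return mid_index * timestep  # (preset + mid*step) - preset
```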
20220075756 | DISTRIBUTED DIGITAL LEDGER TRANSACTION NETWORK FOR FLEXIBLE, LAZY DELETION OF DATA STORED WITHIN AN AUTHENTICATED DATA STRUCTURE - The present disclosure relates to systems, methods, and non-transitory computer readable storage media for implementing a scalable, secure, efficient, and adaptable distributed digital ledger transaction network. Indeed, the disclosed systems can reduce storage and processing requirements, improve security of implementing computing devices and underlying digital assets, accommodate a wide variety of different digital programs (or “smart contracts”), and scale to accommodate billions of users and associated digital transactions. For example, the disclosed systems can utilize a host of features that improve storage, account/address management, digital transaction execution, consensus, and synchronization processes. The disclosed systems can also utilize a new programming language that improves efficiency and security of the distributed digital ledger transaction network. | 2022-03-10 |
20220075757 | DATA READ METHOD, DATA WRITE METHOD, AND SERVER - This application provides a data read method. The data read method includes: A resource management server receives a data read request from a client. The data read request is used to request a plurality of files. The resource management server reads a replica of target data from a first data center. The target data includes data of different files among the plurality of files, the first data center is a data center with highest data locality among a plurality of data centers that store replicas of the target data, and data locality is used to indicate a degree of proximity between a replica of the target data stored in a data center and the target data. The resource management server sends, to the client, the replica of the target data read from the first data center. | 2022-03-10 |
20220075758 | OPERATIONS AND MAINTENANCE FILE PROTECTION PROCESSES - Operations and Maintenance Design drawing maintenance, As-Built drawing conformance, and Record drawing conformance processes for protecting the integrity of dynamically modified files through all phases of a drawing's lifecycle. | 2022-03-10 |
20220075759 | DISTRIBUTED LEDGER TECHNOLOGY PLATFORM - A distributed ledger system is described. The system includes a provider to provide a plurality of infrastructure resources, a client to access a first set of the plurality of resources, and an operator platform to facilitate access to the first set of resources from the providers, including a processor to generate one or more blocks of transaction data associated with each resource in the first set of resources to update chain code and measure usage of the first set of resources, wherein the chain code is stored in a distributed ledger database shared between the operator platform and the client. | 2022-03-10 |
20220075760 | SYSTEM TO SUPPORT NATIVE STORAGE OF A CONTAINER IMAGE ON A HOST OPERATING SYSTEM FOR A CONTAINER RUNNING IN A VIRTUAL MACHINE - Described herein are a system and method for forming a container image. The system and method include obtaining a first layer of a plurality of layers of the container image. The contents of the first layer are stored in a directory such that a first disk image layer file is mounted to the directory. A second layer of the plurality of layers is obtained, and the contents of the second layer are stored in the directory so that the first disk image layer includes contents of the first layer and the second layer. The first disk image layer is saved and is mountable and includes files of the container image. | 2022-03-10 |
20220075761 | PROCESSING LARGE MACHINE LEARNING DATASETS - Embodiments of the present invention provide methods, computer program products, and systems. Embodiments of the present invention can receive, by a computing device, a request to access a datapoint of a machine learning dataset contained in a database. Embodiments of the present invention can access, by the computing device, a virtual data frame that includes a schema which represents a structure of the machine learning dataset in the database. Embodiments of the present invention can retrieve, by the computing device, the datapoint of the machine learning dataset utilizing the virtual data frame and return, by the computing device, the retrieved datapoint in response to the request. | 2022-03-10 |
20220075762 | METHOD FOR CLASSIFYING AN UNMANAGED DATASET - A computer implemented method for classifying at least one source dataset of a computer system. The method may include providing a plurality of associated reference tables organized and associated in accordance with a reference storage model in the computer system. The method may also include calculating, by a data classifier application of the computer system, a first similarity score between the source dataset and a first reference table of the reference tables based on common attributes in the source dataset and a join of the first reference table with at least one further reference table of the reference tables having a relationship with the first reference table. The method may further include classifying, by the data classifier application, the source dataset by determining using at least the calculated first similarity score whether the source dataset is organized as the first reference table in accordance to the reference storage model. | 2022-03-10 |
20220075763 | SYSTEMS AND METHODS FOR REDUCING DATA COLLECTION BURDEN - A system for reducing data collection burden, comprising: one or more programs including instructions for: receiving a first set of metrics for a plurality of facilities; receiving data associated with the first set of metrics from one or more facilities of the plurality of facilities; determining one or more anomalies in the received data; removing the determined one or more anomalies from the received data; selecting a second set of metrics from the first set of metrics, wherein a number of metrics of the second set is less than a number of metrics of the first set of metrics; and outputting a recommendation applicable to the plurality of facilities based on the second set of metrics. | 2022-03-10 |
20220075764 | COMPARISON OF DATABASE DATA - Embodiments of the present disclosure relate to a method, system, and computer program product for comparison of database data. According to the method, a first tree structure corresponding to first data segments of first database data and a second tree structure corresponding to second data segments of second database data are at least partially obtained. Each node of the first or second tree structure indicating a characteristic value of at least one of the first or second data segments, and nodes of the first or second tree structure are divided into a first or second plurality of branches from a first or second root node based on update frequencies of the first or second data segments. A difference between the first data segments and the second data segments is determined by at least comparing characteristic values indicated by nodes in the obtained parts of the first and second tree structures. | 2022-03-10 |
20220075765 | MANAGING A LSM TREE OF KEY VALUE PAIRS THAT IS STORED IN A NON-VOLATILE MEMORY - A method for managing a log-structured merge (LSM) tree of key value (KV) pairs, where the LSM tree is stored in a non-volatile memory, the method may include merging runs of the LSM tree to provide merged runs; writing merged runs to the non-volatile memory; adding new runs to the LSM tree, wherein the adding comprises writing runs to the non-volatile memory; and updating at least one management data structure (MDS) to reflect the merging and the adding; wherein an MDS of the at least one MDS stores a mapping between keys of the KV pairs of the LSM tree, fingerprints associated with the KV pairs of the LSM tree, and compressed run identifiers that identify runs of the LSM tree. | 2022-03-10 |
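The run-merging step at the heart of 20220075765 follows ordinary LSM semantics: duplicate keys across runs resolve in favor of the newest run. A minimal sketch, assuming runs are passed newest-first:

```python
def merge_runs(runs):
    """Merge sorted runs of (key, value) pairs into a single sorted run.

    runs are ordered newest-first (an assumed convention); for duplicate
    keys the newest run wins, as in an LSM compaction/merge."""
    merged = {}
    for run in reversed(runs):       # apply oldest first...
        for key, value in run:
            merged[key] = value      # ...so newer values overwrite
    return sorted(merged.items())
```

A management data structure would then be updated to point the affected keys at the identifier of the merged run.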
20220075766 | Cuckoo hashing including accessing hash tables using affinity table - A hashing apparatus includes a memory and circuitry. The memory stores (i) multiple hash tables storing associative entries, each including at least one entry key and a respective value, the hash tables are associated with respective different hash functions, and an associative entry is accessible by applying the relevant hash function to a key matching an entry key in the associative entry, and (ii) an affinity table that stores table-selectors for selecting hash tables with which to start a key lookup. The circuitry receives a key, reads from the affinity table, by applying an affinity function to the key, a table-selector that selects a hash table, accesses in the selected hash table an associative entry by applying the hash function associated with the selected hash table to the key, and in response to detecting that the key matches an entry key in the associative entry, outputs the respective value. | 2022-03-10 |
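The affinity-table idea in 20220075766 can be sketched as follows: an affinity function maps the key to a slot holding a table-selector, so a lookup starts with the table most likely to hold the key and falls back to the other only on a miss. Dicts stand in for the fixed bucket arrays, and the caller-chosen insertion table replaces cuckoo relocation; both are simplifying assumptions.

```python
import hashlib

class AffinityHash:
    """Lookup guided by an affinity table (illustrative sketch)."""

    def __init__(self, affinity_slots=16):
        self.tables = [{}, {}]                 # stand-ins for hash tables
        self.affinity = [0] * affinity_slots   # table-selector per slot

    def _aff_slot(self, key):
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.affinity)

    def insert(self, key, value, table_idx):
        # In real cuckoo hashing table_idx results from insertion and
        # relocation; here the caller picks it for illustration.
        self.tables[table_idx][key] = value
        self.affinity[self._aff_slot(key)] = table_idx

    def lookup(self, key):
        start = self.affinity[self._aff_slot(key)]
        for idx in (start, 1 - start):  # preferred table first, then fallback
            if key in self.tables[idx]:
                return self.tables[idx][key]
        return None
```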
20220075767 | SYSTEM AND METHOD FOR CLUSTERING DISTRIBUTED HASH TABLE ENTRIES - A distributed storage system may store data object instances in persistent storage and may store keymap information for those data object instances in a distributed hash table on multiple computing nodes. Each data object instance may include a composite key containing a user key. The keymap information for each data object instance may map the user key to a locator and the locator to the data object instance. A request to store or retrieve keymap information for a data object instance may be routed to a particular computing node based on a consistent hashing scheme in which a hash function is applied to a portion of the composite key of the data object instance. Thus, related entries may be clustered on the same computing nodes. The portion of the key to which the hash function is applied may include a pre-determined number of bits or be identified using a delimiter. | 2022-03-10 |
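The clustering trick in 20220075767 — hashing only a portion of the composite key — can be sketched with a delimiter-based split and a simple hash ring; the ring layout (hash point to node name) and the MD5 choice are illustrative assumptions:

```python
import hashlib
from bisect import bisect_right

def node_for_key(composite_key, ring, delimiter="/"):
    """Route a composite key by hashing only the user-key portion before
    the delimiter, so entries sharing that prefix land on the same node."""
    prefix = composite_key.split(delimiter, 1)[0]
    h = int(hashlib.md5(prefix.encode()).hexdigest(), 16)
    points = sorted(ring)  # hash points on the consistent-hash ring
    return ring[points[bisect_right(points, h) % len(points)]]
```

Because only the prefix is hashed, all keys for one user map to one node, keeping related keymap entries co-located.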
20220075768 | ONLINE REORGANIZATION OF DATABASE TABLES WITH CONCURRENT UPDATES - In an approach to online reorganization of database tables with concurrent updates, a second table is created, where the second table has the same schema as a first table. A union of the first table and the second table is projected to create a view, where the view allows table data to be queried and modified while the database table reorganization is performed. Responsive to one or more running replication transactions completing, the database table reorganization is executed. Responsive to receiving a query, the query is allowed to access the view. | 2022-03-10 |
20220075769 | LOGFILE COLLECTION AND CONSOLIDATION - Mechanisms for consolidating log information from remote computing devices are provided. Connections with a plurality of remote computing devices are established. Each remote computing device has a corresponding logfile. For a plurality of iterations, logfile contents from each logfile on each remote computing device are retrieved, and the logfile contents are sent to a centralized monitoring service. | 2022-03-10 |
20220075770 | DYNAMIC SELECTION OF SYNCHRONIZATION UPDATE PATH - A method comprises receiving a stream of change log records from a source database system; generating change statistics based on a number of pending changes per table partition according to the change log records; estimating, based on performance statistics, a first amount of time for applying the pending changes to a target database system using an incremental update path; estimating, based on the performance statistics, a second amount of time for applying the pending changes to the target database using a bulk update path; dynamically selecting, based on comparison of the first amount of time with the second amount of time, one of the incremental update path and the bulk update path for applying the pending changes to the target database system; and applying the pending changes to the target database system using the selected update path. | 2022-03-10 |
20220075771 | DYNAMICALLY DEPLOYING EXECUTION NODES USING SYSTEM THROUGHPUT - A method, system, and computer program product for using logical segments to help manage database tasks and performance for SQL parallelism. The method may include identifying a database object within a database resource of a database. The method may further include determining a database access method for the database object. The method may further include dividing data within the database object into a plurality of logical segments based on the database access method. The method may further include receiving a query for the database. The method may further include identifying the database object that corresponds to the query. The method may further include analyzing the logical segments of the database object. The method may further include distributing access to the database resource based on the logical segments of the database object. | 2022-03-10 |
20220075772 | MANAGING DATA OBJECTS FOR GRAPH-BASED DATA STRUCTURES - Various embodiments provide methods, systems, apparatus, computer program products, and/or the like for managing, ingesting, monitoring, updating, and/or extracting/retrieving information/data associated with an electronic record (ER) stored in an ER data store and/or accessing information/data from the ER data store, wherein the ERs are generated, updated/modified, and/or accessed via a graph-based domain ontology. | 2022-03-10 |
20220075773 | COMPUTER-READABLE RECORDING MEDIUM STORING DATA PROCESSING PROGRAM, DATA PROCESSING DEVICE, AND DATA PROCESSING METHOD - A non-transitory computer-readable recording medium stores a data processing program for causing a computer to execute processing including: specifying one of boundaries between two adjacent attributes in processing target table data on the basis of association information that indicates a combination of two associated attributes among a plurality of attributes generated by analyzing analysis target table data that includes an attribute value of each of the plurality of attributes; and outputting boundary information that indicates the one of boundaries. | 2022-03-10 |
20220075774 | EXECUTING CONDITIONS WITH NEGATION OPERATORS IN ANALYTICAL DATABASES - Embodiments of the present invention provide a method and system for processing a query on a set of data blocks in analytical databases. The query is on a set of data blocks, having at least one attribute, and specifies at least one selection condition on the attribute. The selection condition is associated with at least one selection expression. Attribute value information on each attribute is generated for each data block. Next, a condition is generated on each attribute to negate the selection expression, if the selection expression has a negation operator. Additional conditions are generated for each selection expression that does not contain a negation operator. The attribute value information is used to select the positive and negative subsets of data blocks for each condition. Next, a negative subset that does not require processing to evaluate the query is skipped, and the positive subsets and the non-skipped negative subsets are processed. | 2022-03-10 |
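The positive/negative block selection above can be sketched with per-block min/max attribute value information, in the style of zone maps. The range-predicate form `lo <= x <= hi` and the metadata shape are assumptions for illustration.

```python
def classify_blocks(blocks, lo, hi, negated=False):
    """Split data blocks into (must_scan, skippable) for the predicate
    `lo <= x <= hi` (or its negation) using per-block min/max metadata.

    `blocks` maps a block id to that block's (min, max) attribute values.
    """
    scan, skip = [], []
    for block_id, (bmin, bmax) in blocks.items():
        if negated:
            # NOT(lo <= x <= hi): only blocks entirely inside [lo, hi]
            # can contain no matches, so only those are skippable.
            fully_inside = lo <= bmin and bmax <= hi
            (skip if fully_inside else scan).append(block_id)
        else:
            overlaps = not (bmax < lo or bmin > hi)  # block may hold matches
            (scan if overlaps else skip).append(block_id)
    return scan, skip
```

Note the asymmetry the abstract turns on: for the positive condition a block is skippable when its range misses `[lo, hi]` entirely, while for the negated condition it is skippable only when its range sits wholly inside `[lo, hi]`.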
20220075775 | SYSTEMS AND METHODS FOR GROUPING SEARCH RESULTS FROM MULTIPLE SOURCES - Systems and methods are described for presenting search results from multiple sources by grouping the results from some of the multiple sources, ranking each of the multiple sources and groups of sources, and not presenting duplicate results from lower ranked sources. In this way, the user is provided with search results that are distinct as opposed to presenting the same result multiple times when it is available from different sources. | 2022-03-10 |
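The rank-then-deduplicate behavior above can be sketched as follows; the source names and the flat result representation are illustrative assumptions.

```python
def dedupe_results(sources):
    """Merge search results from multiple ranked sources, keeping each
    result only from the highest-ranked source that returned it.

    `sources` is a list of (source_name, results) pairs, ordered from
    highest-ranked source to lowest.
    """
    seen = set()
    merged = []
    for source_name, results in sources:
        for result in results:
            if result in seen:
                continue   # same result from a lower-ranked source
            seen.add(result)
            merged.append((result, source_name))
    return merged
```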
20220075776 | SYSTEMS AND METHODS FOR PRUNING EXTERNAL DATA - Disclosed herein are systems and methods for pruning external data. In an embodiment, a database platform receives a query directed at least in part to external data in an external table on an external data storage platform. The external table includes partitions that correspond to storage locations in a source directory of the external data storage platform. The storage locations contain files that contain the external data. The database platform identifies, from external-table metadata that is stored by the database platform and that maps the partitions of the external table to the storage locations in the source directory, a subset of the partitions as including data that potentially satisfies the query. The database platform identifies data that satisfies the query by scanning the identified subset of the partitions, and responds to the query at least in part with the identified data that satisfies the query. | 2022-03-10 |
20220075777 | FUZZY SEARCHING AND APPLICATIONS THEREFOR - A method, system and computer program product are disclosed for fuzzy searching. The method, which may be performed by one or more processors, may comprise providing a first prefix tree data structure representing a first data set comprising a first plurality of strings, and providing a second prefix tree data structure representing a second data set comprising a second plurality of strings. The first and second prefix tree data structures may each comprise nodes representing each character and edges connecting prefix nodes to one or more suffix nodes to represent each subsequent character in the string. A search may be performed to identify all matches between the first and second plurality of strings and also approximate matches between the first and second plurality of strings within a maximum distance k, wherein the search comprises traversing the first prefix tree data structure using a depth-first search algorithm to identify matches and approximate matches in the second prefix tree data structure. | 2022-03-10 |
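The core mechanism, a depth-first traversal of a prefix tree bounded by an edit distance k, can be sketched in a simplified single-trie form: matching one query string against a trie, rather than the trie-against-trie search the abstract describes. The dict-based trie and Levenshtein distance are assumptions for illustration.

```python
def build_trie(words):
    trie = {}
    for word in words:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word   # end-of-word marker storing the full string
    return trie

def fuzzy_search(trie, query, k):
    """Return all (word, distance) pairs within Levenshtein distance k
    of `query`, via DFS with one dynamic-programming row per trie node."""
    results = []
    first_row = list(range(len(query) + 1))

    def dfs(node, row):
        if "$" in node and row[-1] <= k:
            results.append((node["$"], row[-1]))
        if min(row) > k:       # prune: no deeper path can get back under k
            return
        for ch, child in node.items():
            if ch == "$":
                continue
            new_row = [row[0] + 1]
            for i in range(1, len(query) + 1):
                cost = 0 if query[i - 1] == ch else 1
                new_row.append(min(new_row[i - 1] + 1,  # insertion
                                   row[i] + 1,          # deletion
                                   row[i - 1] + cost))  # substitution
            dfs(child, new_row)

    dfs(trie, first_row)
    return results
```

The pruning test `min(row) > k` is what makes the trie traversal cheaper than comparing the query against every string: an entire subtree is abandoned as soon as no extension of the current prefix can come back within distance k.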
20220075778 | TRANSFORMING OPERATIONS OF A COMPUTER PROGRAM FOR EXECUTION AT A DATABASE - A method includes executing a program that specifies operations and accessing a translation file that includes instructions for translating the language of the program into a language of a database. The translation file specifies operations in the language of the program that are supported by the database and the semantic meaning of the supported operations in the language of the database. Operations of the program that are unsupported by the database are processed by the program. Operations of the program that are supported by the database are determined from the translation file, and a portion of the program representing the supported operations is translated, using the translation file, into the language of the database and transmitted to the database. Data resulting from execution, within the database, of the translated portion of the program representing the operations that are supported by the database is received by the program. | 2022-03-10 |
20220075779 | SYSTEMS AND METHODS FOR REDUCING DATABASE QUERY LATENCY - A system for reducing database query latency, the system comprising: a memory storing instructions; and at least one processor configured to execute the instructions to perform operations comprising: receiving data reflecting performance of a role on a virtual server; identifying tokens associated with terms in the received data; mapping an index comprising the tokens and the terms; storing the mapped index in a first database; storing a key-value pair in a second database, the key corresponding to the mapped index, and the value corresponding to a portion of the received data; receiving a query; optimizing the query to reduce query processing time; constructing a search key based on results obtained by running the optimized query against the first database; retrieving a result value from the second database corresponding to the search key. | 2022-03-10 |
20220075780 | MULTI-LANGUAGE FUSION QUERY METHOD AND MULTI-MODEL DATABASE SYSTEM - A fusion query method and a multi-model database (MMDB) framework are provided, to add a capability of extending a foreign engine in a relational database engine and manage metadata of the foreign extensible engine by using a user table. This minimizes intrusion to the relational database engine, and implements dynamic loading and unloading of the foreign engine during runtime. Therefore, a maintenance interface and uniform data access to a multi-model database such as a relational database, a graph database, or a time series database are provided for a user, so that learning and use costs of operation and maintenance personnel and application development personnel are reduced, and security of data use is improved. | 2022-03-10 |
20220075781 | ROBUSTNESS METRICS FOR OPTIMIZATION OF QUERY EXECUTION PLANS - A method may include responding to a query to retrieve data from a database by identifying a plurality of query execution plans. An overall robustness value may be determined for each query execution plan. The overall robustness value of a query execution plan may correspond to a sum of individual robustness values for each operator included in the query execution plan. Each operator may have an individual robustness value that corresponds to a first change in a total cost of a query execution plan including the operator relative to a second change in an output cardinality of the operator. One of the plurality of query execution plans may be selected based on the overall robustness value of each of the plurality of query execution plans. The query may be executed by performing a sequence of operators included in the selected one of the plurality of query execution plans. | 2022-03-10 |
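The robustness metric above (change in total cost relative to a change in an operator's output cardinality, summed over a plan's operators) can be sketched as a finite-difference estimate. The cost-function representation, the 10% cardinality bump, and the choice of the *lowest* overall value (least cost sensitivity to misestimates) are all assumptions, not taken from the filing.

```python
def operator_robustness(cost_fn, base_card, delta=0.1):
    """Approximate an operator's robustness as the change in total cost
    relative to a change in its output cardinality (finite difference)."""
    bumped = base_card * (1 + delta)
    return (cost_fn(bumped) - cost_fn(base_card)) / (bumped - base_card)

def plan_robustness(plan):
    """Sum the individual robustness values of a plan's operators.
    `plan` is a list of (cost_fn, estimated_output_cardinality) pairs."""
    return sum(operator_robustness(fn, card) for fn, card in plan)

def pick_plan(plans):
    """Select the plan whose total cost is least sensitive to
    cardinality misestimates (lowest overall robustness value here)."""
    return min(plans, key=plan_robustness)
```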
20220075782 | ASYNCHRONOUS DATABASE CACHING - Provided is a method to obtain a first query indicating a set of fields of a record of a first data store and generate a plurality of queries based on the first query, where the plurality of queries includes a shared set of parameters. The method includes storing a plurality of query values identifying the plurality of queries in an in-memory data store, determining a second query comprising the shared set of parameters based on the plurality of queries, and locking a set of values of the record based on the second query. The method includes retrieving a query response comprising the set of values with the second query, fulfilling the plurality of queries stored in the in-memory data store based on the set of values, and generating a response to the first query based on the fulfilled plurality of queries. | 2022-03-10 |
20220075783 | DYNAMICALLY DETECTING AND CORRECTING ERRORS IN QUERIES - A computer-implemented method dynamically detects and corrects an error in a query. The method includes identifying an error in a first query. The method further includes generating a set of alternate execution structures for the first query. The method includes running each of the alternate execution structures, including generating a set of results corresponding to each alternate execution structure, comparing each of the set of results against each other of the set of results, and storing each of the set of alternate execution structures to include a result of the set of results, for each alternate structure. The method further includes selecting, from the set of alternate execution structures, a first alternate execution structure based on predetermined criteria, and implementing the first alternate execution structure in place of the first query. | 2022-03-10 |
20220075784 | SEGMENTING A PARTITION OF A DATA SET BASED ON A CODING SCHEME - A method includes receiving, by a first computing entity of a database system, a data set that is organized in rows and columns. The method further includes determining whether to partition the data set based on a parameter associated with the data set. When determining to partition the data set, the method includes determining partitioning parameters for the data set, and partitioning the data set into a plurality of data partitions in accordance with the partitioning parameters. The method further includes determining a first coding scheme for a first data partition and determining a first number of first raw data segments for a first segment group of the first data partition based on the first coding scheme. The method further includes dividing the first data partition to produce the first number of first raw data segments for storage in the database system. | 2022-03-10 |
20220075785 | COMPUTER-READABLE RECORDING MEDIUM STORING DATA PROCESSING PROGRAM, DATA PROCESSING DEVICE, AND DATA PROCESSING METHOD - A non-transitory computer-readable recording medium stores a data processing program for causing a computer to execute processing including: obtaining a similarity between each of a plurality of attributes included in first table data and each of a plurality of attributes included in second table data; and associating each of the plurality of attributes included in the first table data with any attribute of the plurality of attributes included in the second table data on the basis of the similarity, an order of the plurality of attributes included in the first table data, and an order of the plurality of attributes included in the second table data. | 2022-03-10 |
20220075786 | DYNAMIC DATABASE QUERY PROCESSING - Computation engines and methods for dynamically computing results in response to a database request indicating a search parameter. Based on an initial result database, an initial incomplete result set with a number of results which include static data pieces that correspond to the search parameter is determined. A dynamic data piece for each result in the initial incomplete result set is determined based on a number of dynamic computation rules, thereby obtaining an intermediate completed result set. Each result of the intermediate completed result set includes the static data piece and the computed dynamic data piece. An adjustment of the dynamic data piece is computed for a sub-set of the intermediate completed result set based on a number of adjustment computation rules, thereby obtaining a finalized completed result set, and at least a subset of the finalized completed result set is returned to the client. | 2022-03-10 |
20220075787 | CONTEXTUAL SEARCH ON MULTIMEDIA CONTENT - Techniques for contextual search on multimedia content are provided. An example method includes extracting entities associated with multimedia content, wherein the entities include values characterizing one or more objects represented in the multimedia content, generating one or more query rewrite candidates based on the extracted entities and one or more terms in a query related to the multimedia content, providing the one or more query rewrite candidates to a search engine, scoring the one or more query rewrite candidates, ranking the scored one or more query rewrite candidates based on their respective scores, rewriting the query related to the multimedia content based on a particular ranked query rewrite candidate and providing for display, responsive to the query related to the multimedia content, a result set from the search engine based on the rewritten query. | 2022-03-10 |
20220075788 | BOOTSTRAPPED RELEVANCE SCORING SYSTEM - In accordance with one example method, a computing system may determine that first user profile data of a first user of a relevance scoring service is similar to second user profile data of a second user of the relevance scoring service, where the relevance scoring service is configured to assign first relevance scores to first information chunks to be presented to the first user based at least part on at least a first portion of first stored behavior data of the first user, and where the first stored behavior data is indicative of the first user's interactions with second information chunks previously presented to the first user. In response to determining that the first user profile data is similar to the second user profile data, the relevance scoring service may be configured to assign second relevance scores to third information chunks to be presented to the second user based at least in part on at least a second portion of the first stored behavior data. | 2022-03-10 |
20220075789 | PATENT MAPPING - A system and computer implemented method are provided. The method comprises maintaining a database of patent portfolios and a database of patents, with each patent stored in the database of patents being associated with one or more patent portfolios stored in the database of patent portfolios. The method includes receiving a search query associated with a first patent portfolio; searching the first portfolio as a function of the search query; generating a seed set of search results including one or more patent claims associated with the search query, the patent claims including terms from the search query; automatically generating an expanded set of search results including one or more patent claims further associated with the search query or associated with the patent claims in the seed set of search results; and mapping the one or more patent claims to a patent concept. | 2022-03-10 |