07th week of 2022 patent application highlights part 40 |
Patent application number | Title | Published |
20220050749 | ERROR COALESCING - A programmable crossbar matrix or an array of steering multiplexors (MUXs) coalesces (i.e., routes) the data values from multiple known “bad” bit positions within multiple symbols of a codeword, to bit positions within a single codeword symbol. The single codeword symbol receiving the known “bad” bit positions may correspond to a check symbol (vs. a data symbol). Configuration of the routing logic may occur at boot or initialization time. The configuration of the routing logic may be based upon error mapping information retrieved from system non-volatile memory (e.g., memory module serial presence detect information), or from memory tests performed during initialization. The configuration of the routing logic may be changed on a per-rank basis. | 2022-02-17 |
20220050750 | Prioritizing Locations For Error Scanning In A Storage Network - A method includes obtaining, by a computing device of a storage network, provenance information for data associated with a set of storage units of the storage network, where the data is error encoded into a set of encoded data slices, in accordance with error encoding parameters, for storage in the set of storage units. The method further includes determining, by the computing device, probable error locations associated with the set of storage units based on the provenance information. The method further includes scanning, by the computing device, the probable error locations to determine whether an error exists for the set of encoded data slices. | 2022-02-17 |
20220050751 | FALLBACK ARTIFICIAL INTELLIGENCE SYSTEM FOR REDUNDANCY DURING SYSTEM FAILOVER - There are provided systems and methods for a fallback artificial intelligence (AI) system for redundancy during system failover. A service provider may provide AI systems for automated decision-making, such as for risk analysis, marketing, and the like. An AI system may operate in a production computing environment in order to provide AI decision-making based on input data, for example, by providing an output decision. In order to provide redundancy to the production AI system, the service provider may train a fallback AI system using the input/output data pairs from the production AI system. This may utilize a deep neural network and a continual learning trainer. Thereafter, when a failover condition is detected for the production AI system, the service provider may switch from the production AI system to the fallback AI system, which may provide decision-making operations during a failure within the production computing environment. | 2022-02-17 |
20220050752 | DATA STORAGE GEOGRAPHIC LOCATION COMPLIANCE AND MANAGEMENT - A method includes automatically associating a dataset with a first tag indicating that the dataset is subject to a data compliance law, automatically associating the dataset with a second tag indicating a geographic location of a source of the dataset, selecting a remote backup destination for the dataset that is compliant by comparing the first tag and the second tag to a compliance policy, and transmitting a replica of the dataset to the remote backup destination. | 2022-02-17 |
20220050753 | SYSTEM AND METHOD FOR CLONING AS SQL SERVER AG DATABASES IN A HYPERCONVERGED SYSTEM - A system and method include creating, by an Availability Group (“AG”) controller in a virtual computing system, a first AG clone from a source database. The source database is stored on a primary replica node of an AG of the virtual computing system. The system and method also include creating, by the Controller, a second AG clone from the first AG clone and storing, by the Controller, the second AG clone on a secondary replica node of the AG. The second AG clone has a size of substantially zero. | 2022-02-17 |
20220050754 | METHOD TO OPTIMIZE RESTORE BASED ON DATA PROTECTION WORKLOAD PREDICTION - An intelligent method of selecting a data recovery site upon receiving a data recovery request. The backup system collects historical activity data of the storage system to identify the workload of every data recovery site. A predicted activity load for each data recovery site is then generated using the collected data. When a request for data recovery is received, the system first identifies which data recovery sites have copies of the files to be recovered. Then it uses the predicted workload for these data recovery sites to determine whether to use a geographically local site or a site that may be remote geographically, but has a lower workload. | 2022-02-17 |
20220050755 | HYBRID NVRAM LOGGING IN FILESYSTEM NAMESPACE - In one example, a method for writing data includes receiving a write request and performing a first type of logging process in connection with the write request, and creating a corresponding first logging record. Additionally, a second type of logging process is performed in connection with the write request, and a corresponding second logging record created, where the second type of logging process is different from the first type of logging process. Next, a determination is made, as between the two logging records, which of the logging records requires the least amount of non-volatile random access memory (NVRAM), and the logging record that requires the least amount of NVRAM is written to the NVRAM. | 2022-02-17 |
20220050756 | PRESERVING DATA INTEGRITY DURING CONTROLLER FAILURE - Systems and processes are disclosed to preserve data integrity during a storage controller failure. In some examples, a storage controller of an active-active controller configuration can back-up data and corresponding cache elements to allow a surviving controller to construct a correct state of a failed controller's write cache. To accomplish this, the systems and processes can implement a relative time stamp for the cache elements that allow the backed-up data to be merged on a block-by-block basis. | 2022-02-17 |
20220050757 | STORAGE SYSTEM AND STORAGE CONTROL METHOD - Two or more nodes, each provided with two or more storage control programs constituting a redundancy group, maintain redundancy of metadata across those nodes. When a node failure occurs, a failover from the corresponding active storage control program to a standby storage control program is performed. For at least one standby storage control program, the node in which that standby program is arranged compresses a target metadata portion (including a metadata portion that can be accessed after the failover) of the metadata existing in the node for the corresponding redundancy group, and stores the compressed portion in a memory of the node. | 2022-02-17 |
20220050758 | CLOSING BLOCK FAMILY BASED ON SOFT AND HARD CLOSURE CRITERIA - A system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is to perform operations, including initializing a block family associated with the memory device and initializing a timer associated with the block family. Responsive to beginning to program a block residing on the memory device, the processing device associates the block with the block family. In response to the timer reaching a soft closure value, the processing device performs a soft closure of the block family; continues to program data to the block; and performs a hard closure of the block family in response to one of the timer reaching a hard closure value or the block family satisfying a hard closure criterion. | 2022-02-17 |
20220050759 | THRESHOLD VOLTAGE DISTRIBUTION ADJUSTMENT FOR BUFFER - A method includes writing received data sequentially to a particular location of a cyclic buffer of a memory device according to a first set of threshold voltage distributions. The method further includes performing a touch up operation on the particular location by adjusting the first set of threshold voltage distributions of the data to a second set of threshold voltage distributions in response to a determination that a trigger event has occurred. The second set of threshold voltage distributions can have a larger read window between adjacent threshold voltage distributions of the second set than that of the first set of threshold voltage distributions. | 2022-02-17 |
20220050760 | POWER CONSUMPTION ESTIMATION METHOD, POWER CONSUMPTION SUPPRESSION METHOD, ENVIRONMENTAL CONTRIBUTION ESTIMATION METHOD AND POWER CONSUMPTION CONTROL APPARATUS - A power consumption estimation method is provided, which is performed by a power consumption control device including a correlation database that stores data indicating a correlation between an operation state and power consumption of at least one household information communication device. The power consumption estimation method includes an operation state information acquisition step of acquiring operation state information from each household information communication device, a power consumption acquisition step of acquiring power consumption of each household information communication device by referring to the correlation database by using the operation state information, and a presenting step of presenting power consumption information by function for the at least one household information communication device, based on the power consumption of each household information communication device. | 2022-02-17 |
20220050761 | LOW OVERHEAD PERFORMANCE DATA COLLECTION - Systems and methods for collecting performance data in high performance computing systems are disclosed. To prevent massive amounts of performance data from overwhelming the system and negatively impacting performance, collected performance data may be processed into two databases: (i) an aggregate database, and (ii) a time-series database holding the newest information for real time performance analysis. Storage space may be saved by using a FIFO buffer to store collected performance data. A real-time performance collection engine may adjust the performance sampling interval used and the particular performance counters used based on measured system impact and feedback from other system modules consuming the performance data. | 2022-02-17 |
20220050762 | JOB PERFORMANCE BREAKDOWN - A system and method for processing application performance using application phase differentiation and detection is disclosed. Phase detection may be accomplished in a number of different ways, including by using a deterministic algorithm that looks for changes in the computing resource utilization patterns (as detected in the performance data collected). Machine learning (ML) and neural networks (e.g., a sparse autoencoder (SAE)) may also be used. Performance data is aggregated according to phase and stored in a database along with additional application and computing system information. This database may then be used to find similar applications for performance prediction. | 2022-02-17 |
20220050763 | DETECTING REGIME CHANGE IN TIME SERIES DATA TO MANAGE A TECHNOLOGY PLATFORM - A system and method are provided for detecting a significant change in the character of a time series collected from a technology platform. A system is disclosed that includes a memory; and a processor coupled to the memory and configured to process time series data for a set of resources according to a method that includes: collecting time series data associated with resources in a technology platform; analyzing each of a plurality of time series to determine whether a regime change occurred, and in response to a detected regime change in a time series, truncating the time series to generate a revised time series; and utilizing the revised time series to facilitate management or control the technology platform. | 2022-02-17 |
20220050764 | SAMPLING ACROSS TRUSTED AND UNTRUSTED DISTRIBUTED COMPONENTS - Techniques are described for sampling across trusted and untrusted distributed components. In accordance with embodiments, a first computing device receives a request from a second computing device, the first request including an operation identifier (ID) and a sampling ID that was generated by transforming a telemetry scope ID from a first value in a first domain to a second value in a second domain. The transformation may serve to anonymize and compress the telemetry scope ID. The first computing device determines whether or not to sample by comparing a ratio between the sampling ID and a size of the second domain with a sampling rate associated with the first computing device. The first computing device records telemetry about its processing of the first request in response to determining to sample and does not record any telemetry about its processing of the first request in response to determining not to sample. | 2022-02-17 |
20220050765 | METHOD FOR PROCESSING LOGS IN A COMPUTER SYSTEM FOR EVENTS IDENTIFIED AS ABNORMAL AND REVEALING SOLUTIONS, ELECTRONIC DEVICE, AND CLOUD SERVER - A method for processing events logged as abnormal in a computer log includes collecting logs as to abnormal events of an electronic device, comparing such logs with log data on a cloud server, determining whether the log data matches events logged as abnormal, and obtaining log data if such log data matches information as to logged events identified as abnormal. The disclosure also provides an electronic device and a cloud server applying the method. | 2022-02-17 |
20220050766 | SYSTEM AND METHOD FOR AUTOMATING TESTING OF NONFUNCTIONAL REQUIREMENTS - Various methods, apparatuses/systems, and media for implementing an automation testing module are disclosed. A processor creates a plurality of production robots each configured to validate a particular nonfunctional requirement (NFR) among a plurality of NFRs of an application during a development environment of the application. The processor identifies a tool specific for testing the particular NFR from the plurality of production robots; and implements the identified tool's application programming interface (API) to automatically execute a test scenario to validate the particular NFR. The test scenario is selected from a plurality of test scenarios to be executed and tested by the production robots to validate each NFR during the development phase and to determine that the application is stable and ready for production based on validation of the plurality of NFRs prior to entering into a production phase of the application. | 2022-02-17 |
20220050767 | BUILD PROCESS FOR APPLICATION PERFORMANCE - Systems and methods for building applications by automatically incorporating application performance data into the application build process are disclosed. By capturing build settings and performance data from prior applications being executed on different computing systems such as bare metal and virtualized cloud instances, a performance database may be maintained and used to predict build settings that improve application performance (e.g., on a specific computing system or computing system configuration). | 2022-02-17 |
20220050768 | ENGINE MODEL CONSTRUCTION METHOD, ENGINE MODEL CONSTRUCTING APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM - An engine model construction method includes generating test patterns in which a plurality of manipulated variables used for an engine test are changed with time, correcting the test patterns based on first coverage of a first space of values that the manipulated variables are allowed to take and second coverage of a second space of change-rate values that the manipulated variables are allowed to take, acquiring pieces of time series data of operation amounts of the manipulated variables and controlled amounts with respect to the manipulated variables by performing an engine test using the corrected test patterns, and constructing a first engine model by performing machine learning on training data in which the corrected test patterns are adopted as input and the pieces of time series data are adopted as correct answers, by a processor. | 2022-02-17 |
20220050769 | PROGRAM TESTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM - A program testing method is provided. The method includes receiving a test account adding instruction, the test account adding instruction identifying a second test account, acquiring a first target code corresponding to a target program in response to the test account adding instruction, the first target code corresponding to a first page, a first test account being logged in on the first page, and the first page being generated according to the first target code, generating a second page corresponding to the second test account according to the first target code, and interacting through the first page and the second page to test the target program. Apparatus and computer-readable medium counterpart embodiments are also provided. | 2022-02-17 |
20220050770 | METHOD AND SYSTEM FOR PERFORMING READ/WRITE OPERATION WITHIN A COMPUTING SYSTEM HOSTING NON-VOLATILE MEMORY - A method for performing a write operation includes selecting, by a host, at least a free write buffer from a plurality of write buffers of a shared memory buffer (SMB) by accessing a cache structure within the SMB for tracking the free write buffer; sending, by the host, at least a logical address accessed from the cache structure with respect to the selected write buffer to issue a write-command to a non-volatile memory; receiving a locking instruction of the selected write buffer from the non-volatile memory; updating a status of the selected write buffer within the cache structure based on the received locking instruction; and allowing the non-volatile memory to extract contents of one or more locked write buffers including the selected write buffer. | 2022-02-17 |
20220050771 | OPERATING METHOD OF STORAGE DEVICE - An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory is provided. The operation method includes erasing memory cells of the nonvolatile memory using the memory controller and prohibiting an erase of the erased memory cells for a critical time using the memory controller. | 2022-02-17 |
20220050772 | DATA BLOCK SWITCHING AT A MEMORY SUB-SYSTEM - Incoming host data is programmed to a first set of data blocks indicated by a first cursor of a memory sub-system. The first set of blocks is associated with a first write mode. A determination is made that a second set of blocks associated with a second write mode is available to store the incoming host data prior to closing one or more of the first set of blocks. The incoming host data is programmed to the second set of blocks in view of a second cursor of the memory sub-system. A media management operation is performed to close the one or more of the first set of blocks. | 2022-02-17 |
20220050773 | MEMORY SYSTEM WHICH STORES A PLURALITY OF WRITE DATA GROUPED INTO A TRANSACTION - A memory system may include: a nonvolatile memory device; a volatile memory suitable for storing write data; and a controller suitable for: allocating a normal write buffer in the volatile memory when normal write data are inputted, allocating a first write buffer in the volatile memory when first write data, which are grouped into a first transaction and first total size information on a total size of the first transaction, are inputted, allocating a second write buffer in the volatile memory when second write data, which are grouped into a second transaction and second total size information on a total size of the second transaction, are inputted, managing sizes of the first and second write buffers to change them in response to the first and second total size information, respectively, and managing a size of the normal write buffer to fix it to a set size. | 2022-02-17 |
20220050774 | Virtual splitting of memories - A system includes a memory, including a plurality of memory locations having different respective addresses, and a processor. The processor is configured to compute one of the addresses from (i) a first sequence of bits derived from a tag of a data item, and (ii) a second sequence of bits representing a class of the data item. The processor is further configured to write the data item to the memory location having the computed address and/or read the data item from the memory location having the computed address. Other embodiments are also described. | 2022-02-17 |
20220050775 | DISASSOCIATING MEMORY UNITS WITH A HOST SYSTEM - A command pertaining to a non-volatile memory device on a memory sub-system is received from a host system. A portion of the non-volatile memory device has an association with the host system. In response to determining that the command is a dissociate instruction to dissociate the portion of the non-volatile memory device on the memory sub-system from the host system, the association of the portion of the non-volatile memory device on the memory sub-system with the host system is removed. | 2022-02-17 |
20220050776 | CONTENT-ADDRESSABLE MEMORY FOR SIGNAL DEVELOPMENT CACHING IN A MEMORY DEVICE - Methods, systems, and devices related to content-addressable memory for signal development caching are described. In one example, a memory device in accordance with the described techniques may include a memory array, a sense amplifier array, and a signal development cache configured to store signals (e.g., cache signals, signal states) associated with logic states (e.g., memory states) that may be stored at the memory array (e.g., according to various read or write operations). The memory device may also include storage, such as a content-addressable memory, configured to store a mapping between addresses of the signal development cache and addresses of the memory array. In various examples, accessing the memory device may include determining and storing a mapping between addresses of the signal development cache and addresses of the memory array, or determining whether to access the signal development cache or the memory array based on such a mapping. | 2022-02-17 |
20220050777 | WRITE DATA FOR BIN RESYNCHRONIZATION AFTER POWER LOSS - A system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is to perform operations including detecting a voltage of a power source for the memory device has dropped below a threshold voltage indicative of an imminent power loss and writing data to the memory device in response to the detecting. The operations further include measuring a characteristic of the data in response to detecting a power on of the memory device; determining an estimated amount of time for which the memory device was powered off based on results of the measuring; and in response to the estimated amount of time satisfying a first threshold criterion, updating a value for a temporal voltage shift of a block family of programmed data based on the estimated amount of time. | 2022-02-17 |
20220050778 | PROVIDING A DYNAMIC DIGITAL CONTENT CACHE - One or more embodiments of a thumbnail caching system dynamically provide a thumbnail cache of digital content items (e.g., photos, videos, audio) to a user on a client device. In particular, the thumbnail caching system provides a thumbnail cache of a digital content collection to a client device such that the thumbnail cache does not exceed a threshold storage limit for the client device. In addition, the thumbnail caching system intelligently adjusts the thumbnails within the thumbnail cache to keep the size of the thumbnail cache within the threshold storage limit irrespective of the number of digital content items stored or added to the digital content collection. Further, the thumbnail caching system can dynamically adjust the size of the thumbnail cache in response to a user adding or removing external data to the client device. | 2022-02-17 |
20220050779 | MEMORY DISPOSITION DEVICE, MEMORY DISPOSITION METHOD, AND RECORDING MEDIUM STORING MEMORY DISPOSITION PROGRAM - A memory disposition device of a computer system in which a plurality of nodes exists, each of the nodes including a pair of a processor and a memory, the memory disposition device includes: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: determine a node in which a memory area to be mapped is disposed; and duplicate the memory area and dispose it, based on a determination result, in a local memory of a node in which a process operates, wherein the at least one processor is configured to invalidate maintenance of cache coherency between the nodes and invalidate access to a remote memory for the process. | 2022-02-17 |
20220050780 | SYSTEM AND METHOD FOR FACILITATING HYBRID HARDWARE-MANAGED AND SOFTWARE-MANAGED CACHE COHERENCY FOR DISTRIBUTED COMPUTING - A node controller is provided to include a first interface to interface with one or more processors, a second interface including a plurality of ports to interface with node controllers within a base node and other nodes in the cache-coherent interconnect network. The node controller can further include a third interface to interface with a first plurality of memory devices and a cache coherence management logic. The cache coherence management logic can maintain, based on a first circuitry, hardware-managed cache coherency in the cache-coherent interconnect network. The cache coherence management logic can further facilitate, based on a second circuitry, software-managed cache coherency in the cache-coherent interconnect network. | 2022-02-17 |
20220050781 | COMMAND PROCESSOR PREFETCH TECHNIQUES - Techniques for prefetching are provided. The techniques include receiving a first prefetch command; in response to determining that a history buffer indicates that first information associated with the first prefetch command has not already been prefetched, prefetching the first information into a memory; receiving a second prefetch command; and in response to determining that the history buffer indicates that second information associated with the second prefetch command has already been prefetched, avoiding prefetching the second information into the memory. | 2022-02-17 |
20220050782 | METHOD FOR PROCESSING DATA, ELECTRONIC DEVICE, AND STORAGE MEDIUM FOR METHOD - A method of processing and storing general data by means of hardware obtains initial unhashed data and a fixed value. If the size of the initial unhashed data is greater than the fixed value, the initial data is divided into N sub-data segments. A size of each sub-data segment is not more than the fixed value, N being an integer greater than 1. The collection of data subsets is input into a memory of the electronic device after hashing. | 2022-02-17 |
20220050783 | Controlling Cache Size and Priority Using Machine Learning Techniques - Techniques are disclosed relating to controlling cache size and priority of data stored in the cache using machine learning techniques. A software cache may store data for a plurality of different user accounts using one or more hardware storage elements. In some embodiments, a machine learning module generates, based on access patterns to the software cache, a control value that specifies a size of the cache and generates time-to-live values for entries in the cache. In some embodiments, the system evicts data based on the time-to-live values. The disclosed techniques may reduce cache access times and/or improve cache hit rate. | 2022-02-17 |
20220050784 | METHOD AND SYSTEM FOR LOGICAL TO PHYSICAL (L2P) MAPPING FOR DATA-STORAGE DEVICE COMPRISING NON-VOLATILE MEMORY - The present disclosure provides a method of logical to physical mapping for a data-storage device comprising a non-volatile memory device. The method comprises maintaining a first type of information representing at least a part of a logical-to-physical address translation map. Further, the method comprises maintaining a second type of information pertaining to the logical-to-physical translation map as a part of a physical page. Further, the method comprises completing a logical-to-physical mapping based on the first and second type of information to thereby determine a physical location, within one or more of the physical pages, of the data stored in each logical page. | 2022-02-17 |
20220050785 | SYSTEM PROBE AWARE LAST LEVEL CACHE INSERTION BYPASSING - Systems, apparatuses, and methods for employing system probe filter aware last level cache insertion bypassing policies are disclosed. A system includes a plurality of processing nodes, a probe filter, and a shared cache. The probe filter monitors a rate of recall probes that are generated, and if the rate is greater than a first threshold, then the system initiates a cache partitioning and monitoring phase for the shared cache. Accordingly, the cache is partitioned into two portions. If the hit rate of a first portion is greater than a second threshold, then a second portion will have a non-bypass insertion policy since the cache is relatively useful in this scenario. However, if the hit rate of the first portion is less than or equal to the second threshold, then the second portion will have a bypass insertion policy since the cache is less useful in this case. | 2022-02-17 |
20220050786 | INTER-DEVICE PROCESSING SYSTEM WITH CACHE COHERENCY - The devices within an inter-device processing system maintain data coherency in the last level caches of the system as a cache line of data is shared between the devices by utilizing a directory in one of the devices that tracks the coherency protocol states of the memory addresses in the last level caches of the system. | 2022-02-17 |
20220050787 | MULTIFUNCTION COMMUNICATION INTERFACE SUPPORTING MEMORY SHARING AMONG DATA PROCESSING SYSTEMS - In a data processing environment, a communication interface of a second host data processing system receives, from a first host data processing system, a host command in a first command set. The host command specifies a memory access to a memory coupled to the second host data processing system. The communication interface translates the host command into a command in a different second command set emulating coupling of an attached functional unit to the communication interface. The communication interface presents the second command to a host bus protocol interface of the second host data processing system. Based on receipt of the second command, the host bus protocol interface initiates, on a system fabric of the second host data processing system, a host bus protocol memory access request specifying the memory access. | 2022-02-17 |
20220050788 | USER PROCESS IDENTIFIER BASED ADDRESS TRANSLATION - A processing device of a memory sub-system can receive a first address from a host and can provide the first address to a memory management unit (MMU) for translation. The processing device can also receive a second address from the MMU wherein the second address is translated from the first address. The processing device can further access the memory device utilizing the second address. | 2022-02-17 |
20220050789 | RESTARTABLE, LOCK-FREE CONCURRENT SHARED MEMORY STATE WITH POINTERS - Systems and methods for processing memory address spaces corresponding to a shared memory are disclosed. After a writer restart process, pre-restart writer pointers of a pre-restart writer addressable space in the shared memory are replaced with corresponding location independent pointers. A writer pointer translation table is rebuilt in the shared memory to replace an association of modified pre-restart writer pointers and pre-restart translation base pointers based on the pre-restart writer pointers, respectively, with an association of modified post-restart writer pointers and post-restart translation base pointers based on post-restart writer pointers, respectively. After the writer pointer translation table is rebuilt, the location independent pointers are replaced with post-restart writer pointers in the shared memory, respectively, and the post-restart writer pointers are stored in the shared memory for access by one or more readers of the shared memory. | 2022-02-17 |
20220050790 | Private Memory Management using Utility Thread - Techniques are disclosed relating to private memory management using a mapping thread, which may be persistent. In some embodiments, a graphics processor is configured to generate a pool of private memory pages for a set of graphics work that includes multiple threads. The processor may maintain a translation table configured to map private memory addresses to virtual addresses based on identifiers of the threads. The processor may execute a mapping thread to receive a request to allocate a private memory page for a requesting thread, select a private memory page from the pool in response to the request, and map the selected page in the translation table for the requesting thread. The processor may then execute one or more instructions of the requesting thread to access a private memory space, wherein the execution includes translation of a private memory address to a virtual address based on the mapped page in the translation table. The mapping thread may be a persistent thread for which resources are allocated for an entirety of a time interval over which the set of graphics work is executed. | 2022-02-17 |
20220050791 | LINEAR TO PHYSICAL ADDRESS TRANSLATION WITH SUPPORT FOR PAGE ATTRIBUTES - Embodiments of the invention are generally directed to systems, methods, and apparatuses for linear to physical address translation with support for page attributes. In some embodiments, a system receives an instruction to translate a memory pointer to a physical memory address for a memory location. The system may return the physical memory address and one or more page attributes. Other embodiments are described and claimed. | 2022-02-17 |
20220050792 | DETERMINING PAGE SIZE VIA PAGE TABLE CACHE - A page directory entry cache (PDEC) can be checked to potentially rule out one or more possible page sizes for a translation lookaside buffer (TLB) lookup. Information gained from the PDEC lookup can reduce the number of TLB checks required to conclusively determine if the TLB lookup is a hit or a miss. | 2022-02-17 |
20220050793 | PREVENTION OF RAM ACCESS PATTERN ATTACKS VIA SELECTIVE DATA MOVEMENT - Aspects of the present disclosure relate to techniques for minimizing the effects of RowHammer and induced charge leakage. In examples, systems and methods for preventing access pattern attacks in random-access memory (RAM) are provided. In aspects, a data request associated with a page table may be determined to be a potential security risk and such potential security risk may be mitigated by randomly selecting a second memory region from a subset of memory regions, copying data stored in a memory region associated with a page table entry in the page table to the second memory region, disassociating the second memory region from the subset of memory regions and associating the memory region associated with the page table to the second memory region, and updating the page table entry in the page table to refer to the second memory region. | 2022-02-17 |
20220050794 | NARROW DRAM CHANNEL SYSTEMS AND METHODS - The systems and methods are configured to efficiently and effectively access memory. In one embodiment, a memory controller comprises a request queue, a buffer, a control component, and a data path system. The request queue receives memory access requests. The control component is configured to process information associated with access requests via a first narrow memory channel and a second narrow memory channel. The first narrow memory channel and the second narrow memory channel can have a portion of command/control communication lines and address communication lines that are included in and shared between the first narrow memory channel and the second narrow memory channel. The data path system can include a first data module and one set of unshared data lines associated with the first memory channel and a second data module and another set of unshared data lines associated with the second memory channel. | 2022-02-17 |
20220050795 | DATA PROCESSING METHOD, APPARATUS, AND DEVICE - A data processing method includes receiving, by a virtual machine, an I/O access request. The I/O access request is used to access data, the I/O access request includes a type of hardware data used to indicate a working status of a virtual I/O device, and the virtual I/O device is obtained after the I/O device is virtualized. The method also includes identifying, by the virtual machine, that the type of the hardware data in the I/O access request is first-type data. The first-type data is hardware data of the virtual I/O device that remains unchanged in a data processing process. The method further includes obtaining, by the virtual machine, to-be-accessed data from a first memory space. The first memory space is memory storage space in the data processing system. | 2022-02-17 |
20220050796 | FAN COMMUNICATION METHOD AND RELATED FAN SYSTEM - A communication method for a fan includes transmitting an initial signal with a specific duty cycle pattern to the fan; entering a communication mode after the fan receives the initial signal; reading information of the fan by a firmware of the fan; and transforming the information of the fan into a fake tachometer (TACH) signal and transmitting the fake TACH signal to a controller via a TACH signal line under the communication mode. | 2022-02-17 |
20220050797 | Multi-Channel Communications Between Controllers In A Storage System - Enabling multi-channel communications between controllers in a storage array, including: creating a plurality of logical communications channels between two or more storage array controllers; inserting, into a buffer utilized by a direct memory access (‘DMA’) engine of a first storage array controller, a data transfer descriptor describing data stored in memory of the first storage array controller and a location to write the data to memory of a second storage array controller; retrieving, in dependence upon the data transfer descriptor, the data stored in memory of the first storage array controller; and writing, via a predetermined logical communications channel, the data into the memory of the second storage array controller in dependence upon the data transfer descriptor. | 2022-02-17 |
20220050798 | DYNAMICALLY REPROGRAMMABLE TOPOLOGICALLY UNIQUE INTEGRATED CIRCUIT IDENTIFICATION - A method, apparatus, and computer program product provide for dynamically reprogrammable topologically unique integrated circuit identification. In an example embodiment, an integrated circuit may be arranged among multiple integrated circuits. The integrated circuit may be configured to derive a topologically unique identifier by performing input measurements of stimuli provided by a host circuit. The integrated circuit may be topologically indistinguishable from at least one other integrated circuit of the multiple integrated circuits from a perspective of the host circuit. | 2022-02-17 |
20220050799 | UNIT FOR A BUS SYSTEM, MASTER-SLAVE BUS SYSTEM WITH A PLURALITY OF UNITS, AND METHOD FOR ADDRESSING UNITS OF A BUS SYSTEM - The disclosure relates to a unit for a bus system, a master/slave bus system with such units, and a method for assigning individual unit addresses for units of a bus system, wherein through the use of an enable signal, which is relayed from unit to unit, only one unit is respectively in an allocation mode in which the unit that is respectively in the allocation mode is allocated an individual unit address so that the units of the bus system can each be allocated with the unique individual address one after the other in the sequence of their cabling. | 2022-02-17 |
20220050800 | MEMORY CONTROLLER, METHOD OF OPERATING MEMORY CONTROLLER AND STORAGE DEVICE - Example memory controllers are disclosed. An example memory controller may include a PHY module including a first PHY terminal connected to a plurality of pins of a device connector, a MAC module including a first MAC terminal that is enabled to form a first lane with the first PHY terminal, and a second MAC terminal that is disabled without being connected to the first PHY terminal, a switch controller configured to receive a signal of a host connector connected to the device connector from at least one of the plurality of pins and output a switch signal in response to the signal of the host connector, and a switch configured to disable the second MAC terminal and form the first lane by connecting the first PHY terminal to the first MAC terminal in response to the switch signal. | 2022-02-17 |
20220050801 | METHOD FOR SELECTIVELY CONNECTING TO A SMART PERIPHERAL AND SYSTEM THEREFOR - A method may include a software service executing at an information handling system to determine desired capabilities of a docking station. The software service receives information from available docking stations via a wireless communication interface, the information identifying actual capabilities of each docking station. The method further includes coupling the information handling system to a selected docking station in response to determining at the information handling system that the actual capabilities of the selected docking station provide the desired capabilities. | 2022-02-17 |
20220050802 | COMMAND BASED ON-DIE TERMINATION FOR HIGH-SPEED NAND INTERFACE - Systems, apparatus and methods are provided for multi-drop multi-load NAND interface topology where a number of NAND flash devices share a common data bus with a NAND controller. A method for controlling on-die termination in a non-volatile storage device may comprise receiving a chip enable signal on a chip enable signal line from a controller, receiving an on-die termination (ODT) command on a data bus from the controller while the chip enable signal is on, decoding the on-die termination command and applying termination resistor (RTT) settings in the ODT command to a selected non-volatile storage unit at the non-volatile storage device to enable ODT for the selected non-volatile storage unit. | 2022-02-17 |
20220050803 | Universal Serial Bus Type-C Adaptor Board - A USB type-C adapter board is disclosed. Through a circuit board electrically connected to a USB type-C connector and a JTAG connector, ground pins of the USB type-C connector and the JTAG connector can form a ground net, data output pins of the USB type-C connector and the JTAG connector can form a data output net, data input pins of the USB type-C connector and the JTAG connector can form a data input net, clock pins of the USB type-C connector and the JTAG connector can form a clock net, and test mode selection pins of the USB type-C connector and the JTAG connector can form a test mode selection net. | 2022-02-17 |
20220050804 | ELECTRONIC DEVICE AND CONTROL METHOD THEREOF - An electronic device includes a communication unit, a control unit, and a display unit. The communication unit communicates with an external device using one of communication methods. The control unit determines a communication method, from among the communication methods, unable to be used in communication with the external device. The display unit displays a user interface in which the determined communication method is not selectable. | 2022-02-17 |
20220050805 | MULTIPLE DIES HARDWARE PROCESSORS AND METHODS - Methods and apparatuses relating to hardware processors with multiple interconnected dies are described. In one embodiment, a hardware processor includes a plurality of physically separate dies, and an interconnect to electrically couple the plurality of physically separate dies together. In another embodiment, a method to create a hardware processor includes providing a plurality of physically separate dies, and electrically coupling the plurality of physically separate dies together with an interconnect. | 2022-02-17 |
20220050806 | COMPUTATIONAL ARRAY MICROPROCESSOR SYSTEM USING NON-CONSECUTIVE DATA FORMATTING - A microprocessor system comprises a computational array and a hardware data formatter. The computational array includes a plurality of computation units that each operates on a corresponding value addressed from memory. The values operated on by the computation units are synchronously provided together to the computational array as a group of values to be processed in parallel. The hardware data formatter is configured to gather the group of values, wherein the group of values includes a first subset of values located consecutively in memory and a second subset of values located consecutively in memory. The first subset of values need not be located consecutively in memory with the second subset of values. | 2022-02-17 |
20220050807 | PREFIX PROBE FOR CURSOR OPERATIONS ASSOCIATED WITH A KEY-VALUE DATABASE SYSTEM - A prefix probe component receives a request to perform a cursor operation to search for one or more data elements of a key-value data store, the request comprising a key identifier associated with the one or more data elements, and wherein the key-value data store comprises a tree structure with a plurality of nodes; traverses a portion of the plurality of nodes to identify data elements in the key-value data store that match the key identifier; determines whether a number of the data elements that match the key identifier satisfies a threshold condition; and responsive to determining that the number of data elements satisfies the threshold condition, performs the cursor operation for the data elements that match the key identifier. | 2022-02-17 |
20220050808 | DATA PRUNING BASED ON METADATA - A system and method for pruning data based on metadata. The method may include receiving a query comprising a plurality of predicates and identifying one or more applicable files comprising database data satisfying at least one of the plurality of predicates. The identifying of the one or more applicable files includes reading metadata stored in a metadata store that is separate from the database data. The method further includes pruning inapplicable files comprising database data that does not satisfy at least one of the plurality of predicates to create a reduced set of files and reading the reduced set of files to execute the query. | 2022-02-17 |
20220050809 | DISTRIBUTED METADATA MANAGEMENT CONSISTENCY ASSURANCE METHOD, DEVICE, SYSTEM AND APPLICATION - A distributed metadata management consistency assurance method, device, system and application are provided. A consistent node is deployed in a metadata cluster, the client sends a metadata update request to the consistent node, and the consistent node returns a metadata update success message to the client, sequentially records the metadata update request, marks old metadata as invalidated, and deletes the invalidation mark after asynchronous data synchronization with the metadata server. The client sends a metadata read operation to the metadata server. If an object of the metadata read operation is marked as invalidated, read data that has not yet completed asynchronous data synchronization is returned via the consistent node; otherwise, the read data is directly returned via the metadata server where the metadata is located. The disclosure can ensure consistency of distributed metadata management, and improve metadata access performance as far as possible while ensuring the consistency of metadata update. | 2022-02-17 |
20220050810 | AUTOMATICALLY ASSIGNING APPLICATION SHORTCUTS TO FOLDERS WITH USER-DEFINED NAMES - Systems and methods are described for automatically organizing application shortcuts into folders with user-defined names. An illustrative method includes identifying a plurality of keywords associated with folders with user-defined names on a device, identifying a keyword associated with an application being installed on the device, determining whether the keyword associated with the application matches a keyword in the plurality of keywords, and in response to determining that the keyword associated with the application matches a keyword in the plurality of keywords, adding a shortcut for the application to a folder with a user-defined name corresponding to the matching keyword. | 2022-02-17 |
20220050811 | METHOD AND APPARATUS FOR SYNCHRONIZING FILE - A method for synchronizing a file includes acquiring a first synchronization instruction for a target application. The first synchronization instruction is used to instruct to synchronize a target file generated by the target application to a server of an auxiliary application, the target application is a graphics drawing application, and the auxiliary application is a product lifecycle management (PLM) application. The method includes acquiring the target file based on the first synchronization instruction and sending a second synchronization instruction carrying the target file to the server based on a target interface of a synchronization plug-in of the target application. The target interface of the synchronization plug-in is configured to communicate with the server. | 2022-02-17 |
20220050812 | METHOD FOR LOADING DATA IN A TARGET DATABASE SYSTEM - The present disclosure relates to a computer implemented method for loading data in a target database system. The method comprises: determining that a load of a source table is expected to occur in the target database system. A future target table may be provided in advance in accordance with a defined table schema, and thereafter a load request for loading the source table may be received. Data of the source table may be loaded into the future target table. | 2022-02-17 |
20220050813 | OUTPUT VALIDATION OF DATA PROCESSING SYSTEMS - A method is provided for output validation of data processing systems, performed by one or more processors. The method comprises aggregating at least a portion of a first data table, which is an output of a data pipeline of a first data processing system, into a first aggregated data table; aggregating at least a portion of a second data table, which is an output of a data pipeline of a second data processing system, into a second aggregated data table; the second data processing system being designed to perform essentially a same functionality as the first data processing system; performing a data comparison between the first aggregated data table and the second aggregated data table to obtain a data differentiating table; performing a schema comparison between the first aggregated data table and the second aggregated data table to obtain a schema differentiating table; generating a summary from the data differentiating table and the schema differentiating table; and deriving a value from the summary that indicates a similarity between the output of the data pipeline of the first data processing system and the output of the data pipeline of the second data processing system. | 2022-02-17 |
20220050814 | APPLICATION PERFORMANCE DATA PROCESSING - A system and method for generating performance assistance charts is disclosed. An application performance spectrometer aggregates collected application performance data by scaling, normalizing and quantizing the data so that all samples indicative of low performance appear on one side of the graph, all samples indicative of high performance appear on the other side of the graph, and samples in between are positioned relative to those two poles in quantized buckets. The spectrometer may be used to visualize an application's performance characteristics, and as an application fingerprint may be used to compare different applications and determine which have similar performance profiles. | 2022-02-17 |
20220050815 | DETERMINATION AND RECONCILIATION OF SOFTWARE USED BY A MANAGED NETWORK - A database may contain representations of: (i) software packages managed by a software management tool, including publishers, titles, and categories associated with each, and (ii) a plurality of software activities, including descriptions and amounts associated with each. A server device may be configured to obtain classifications of the software activities that predict the publishers, titles, and categories of the software activities from the descriptions. The server device may further compare the software packages to the classifications in order to identify: (i) unmanaged software packages, and (ii) amounts associated with the software packages. The server device may also transmit a representation of a graphical user interface that depicts first and second panes, the first pane listing the publishers with respective total publisher amounts and whether any of the unmanaged software packages are attributable to each of the publishers, and the second pane including a chart depicting the amounts incurred over time. | 2022-02-17 |
20220050816 | HASH-BASED ATTRIBUTE PREDICTION FOR POINT CLOUD CODING - A method, computer program, and computer system is provided for point cloud coding. Data corresponding to a point cloud is received. Hash elements corresponding to attribute values associated with the received data are reconstructed. A size of a hash table may be decreased based on deleting one or more of the hash elements corresponding to non-border regions associated with the attribute values. The data corresponding to the point cloud is decoded based on the reconstructed hash elements. | 2022-02-17 |
20220050817 | ENHANCING SPARSE INDEXES - A data structure associated with a sparse index is determined to include a plurality of redundant keys with at least one set of duplicate keys. The at least one set of duplicate keys is ranked, according to a set of criteria. According to the ranking, a first set of duplicate keys from the at least one set is selected. In place of the first set, a first guard node is inserted. The first guard node includes a first key value identical to the first set of duplicate keys and is linked to a first set of field nodes representing a first set of field values associated with the first set of duplicate keys. | 2022-02-17 |
20220050818 | DATA PROCESSING METHOD AND RELATED APPARATUS - A data processing method is provided. The method includes obtaining an operation instruction, the operation instruction including operation type information and target data unit information corresponding to a target data unit; querying a target data group in a data group set according to the target data unit information, the data group set including at least one data group, the data group including at least one data unit; obtaining locked-state information of the target data group; performing locking detection on the target data unit based on the locked-state information and the operation type information to obtain a detection result; performing locking processing on the target data unit based on the detection result; and executing the operation instruction after the locking processing is performed, to perform an operation corresponding to the operation type information on the target data unit. | 2022-02-17 |
20220050819 | AUTOMATED PINNING OF FILE SYSTEM SUBTREES - Methods and systems for improved pinning of file system subtrees are provided. In one aspect, a method is provided that includes receiving an identifier of a base directory within a file system tree. A plurality of subnodes of the base directory may be identified within the file system tree. At least a subset of the subnodes may be temporarily pinned to a plurality of metadata servers (MDSs). Pinning each respective subnode of the at least a subset of subnodes may include hashing an identifier of the respective subnode to generate a hashed value corresponding to a particular MDS and assigning the particular MDS to store and manage metadata for a subdirectory associated with the respective subnode. | 2022-02-17 |
20220050820 | Anomaly Detection - The present disclosure relates to a method for detecting anomalies with respect to a database comprising a plurality of physical entity records of insurance claims, each physical entity record comprising physical data values for at least one numeric attribute and partition-specifying values concerning values for one or more nominal attributes from one or more insurance claim records. The method includes retrieving and partitioning the plurality of physical entity records from the database, training an unsupervised anomaly detection algorithm on the plurality of physical entity records to obtain a trained anomaly detection model for each partition, calculating an anomaly score for each physical entity record using the trained anomaly detection model associated with each partition, and updating the plurality of physical entity records in the database by adding the associated anomaly score. The method is used to determine if a user-provided physical entity record is fraudulent using the anomaly score. | 2022-02-17 |
20220050821 | FINE-GRAINED SHARED MULTI-TENANT DE-DUPLICATION SYSTEM - In one example, a method includes receiving, at a cloud storage site, chunks that each take the form of a hash of a combination that includes two or more salts and a file object, and one of the salts is a retention salt shared by the chunks, monitoring a time period associated with the retention salt, when the time period has expired, removing the chunks that include the retention salt, and depositing the removed chunks in a deleted items cloud store. | 2022-02-17 |
20220050822 | FINE-GRAINED SHARED MULTI-TENANT DE-DUPLICATION SYSTEM - In one example, a method includes receiving, at a cloud storage site, chunks that each take the form of a hash of a combination that includes two or more salts and a file object, and one of the salts is a retention salt shared by the chunks, monitoring a time period associated with the retention salt, when the time period has expired, removing the chunks that include the retention salt, and depositing the removed chunks in a deleted items cloud store. | 2022-02-17 |
20220050823 | FINE-GRAINED SHARED MULTI-TENANT DE-DUPLICATION SYSTEM - In one example, a method includes receiving, at a cloud storage site, chunks that each take the form of a hash of a combination that includes two or more salts and a file object, and one of the salts is a retention salt shared by the chunks, monitoring a time period associated with the retention salt, when the time period has expired, removing the chunks that include the retention salt, and depositing the removed chunks in a deleted items cloud store. | 2022-02-17 |
20220050824 | METHOD AND SYSTEM FOR ADDRESS VERIFICATION - The method for address verification preferably includes: receiving an unverified address; parsing the unverified address into address elements; determining a candidate address set based on the address elements; determining an address comparison set from the verified address database; selecting an intended address from the address comparison set; optionally facilitating use of the intended address; and optionally determining and providing a call to action based on the intended address. | 2022-02-17 |
20220050825 | BLOCK CHAIN BASED MANAGEMENT OF AUTO REGRESSIVE DATABASE RELATIONSHIPS - Aspects of the disclosure relate to blockchain based management of auto regressive database relationships for a software application. A computing platform may retrieve, by a computing device and from one or more transaction processing systems, a data field associated with a transaction performed by a customer. A relationship between the data field and the customer may be identified based on a repository of historical transaction data. One or more ledgers of a distributed ledger system to be potentially updated may be identified. Then, the computing platform may determine whether the one or more identified ledgers are to be updated. Based upon a determination that the one or more identified ledgers are to be updated, the computing platform may provide, to the one or more identified ledgers, the data field. Then, the computing platform may cause the one or more identified ledgers to be updated. | 2022-02-17 |
20220050826 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING COMMUNICATION PROGRAM, COMMUNICATION METHOD, AND COMMUNICATION APPARATUS - A non-transitory computer-readable storage medium storing a communication program for causing a computer to execute processing, the processing including: managing an update status of data owned by a data-owning apparatus; managing an acquisition status of the data acquired by a data-using apparatus through the data-owning apparatus; determining, when a data acquisition request from the data-using apparatus is detected, whether the data-using apparatus has acquired updated data that has been updated, based on the update status and the acquisition status; and issuing an access permission for acquiring the updated data to the data-using apparatus in a case where it is determined that the data-using apparatus has not acquired the updated data. | 2022-02-17 |
20220050827 | BLOCK VERIFICATION METHOD, APPARATUS AND DEVICE - Embodiments of the present application provide a block verification method, apparatus and device. The method includes: acquiring a block to be detected, where a plurality of transactions are stored in the block to be detected; obtaining a plurality of state operation queues according to a state item of each state operation in each transaction, where each state operation queue includes state operations belonging to the same state item; and performing parallel verification on each state operation queue and obtaining a verification result of the block. Each state operation of each transaction in the block to be detected is divided into the state operation queue corresponding to each state item according to the state item, and parallel verification is performed on each state operation queue. | 2022-02-17 |
20220050828 | ASSET DISCOVERY DATA CLASSIFICATION AND RISK EVALUATION - Methods, systems, and devices for asset discovery, user discovery, data classification, risk evaluation, and data/device security are described. The method includes retrieving data stored at one or more remote locations, summarizing the retrieved data at the one or more remote locations, transferring the summarized data from the one or more remote locations to the at least one computing device, processing the transferred data by the at least one computing device, discovering assets in technology environments, classifying data that resides on each asset of the discovered assets into a respective confidentiality group of multiple confidentiality groups, calculating one or more risk scores for the discovered assets or users of the discovered assets, or both, and performing a security action to protect data that resides on an asset of the discovered assets. | 2022-02-17 |
20220050829 | DATA AGGREGATION SYSTEM - A method and a system for data aggregation for human beings provide a single point for collection, aggregation, visualization, and selective distribution of quantitative and qualitative data. The quantitative and qualitative data pertains to various domains which include sports, education, music, healthcare, animal data and the like. The system provides development tools and assessment tools for each human being. The system also includes facilitating a plurality of human beings and a plurality of respective stake-holders to enter qualitative and quantitative information on a web-based platform, collecting the qualitative and quantitative information, analyzing the qualitative and quantitative information, aggregating and visualizing the qualitative and quantitative information, and selectively distributing the qualitative and quantitative information to the plurality of human beings and the plurality of stake-holders. | 2022-02-17 |
20220050830 | SYSTEMS AND METHODS FOR AUTOMATING MAPPING OF REPAIR PROCEDURES TO REPAIR INFORMATION - Systems and methods are provided for automating the process of mapping repair documents, published by Original Equipment Manufacturers (OEMs), to repair information provided in a repair estimate record. A baseline set of repair estimate records specifying one or more parts of a baseline vehicle and an associated set of repair documents specifying instructions for repairing the one or more parts of the baseline vehicle may be saved using a data categorization model in a mapping dataset. The repair documents associated with the baseline set of repair estimate records which have been saved in the mapping dataset may then be used to automatically determine associations between another set of repair estimate records and corresponding repair documents. | 2022-02-17 |
20220050831 | INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a processor configured to classify a data set including plural pieces of data each having a first attribute and a second attribute into plural groups according to a similarity of the first attribute, save, as an intermediate description, a result of processing performed based on a value corresponding to the second attribute, for each of the classified groups, re-save, in a case where the data set is updated, as the intermediate description, the result of processing performed based on the value corresponding to the second attribute for a group including updated data out of the plural groups, and calculate a statistic of the data set based on the saved intermediate description. | 2022-02-17 |
20220050832 | PATHWAY VISUALIZATION FOR CLINICAL DECISION SUPPORT - When generating visual representations of gene activity pathways for clinical decision support, a validated pathway database that stores a plurality of validated pathways is accessed, wherein each pathway describes at least one interaction between a plurality of genes. | 2022-02-17 |
20220050833 | DYNAMICALLY SUPPRESSING QUERY ANSWERS IN SEARCH - A method for determining whether to dynamically suppress a candidate query answer designated for inclusion in search results includes instantiating a plurality of filtering rules for assessing suppression of a candidate query answer. The filtering rules include one or both of a pattern rule and a site rule. The method further comprises receiving a query, and, after receiving the query, retrieving one or more candidate query answers previously associated with the query. The method further comprises, for each candidate query answer, dynamically suppressing the candidate query answer from a curated position having enhanced prominence within search results relative to a plurality of other result entries, if either or both of a pattern rule and a site rule match the query. The method further includes returning search results including up to one candidate query answer in the curated position, responsive to a candidate query answer not being dynamically suppressed. | 2022-02-17 |
20220050834 | IDENTIFICATION OF DATA IN DISTRIBUTED ENVIRONMENTS - Systems and methods include requesting, from a first application system, a first one or more combinations of search parameters for identifying a data subject identifier of the first application system, transmission of a first query to the first application system including values of search parameters of a first one of the first one or more combinations of search parameters, the values associated with a first data subject, reception of a first data subject identifier of the first application system in response to the first query, transmission of a second query to the first application system including the first data subject identifier, and reception of data of the first application system associated with the first data subject identifier in response to the second query. | 2022-02-17 |
20220050835 | SYSTEM AND METHOD FOR SQL SERVER RESOURCES AND PERMISSIONS ANALYSIS IN IDENTITY MANAGEMENT SYSTEMS - Embodiments as disclosed allow identity management with respect to SQL databases by discovering substantially all database objects and their entitlements and associating them with corresponding identities within the identity management system, thus providing insights into such SQL server entitlements and their associated identities, even across multiple SQL servers within an enterprise environment. | 2022-02-17 |
20220050836 | DATABASE SEARCH QUERY ENHANCER - An apparatus includes a memory and a hardware processor that receives a query from a device. The query includes first search parameters. The processor also retrieves, from a database and based on the first search parameters, a plurality of previously issued queries and applies a machine learning algorithm on the plurality of previously issued queries to determine second search parameters. The processor further adds the second search parameters to the query to form an enhanced query and communicates the enhanced query to a plurality of response systems. The processor then receives, from the plurality of response systems, a plurality of responses to the enhanced query, constructs, based on the plurality of responses to the enhanced query, an enhanced response to the query, and communicates the enhanced response to the device for selection of a response from the plurality of responses. | 2022-02-17 |
20220050837 | SELECTIVELY TARGETING CONTENT SECTION FOR COGNITIVE ANALYTICS AND SEARCH - A computer system includes a natural language processing (NLP) unit, a storage unit, a user interface and a search engine. The NLP unit analyzes a content source to identify one or more sections containing searchable content and generate section metadata respective to each identified section included in the content source. The storage unit stores the section metadata and the user interface receives a section-scoped query aimed at searching an identified section corresponding to the at least one first section metadata stored in the storage unit without searching an identified section corresponding to at least one second section metadata stored in the storage unit. Based on the section-scoped query, the search engine analyzes the at least one first section metadata stored in the storage unit without analyzing the at least one second section metadata. | 2022-02-17 |
20220050838 | SYSTEM AND METHOD FOR PROCESSING DATA FOR ELECTRONIC SEARCHING - A system and method for labelling, categorizing, structuring, and enriching unstructured data is disclosed. First, raw data is received, and a checksum is calculated and compared with an existing checksum. The raw data is downloaded to an object storage, wherein it is then parsed and transmitted to a sanity check system. The sanity checked data is then loaded into a staging database and a stability check is performed. The stability checked data is then loaded into a master database and transmitted to a search platform. Post checks are performed, and a report is generated, followed by the system updating the watchlist checksum. | 2022-02-17 |
20220050839 | DATA PROFILING AND MONITORING - A data monitoring and evaluation system may receive a query associated with a data record from a user. The system obtains target data including a plurality of data presentations associated with the data record. The system identifies a plurality of attributes associated with the data record and maps the same with each of the plurality of data presentations for identifying a data presentation modification. The system may evaluate the data presentation modification to identify a principal data presentation. The system may determine the conformity of the principal data presentation to a rule to create a principal data record. The system may determine the conformity of the principal data record to a record acceptance parameter. The system may generate a data modeling result comprising the principal data record conforming to the record acceptance parameter. | 2022-02-17 |
20220050840 | NATURAL LANGUAGE QUERY TRANSLATION BASED ON QUERY GRAPHS - Techniques described herein allow for accurate translation of natural language (NL) queries to declarative language. A syntactic dependency parsing tree is generated for an NL query, which is used to map tokens in the query to logical data model concepts. Relationship-type mappings are completed based on relationship constraints. Final mappings are identified for any relationship tokens that are associated with multiple candidate mappings by identifying which candidate mappings have the lowest cost metrics. An NL query-specific query graph is generated based on the mapping data for the NL query and the logical data model. The query graph represents an NL query-specific version of the logical data model where grammatical dependencies between NL query words are translated to the query graph. A query graph is annotated with information, from the mapping data, that is not represented by paths in the query graph. The query graph is used to generate a computer-executable translation of the NL query. | 2022-02-17 |
20220050841 | ISOMETRIC TRANSFORMATIONS OF DIRECT SEARCH MESH - A computerized optimization method, system, and computer readable storage medium for performing pattern searching includes (a) providing an initial mesh of vectors; (b) providing initial points to be used as a base vector establishing a center region of the mesh of vectors; (c) for each base vector, obtaining a transformed mesh via an isometric transformation of the initial mesh of vectors; (d) evaluating the model objective function via each transformed mesh of vectors; (e) selecting a next set of base vectors by selecting a most favorable set of transformed mesh vectors to correspond with the objective function; (f) repeating steps (c) through (e) in an iterative sequence until a termination criterion of the iterative sequence is met; and (g) storing the selected most favorable transformed mesh vectors in a memory. | 2022-02-17 |
20220050842 | QUERYING A DATABASE - A query is received from a user. A query event type and a query time range associated with the query are determined. An estimated amount of data to be queried associated with the determined query time range is determined based on at least a historical number of the query event type of the user. An allowable amount of data to be queried supported by a database for a single query is determined. One or more sub-queries for the received query are generated. Each sub-query is associated with a different time period within the determined query time range. A corresponding amount of data to be queried associated with each time period is less than, or equal to, the determined allowable amount of data to be queried. The database is queried with the generated one or more sub-queries. | 2022-02-17 |
20220050843 | LEARNING-BASED QUERY PLAN CACHE FOR CAPTURING LOW-COST QUERY PLAN - A query processing device is provided, including a processor coupled to a communication interface and a query storage. The processor receives a current submission of a query in a training mode, a stored prior execution plan, and stored statistics for the prior execution plan. The processor generates a current execution plan for the query, executes the current execution plan, and collects statistics. The processor stores the current execution plan and the statistics in the query storage and determines, based on the current execution plan, that the query is not in the training mode. The processor selects an execution plan for the query from among a plurality of stored execution plans for the query, including the prior execution plan and the current execution plan, and stores the selected execution plan for the query in the query storage with an indication that the query is not in the training mode. | 2022-02-17 |
20220050844 | Systems And Methods For Processing Structured Queries Over Clusters - Systems and methods for processing structured queries over clusters are provided herein. An example system includes a plurality of clusters, wherein a local cluster is configured to receive, from a client, a structured query language (SQL) structured query, determine, based on the SQL structured query, a list of remote clusters of the plurality of clusters, process the SQL structured query to generate a local query executable by a local search engine of the local cluster and remote queries executable by remote search engines of the remote clusters, send the remote queries to the remote clusters to obtain remote results, execute the local query to obtain local results, combine the remote results and the local results to obtain an aggregated result, and return the aggregated result to the client. | 2022-02-17 |
20220050845 | SYSTEM AND METHOD FOR JOINING SKEWED DATASETS IN A DISTRIBUTED COMPUTING ENVIRONMENT - Disclosed is a method and system for joining datasets in a distributed computing environment. | 2022-02-17 |
20220050846 | ENABLING EDITABLE TABLES ON A CLOUD-BASED DATA WAREHOUSE - Enabling editable tables on a cloud-based data warehouse including receiving, by a query manager from a query manager client, a request to create a referencing worksheet using, as a data source, a client-provided table; storing, by the query manager, the client-provided table on the cloud-based data warehouse; generating, by the query manager, a database query to create the referencing worksheet, wherein the database query targets the client-provided table on the cloud-based data warehouse; and issuing, by the query manager, the database query to the cloud-based data warehouse. | 2022-02-17 |
20220050847 | QUERY TERM EXPANSION AND RESULT SELECTION - Devices, systems, and methods for improving results returned from a query. A method can include identifying, based on a term embedding of a corpus of terms, expansion terms of a raw query term that are nearest the raw query term, normalizing distances between the raw query term and the identified expansion terms, identifying, based on the term embedding, expansion term neighbors of an expansion term that are nearest the expansion term, normalizing distances between the expansion term and the identified expansion term neighbors, determining a WMA weight between the raw query term and the expansion term, and executing the query with the raw query terms and the expansion terms (determined based on the WMA weight) to generate query results. | 2022-02-17 |
20220050848 | Online Post-Processing In Rankings For Constrained Utility Maximization - Online post-processing may be performed for rankings generated with constrained utility maximization. A stream of data items may be received. A batch of data items from the stream may be ranked according to a ranking model trained to rank data items in a descending order of relevance. The batch of data items may be associated with a current time step. A re-ranking model may be applied to generate a re-ranking of the batch of data items according to a re-ranking policy that considers the current batch and previous batches with regard to a ranking constraint. The re-ranked items may then be sent to an application. | 2022-02-17 |
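Several of the applications above describe mechanisms that are easy to illustrate concretely. For example, 20220050842 splits a query into time-range sub-queries so that each sub-query stays within the database's per-query data limit. The sketch below shows that splitting step only; the function name, parameters, and the assumption that rows are spread evenly across the time range are illustrative, not taken from the application:

```python
from datetime import datetime
from typing import List, Tuple

def split_time_range(start: datetime, end: datetime,
                     estimated_rows: int,
                     max_rows_per_query: int) -> List[Tuple[datetime, datetime]]:
    """Split [start, end) into contiguous sub-ranges so that each
    sub-query's estimated row count stays at or below the database's
    per-query limit, assuming rows are spread evenly over time."""
    # Number of sub-queries needed (ceiling division).
    n = max(1, -(-estimated_rows // max_rows_per_query))
    step = (end - start) / n
    return [(start + i * step, start + (i + 1) * step) for i in range(n)]

# Example: ~1,000 estimated events over ten days, limit of 300 rows per query
# yields four contiguous sub-ranges covering the full query window.
parts = split_time_range(datetime(2022, 1, 1), datetime(2022, 1, 11), 1000, 300)
```

Each resulting sub-range would then be issued to the database as its own query, as the abstract describes, keeping every individual query under the allowable amount of data.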