52nd week of 2021 patent application highlights, part 54
Patent application number | Title | Published
20210406173 | COMPUTING SYSTEM AND METHOD FOR CONTROLLING STORAGE DEVICE | 2021-12-30
According to one embodiment, a computing system transmits to a storage device a write request designating a first logical address for identifying first data to be written and a length of the first data. The computing system receives from the storage device the first logical address and a first physical address indicative of both a first block, selected by the storage device from blocks other than defective blocks, and a first physical storage location in the first block to which the first data is written. The computing system updates a first table which manages mapping between logical addresses and physical addresses of the storage device, and maps the first physical address to the first logical address.
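The host-side half of this handshake, where the device picks the physical location and the host records the returned mapping, can be sketched as follows (class, function, and parameter names are illustrative, not taken from the application):

```python
class HostL2PTable:
    """Host-maintained logical-to-physical mapping table (sketch)."""

    def __init__(self):
        self.l2p = {}  # logical address -> (block, in-block offset)

    def handle_write_completion(self, logical_addr, physical_addr):
        # The storage device returns the physical address it chose;
        # the host maps that physical address to the logical address.
        self.l2p[logical_addr] = physical_addr

    def lookup(self, logical_addr):
        return self.l2p.get(logical_addr)


def allocate(blocks, defective, next_offset):
    """Device-side stand-in: pick a block that is not defective."""
    for b in blocks:
        if b not in defective:
            return (b, next_offset)
    raise RuntimeError("no usable block")


table = HostL2PTable()
phys = allocate(blocks=[0, 1, 2], defective={0}, next_offset=7)
table.handle_write_completion(logical_addr=42, physical_addr=phys)
print(table.lookup(42))  # (1, 7): block 0 was skipped as defective
```
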
20210406174 | METHODS FOR MANAGING STORAGE OPERATIONS FOR MULTIPLE HOSTS COUPLED TO DUAL-PORT SOLID-STATE DISKS AND DEVICES THEREOF | 2021-12-30
Methods, non-transitory machine readable media, and computing devices that manage storage operations directed to dual-port solid state disks (SSDs) coupled to multiple hosts are disclosed. With this technology, context metadata comprising a checksum is retrieved based on a first physical address mapped, in a cached zoned namespace (ZNS) mapping table, to a logical address. The logical address is extracted from a request to read a portion of a file. A determination is made whether the checksum is valid based on a comparison to identification information extracted from the request and associated with the file portion. At least the first physical address is replaced in the cached ZNS mapping table with a second physical address retrieved from an on-disk ZNS mapping table when the determination indicates the checksum is invalid. The file portion retrieved from a dual-port SSD using the second physical address is returned to service the request.
20210406175 | SHARED MEMORY WORKLOADS USING EXISTING NETWORK FABRICS | 2021-12-30
Shared memory workloads using existing network fabrics, including: presenting, by a Memory Mapped Input/Output (MMIO) translator, memory of the MMIO translator as a portion of a memory space of a host; receiving, by the MMIO translator, a first interrupt from an input/output (I/O) adapter; and storing, by the MMIO translator, without sending the first interrupt to an operating system, data associated with the first interrupt from the I/O adapter into the memory of the MMIO translator.
20210406176 | ACCELERATED IN-MEMORY CACHE WITH MEMORY ARRAY SECTIONS HAVING DIFFERENT CONFIGURATIONS | 2021-12-30
An apparatus has a memory array with a first section and a second section. The first section of the memory array includes a first sub-array of memory cells made up of a first type of memory. The second section of the memory array includes a second sub-array of memory cells made up of the first type of memory, with a configuration of each memory cell of the second sub-array that differs from the configuration of each cell of the first sub-array. Alternatively, the second section can include memory cells made up of a second type of memory that is different from the first type of memory. Either way, the second type of memory or the differently configured first type of memory gives the memory cells in the second sub-array less memory latency than each memory cell of the first type of memory in the first sub-array.
20210406177 | DIRECT MAPPING MODE FOR ASSOCIATIVE CACHE | 2021-12-30
A method of controlling a cache is disclosed. The method comprises receiving a request to allocate a portion of memory to store data. The method also comprises directly mapping a portion of memory to an assigned contiguous portion of the cache memory when the request to allocate a portion of memory to store the data includes a cache residency request that the data continuously resides in cache memory. The method also comprises mapping the portion of memory to the cache memory using associative mapping when the request to allocate a portion of memory to store the data does not include a cache residency request that data continuously resides in the cache memory.
20210406178 | DEDICATED MEMORY BUFFERS FOR SUPPORTING DETERMINISTIC INTER-FPGA COMMUNICATION | 2021-12-30
A server includes a field programmable gate array (FPGA) partitioned into a set of partial reconfiguration (PR) slots and a memory that supports a set of logical buffers. A deterministic application request module (DARM) receives application requests to allocate the set of reconfiguration slots to one or more tenants, and the one or more tenants configure the allocated reconfiguration slot to perform tasks. The DARM stores data associated with the application request in a first logical buffer from the set of logical buffers. A reconfiguration slot scheduling (RSS) module identifies a first reconfiguration slot from the set of reconfiguration slots and associates the first reconfiguration slot with the first logical buffer. A reconfiguration slot initialization (RSI) module reconfigures the first reconfiguration slot to perform the tasks based on the data stored in the first logical buffer.
20210406179 | Memory-based synchronization of distributed operations | 2021-12-30
A network device in a communication network includes a controller and processing circuitry. The controller is configured to manage execution of an operation whose execution depends on inputs from a group of one or more work-request initiators. The processing circuitry is configured to read one or more values, which are set by the work-request initiators in one or more memory locations that are accessible to the work-request initiators and to the network device, and to trigger execution of the operation in response to verifying that the one or more values read from the one or more memory locations indicate that the work-request initiators in the group have provided the respective inputs.
20210406180 | REGION BASED DIRECTORY SCHEME TO ADAPT TO LARGE CACHE SIZES | 2021-12-30
Systems, apparatuses, and methods for maintaining a region-based cache directory are disclosed. A system includes multiple processing nodes, with each processing node including a cache subsystem. The system also includes a cache directory to help manage cache coherency among the different cache subsystems of the system. In order to reduce the number of entries in the cache directory, the cache directory tracks coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Accordingly, the system includes a region-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the system. The cache directory includes a reference count in each entry to track the aggregate number of cache lines that are cached per region. If a reference count of a given entry goes to zero, the cache directory reclaims the given entry.
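The core bookkeeping described in this abstract, one directory entry per region with a reference count that triggers reclamation at zero, can be sketched briefly (the region size and data structures here are illustrative assumptions, not details from the application):

```python
class RegionDirectory:
    """Region-based coherence directory with per-region reference counts."""

    def __init__(self, lines_per_region=64):
        self.lines_per_region = lines_per_region
        self.refcount = {}  # region id -> number of cached lines in region

    def region_of(self, line_addr):
        return line_addr // self.lines_per_region

    def line_cached(self, line_addr):
        # Many cached lines in one region share a single directory entry.
        r = self.region_of(line_addr)
        self.refcount[r] = self.refcount.get(r, 0) + 1

    def line_evicted(self, line_addr):
        r = self.region_of(line_addr)
        self.refcount[r] -= 1
        if self.refcount[r] == 0:
            del self.refcount[r]  # reclaim the entry when count hits zero


d = RegionDirectory(lines_per_region=64)
d.line_cached(130)   # region 2
d.line_cached(131)   # same region: still one entry, refcount 2
d.line_evicted(130)
d.line_evicted(131)  # refcount reaches zero, entry reclaimed
print(d.refcount)    # {}
```
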
20210406181 | DISTRIBUTED MEMORY-AUGMENTED NEURAL NETWORK ARCHITECTURE | 2021-12-30
A method for using a distributed memory device in a memory augmented neural network system includes receiving, by a controller, an input query to access data stored in the distributed memory device, the distributed memory device comprising a plurality of memory banks. The method further includes determining, by the controller, a memory bank selector that identifies a memory bank from the distributed memory device for memory access, wherein the memory bank selector is determined based on a type of workload associated with the input query. The method further includes computing, by the controller and by using content based access, a memory address in the identified memory bank. The method further includes generating, by the controller, an output in response to the input query by accessing the memory address.
20210406182 | HARDWARE-BASED COHERENCY CHECKING TECHNIQUES | 2021-12-30
Methods, systems, and devices for hardware-based coherency checking techniques are described. A memory sub-system with hardware-based coherency checking can include a coherency block that maintains a coherency lock and releases coherency upon completion of a write command. The coherency block can perform operations to lock coherency associated with the write command, monitor for completion of the write to the memory device(s), release the coherency lock, and update one or more records used to monitor coherency associated with the write command. A coherency command and coherency status can be provided through a dedicated hardware bridge, such as a bridge through a level-zero cache coupled with the coherency hardware.
20210406183 | METHOD AND APPARATUS FOR A PAGE-LOCAL DELTA-BASED PREFETCHER | 2021-12-30
A method includes recording a first set of consecutive memory access deltas, where each of the consecutive memory access deltas represents a difference between two memory addresses accessed by an application, updating values in a prefetch training table based on the first set of memory access deltas, and predicting one or more memory addresses for prefetching responsive to a second set of consecutive memory access deltas and based on values in the prefetch training table.
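The train-then-predict flow above can be sketched with a small delta history table; the history depth, table shape, and voting rule here are illustrative assumptions rather than the patented design:

```python
from collections import defaultdict


class DeltaPrefetcher:
    """Delta-based prefetch training table: maps a short history of
    recent address deltas to the delta most often observed next."""

    def __init__(self, history=2):
        self.history = history
        self.table = defaultdict(lambda: defaultdict(int))
        self.recent = []

    def train(self, deltas):
        # Record each delta against the history of deltas preceding it.
        for d in deltas:
            key = tuple(self.recent[-self.history:])
            if len(key) == self.history:
                self.table[key][d] += 1
            self.recent.append(d)

    def predict(self, deltas, last_addr):
        # Predict the next address from the most frequent follow-on delta.
        key = tuple(deltas[-self.history:])
        candidates = self.table.get(key)
        if not candidates:
            return None
        best = max(candidates, key=candidates.get)
        return last_addr + best


p = DeltaPrefetcher(history=2)
p.train([4, 4, 8, 4, 4, 8])                  # repeating stride pattern
print(p.predict([4, 4], last_addr=100))      # 108: (4, 4) is followed by 8
```
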
20210406184 | MANAGING PREFETCH REQUESTS BASED ON STREAM INFORMATION FOR PREVIOUSLY RECOGNIZED STREAMS | 2021-12-30
Managing prefetch requests associated with memory access requests includes storing stream information associated with multiple streams. At least one stream was recognized based on an initial subset of memory access requests within a previously performed set of related memory access requests and is associated with stream information that includes stream matching information and stream length information. After the previously performed set has ended, a matching memory access request is identified that matches with a corresponding matched stream based at least in part on stream matching information within stream information associated with the matched stream. In response to identifying the matching memory access request, the managing determines whether or not to perform a prefetch request for data at an address related to a data address in the matching memory access request based at least in part on stream length information within the stream information associated with the matched stream.
20210406185 | DIRECT CACHE HIT AND TRANSFER IN A MEMORY SUB-SYSTEM THAT PROGRAMS SEQUENTIALLY | 2021-12-30
A system includes buffers and a processing device that receives a read request with a logical block address (LBA) value for a memory device, creates a logical transfer unit (LTU) value that includes the LBA value and is mapped to a first physical address of the memory device, and generates command tags that are to direct the processing device to retrieve data from the memory device and store the data in buffers. The command tags include a first command tag associated with the first physical address and a second command tag associated with a second physical address that sequentially follows the first physical address. The processing device further creates an entry in a read cache table for the buffers. The entry can include a starting LBA value set to the LBA value and a read offset value corresponding to the amount of data.
20210406186 | PROVISIONING VIRTUAL MACHINES WITH A SINGLE IDENTITY AND CACHE VIRTUAL DISK | 2021-12-30
A virtual disk is provided to a computing environment. The virtual disk includes identity information to enable identification of a virtual machine within the computing environment. A size of the virtual disk is increased within the computing environment to enable the virtual disk to act as a storage for the identity information and as a cache of other system data to operate the virtual machine. The virtual machine is booted within the computing environment. The virtual machine is configured to access the virtual disk, which both stores the identity information and caches other system data used to operate the virtual machine. Related apparatus, systems, techniques and articles are also described.
20210406187 | DYNAMIC L2P CACHE | 2021-12-30
Disclosed in some examples are methods, systems, and machine-readable media that dynamically adjust the size of an L2P cache in a memory device in response to observed operational conditions. The L2P cache may borrow memory space from a donor memory location, such as a read or write buffer. For example, if the system notices a high amount of read requests, the system may increase the size of the L2P cache at the expense of the write buffer (which may be decreased). Likewise, if the system notices a high amount of write requests, the system may increase the size of the L2P cache at the expense of the read buffer (which may be decreased).
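The donor-buffer idea in this abstract reduces to a small rebalancing rule; the sizes, step, and ratio threshold below are illustrative assumptions:

```python
def rebalance(l2p_size, read_buf, write_buf, reads, writes,
              step=4, ratio=2.0):
    """Grow the L2P cache at the expense of whichever buffer serves
    the lighter traffic; returns (l2p_size, read_buf, write_buf)."""
    if writes == 0 or reads / max(writes, 1) >= ratio:
        # Read-heavy workload: shrink the write buffer as donor.
        take = min(step, write_buf)
        return l2p_size + take, read_buf, write_buf - take
    if reads == 0 or writes / max(reads, 1) >= ratio:
        # Write-heavy workload: shrink the read buffer as donor.
        take = min(step, read_buf)
        return l2p_size + take, read_buf - take, write_buf
    return l2p_size, read_buf, write_buf  # balanced: no change


print(rebalance(32, 16, 16, reads=900, writes=100))  # (36, 16, 12)
print(rebalance(32, 16, 16, reads=100, writes=900))  # (36, 12, 16)
```
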
20210406188 | MEMORY CONTROLLER AND METHOD OF OPERATING THE SAME | 2021-12-30
A memory controller may include a host interface controller, a first queue, a second queue, and a cache memory. The host interface controller may be configured to generate, based on a request received from a host, one or more command segments corresponding to the request. The first queue may be configured to store the one or more command segments. The second queue may be configured to store a target command segment from among the one or more command segments. The memory controller caches a target map segment corresponding to the target command segment into the cache memory in response to the target command segment being transferred from the first queue to the second queue.
20210406189 | INFORMATION PROCESSING DEVICE, ACCESS CONTROLLER, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM | 2021-12-30
Disclosed herein is an information processing device including a host unit adapted to request data access by specifying a logical address of a secondary storage device, and a controller adapted to accept the data access request and convert the logical address into a physical address using an address conversion table to perform data access to an associated area of the secondary storage device, in which an address space defined by the address conversion table includes a coarsely granular address space that collectively associates, with logical addresses, physical addresses that are in units larger than those in which data is read.
20210406190 | METHODS AND APPARATUS TO FACILITATE ATOMIC COMPARE AND SWAP IN CACHE FOR A COHERENT LEVEL 1 DATA CACHE SYSTEM | 2021-12-30
Methods, apparatus, systems and articles of manufacture to facilitate atomic compare and swap in cache for a coherent level 1 data cache system are disclosed. An example system includes a cache storage; a cache controller coupled to the cache storage wherein the cache controller is operable to: receive a memory operation that specifies a key, a memory address, and a first set of data; retrieve a second set of data corresponding to the memory address; compare the second set of data to the key; based on the second set of data corresponding to the key, cause the first set of data to be stored at the memory address; and based on the second set of data not corresponding to the key, complete the memory operation without causing the first set of data to be stored at the memory address.
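The compare-and-swap semantics the controller implements can be shown in a few lines; this is a software model of the operation's behavior, not the hardware design, and the dict-as-cache representation is an illustrative assumption:

```python
def cache_compare_and_swap(cache, address, key, new_value):
    """Compare-and-swap against a cache line: store new_value only if
    the current contents equal the key. In hardware this sequence runs
    atomically under the cache controller; returns (old value, swapped)."""
    old = cache.get(address)
    if old == key:
        cache[address] = new_value
        return old, True
    return old, False


cache = {0x100: 7}
print(cache_compare_and_swap(cache, 0x100, key=7, new_value=9))   # (7, True)
print(cache_compare_and_swap(cache, 0x100, key=7, new_value=11))  # (9, False)
print(cache[0x100])  # 9: the second swap did not take effect
```
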
20210406191 | DEVICES AND METHODS FOR FAILURE DETECTION AND RECOVERY FOR A DISTRIBUTED CACHE | 2021-12-30
A programmable switch includes at least one memory configured to store a cache directory for a distributed cache, and circuitry configured to receive a cache line request from a client device to obtain a cache line. The cache directory is updated based on the received cache line request, and the cache line request is sent to a memory device to obtain the requested cache line. An indication of the cache directory update is sent to a controller for the distributed cache to update a global cache directory. In one aspect, the controller sends at least one additional indication of the update to at least one other programmable switch to update at least one backup cache directory stored at the at least one other programmable switch.
20210406192 | STORAGE CAPACITY RECOVERY SOURCE SELECTION | 2021-12-30
A non-volatile memory device includes a volatile memory, a non-volatile memory, and a controller. The controller is configured to map logical addresses for stored data to physical addresses of the stored data in the non-volatile memory using a logical-to-physical mapping structure stored partially in the volatile memory and at least partially in the non-volatile memory. The controller is configured to perform a storage capacity recovery operation for a region of the non-volatile memory that is selected based at least partially on a number of mappings for the region likely to be stored in the volatile memory for the storage capacity recovery operation.
20210406193 | Operation-Deterministic Write Operations For Data Recovery And Integrity Checks | 2021-12-30
Aspects of a storage device including a memory and a controller are provided that allow for storage of tags identifying data types and sequence numbers with data to facilitate data recovery and system integrity checks following a power failure or other system failure event. The controller is configured during a write operation to include a tag in the data identifying the data type as a host write, a recycle write, or another internal write. Following a system failure event, the controller is configured to read the tags to identify the data type in the write. Based on the tags, the controller is configured to properly rebuild or update a logical-to-physical (L2P) table of the storage device to assign correct logical addresses to the most recent data during data recovery, as well as to verify correct logical addresses during system integrity checks.
20210406194 | PIPELINED OUT OF ORDER PAGE MISS HANDLER | 2021-12-30
Systems, methods, and apparatuses relating to circuitry to implement a pipelined out of order page miss handler are described. In one embodiment, a hardware processor core includes an execution circuit to generate data storage requests for virtual addresses, a translation lookaside buffer to translate the virtual addresses to physical addresses, and a single page miss handler circuit comprising a plurality of pipelined page walk stages, wherein the single page miss handler circuit is to contemporaneously perform a first page walk within a first stage of the plurality of pipelined page walk stages for a first miss of a first virtual address in the translation lookaside buffer, and a second page walk within a second stage of the plurality of pipelined page walk stages for a second miss of a second virtual address in the translation lookaside buffer.
20210406195 | METHOD AND APPARATUS TO ENABLE A CACHE (DEVPIC) TO STORE PROCESS SPECIFIC INFORMATION INSIDE DEVICES THAT SUPPORT ADDRESS TRANSLATION SERVICE (ATS) | 2021-12-30
Embodiments described herein may include apparatus, systems, techniques, or processes that are directed to PCIe Address Translation Service (ATS) to allow devices to have a DevTLB that caches address translation (per page) information in conjunction with a Device ProcessInfoCache (DevPIC) that will store process specific information. Other embodiments may be described and/or claimed.
20210406196 | MEMORY POOLS IN A MEMORY MODEL FOR A UNIFIED COMPUTING SYSTEM | 2021-12-30
A method and system for allocating memory to a memory operation executed by a processor in a computer arrangement having a plurality of processors. The method includes receiving a memory operation from a processor that references an address in a shared memory, mapping the received memory operation to at least one virtual memory pool to produce a mapping result, and providing the mapping result to the processor.
20210406197 | SPATIAL CACHE | 2021-12-30
A cache includes a p-by-q array of memory units, a row addressing unit, and a column addressing unit. Each memory unit has an m-by-n array of memory cells. The column addressing unit has, for each memory unit, m n-to-one multiplexers, one associated with each of the m rows of the memory unit, wherein each n-to-one multiplexer has an input coupled to each of the n memory cells associated with the row associated with that multiplexer. The row addressing unit has, for each memory unit, n m-to-one multiplexers, one associated with each of the n columns of the memory unit, wherein each m-to-one multiplexer has an input coupled to each of the m memory cells associated with the column associated with that multiplexer. The row addressing unit and column addressing unit support reading and/or writing of the array of memory units, e.g. using virtual or physical addresses.
20210406198 | CLASSIFYING ACCESS FREQUENCY OF A MEMORY SUB-SYSTEM COMPONENT | 2021-12-30
A system detects a request for an access operation relating to an address of a component of a memory sub-system and determines a number of access operations pertaining to that address using at least one of a plurality of counters. The plurality of counters comprises an extended counter corresponding to an extended period of time and a recent counter corresponding to a recent period of time. The system assigns an access frequency classification to at least one of the address or the component of the memory sub-system based on the number of access operations pertaining to that address.
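A classification built from a long-horizon counter and a short-horizon counter, as described above, might look like the following; the thresholds and the "hot/warm/cold" labels are illustrative assumptions, not from the application:

```python
def classify(extended_count, recent_count,
             hot_threshold=100, recent_threshold=10):
    """Assign an access-frequency class from two counters: one over an
    extended period of time and one over a recent period of time."""
    if recent_count >= recent_threshold:
        return "hot"   # busy right now
    if extended_count >= hot_threshold:
        return "warm"  # historically busy, currently quiet
    return "cold"


print(classify(extended_count=500, recent_count=25))  # hot
print(classify(extended_count=500, recent_count=2))   # warm
print(classify(extended_count=5, recent_count=1))     # cold
```
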
20210406199 | SECURE ADDRESS TRANSLATION SERVICES USING CRYPTOGRAPHICALLY PROTECTED HOST PHYSICAL ADDRESSES | 2021-12-30
Embodiments are directed to providing a secure address translation service. An embodiment of a system includes a memory for storage of data, and an Input/Output Memory Management Unit (IOMMU) coupled to the memory, the IOMMU to perform operations comprising: receiving an address translation request from a remote device via a host-to-device link, wherein the address translation request comprises a virtual address (VA); determining a physical address (PA) associated with the virtual address (VA); generating an encrypted physical address (EPA) using at least the physical address (PA) and a cryptographic key; and sending the encrypted physical address (EPA) to the remote device via the host-to-device link.
20210406200 | Just-In-Time Post-Processing Computation Capabilities for Encrypted Data | 2021-12-30
Aspects of a storage device including a memory and an encryption core are provided. The storage device may be configured for providing secure data storage, as well as one or more post-processing operations to be performed with the data. The encryption core, which may be configured to decrypt data, may control execution of one or more post-processing operations using the data. A read command received from a host device may include a tag associated with data identified by the read command. When encrypted data is retrieved from memory according to the read command, the encryption core may decrypt the encrypted data and provide the decrypted data for post-processing based on the tag. A corresponding post-processing operation may return a result when executed using the decrypted data. Rather than raw data identified by the read command, the result may be delivered to the host device in response to the read command.
20210406201 | PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS TO SUPPORT LIVE MIGRATION OF PROTECTED CONTAINERS | 2021-12-30
A processor includes a decode unit to decode an instruction that is to indicate a page of a protected container memory, and a storage location outside of the protected container memory. An execution unit, in response to the instruction, is to ensure that there are no writable references to the page of the protected container memory while it has a write protected state. The execution unit is to encrypt a copy of the page of the protected container memory. The execution unit is to store the encrypted copy of the page to the storage location outside of the protected container memory, after it has been ensured that there are no writable references. The execution unit is to leave the page of the protected container memory in the write protected state, which is also valid and readable, after the encrypted copy has been stored to the storage location.
20210406202 | SCALE-OUT HIGH BANDWIDTH MEMORY SYSTEM | 2021-12-30
A high bandwidth memory (HBM) system includes a first HBM+ card. The first HBM+ card includes a plurality of HBM+ cubes. Each HBM+ cube has a logic die and a memory die. The first HBM+ card also includes a HBM+ card controller coupled to each of the plurality of HBM+ cubes and configured to interface with a host, a pin connection configured to connect to the host, and a fabric connection configured to connect to at least one HBM+ card.
20210406203 | CACHE ARCHITECTURE FOR A STORAGE DEVICE | 2021-12-30
The present disclosure relates to a method for improving the reading and/or writing phase in storage devices including a plurality of non-volatile memory portions managed by a memory controller, comprising:
20210406204 | MEMORY SYSTEM | 2021-12-30
According to one embodiment, a memory system includes a first chip and a second chip. The second chip is bonded with the first chip. The memory system includes a semiconductor memory device and a memory controller. The semiconductor memory device includes a memory cell array, a peripheral circuit, and an input/output module. The memory controller is configured to receive an instruction from an external host device and control the semiconductor memory device via the input/output module. The first chip includes the memory cell array. The second chip includes the peripheral circuit, the input/output module, and the memory controller.
20210406205 | STORAGE SYSTEM WITH CAPACITY SCALABILITY AND METHOD OF OPERATING THE SAME | 2021-12-30
The present disclosure provides a storage system including a first storage device (e.g., a main storage device) and one or more additional storage devices (e.g., sub storage devices). The first storage device includes a host interface for communicating with a host device and is directly connected to the host device. The additional storage devices may be directly connected to the first storage device and may communicate with the host device through the host interface included in the first storage device. The storage system thus has a total combined capacity of both the capacity of the first storage device and the capacity of the one or more additional storage devices. Further, the one or more additional storage devices may be added or removed to increase or decrease the total capacity of the storage system, and the one or more additional storage devices may not necessarily themselves include a host interface.
20210406206 | MEMORY DEVICE MANAGEABILITY BUS | 2021-12-30
An embodiment of an electronic apparatus may comprise one or more substrates, and a controller coupled to the one or more substrates, the controller including circuitry to enumerate respective sideband addresses to ten or more memory devices, and provide bi-directional communication with an individual memory device of the ten or more memory devices with a particular sideband address enumerated to the individual memory device. Other embodiments are disclosed and claimed.
20210406207 | SECURE TIMER SYNCHRONIZATION BETWEEN FUNCTION BLOCK AND EXTERNAL SOC | 2021-12-30
Various embodiments include methods and systems performed by a processor of a first function block for providing secure timer synchronization with a second function block. Various embodiments may include storing, in a shared register space, a first time counter value in which the first time counter value is based on a global counter of the second function block, transmitting, from the shared register space, the stored first time counter value to a preload register of the first function block, receiving, by the first function block, a strobe signal from the second function block configured to enable the first time counter value in the preload register to be loaded into a global counter of the first function block, and configuring the global counter with the first time counter value from the preload register.
20210406208 | APPARATUSES AND METHODS FOR WRITING DATA TO A MEMORY | 2021-12-30
Apparatuses and methods for writing data to a memory array are disclosed. When data is duplicative across multiple data lines, data may be transferred across a single line of a bus rather than driving the duplicative data across all of the data lines. The data from the single data line may be provided to the write amplifiers of the additional data lines to provide the data from all of the data lines to be written to the memory. In some examples, error correction may be performed on data from the single data line rather than all of the data lines.
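The duplicate-detection and fan-out behavior can be modeled in software; the frame format and function names below are illustrative assumptions about the scheme, not the hardware interface:

```python
def transmit(data_per_line):
    """If all data lines carry identical bytes, drive only one line and
    flag the receiver to replicate it across the rest of the bus."""
    first = data_per_line[0]
    if all(d == first for d in data_per_line):
        return {"duplicate": True, "payload": first}  # single line driven
    return {"duplicate": False, "payload": data_per_line}


def receive(frame, num_lines):
    """Write-amplifier side: fan a duplicated payload out to all lines.
    Error correction would only need to run on the one transferred copy."""
    if frame["duplicate"]:
        return [frame["payload"]] * num_lines
    return frame["payload"]


frame = transmit([b"\xab", b"\xab", b"\xab", b"\xab"])
print(frame["duplicate"])           # True
print(receive(frame, num_lines=4))  # four copies written to memory
```
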
20210406209 | ALLREDUCE ENHANCED DIRECT MEMORY ACCESS FUNCTIONALITY | 2021-12-30
Systems, apparatuses, and methods for performing an allreduce operation on an enhanced direct memory access (DMA) engine are disclosed. A system implements a machine learning application which includes a first kernel and a second kernel. The first kernel corresponds to a first portion of a machine learning model while the second kernel corresponds to a second portion of the machine learning model. The first kernel is invoked on a plurality of compute units and the second kernel is converted into commands executable by an enhanced DMA engine to perform a collective communication operation. The first kernel is executed on the plurality of compute units in parallel with the enhanced DMA engine executing the commands for performing the collective communication operation. As a result, the allreduce operation may be executed on the enhanced DMA engine in parallel with the compute units.
20210406210 | HIGH SPEED COMMUNICATION SYSTEM | 2021-12-30
A method for communicating between a master and a plurality of slaves includes generating a communication frame by generating a slave data frame in each slave. The slave data frame has a data packet including one or more data bytes and at least one gap of variable time length carrying no information. The gap may be at the beginning of the slave data frame before the first data byte of the data packet and/or at the end of the data packet after the last data byte, where the gaps have a time length dependency based on parameters locally stored in each slave. The slave data frames are transmitted sequentially, with the gap increasing for each subsequent slave.
20210406211 | PERIPHERAL DEVICE, INFORMATION PROCESSING SYSTEM, AND DISPLAY CONTROL METHOD | 2021-12-30
A display control method for a peripheral device shared by a plurality of user terminals includes: storing a connection history of a user terminal connecting to the peripheral device; storing settings information specifying an operation of the peripheral device, the settings information being associated with identification information of the user terminal; selecting stored settings information or identification information of a user terminal on the basis of the stored connection history; and displaying the selected settings information or identification information.
20210406212 | CONFIGURABLE STORAGE SERVER WITH MULTIPLE SOCKETS | 2021-12-30
Embodiments herein describe a computing system which is reconfigurable into different server configurations that have different numbers of sockets. For example, the computing system may include two server nodes which can be configured into either two independent servers (i.e., two 2S servers) or a single server (i.e., one 4S server). In one embodiment, the computing system includes a midplane which is connected to processor buses on the server nodes. When configured as a single server, the midplane connects the processor bus (or buses) on one of the server nodes to the processor bus or buses on the other server node. In this manner, the processors in the two server nodes can be interconnected to function as a single server. In contrast, the connections between the server nodes in the midplane are disabled when the server nodes operate as two independent servers.
20210406213USER STATION FOR A SERIAL BUS SYSTEM, AND METHOD FOR TRANSMITTING A MESSAGE IN A SERIAL BUS SYSTEM - A user station for a serial bus system and a method for transmitting a message in a serial bus system. The user station includes a communication control device for transmitting messages to a bus of the bus system and/or for receiving messages from the bus of the bus system, and a bit rate switchover unit for switching over a bit rate of the messages from a first bit rate in a first communication phase to a second bit rate for a second communication phase. The bit rate switchover unit is designed to switch the bit rate from the first bit rate over to the second bit rate, due to an edge of a predetermined bit sequence that includes one bit of the first communication phase and one bit of the second communication phase.2021-12-30
20210406214IN-NETWORK PARALLEL PREFIX SCAN - Methods and apparatus for in-network parallel prefix scan. In one aspect, a dual binary tree topology is embedded in a network to compute prefix scan calculations as data packets traverse the binary tree topology. The dual binary tree topology includes up and down aggregation trees. Input values for a prefix scan are provided at leaves of the up tree. Prefix scan operations such as sum, multiplication, max, etc. are performed at aggregation nodes within the up tree as packets containing associated data propagate from the leaves to the root of the up tree. Output from aggregation nodes in the up tree is provided as input to aggregation nodes in the down tree. In the down tree, the packets containing associated data propagate from the root to its leaves. Output values for the prefix scan are provided at the leaves of the down tree.2021-12-30
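The up/down aggregation described in this abstract can be modeled in a few lines, with recursion standing in for packets traversing the trees. This is a hedged single-process sketch (the `tree_prefix_scan` name and the range-based tree are illustrative, not from the patent): the up pass aggregates each range, and the down pass carries the aggregate of everything to a node's left, yielding an inclusive prefix scan at the leaves.

```python
from operator import add

def tree_prefix_scan(values, op=add):
    """Inclusive prefix scan via an up (aggregate) and a down (distribute)
    pass over a range tree; recursion stands in for packets between nodes."""
    n = len(values)
    if n == 0:
        return []
    sums = {}

    def up(lo, hi):
        # Up tree: aggregate the half-open range [lo, hi) toward the root.
        if hi - lo == 1:
            sums[(lo, hi)] = values[lo]
        else:
            mid = (lo + hi) // 2
            up(lo, mid)
            up(mid, hi)
            sums[(lo, hi)] = op(sums[(lo, mid)], sums[(mid, hi)])

    out = [None] * n

    def down(lo, hi, left):
        # Down tree: `left` is the aggregate of all values strictly left of
        # lo (None plays the role of the identity element).
        if hi - lo == 1:
            out[lo] = values[lo] if left is None else op(left, values[lo])
        else:
            mid = (lo + hi) // 2
            down(lo, mid, left)
            left_mid = sums[(lo, mid)] if left is None else op(left, sums[(lo, mid)])
            down(mid, hi, left_mid)

    up(0, n)
    down(0, n, None)
    return out
```

Because only `op` is assumed associative, the same sketch covers sum, max, or multiplication scans.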
20210406215AGGREGATING METRICS IN DISTRIBUTED FILE SYSTEMS - Embodiments are directed to managing file systems over a network. A hierarchical index may be provided based on a file system and a plurality of objects stored in the file system. A token index may be generated based on the hierarchical index; each token may be a portion of the path of an object. Metric indices may be generated based on the hierarchical index and a plurality of metrics associated with the objects, such that the metric indices include one or more rows that correspond to a place position for a metric value. The token index and the metric indices may be employed to generate query results based on the plurality of metrics associated with the objects.2021-12-30
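A token index of the kind this abstract describes, where each token is a path component, can be sketched as a simple inverted map (the `build_token_index` helper is hypothetical; the real system derives tokens from the hierarchical index):

```python
def build_token_index(paths):
    """Inverted index: each path token maps to the set of object paths
    containing that token."""
    index = {}
    for path in paths:
        for token in path.strip("/").split("/"):
            index.setdefault(token, set()).add(path)
    return index
```

Queries can then intersect the per-token sets to locate candidate objects before consulting the metric indices.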
20210406216Managing Volume Snapshots in the Cloud - Systems, methods, and machine-readable media for creating, deleting, and restoring volume snapshots in a remote data store are disclosed. A storage volume and a storage operating system are implemented in a software container. Through a user interface, a user may create a snapshot of the volume to a cloud storage. A user may also delete individual snapshots from the cloud storage. Further, deletion of the most recent snapshot may be deferred (though it is marked as deleted to the user) until a next snapshot is received. Snapshots in the cloud storage remain manipulable even after destruction of the source volume (by destruction of the container, for example). A controller outside the container can be used because it implements the same API as the controller inside the container. Full restores of snapshots in the cloud are also possible even when the original container and volume have been destroyed.2021-12-30
20210406217METHOD FOR PROCESSING RESOURCE DESCRIPTION FILE, PAGE RESOURCE ACQUISITION METHOD, AND INTERMEDIATE SERVER - A method for processing a resource description file includes: receiving an access request directed to a target page from a client terminal, and receiving the resource description file fed back by an origin server with respect to the access request after transmitting the access request to the origin server; identifying one or more resource links in the resource description file and determining whether the one or more resource links include one or more external links; if the one or more resource links include the one or more external links, rewriting an external link of the one or more resource links to an internal link and replacing the external link in the resource description file with a corresponding internal link formed by rewriting the external link; and feeding back a rewritten resource description file to the client terminal, so that the client terminal acquires resources of the target page according to the rewritten resource description file.2021-12-30
20210406218QUERY-BASED RECOMMENDATION SYSTEMS USING MACHINE LEARNING-TRAINED CLASSIFIER - Systems and methods for query-based recommendation systems using machine learning-trained classifiers are provided. A service provider server receives, from a communication device through an application programming interface, a query in an interaction between the service provider server and the communication device. The service provider server generates a vector of first latent features from a set of first visible features associated with the query using a machine learning-trained classifier. The service provider server generates a likelihood scalar value indicating a likelihood that the query is answered by a candidate user in a set of users, using a combination of the vector of first latent features and a vector of second latent features. The service provider server provides, to the communication device through the application programming interface, a recommendation message as a response to the query, where the recommendation message includes the likelihood scalar value and an indication of the candidate user.2021-12-30
20210406219SYSTEMS AND METHOD FOR ELECTRONIC DISCOVERY MANAGEMENT - Implementations described and claimed herein provide systems and methods for electronic discovery management. In one implementation, a staging path from source(s) to a staging area in a target location is generated automatically in connection with a collection request. Criteria for the collection request is obtained automatically and an encrypted export key for the collection request is captured using first robot(s). An export of a responsive data collection is obtained automatically from the source(s) using the first robot(s). The responsive data collection is exported along the staging path based on the criteria and the encrypted export key. An image of the responsive data collection is generated in the target location by sending parameter(s) to second robot(s), and the collection request is fulfilled by triggering a compression of the image of the responsive data collection into forensic container(s) using the second robot(s).2021-12-30
20210406220METHOD, APPARATUS, DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM PRODUCT FOR LABELING DATA - A method, an apparatus, an electronic device, a computer-readable storage medium and a computer program product for labeling data are provided. The method may include: obtaining a labeling accuracy requirement for to-be-labeled data; determining a process monitoring parameter matching the to-be-labeled data; weighting the process monitoring parameter with a coefficient having a corresponding size to obtain a comprehensive accuracy according to dependent and causal relationships between contents of different to-be-labeled data; and outputting, in response to the comprehensive accuracy satisfying the labeling accuracy requirement, labeled data.2021-12-30
20210406221FACILITATING GENERATION AND UTILIZATION OF GROUP FOLDERS - Methods, computer systems, computer-storage media, and graphical user interfaces are provided for facilitating generation and utilization of group folders, according to embodiments of the present technology. In embodiments, an indication to merge a first folder associated with a first entity with a second folder associated with a second entity is received. Based on the indication to merge, a group folder associated with the first entity and the second entity is generated. A representation of the group folder is provided for presentation via a navigation pane of the messaging application. A selection of the representation of the group folder can cause execution of a search query in association with an index of messages to obtain messages associated with the first entity and/or the second entity.2021-12-30
20210406222SYSTEM AND METHOD FOR IDENTIFYING BUSINESS LOGIC AND DATA LINEAGE WITH MACHINE LEARNING - An embodiment of the present invention is directed to implementing machine learning to define business logic and lineage. The system analyzes data patterns of systems of record (SORs) as well as consumption attributes to define the business logic. An embodiment of the present invention may achieve over a 95% match rate for complex attributes. When provided with thousands of SOR attributes, the innovative system may identify the handful of relevant SOR attributes required as well as the business logic to derive the consumption attribute.2021-12-30
20210406223AGGREGATING METRICS IN FILE SYSTEMS USING STRUCTURED JOURNALS - Embodiments are directed to managing file systems. Update information associated with a change of a metric associated with a target object may be provided. A journal may be provided that includes a base bin containing base records that associate the metric with each object in the file system. Records that include the change of the metric associated with the ancestors of the target object may be generated. Another record that includes the change of the metric associated with the target object may be generated. A level bin associated with the base bin of the journal may be provided based on the update information. The records may be stored in the level bin using a sort order based on the ordering of the base bin records. In response to a query, the journal may be employed to reduce latency in generating query results.2021-12-30
20210406224Cloud-native global file system with data exporter - A cloud-native global file system is augmented to include a file exporter (or, more generally, a file export tool) that facilitates an enterprise customer's use of a cloud-native tool that would otherwise be unable to operate against the global file system's underlying file system representation. In a typical use case, the file exporter is configured to extract in a native object format and to an unencrypted target (e.g., an S3 bucket, an Azure storage account, and the like) all or a portion of a volume's data from the underlying file system representation. In this manner, the exporter creates a copy of the data set that the enterprise user can then leverage against the desired cloud-native tool or other cloud services that are not under the management or control of the global file system service provider.2021-12-30
20210406225FILE SOURCE TRACKING - A computing system may determine different patterns of modifications that are to be made to data of a file to generate respective modified versions of the file, the different patterns of modifications enabling identification of other files derived from the respective modified versions of the file, the different patterns of modifications including a first pattern of modifications. The computing system may generate a first modified version of the file at least in part by modifying the data based on the first pattern of modifications, may send the first modified version of the file to a client device, and may store signature data indicative of the first pattern of modifications so as to enable identification of other files derived from the first modified version of the file.2021-12-30
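The pattern-of-modifications idea above can be sketched with bit flips standing in for the modifications (the offsets, bit choices, and the `apply_pattern`/`identify_source` helpers are illustrative assumptions; a real system would pick sites where the change is imperceptible and match against stored signature data rather than re-deriving every candidate copy):

```python
def apply_pattern(data, pattern):
    """Apply a recipient-specific pattern of modifications: flip the given
    bit at each (offset, bit) position in the file's bytes."""
    out = bytearray(data)
    for offset, bit in pattern:
        out[offset] ^= 1 << bit
    return bytes(out)

def identify_source(original, leaked, signatures):
    """Find which recipient's recorded pattern reproduces the leaked copy."""
    for recipient, pattern in signatures.items():
        if apply_pattern(original, pattern) == leaked:
            return recipient
    return None
```

Because each recipient gets a distinct pattern, any file derived from a distributed copy carries that copy's signature.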
20210406226System Building Assisting Apparatus and System Building Assisting Method - At present, on the basis of microservices, capabilities and applications of a monolithic integrated system are transitioned to a component system and rebuilt. However, the application of microservices to a shared database has not been sufficiently studied. A system building assisting apparatus of the present invention uses, in building a component system, an access history aggregation table indicating access history of access to a database for a production control system corresponding to an integrated system to be analyzed, to classify data items constituting the database as "individual data" or "common data." At this time, the system building assisting apparatus desirably uses a capability corresponding to an access source indicated by the access history to classify each of the data items as "individual data" or "common data."2021-12-30
20210406227LINKING, DEPLOYING, AND EXECUTING DISTRIBUTED ANALYTICS WITH DISTRIBUTED DATASETS - Methods and systems for execution of distributed analytics include building a global linked structure that describes correspondences between dataset metadata structures, analytics metadata structures, and location metadata structures and that encodes compatibility between respective datasets, analytics, and locations. A set of analytics and compatible datasets for execution is determined based on the dataset metadata structures, analytics metadata structures, and global linked structure. An optimal execution location is determined based on the determined set of analytics and compatible datasets, the location metadata structures, and the global linked structure. The set of analytics and compatible datasets are deployed to the optimal location for execution.2021-12-30
20210406228METHOD AND APPARATUS FOR VISUALIZING A PROCESS MAP - A method for visualizing a process map is executed by a process map server. The method includes receiving a flowchart and a step-by-step recording related to a process, generating a process map by combining the flowchart and the step-by-step recording, and displaying the process map. The process map displays a task, step, and action related to the process. A detail window shows information associated with the process, and portions of the process, in response to user input. The action is based on information from the step-by-step recording.2021-12-30
20210406229Schema Agnostic Migration Of Delineated Data Between Relational Databases - Initially, a database schema is parsed and a table tree structure is created delineating the relationships between data that are identified in the schema. In addition to accommodating relationships between main tables of data, the table tree structure also accommodates possible side tables of data, and possible circular references between tables, should such be encountered when parsing the schema. Subsequently, a migration mechanism consumes the generated table tree structure and iteratively migrates data in accordance therewith. Individual layers of the table tree structure are migrated consecutively, with referenced layers being migrated prior to referencing layers. Circular links are accommodated through temporary null values, and side tables are accommodated during migration of the referencing main table. The iterative process provides completeness and fault tolerance/failure recovery.2021-12-30
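The consecutive layer-by-layer migration above, with referenced layers migrated before referencing layers, amounts to grouping tables into topological levels of the foreign-key graph. A minimal sketch (the `migration_layers` helper is hypothetical, and it assumes circular references have already been broken out, e.g. via the temporary null values the abstract mentions):

```python
def migration_layers(foreign_keys):
    """Group tables into layers so that every table's referenced tables
    appear in an earlier layer. `foreign_keys` maps each table to the set
    of tables it references."""
    tables = set(foreign_keys)
    for refs in foreign_keys.values():
        tables |= set(refs)
    remaining = {t: set(foreign_keys.get(t, ())) for t in tables}
    layers = []
    while remaining:
        # Tables with no unmigrated references are safe to migrate now.
        ready = sorted(t for t, refs in remaining.items() if not refs)
        if not ready:
            raise ValueError("circular reference not broken before layering")
        layers.append(ready)
        for t in ready:
            del remaining[t]
        for refs in remaining.values():
            refs.difference_update(ready)
    return layers
```

Migrating the layers in order guarantees every foreign key target already exists in the destination.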
20210406230VERIFICATION MICROSERVICE FOR DEPLOYMENT OF CONTROLLER AND WORKER NODES FOR VERIFICATION PATHS - Described is a system for a verification microservice engine for generating and deploying a controller module and one or more worker nodes to detect corruption in a deduplicated object storage system accessible by one or more microservices while minimizing costly read operations on objects. The controller module builds local versions of slice recipe names based on metadata available in object recipes. The controller module verifies the accuracy of the metadata based on whether the locally built slice recipe names match slice recipe names in object storage.2021-12-30
20210406231SOURCE-AGNOSTIC SERVICE FOR PERFORMING DEDUPLICATION FOR AN OBJECT STORAGE - Described is a system for providing a service (or microservice) for performing deduplication for an object storage. The service (or microservice) may be source-agnostic in that it may receive data from multiple types of source systems by providing a uniform set of functions for deduplicating and writing the data to a destination object storage. The set of functions encapsulate a previously dispersed set of functionality provided by various components. Accordingly, the service provides a single scalable and stateless component for performing deduplication. For example, the service (e.g. deduplication service) may receive object related information and perform a filtering to accelerate network transfers. Accordingly, the service provides the ability to only transfer and write data that does not already exist on the object storage.2021-12-30
20210406232METHODS AND APPARATUS TO ESTIMATE AUDIENCE SIZES OF MEDIA USING DEDUPLICATION BASED ON MULTIPLE VECTORS OF COUNTS - Disclosed examples to estimate audience sizes of media include a coefficient generator to determine coefficient values for a polynomial based on normalized weighted sums of variances, a normalized weighted sum of covariances, and cardinalities corresponding to a first plurality of vectors of counts from a first database proprietor and a second plurality of vectors of counts from a second database proprietor, a real roots solver to determine a real root value of the polynomial, the real root value indicative of a number of audience members represented in the first plurality of vectors of counts that are also represented in the second plurality of vectors of counts, and an audience size generator to determine the audience size based on the real root value and the cardinalities of the first plurality of vectors of counts and the second plurality of vectors of counts.2021-12-30
20210406233EXTENSIBLE VERSION HISTORY AND COMPARISON WITHIN A BACKUP - Described is a system for providing quick and efficient identification of a desired version of content from an editing history of the content. The system receives a search index identifying versions of content from an editing history of the content. The system sorts the search index according to sort criteria and receives a selection from the sorted search index of a first version of the content and a second version of the content. The system identifies and displays one or more content differences between the first and second versions of the content.2021-12-30
20210406234METHOD AND SYSTEM FOR MIGRATING CONTENT BETWEEN ENTERPRISE CONTENT MANAGEMENT SYSTEMS - Migrating content between enterprise content management systems is described. A source object identifier is identified for metadata tables for content for a source enterprise content management system, based on a migration job definition. The metadata tables are retrieved from the source enterprise content management system, based on the source object identifier. A target object identifier is identified for a target enterprise content management system, based on the metadata tables and the migration job definition. An object identifier map is created that maps the source object identifier to the target object identifier. The metadata tables are stored to the target enterprise content management system, based on the object identifier map. The content for the source enterprise content management system is retrieved. The content is stored as content for the target enterprise content management system.2021-12-30
20210406235KEY-VALUE INDEX WITH NODE BUFFERS - A computer implemented method may include: receiving write requests to add key-value pairs to an index; storing the key-value pairs in a buffer of an indirect node of the index; determining whether the buffer of the indirect node exceeds a threshold level; and in response to a determination that the buffer of the indirect node exceeds the threshold level, transferring the key-value pairs stored in the buffer of the indirect node to buffers of a plurality of child nodes, where each buffer of the plurality of child nodes is smaller than the buffer of the indirect node.2021-12-30
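A minimal sketch of the buffer-spilling behavior described above (the two-way fanout, byte-sum key routing, and dict buffers are assumptions of this sketch, not the patent's layout); leaf nodes, having no children, simply accumulate:

```python
class IndirectNode:
    """Index node with a write buffer; exceeding the threshold spills the
    buffered key-value pairs into the (smaller) buffers of its children."""
    def __init__(self, threshold, children=None):
        self.threshold = threshold
        self.children = children or []
        self.buffer = {}

    def put(self, key, value):
        self.buffer[key] = value
        if self.children and len(self.buffer) > self.threshold:
            self.flush()

    def route(self, key):
        # Deterministic stand-in for the index's real key-to-child routing.
        return self.children[sum(key.encode()) % len(self.children)]

    def flush(self):
        # Transfer every buffered pair to the appropriate child, then clear.
        for key, value in self.buffer.items():
            self.route(key).put(key, value)
        self.buffer.clear()
```

Writes stay cheap because they land in the top buffer; spills amortize the cost of pushing pairs toward the leaves.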
20210406236GENERATING SNAPSHOTS OF A KEY-VALUE INDEX - A computer implemented method may include: storing key-value pairs in an index in persistent storage, where indirect nodes of the index include pointers, where each pointer identifies an index portion and includes a generation identifier for the identified index portion, where the index comprises a plurality of snapshots associated with a plurality of generations; receiving a request to read data of a particular snapshot of the index, wherein the particular snapshot is associated with a particular generation of the plurality of generations; in response to the request, performing a traversal starting from a particular root node associated with the particular generation; and providing the requested data based on the traversal.2021-12-30
20210406237SEARCHING KEY-VALUE INDEX WITH NODE BUFFERS - A computer implemented method may include: receiving a read request for a key-value pair in an index, wherein each indirect node of the index comprises a buffer and a Bloom filter, and wherein sizes of the Bloom filters vary across the levels according to a predefined function; responsive to a read request for the key-value pair, determining whether the Bloom filter of the indirect node indicates that the buffer of the indirect node includes the key-value pair; and responsive to a determination that the Bloom filter of the indirect node indicates that the buffer of the indirect node includes the key-value pair, searching the buffer of the indirect node for the key-value pair.2021-12-30
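The Bloom-filter gate in this abstract, where a node's buffer is searched only when the filter reports a possible hit, can be sketched as follows (the filter size, hash construction, and `BufferedNode` wrapper are illustrative assumptions):

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, key):
        # k deterministic positions derived from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

class BufferedNode:
    """Indirect node whose buffer is searched only when the Bloom filter
    says the key may be present (false positives just cost one search)."""
    def __init__(self, m=256, k=3):
        self.buffer = {}
        self.bloom = BloomFilter(m, k)

    def put(self, key, value):
        self.buffer[key] = value
        self.bloom.add(key)

    def get(self, key):
        if not self.bloom.might_contain(key):
            return None  # definite miss; the buffer is never touched
        return self.buffer.get(key)
```

A Bloom filter never yields false negatives, so skipping the buffer on a filter miss is always safe.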
20210406238BACKUP OPERATIONS IN A TREE-BASED DISTRIBUTED FILE SYSTEM - Techniques for cloning, writing to, and reading from file system metadata. Cloning involves identifying a first set of pointers included in a first root node in a file system metadata tree structure that stores file system metadata in leaf nodes of the tree structure, creating a first copy of the first root node that includes the first set of pointers, creating a second copy of the first root node that includes the first set of pointers, associating the first copy with a first view, and associating the second copy with a second view. Reading generally involves traversing the tree structure towards a target leaf node that contains data to be read. Writing generally involves traversing the tree structure in the same manner, but also creating copies of any nodes to be modified if those nodes are deemed to have a different treeID than a particular root node.2021-12-30
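The clone/write behavior described above can be sketched with treeID-tagged nodes: cloning copies only the root, and a write copies each node along the path whose treeID differs from the root's, so other views never observe the change (the node layout and the `clone_view`/`write` names are assumptions of this sketch):

```python
class Node:
    """Tree node tagged with the treeID of the view that owns it."""
    def __init__(self, tree_id, children=None, data=None):
        self.tree_id = tree_id
        self.children = children if children is not None else {}
        self.data = data

def clone_view(root, new_tree_id):
    # Cloning is O(1): copy only the root; the entire subtree stays shared.
    return Node(new_tree_id, dict(root.children), root.data)

def write(root, path, value):
    """Copy-on-write: copy any node with a different treeID than the root
    before modifying it."""
    node = root
    for key in path[:-1]:
        child = node.children[key]
        if child.tree_id != root.tree_id:
            child = Node(root.tree_id, dict(child.children), child.data)
            node.children[key] = child
        node = child
    last = path[-1]
    target = node.children.get(last)
    if target is None or target.tree_id != root.tree_id:
        node.children[last] = Node(root.tree_id, data=value)
    else:
        target.data = value
```

Reads simply traverse `children` toward the target leaf; no copies are made.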
20210406239COLLISION-FREE HASHING FOR ACCESSING CRYPTOGRAPHIC COMPUTING METADATA AND FOR CACHE EXPANSION - Embodiments are directed to collision-free hashing for accessing cryptographic computing metadata and for cache expansion. An embodiment of an apparatus includes one or more processors to compute a plurality of hash functions that combine additions, bit-level reordering, bit-linear mixing, and wide substitutions, wherein each of the plurality of hash functions differs in one of the additions, the bit-level reordering, the wide substitutions, or the bit-linear mixing; and access a hash table utilizing results of the plurality of hash functions.2021-12-30
20210406240METHODS AND APPARATUS TO ESTIMATE CARDINALITY OF USERS REPRESENTED ACROSS MULTIPLE BLOOM FILTER ARRAYS - Methods and apparatus to estimate cardinality of users represented across multiple Bloom filter arrays are disclosed. Examples include processor circuitry to execute and/or instantiate instructions to generate a first composite Bloom filter array based on first and second Bloom filter arrays. The processor circuitry is to generate a final composite Bloom filter array based on the first composite Bloom filter array and a third Bloom filter array. Different ones of the first, second, and third Bloom filter arrays are representative of different sets of users who accessed media. The first, second, and third Bloom filter arrays include differential privacy noise. The processor circuitry is to estimate a cardinality of a union of the first, second, and third Bloom filter arrays based on the final composite Bloom filter array.2021-12-30
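Setting aside the differential-privacy noise handling, the core union-cardinality step in this abstract, OR the arrays into a composite and estimate from the number of set bits, can be sketched with the standard estimator n ≈ -(m/k)·ln(1 - X/m) for an m-bit, k-hash filter with X set bits (helper names are illustrative):

```python
import hashlib
import math

def add_key(bits, key, m, k):
    """Set the k hash positions for `key` in an m-bit Bloom filter array."""
    for i in range(k):
        h = hashlib.sha256(f"{i}:{key}".encode()).digest()
        bits |= 1 << (int.from_bytes(h[:8], "big") % m)
    return bits

def bloom_cardinality(bits, m, k):
    """n ≈ -(m/k) * ln(1 - X/m), where X is the number of set bits."""
    x = bin(bits).count("1")
    return -(m / k) * math.log(1 - x / m)

def union_cardinality(filters, m, k):
    """OR the arrays into a composite, then estimate the union's size.
    (The abstract's differential-privacy noise handling is omitted here.)"""
    composite = 0
    for f in filters:
        composite |= f
    return bloom_cardinality(composite, m, k)
```

The OR of Bloom filter arrays is exactly the filter of the union of the underlying sets, which is why the composite supports this estimate.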
20210406241RECONSTRUCTION OF LINKS BETWEEN LOGICAL PAGES IN A STORAGE SYSTEM - An apparatus comprises a processing device configured to determine that an entry of a first data structure comprises an indication that a link between a first logical page and a second logical page is broken and to determine that a first address pointed to by the first logical page and a second address pointed to by the second logical page match. The processing device is further configured to determine that the first logical page corresponds to the second logical page based at least in part on the determination that the first address and the second address match and to add an indication of a third address that corresponds to the first logical page to an entry associated with the second logical page.2021-12-30
20210406242TECHNIQUES AND ARCHITECTURES FOR PARTITION MAPPING IN A MULTI-NODE COMPUTING ENVIRONMENT - Mapping of database partitions to available nodes. Metric information related to the partitions of the database is stored. One or more metrics associated with the partitions are gathered. A plurality of potential mappings of partitions to nodes are evaluated. A potential mapping of partitions to nodes is selected that results in improved metric distribution among the nodes and whose partition moves are within a pre-selected move constraint. The selected potential mapping is implemented by moving one or more partitions between one or more nodes.2021-12-30
20210406243NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS - An addition unit writes operation information of an addition instruction to a partition-specific RowID file, and an update unit and a deletion unit write operation information of an update instruction and a deletion instruction to a common RowID file, respectively. Then, when restoring a RowID hydra, a restoration unit performs processing of reading the partition-specific RowID file and creating the RowID hydra in parallel for each partition. Then, the restoration unit reads the common RowID file and reflects the update instruction and the deletion instruction in the RowID hydra.2021-12-30
20210406244PRODUCTION DATABASE UPDATE TOOL - A computing system may obtain code for a query to a database from a first user. In response, the application may automatically perform an operation to determine a number of records of the database that would be accessed by executing the query on the database. The computing system may output, for display to the first user, the number of records of the database that would be accessed by executing the query on the database. The computing system may output, for display to the first user, a prompt for an acknowledgement of the number of records of the database that would be accessed. In response to receiving an indication of the acknowledgement by the first user of the number of records of the database that would be accessed, the computing system may output, for display to a second user, the code for the query for review by the second user.2021-12-30
20210406245Rollback-Free Referential Integrity Update Processing - An import job associated with a data store update is inspected and schemas associated with target data tables that are to be updated with update data are analyzed. Referential integrity issues associated with foreign keys in the target tables are identified in the update data. The update data is broken into three portions: a first portion that is guaranteed to not have a referential integrity issue, a second portion that is known to have a referential integrity issue, and a third portion that cannot be determined at this stage of processing. The import job is modified to update the corresponding target tables with the first portion of the update data, while the second and third portions are not applied to the target database tables; instead, a custom message describing the issue is recorded in fields associated with the second and third portions for subsequent review/resolution.2021-12-30
20210406246MANAGEMENT OF DIVERSE DATA ANALYTICS FRAMEWORKS IN COMPUTING SYSTEMS - Data Analytics Engines can be provided as a “black-boxed” abstraction to their “users.” This allows a user to mix and match analytical components if their input data matches the input requirement of the engine. Furthermore, by decoupling the Data Analytics Engine creation from the environment, a high degree of process automation, scalability, and improved maintainability can be achieved. As a result, Data Analytic engineers and Data Scientists can create reusable components for other scientists and business users, whereas the users need not know how the Engines are coded or the environment in which their engines will run, but only need to know the input schema of the data.2021-12-30
20210406247MODULE EXPIRATION MANAGEMENT - Systems, methods, and non-transitory computer readable media are provided for managing expiration of modules. An expiry dataset may be maintained. The expiry dataset may include a set of identifiers corresponding to a set of modules, a set of expiry values for the set of modules, and a set of termination tasks for the set of modules. A request to refresh a module may be received from a client. Responsive to the reception of the request, an expiry value and a termination task for the module within the expiry dataset may be updated. The expiry value may be independent of a timestamp associated with the request.2021-12-30
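The expiry-dataset bookkeeping described above can be sketched with an injected clock so the lease behavior is testable (the fixed lease, dict layout, and `reap` method are assumptions of this sketch; the expiry value is computed from the server-side clock rather than from any timestamp carried by the request, matching the abstract's independence note):

```python
class ExpiryDataset:
    """Tracks per-module expiry values and termination tasks; a refresh
    extends the module's lease from the current server time."""
    def __init__(self, lease, clock):
        self.lease = lease
        self.clock = clock  # injected time source, so the sketch is testable
        self.entries = {}   # module id -> {"expires": t, "on_expire": task}

    def refresh(self, module_id, on_expire):
        # Update the expiry value and termination task for the module.
        self.entries[module_id] = {"expires": self.clock() + self.lease,
                                   "on_expire": on_expire}

    def reap(self):
        """Run and drop the termination task for every expired module."""
        now = self.clock()
        expired = [m for m, e in self.entries.items() if e["expires"] <= now]
        for m in expired:
            self.entries.pop(m)["on_expire"](m)
        return expired
```

Clients that keep refreshing stay alive; modules whose clients go silent expire and have their termination tasks run.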
20210406248AUTOMATED DATA ROUTING IN A DATA CONFIDENCE FABRIC - Routing data in a data confidence fabric. Data ingested into a data confidence fabric is routed to maximize confidence scores and to minimize the amount of missing confidence information. Routing is based on a configuration file and on pathing map information that allows nodes capable of applying the trust insertions set forth in the configuration file to be identified.2021-12-30
20210406249AUTOMATIC EVENTS DETECTION FROM ENTERPRISE APPLICATIONS - Embodiments provide a computer-implemented method for automatic detection of an event from an enterprise application, comprising: initiating a CRUD action connector of an integration platform; providing a business object name to the action connector; randomly obtaining a sample record from the enterprise application; creating a dummy record in the enterprise application, wherein the dummy record has a dummy record ID and a first time range within which the dummy record is created; and searching for a first timestamp field in a business object schema to determine whether the first timestamp field is queryable, wherein the first timestamp field indicates when the dummy record is created. If the first timestamp field exists, and the first timestamp field is queryable, then a first event indicating creation of the dummy record is detectable; if the first timestamp field does not exist, then the first event is undetectable.2021-12-30
20210406250METHOD AND SYSTEM FOR REDUCING THE SIZE OF A BLOCKCHAIN - A method and a system for reducing the size of a blockchain. The blockchain includes a first set of two or more blocks including an initial genesis block. A new genesis block for the blockchain is generated. The first hash value resulting from hashing the new genesis block matches a second hash value resulting from hashing a last block from the first set of blocks, and the difficulty of determining the first hash value is computationally greater than the cumulative difficulty of determining hash values of all blocks in the first set of blocks. The new genesis block is transmitted to one or more blockchain nodes of a blockchain network, and the first set of blocks is replaced with the new genesis block.2021-12-30
20210406251PATCHINDEX SYSTEM AND METHOD FOR UPDATABLE MATERIALIZATION OF APPROXIMATE CONSTRAINTS - Aspects described herein relate to maintaining a dataset with approximate constraints including determining, for a dataset, a constraint collection of tuples that satisfy a constraint and an exception collection of tuples that are an exception to the constraint, constructing, for the dataset, a sharded bitmap of bits, wherein each bit in the sharded bitmap indicates whether a tuple in the dataset is in the exception collection of tuples, wherein the sharded bitmap includes, for each shard of multiple shards, a bitmap of bits and a starting bit location index within the sharded bitmap of bits for the shard, and processing a query on the dataset including processing the constraint collection of tuples and the exception collection of tuples based on the sharded bitmap.2021-12-30
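The sharded bitmap of exception tuples can be sketched as a dict of integer shards keyed by each shard's starting bit index, so a sparse exception set only materializes the shards it touches (the shard width and layout are assumptions of this sketch):

```python
class ShardedBitmap:
    """Bitmap split into fixed-width shards; each shard is stored under its
    starting bit index, so untouched regions cost nothing."""
    def __init__(self, shard_bits=64):
        self.shard_bits = shard_bits
        self.shards = {}  # starting bit index -> int bitmap for that shard

    def set(self, i):
        # Mark tuple i as an exception to the constraint.
        start = (i // self.shard_bits) * self.shard_bits
        self.shards[start] = self.shards.get(start, 0) | (1 << (i - start))

    def test(self, i):
        # True if tuple i is in the exception collection.
        start = (i // self.shard_bits) * self.shard_bits
        return bool(self.shards.get(start, 0) >> (i - start) & 1)
```

During query processing, `test` decides whether a tuple follows the constraint-collection path or the exception path.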
20210406252AUTOMATIC DERIVATION OF SHARD KEY VALUES AND TRANSPARENT MULTI-SHARD TRANSACTION AND QUERY SUPPORT - Techniques are provided for processing a database command in a sharded database. The processing of the database command may include generating or otherwise accessing a shard key expression, and evaluating the shard key expression to identify one or more target shards that contain data used to execute the database command.2021-12-30
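Shard routing of the kind this abstract describes might be sketched as follows, with the shard key expression modeled as a callable over a command's bind values and routing done by a stable hash modulo the shard count; all names here are illustrative assumptions.

```python
import zlib

def stable_hash(value) -> int:
    # CRC32 of the repr: deterministic across processes, unlike built-in hash().
    return zlib.crc32(repr(value).encode())

def target_shard(binds: dict, key_expr, num_shards: int) -> int:
    """Evaluate the (assumed) shard key expression and map the key to a shard."""
    return stable_hash(key_expr(binds)) % num_shards
```

A multi-shard query would evaluate the expression for each candidate key and dispatch only to the shards it identifies.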
20210406253SYSTEMS AND METHODS FOR LOW-LATENCY PROVISION OF CONTENT - The present disclosure provides systems and methods for low-latency provision of content. The method includes receiving one or more signals indicating a current location of a client device; before receiving an input query from a map application of the client device, retrieving characteristics of the client device; and generating a set of identifications, the set of identifications including the current location of the client device, and the characteristics of the client device. The method further includes determining that a query prediction exceeds a threshold; responsive to the determination that the query prediction exceeds the threshold, selecting a link to a geographic location of an entity that is associated with the query prediction; and, responsive to a selection of a map application on the client device by a user, transmitting the selected link to the client device before receiving a query from the user of the client device.2021-12-30
20210406254PROVENANCE ANALYSIS SYSTEMS AND METHODS - Provenance analysis systems and methods. Datums representing relationships between entities can be stored in a knowledge store. Datums can be received from agents as agents perform activities. Activity records can be stored in a provenance graph; each activity record associates received datums with any input datums used in the activity. Provenance subgraphs can be retrieved by traversing the provenance graph for selected datums and presented through a user interface. Provenance subgraphs can be augmented with trust modifiers determined based on attributions, confidences, and refutations provided by a user. Trust modifiers can be propagated downstream to enable the addressing of junctions in variable confidence.2021-12-30
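The subgraph retrieval step can be sketched as a backward traversal. In this toy model (an assumption, not the patent's data model) each activity record maps an output datum to the input datums it was derived from.

```python
def provenance_subgraph(activities: dict, datum: str) -> set:
    """Collect the datum plus everything it was transitively derived from."""
    seen, stack = set(), [datum]
    while stack:
        d = stack.pop()
        if d in seen:
            continue
        seen.add(d)
        stack.extend(activities.get(d, []))  # follow input datums upstream
    return seen
```

Trust modifiers would then be attached to nodes of the returned subgraph and propagated along the same edges in the downstream direction.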
20210406255INFORMATION ENHANCED CLASSIFICATION - Systems, methods, and related technologies for classification are described. Network traffic from a network may be accessed and an entity may be selected. One or more values associated with one or more properties associated with the entity may be determined. The one or more values may be accessed from the network traffic. A search query based on the one or more values associated with the one or more properties associated with the entity is determined and performed. A search query result is received and the search query result comprises a plurality of webpages. Data from a webpage of the plurality of webpages is accessed. A classification result of the entity is determined, by a processing device, based on the data from the webpage of the plurality of webpages. The classification result is stored.2021-12-30
20210406256Using a Search to Determine What a Group of People are Working On - Examples of the present disclosure describe systems and methods for determining relationships between content items to create a visualization associated with the various content items. The visualization may provide information regarding what various individuals in a group, team, or organization have been working on (e.g., content, documents, projects).2021-12-30
20210406257PROVENANCE ANALYSIS SYSTEMS AND METHODS - Provenance analysis systems and methods. Datums representing relationships between entities can be stored in a knowledge store. Datums can be received from agents as agents perform activities. Activity records can be stored in a provenance graph; each activity record associates received datums with any input datums used in the activity. Provenance subgraphs can be retrieved by traversing the provenance graph for selected datums and presented through a user interface.2021-12-30
20210406258REDUCING INDEX FILE SIZE BASED ON EVENT ATTRIBUTES - Techniques and mechanisms are disclosed to optimize the size of index files to improve use of storage space available to indexers and other components of a data intake and query system. Index files of a data intake and query system may include, among other data, a keyword portion containing mappings between keywords and location references to event data containing the keywords. Optimizing an amount of storage space used by index files may include removing, modifying and/or recreating various components of index files in response to detecting one or more storage conditions related to the event data indexed by the index files. The optimization of index files generally may attempt to manage a tradeoff between an efficiency with which search requests can be processed using the index files and an amount of storage space occupied by the index files.2021-12-30
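One way to picture the space/speed tradeoff in this abstract: drop the posting lists of very common keywords when a storage condition triggers, accepting that searches for those keywords fall back to scanning raw events. The dict-of-lists layout and the threshold are sketch assumptions.

```python
def prune_keyword_index(index: dict, max_postings: int) -> dict:
    """Keep only keywords whose location-reference lists are small enough.

    Dropping a common keyword's postings shrinks the index file; queries for
    that keyword must then scan the underlying event data instead.
    """
    return {kw: locs for kw, locs in index.items() if len(locs) <= max_postings}
```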
20210406259VIRTUAL ARCHIVING OF DATABASE RECORDS - A database-management system (DBMS) archives a record of a database table by updating the record's unique “Archived” field. This indicates that the record should be considered to have been archived despite the fact that the record has not been physically moved to a distinct archival storage area. When a query requests access to the table, the DBMS determines whether the query requests access to only archived data, only active data, or both. If both, the DBMS searches the entire table. Otherwise, the DBMS scans each record's Archived field to consider only those records that satisfy the query's requirement for either archived or active data. If the DBMS incorporates Multi-Version Concurrency Control (MVCC) technology, the DBMS combines this procedure with MVCC's time-based version-selection mechanism.2021-12-30
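The three-way scan decision can be sketched directly; the `archived` flag name and the scope labels are assumptions for illustration.

```python
def scan(table: list, scope: str) -> list:
    """Return rows matching the query's archived/active requirement."""
    if scope == "both":
        return list(table)  # full-table search, no flag check needed
    want_archived = (scope == "archived")
    return [rec for rec in table if rec["archived"] is want_archived]
```

With MVCC, the same flag check would simply be combined with the version-visibility test already applied to each record.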
20210406260COMBINING PARAMETERS OF MULTIPLE SEARCH QUERIES THAT SHARE A LINE OF INQUIRY - Methods, systems, and computer readable media related to generating a combined search query based on search parameters of a current search query of a user and search parameters of one or more previously submitted search quer(ies) of the user that are determined to be of the same line of inquiry as the current search query. Two or more search queries may be determined to share a line of inquiry when it is determined that they are within a threshold level of semantic similarity to one another. Once a shared line of inquiry has been identified and a combined search query generated, users may interact with the search parameters and/or the search results to update the search parameters of the combined search query.2021-12-30
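The line-of-inquiry test and parameter merge might be sketched as below. Jaccard token overlap is a crude stand-in for the semantic-similarity measure, and the threshold value is an assumption; a real system would compare embeddings.

```python
def same_line_of_inquiry(q1: str, q2: str, threshold: float = 0.4) -> bool:
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b) >= threshold

def combined_query(current: str, history: list) -> str:
    # dict.fromkeys keeps insertion order while de-duplicating parameters.
    params = dict.fromkeys(current.lower().split())
    for past in history:
        if same_line_of_inquiry(current, past):
            params.update(dict.fromkeys(past.lower().split()))
    return " ".join(params)
```

Unrelated past queries fall below the threshold and contribute no parameters to the combined query.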
20210406261RENDERING INTERACTIVE SUBSIDIARY APPLICATION(S) IN RESPONSE TO A SEARCH REQUEST - Implementations set forth herein relate to providing a subsidiary application GUI via a client interface. The GUI can be rendered when a user is accessing a first party system via an application that is provided by the first party or a separate entity. The subsidiary application GUI can be rendered in response to the user providing a search query to the first party server—such as a search query that is in furtherance of initializing receiving certain search results. The server can identify, based on the search query, one or more entities that offer primary and/or subsidiary applications, and request subsidiary data for visibly rendering corresponding subsidiary applications for each entity. The subsidiary applications can optionally provide access to application functions that would not otherwise be available at the client without a corresponding application being installed.2021-12-30
20210406262SYSTEMS AND METHODS FOR ENCODING AND SEARCHING SCENARIO INFORMATION - Systems, methods, and non-transitory computer-readable media can receive a query specifying at least one example scenario. At least one image representation of the at least one example scenario can be encoded based on the query to produce at least one encoded representation. An embedding of the at least one representation of the at least one example scenario can be generated based on the at least one encoded representation. At least one scenario that is similar to the at least one example scenario can be identified based at least in part on the embedding of the at least one representation of the at least one example scenario and an embedding representing the at least one scenario. Information describing the at least one identified scenario can be provided in response to the query.2021-12-30
20210406263KNOWLEDGE GRAPH-BASED LINEAGE TRACKING - A knowledge graph stores connections among tables in a data set and queries used to extract information from the data set. The queries may be used to generate reports. The knowledge graph indicates which of the tables each query uses and indicates which of the queries is used by each table. The knowledge graph may also store schema for the tables and information describing the tables and the queries. A graph builder may generate the knowledge graph by crawling the data set and the queries and by using a query parser to determine the tables each query uses. The graph builder may automatically update the knowledge graph. The graph builder may detect data quality issues in a table of the data set. The graph builder may query the knowledge graph for the queries that use the table. The graph builder may associate notifications with the queries.2021-12-30
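The bidirectional table-to-query mapping at the heart of this abstract can be sketched with a naive parser. The regex below (tables named after FROM/JOIN) is an assumption standing in for a real SQL parser.

```python
import re
from collections import defaultdict

def build_lineage(queries: dict):
    """Map each query to the tables it uses, and each table to the queries using it."""
    query_tables = {}
    table_queries = defaultdict(set)
    for name, sql in queries.items():
        tables = set(re.findall(r"(?:from|join)\s+(\w+)", sql, re.I))
        query_tables[name] = tables
        for t in tables:
            table_queries[t].add(name)
    return query_tables, dict(table_queries)
```

With this index, a data-quality issue detected in a table can immediately be turned into notifications on every query (and thus report) that reads it.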
20210406264DOCUMENT PRE-PROCESSING FOR QUESTION-AND-ANSWER SEARCHING - Disclosed are methods, systems, devices, apparatus, media, design structures, and other implementations, including a method that includes receiving a source document, applying one or more pre-processes to the source document to produce contextual information representative of the structure and content of the source document, and transforming the source document, based on the contextual information, to generate a question-and-answer searchable document.2021-12-30
20210406265TRANSFORMING A FUNCTION-STEP-BASED GRAPH QUERY TO ANOTHER GRAPH QUERY LANGUAGE - To execute function-step-based graph queries on a graph engine that has its own graph query language, rather than re-implementing an existing infrastructure to support function-step-based graph protocols, function-step-based graph queries are transformed to the graph query language that is understood by the graph engine. The existing infrastructure computes the results of the transformed queries. Result sets are then transformed to function-step-based result sets, which are returned to customers. In this manner, the graph engine supports function-step-based graph query workloads without implementation of the function-step-based graph protocol.2021-12-30
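A toy version of the transformation idea: translate a chain of function steps (Gremlin-style names assumed) into a declarative pattern string (Cypher-like, also assumed). Real mappings are far richer than this three-step table.

```python
def translate(steps: list) -> str:
    """Rewrite a function-step chain into a declarative graph query string."""
    clauses = []
    for op, *args in steps:
        if op == "V":
            clauses.append("MATCH (n)")
        elif op == "hasLabel":
            clauses.append(f"WHERE n:{args[0]}")
        elif op == "values":
            clauses.append(f"RETURN n.{args[0]}")
        else:
            raise ValueError(f"unsupported step: {op}")
    return " ".join(clauses)
```

The engine runs the translated query natively; a symmetric step then reshapes the native result set back into function-step-based results for the client.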
20210406266COMPUTERIZED INFORMATION EXTRACTION FROM TABLES - Computerized systems are provided for detecting one or more tables and performing information extraction and analysis on any given table. Information can be extracted from one or more cells or fields of a table and feature vectors representing individual cells, rows, and/or columns of the table can be derived and concatenated together. In this way, embodiments can use some or all of the “context” or values contained in various feature vectors representing some or all of a single table as signals or factors to consider when generating a decision statistic, such as a classification prediction, for a particular cell.2021-12-30
20210406267Method and System of Performing an Operation on a Single-Table, Multi-Tenant Database and Partition Key Format Therefor - A partition key format for allocating partitions to data items in a single table database, where the data items are owned by different entities. The partition key format including a sequence of a plurality of frames, wherein a first of said frames is an identifier of the requesting entity (EID), and a second one of said frames is an identifier of the type of data item (TID).2021-12-30
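The framed key format can be sketched as two frames joined by a delimiter; the `#` separator is an assumption of this sketch, not part of the claim.

```python
def make_partition_key(eid: str, tid: str) -> str:
    """Compose a partition key: entity-id frame, then data-type-id frame."""
    return f"{eid}#{tid}"

def parse_partition_key(key: str):
    eid, tid = key.split("#", 1)
    return eid, tid
```

Putting the entity identifier in the first frame means all of one tenant's items share a key prefix, which keeps a tenant's data co-locatable within the single shared table.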
20210406268SEARCH RESULT ANNOTATIONS - A flexible annotation framework normalizes auxiliary information from diverse sources, ranks the information for an individual search result, and provides a lightweight or full display of the auxiliary information in an annotation for the search result. An annotation thus displays information not typically part of the details included in the search result. An example method comprises, for at least one item in a search result page, identifying at least one annotation of a first annotation type in an annotation data store that references the item, identifying at least one annotation for a second annotation type in an annotation data store that references the item, ranking the annotation of the first annotation type and the annotation of the second annotation type and providing the highest ranked annotation as part of a search result for the item in the search result page.2021-12-30
20210406269DATA STORAGE SELECTION BASED ON DATA IMPORTANCE - An example system and method may provide an importance score for a data file based on the content of the data file. An importance score may be used to determine whether to store the data file in a regular reliability storage media or in a higher reliability storage media. A controller generates a document vector for a data file based on content processed from a data file. The data file includes metadata and the content. The controller generates, using an artificial intelligence (AI) model and the document vector, a data file importance score for the data file. The controller then stores the data file in one of the first data storage zone and the second data storage zone based on the data file importance score.2021-12-30
20210406270Leveraging Interlinking Between Information Resources to Determine Shared Knowledge - Examples of the present disclosure describe systems and methods for leveraging interlinking between resources to determine shared knowledge. In aspects, user interaction with one or more applications or services may be detected. User input associated with the user interaction may be processed to identify information, such as one or more content items, content topics, or entities. The identified information may be used to search one or more data sources for relationships between the identified information and content items, topics, and/or entities stored by the data sources. The results of the search may be collected and/or evaluated to identify the knowledge level of one or more entities with one or more topics. Based on the evaluation, an indication of the identified knowledge level(s) may be provided.2021-12-30
20210406271Determining Authoritative Documents Based on Implicit Interlinking and Communications Signals - Examples of the present disclosure describe systems and methods for determining authoritative documents based on implicit interlinking and communication signals. In aspects, a search operation may be initiated from one or more applications or services. The search operation may be processed to identify search information, such as one or more content items, content topics, or entities. The identified search information may be used to search one or more data sources for implicit relationships between the search information and content items and/or entities stored by the data sources. The results of the search may be collected and ranked according to one or more criteria. The ranked results may be provided in response to the search operation.2021-12-30
20210406272METHODS AND SYSTEMS FOR SUPERVISED TEMPLATE-GUIDED UNIFORM MANIFOLD APPROXIMATION AND PROJECTION FOR PARAMETER REDUCTION OF HIGH DIMENSIONAL DATA, IDENTIFICATION OF SUBSETS OF POPULATIONS, AND DETERMINATION OF ACCURACY OF IDENTIFIED SUBSETS - Some embodiments provide methods, systems and computer-readable media for identifying corresponding distributions of functional subpopulations of cells from high dimensional data across multiple samples and for verifying the accuracy of the identified subpopulations.2021-12-30
Website © 2022 Advameg, Inc.