41st week of 2020 patent application highlights part 50 |
Patent application number | Title | Published |
20200319999 | STORAGE DEVICE, CONTROL METHOD OF STORAGE DEVICE, AND STORAGE MEDIUM - A storage device comprises a flash memory and processing circuitry. The processing circuitry is configured to divide a storage area into pages to manage the storage area, and to delete blocks, each block including a plurality of pages. The processing circuitry receives a write instruction including address information specifying a writing location of the data, and stores, with respect to a plurality of groups in which each group includes one or more blocks, a plurality of group identification information each identifying a group and information specifying blocks included in the group in association with each other. The processing circuitry performs a predetermined calculation to obtain group identification information, and identifies a group including a block including pages onto which data is to be written according to the write instruction. Finally, the processing circuitry writes the data onto the pages of the block included in the identified group. | 2020-10-08 |
20200320000 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - An operating method of a memory system may include: searching a memory for target map data corresponding to a read request; loading the target map data from a memory device when the target map data are not found in the memory; compressing the loaded target map data using a predetermined compression ratio depending on an available capacity of the memory; caching the compressed target map data in the memory; parsing the compressed target map data; reading target user data corresponding to the read request from the memory device based on the parsed target map data; and outputting the read target user data. | 2020-10-08 |
20200320001 | STORAGE DEVICE AND GARBAGE COLLECTION METHOD THEREOF - A memory controller is for controlling operations of a nonvolatile memory including a first memory block group for storing a first type of data and a second memory block group for storing a second type of data. The memory controller includes a garbage collection management unit configured to execute a garbage collection policy in which a first garbage collection criteria is applied to the first memory block group, and a second garbage collection criteria is applied to the second memory block group, where the first garbage collection criteria is different from the second garbage collection criteria. | 2020-10-08 |
20200320002 | INTELLIGENTLY MANAGING DATA FACILITY CACHES - Architectures and techniques are described that can address challenges associated with efficiently managing a cache of a data facility. In that regard, for each block (or other file system structure) of a storage array spanning multiple storage devices, relationships can be established with other blocks of the array. The blocks can then be represented as multidimensional vectors, and an aggregation of the vectors can be represented as a weight matrix having values that reflect the corresponding relationships between any two given blocks. In response to any given IO transaction, a corresponding vector can be selected that is representative of a block referenced by the IO transaction, and one or more target blocks having a high relationship value to the block can be identified and used in connection with a cache update procedure. | 2020-10-08 |
20200320003 | SELECTIVE EXECUTION OF CACHE LINE FLUSH OPERATIONS - The present disclosure is directed to systems and methods that include cache operation storage circuitry that selectively enables/disables the Cache Line Flush (CLFLUSH) operation. The cache operation storage circuitry may also selectively replace the CLFLUSH operation with one or more replacement operations that provide similar functionality but beneficially and advantageously prevent an attacker from placing processor cache circuitry in a known state during a timing-based, side channel attack such as Spectre or Meltdown. The cache operation storage circuitry includes model specific registers (MSRs) that contain information used to determine whether to enable/disable CLFLUSH functionality. The cache operation storage circuitry may include model specific registers (MSRs) that contain information used to select appropriate replacement operations such as Cache Line Demote (CLDEMOTE) and/or Cache Line Write Back (CLWB) to selectively replace CLFLUSH operations. | 2020-10-08 |
20200320004 | PROCESSING MEMORY ACCESSES WHILE SUPPORTING A ZERO SIZE CACHE IN A CACHE HIERARCHY - A system and method for efficiently supporting a cache memory hierarchy potentially using a zero size cache in a level of the hierarchy. In various embodiments, logic in a lower-level cache controller or elsewhere receives a miss request from an upper-level cache controller. When the requested data is non-cacheable, the logic sends a snoop request with an address of the memory access operation to the upper-level cache controller to determine whether the requested data is in the upper-level data cache. When the snoop response indicates a miss or the requested data is cacheable, the logic retrieves the requested data from memory. When the snoop response indicates a hit, the logic retrieves the requested data from the upper-level cache. The logic completes servicing the memory access operation while preventing cache storage of the received requested data in a cache at a same level of the cache memory hierarchy as the logic. | 2020-10-08 |
20200320005 | PUNCTUATION CONTROLLED TEMPORAL REFERENCE DATA LOOKUP - A method for joining an event stream with reference data includes loading a plurality of reference data snapshots from a reference data source into a cache. Punctuation events are supplied that indicate temporal validity for the plurality of reference data snapshots in the cache. A logical barrier is provided that restricts a flow of data events in the event stream to a cache lookup operation based on the punctuation events. The cache lookup operation is performed with respect to the data events in the event stream that are permitted to cross the logical barrier. | 2020-10-08 |
20200320006 | PREFETCH MANAGEMENT IN A HIERARCHICAL CACHE SYSTEM - An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache. | 2020-10-08 |
20200320007 | METHOD AND APPARATUS TO PROTECT CODE PROCESSED BY AN EMBEDDED MICRO-PROCESSOR AGAINST ALTERING - The disclosure describes a method that includes: reading immutable boot code from a ROM; loading a code image from an external memory and calculating a hash by a core unit; initially authenticating the hash using the boot code for decrypting the hash of the external memory; while concurrently calculating a salted hash for each equivalent of a cache line of the code image by a cache protection block; storing the salted hash for each cache line in an internal hash table; if the authentication succeeds, a part of the code image is loaded into a secure cache of the embedded micro-processor; otherwise, if a secure cache miss occurs, the code image is reloaded from the external memory and the salted hash for the missed cache line is re-calculated by the cache protection block and checked against the stored salted hash in the internal hash table. | 2020-10-08 |
20200320008 | MEMORY SYSTEM FOR UTILIZING A MEMORY INCLUDED IN AN EXTERNAL DEVICE - A memory system includes a memory device configured to store a piece of data in a location which is distinguished by a physical address and a controller configured to generate a piece of map data associating a logical address, inputted along with a request from an external device, with the physical address and to transfer a response including the piece of map data to the external device. | 2020-10-08 |
20200320009 | Mapping for Multi-State Programming of Memory Devices - Storage device programming methods, systems and devices are described. A method may generate a mapping of data based on a set of data, the mapping of data including a first mapped data and a second mapped data. The method may include performing a first programming operation to write, in a first mode, the first mapped data to the memory device. The method may include storing the second mapped data to a cache. The method may include generating a second set of data, based on an inverse mapping of the mapping of data including the second mapped data from the cache and the first mapped data from the memory device, for writing, in a second mode, to the memory device, wherein the second set of data includes the set of data, and the first mode and the second mode correspond to different modes of writing to the memory device. | 2020-10-08 |
20200320010 | DIRECTLY MAPPED BUFFER CACHE ON NON-VOLATILE MEMORY - A method and apparatus for implementing a buffer cache for a persistent file system in non-volatile memory is provided. A set of data is maintained in one or more extents in non-volatile random-access memory (NVRAM) of a computing device. At least one buffer header is allocated in dynamic random-access memory (DRAM) of the computing device. In response to a read request by a first process executing on the computing device to access one or more first data blocks in a first extent of the one or more extents, the first process is granted direct read access of the first extent in NVRAM. A reference to the first extent in NVRAM is stored in a first buffer header. The first buffer header is associated with the first process. The first process uses the first buffer header to directly access the one or more first data blocks in NVRAM. | 2020-10-08 |
20200320011 | CASCADE CACHE REFRESHING - The present application discloses a cascade cache refreshing method, system, and device. The method in an embodiment of the present specification includes: determining a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed. | 2020-10-08 |
20200320012 | MEMORY SYSTEM AND METHOD FOR OPERATING THE SAME - A data processing system includes a host; and a memory system suitable for outputting map information to the host; wherein the host is suitable for performing a mapping operation based on the map information and outputting, to the memory system, a physical address corresponding to the mapping operation, and wherein the memory system generates a response signal in response to a command, the response signal including changed map information which is changed after the map information is outputted, and outputs the response signal to the host. | 2020-10-08 |
20200320013 | System Control Using Sparse Data - A method and apparatus for storing and accessing sparse data is disclosed. A sparse array circuit may receive information indicative of a request to perform a read operation on a memory circuit that includes multiple banks. The sparse array circuit may compare an address included in the received information to multiple entries that correspond to address locations in the memory circuit that store sparse data. In response to a determination that the address matches a particular entry, the sparse array circuit may generate one or more control signals that may disable the read operation and cause a data control circuit to transmit a sparse data pattern. | 2020-10-08 |
20200320014 | Method and Apparatus for Managing Storage Device in Storage System - In a method for managing a storage device in a storage system, a client may send, based on an obtained start address that is of a queue of an NVMe storage device and to which an access request points and an obtained logical address that is of the NVMe storage device and to which the access request points, a remote direct memory access command to a storage node in which the NVMe storage device is located. | 2020-10-08 |
20200320015 | LOGICAL TO PHYSICAL DATA STORAGE MAPPING - Systems, methods and computer-readable memory for garbage collection in a storage device. One method comprises, upon a write of data to a first garbage collection unit (GCU) of the storage device, incrementing a number of logical mapping units stored in the first GCU along with a number of logical mapping units with valid data stored in the first GCU. A number of logical mapping units with invalid data stored in a second GCU is decremented based on the incremented number of logical mapping units with valid data stored in the first GCU. The second GCU is erased when a valid data rate of the second GCU is below a valid data rate of the first GCU. | 2020-10-08 |
20200320016 | METHOD ENABLING VIRTUAL PAGES TO BE ALLOCATED WITH NONCONTIGUOUS BACKING PHYSICAL SUBPAGES - A device includes an address translation buffer to, for each virtual page number of a plurality of virtual page numbers, store a mapping associated with the virtual page number. The mapping identifies a set of physical subpages allocated for the virtual page number, and the set of physical subpages includes at least a first physical subpage of a plurality of contiguous subpages in a physical memory region and excludes at least a second physical subpage of the plurality of contiguous subpages in the physical memory region. A memory management unit is coupled with the address translation buffer to, in response to receiving a requested virtual subpage number and a requested virtual page number of the plurality of virtual page numbers, determine, based on the mapping associated with the requested virtual page number, a physical subpage number identifying a physical subpage that is allocated for the requested virtual subpage number. | 2020-10-08 |
20200320017 | NETWORK INTERFACE CARD RESOURCE PARTITIONING - Presented herein are techniques that enable existing hardware input/output resources, such as the hardware queues (queue control registers), of a network interface card to be shared with different hosts (i.e., each queue mapped to many hosts) by logically segregating the hardware I/O resources using assignable interfaces each associated with a distinct Process Address Space Identifier (PASID). That is, different assignable interfaces are created and associated with different PASIDs, and these assignable interfaces each correspond to a different host (i.e., there is a mapping between a host, an assignable interface, a PASID, and a partition of a hardware queue). The result is that the hosts can use the assignable interface to directly access the hardware queue partition that corresponds thereto. | 2020-10-08 |
20200320018 | ON-CHIP LOGIC ACCELERATOR - Embodiments of the invention are directed to a computer-implemented method of memory acceleration. The computer-implemented method includes mapping, by a processor, an array of logic blocks in system memory to an array of logic blocks stored in level 1 (L1) on an accelerator chip, wherein each logic block stores a respective look up table for a function, wherein each function row of a respective look up table stores an output function value and a combination of inputs to the function. The processor determines that a number of instances of request for the output function value from a logic block is less than a first threshold. The processor evicts the function row to a higher level memory. | 2020-10-08 |
20200320019 | CONTROLLER, MEMORY SYSTEM INCLUDING THE SAME, AND METHOD OF OPERATING MEMORY SYSTEM - A memory system includes a memory device and a controller. The memory device includes first and second memory groups. The controller includes a resource controller and first and second flash translation layer (FTL) cores. Each of the first and second FTL cores manages a plurality of logical addresses (LAs) that are mapped, respectively, to a plurality of physical addresses (PAs) of a corresponding memory group. The resource controller determines LA use rates of the first and second FTL cores, selects a source FTL core and a target FTL core from the first and second FTL cores using the LA use rates, and balances the LA use rates of the source FTL core and the target FTL core by moving data stored in storage spaces associated with a portion of the LAs from the source FTL core to storage spaces associated with the target FTL core. | 2020-10-08 |
20200320020 | ENCRYPTION OF EXECUTABLES IN COMPUTATIONAL MEMORY - The present disclosure is related to encryption of executables in computational memory. Computational memory can traverse an operating system page table in the computational memory for a page marked as executable. In response to finding a page marked as executable, the computational memory can determine whether the page marked as executable has been encrypted. In response to determining that the page marked as executable is not encrypted, the computational memory can generate a key for the page marked as executable. The computational memory can encrypt the page marked as executable using the key. | 2020-10-08 |
20200320021 | ACCESS MANAGEMENT APPARATUS AND ACCESS MANAGEMENT METHOD - An access management apparatus is provided that manages an access to a memory that includes a plurality of access regions in which an access stop period is regularly generated. The access management apparatus includes an acquisition unit configured to acquire schedule information for the access stop period, and a transmission unit configured to select one access request from among a plurality of access requests to the memory based on the schedule information, and transmit the selected access request to the memory. | 2020-10-08 |
20200320022 | BALANCED NETWORK AND METHOD - A low-latency, high-bandwidth, and highly scalable method delivers data from a source device to multiple communication devices on a communication network. Under this method, the communication devices (also called player nodes) provide download and upload bandwidths for each other. In this manner, the bandwidth requirement on the data source is significantly reduced. Such a data delivery network scales without limit as the number of player nodes grows. In one embodiment, a computer network includes (a) a source server that provides a data stream for delivery in the computer network, (b) player nodes that exchange data with each other to obtain a complete copy of the data stream, the player nodes being capable of dynamically joining or exiting the computer network, and (c) a control server which maintains a topology graph representing connections between the source server and the player nodes, and the connections among the player nodes themselves. In one embodiment, the control server is associated with a network address (e.g., an IP address) known to both the source server and the player nodes. The data stream may include, for example, a real-time broadcast of a sports event. | 2020-10-08 |
20200320023 | SYSTEM AND METHOD FOR SECURELY CONNECTING TO A PERIPHERAL DEVICE - A device connectable between a host computer and a computer peripheral over a standard bus interface is disclosed, used to improve security, and to detect and prevent malware operation. Messages passing between the host computer and the computer peripherals are intercepted and analyzed based on pre-configured criteria, and legitimate messages transparently pass through the device, while suspected messages are blocked. The device communicates with the host computer and the computer peripheral using proprietary or industry standard protocol or bus, which may be based on a point-to-point serial communication such as USB or SATA. The messages may be stored in the device for future analysis, and may be blocked based on current or past analysis of the messages. The device may serve as a VPN client and securely communicate with a VPN server using the host Internet connection. | 2020-10-08 |
20200320024 | I/O MESH ARCHITECTURE FOR AN INDUSTRIAL AUTOMATION SYSTEM - An industrial automation system employing a mesh topology of input/output allows flexibility in pairing field devices and controllers through the I/O mesh. Field devices can be connected to the geographically closest I/O module channel without regard to the location of the necessary controller. Modular prefabrication and deployment of the I/O modules becomes less complex and less time consuming, thereby reducing costs. | 2020-10-08 |
20200320025 | STORAGE-BASED SLOW DRAIN DETECTION AND AUTOMATED RESOLUTION - Storage-based slow drain detection and automated resolution is provided herein. A data storage system as described herein can include a memory that stores computer executable components and a processor that executes computer executable components stored in the memory. The computer executable components can include a switch query component that obtains a host transfer rate negotiated between a host device and a network switch from a host-connected port of the network switch; a comparison component that compares the host transfer rate to an array transfer rate negotiated between the network switch and a storage array; and a rate limiter component that limits a data transfer from the storage array to the host device to the host transfer rate in response to the host transfer rate being less than the array transfer rate. | 2020-10-08 |
20200320026 | BANDWIDTH MANAGEMENT ALLOCATION FOR DISPLAYPORT TUNNELING - A system can include a host router comprising connection manager logic, a display port adapter, and a display port adapter register to comprise display port adapter register values. A display port source device comprises a display port transmitter connected to the display port adapter. A display port configuration data (DPCD) register comprises display port configuration register values for the display port, the display port transmitter to write to the DPCD register. The display port adapter is to map DPCD register values to the display port adapter register. The connection manager logic is to receive a notification message requesting bandwidth allocation for the display port transmitter, determine an allocated bandwidth for the display port transmitter, and write the allocated bandwidth into the display port adapter register. | 2020-10-08 |
20200320027 | BUS ARRANGEMENT AND METHOD FOR OPERATING A BUS ARRANGEMENT - A bus arrangement includes a coordinator, a first subscriber, a first subscriber arrangement, and a bus. The first subscriber arrangement has a second subscriber. The bus couples the coordinator with the first subscriber and the second subscriber. The first subscriber is arranged between the coordinator and the second subscriber on the bus. The bus arrangement is configured such that the first subscriber arrangement can be decoupled from the bus in an operating phase, and such that the first subscriber cannot be decoupled from the bus in the operating phase. | 2020-10-08 |
20200320028 | INTEGRATED CIRCUIT WITH COMBINED INTERRUPT AND SERIAL DATA OUTPUT - An integrated circuit includes a combined serial data output and interrupt output terminal, a serial communication control circuit; an interrupt generation circuit, and an output circuit. The output circuit includes a serial data input, an interrupt input, and a combined serial data and interrupt output. The serial data input is coupled to a serial data output of the serial communication circuit. The interrupt input is coupled to an interrupt output of the interrupt generation circuit. The combined serial data and interrupt output is coupled to the combined serial data output and interrupt output terminal. | 2020-10-08 |
20200320029 | System and Method of Rerouting an Inter-Processor Communication Link Based on a Link Utilization Value - In one or more embodiments, one or more systems, methods, and/or processes may configure multiple link registers, of a first semiconductor package of an information handling system (IHS), that configure an input/output (I/O) communication fabric of the first semiconductor package to route communications of multiple components of the first semiconductor package to multiple inter-processor communication link interfaces; may communicate with a second semiconductor package of the IHS via the multiple inter-processor communication link interfaces; may determine that a link utilization value of multiple link utilization values is at or above a threshold value; and may configure a link register of the multiple link registers, associated with the at least one component of the multiple components, that configures the I/O communication fabric to route communications of the at least one component of the multiple components to a second inter-processor communication link interface of the multiple inter-processor communication link interfaces. | 2020-10-08 |
20200320030 | BLOCKING SYSTEMS FROM RESPONDING TO BUS MASTERING CAPABLE DEVICES - In some examples, a system includes a memory resource, a communication channel to allow a bus mastering capable device to access the memory resource, and a controller to block the system from responding to a request from the bus mastering capable device for accessing the memory resource until the controller has authorized the bus mastering capable device. | 2020-10-08 |
20200320031 | MULTICHIP PACKAGE LINK - Physical layer logic is provided that is to receive data on one or more data lanes of a physical link, receive a valid signal on another of the lanes of the physical link identifying that valid data is to follow assertion of the valid signal on the one or more data lanes, and receive a stream signal on another of the lanes of the physical link identifying a type of the data on the one or more data lanes. | 2020-10-08 |
20200320032 | SWITCH FABRIC HAVING A SERIAL COMMUNICATIONS INTERFACE AND A PARALLEL COMMUNICATIONS INTERFACE - A switch fabric is disclosed that includes a serial communications interface and a parallel communications interface. The serial communications interface is configured for connecting a plurality of slave devices to a master device in parallel to transmit information between the plurality of slave devices and the master device, and the parallel communications interface is configured for separately connecting the plurality of slave devices to the master device to transmit information between the plurality of slave devices and the master device, and to transmit information between individual ones of the plurality of slave devices. The parallel communications interface may comprise a dedicated parallel communications channel for each one of the plurality of slave devices. The serial communications interface may comprise a multidrop bus, and the parallel communications interface may comprise a cross switch. | 2020-10-08 |
20200320033 | [RSA OR HARVESTING] UNIFIED FPGA VIEW TO A COMPOSED HOST - Mechanisms for Field Programmable Gate Array (FPGA) chaining and unified FPGA views to composed system hosts, and associated methods, apparatus, systems, and software. A rack is populated with pooled system drawers including pooled compute drawers and pooled FPGA drawers communicatively coupled via input-output (IO) cables. The FPGA resources in the pooled system drawers are enumerated, identifying the location and type of each FPGA and whether it is a chainable FPGA. Intra-drawer chaining mechanisms are identified for the chainable FPGAs in each pooled compute and pooled FPGA drawer. Inter-drawer chaining mechanisms are also identified for chaining FPGAs in separate pooled system drawers. The enumerated FPGA and chaining mechanism data is aggregated to generate a unified system view of the FPGA resources and their chaining mechanisms. Based on available compute nodes and FPGAs in the unified system view, new compute nodes are composed using chained FPGAs. The chained FPGAs are exposed to a hypervisor or operating system virtualization layer, or to an operating system hosted by the composed compute node, as a virtual monolithic FPGA or multiple local FPGAs. | 2020-10-08 |
20200320034 | SERVICE DEPLOYMENT IN A CLUSTER OF I/O DEVICES - A method for deploying a service in a cluster of Input/Output devices comprising several I/O devices, each comprising a container engine. The method can be performed via a container client. A stack file is obtained that identifies at least one service and specifies at least one device constraint. Then, a command is sent, based on the stack file, to deploy a service on a container stack of at least one first I/O device among the I/O devices of the cluster if the at least one first I/O device matches the device constraint. | 2020-10-08 |
20200320035 | TEMPORAL DIFFERENCE LEARNING, REINFORCEMENT LEARNING APPROACH TO DETERMINE OPTIMAL NUMBER OF THREADS TO USE FOR FILE COPYING - For a given file type, an optimal number of threads to use to copy files of each of a number of different discrete file sizes is determined, using a temporal difference learning, reinforcement learning approach in which file copy time is used as feedback reward reinforcement. A continuous function corresponding to the given file type and outputting the number of threads to use to copy files having this given file type and that are of any input file size is fitted onto the optimal numbers of threads determined for the discrete file sizes. | 2020-10-08 |
20200320036 | DATA UNIT CLONING IN MEMORY-BASED FILE SYSTEMS - A data structure used in a memory-based file system, and a method and apparatus using the same. The data structure comprises: a tree of nodes comprising tree nodes and leaf nodes, each tree node points to at least one node, each leaf node is associated with a plurality of data unit elements each of which representing a data unit, wherein each data unit element is associated with two pointers, wherein at least one of the two pointers is capable of pointing to a data unit or to a data unit element; and a cyclic linked list of data unit elements representing identical clones of a data unit, wherein the cyclic linked list comprises a first element pointing directly to the data unit, wherein from each element in the cyclic linked list, the data unit can be reached in time complexity of O(1). | 2020-10-08 |
20200320037 | PERSISTENT INDEXING AND FREE SPACE MANAGEMENT FOR FLAT DIRECTORY - Methods, non-transitory computer readable media, computing devices and systems for persistent indexing and space management for a flat directory include creating, using at least one of said at least one processor, an index file to store mapping information, computing, using at least one of said at least one processor, a hash based on a lookup filename, searching, using at least one of said at least one processor, the index file to find all matching directory cookies based on the computed hash, selecting, using at least one of said at least one processor, the directory entity associated with the lookup filename from among the matched directory cookies, and returning, using at least one of said at least one processor, the determined directory entity. | 2020-10-08 |
20200320038 | EVENT MANAGEMENT DEVICE AND METHOD - Embodiments of the invention provide an event management device for managing events, comprising an event detector configured to detect the occurrence of an event related to data delivered by a data delivery system and to extract user data related to the detected event from a user data storage, the extracted user data comprising user data stored in at least one entry of the user data storage. The event management device further comprises a rule manager configured to determine one or more actions to be executed by applying one or more rules using the extracted user data, the event management device being configured to trigger execution of at least one determined action. The system may further dynamically update the rules using feedback data received for the executed actions. | 2020-10-08 |
20200320039 | SYSTEMS AND METHODS FOR DATA DISTILLATION - Systems and methods are described for distilling data. First data associated with a user may be received. The first data associated with the user may comprise an anonymized hash of an identifier associated with the user. A database may be determined to comprise a first record indicating the anonymized hash. The first record may comprise second data associated with the user. Based on the determining that the database comprises the first record, a second record may be generated. The second record may comprise the first data associated with the user, the second data associated with the user, and the anonymized hash. Based on the determining that the database comprises the first record, the second record may be stored to the database. These and other user and/or data distillation methods and systems are described herein. | 2020-10-08 |
20200320040 | CONTAINER INDEX PERSISTENT ITEM TAGS - Examples may include container index persistent item tags. Examples may store chunk signatures in at least one container index and, for each chunk signature, store at least one persistent item tag identifying a respective backup item that references or formerly referenced the chunk signature. Examples may determine that all chunks formerly referenced by a backup item have been erased based on the persistent item tags in the at least one container index and output an indication that the backup item has been erased. | 2020-10-08 |
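The persistent-tag bookkeeping in 20200320040 might look like the following sketch; the split into a reference count and a tag set, and the method names, are illustrative assumptions:

```python
class ContainerIndex:
    """Chunk signatures plus persistent item tags: a tag records that a
    backup item references, or formerly referenced, a chunk."""
    def __init__(self):
        self._refs = {}   # chunk signature -> live reference count
        self._tags = {}   # chunk signature -> set of persistent item tags

    def add_reference(self, signature, item):
        self._refs[signature] = self._refs.get(signature, 0) + 1
        self._tags.setdefault(signature, set()).add(item)

    def drop_reference(self, signature):
        self._refs[signature] -= 1   # the persistent tag is NOT removed

    def item_fully_erased(self, item):
        # true once every chunk the item ever referenced has been erased
        tagged = (s for s, tags in self._tags.items() if item in tags)
        return all(self._refs[s] == 0 for s in tagged)
```

Because the tags outlive the references, the index can report that a backup item is fully gone even after normal reference counting has forgotten it.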
20200320041 | MULTITENANCY USING AN OVERLAY FILE SYSTEM - Example methods and systems are directed to multitenancy using an overlay file system. Each tenant has one or more users and a tenant layer in the overlay file system. Each user has a user layer in the overlay file system. The overlay file system provides a logical file system to each user based on the user layer, the tenant layer, and a strategy comprising a set of application layers. A first user shares a file with other users of the same tenant by moving the file from the first user's user layer to the tenant layer. After the file is moved, all users of the tenant have access to the file. The moving of the file is achieved by modifying metadata for the file. | 2020-10-08 |
20200320042 | MULTITENANT APPLICATION SERVER USING A UNION FILE SYSTEM - Example methods and systems are directed to a multitenant application server using a union file system. Each tenant has one or more users and a tenant layer in the union file system. Each user has a user layer in the union file system. The union file system provides a logical file system to each user based on the user layer, the tenant layer, and a base layer comprising a set of application layers. A first user shares an application template file with other users of the same tenant by moving the file from the first user's user layer to the tenant layer. After the file is moved, all users of the tenant have access to the application defined by the application template file. The moving of the file is achieved by modifying metadata for the file. | 2020-10-08 |
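The layered lookup and metadata-only "move" shared by the two overlay/union file system abstracts above can be sketched as follows; representing each layer as a path-to-content mapping is a simplification for illustration:

```python
class UnionFS:
    """Logical file system assembled from a user layer, a shared tenant
    layer, and a stack of read-only base/application layers."""
    def __init__(self, user_layer, tenant_layer, base_layers):
        self.user = user_layer       # dict: path -> content (upper layer)
        self.tenant = tenant_layer   # shared by all users of the tenant
        self.bases = base_layers     # list of dicts, searched last

    def read(self, path):
        for layer in [self.user, self.tenant, *self.bases]:
            if path in layer:        # upper layers shadow lower ones
                return layer[path]
        raise FileNotFoundError(path)

def share_with_tenant(fs, path):
    # "moving" the file is purely a metadata change: re-home its entry
    # from the user layer into the tenant layer
    fs.tenant[path] = fs.user.pop(path)
```

Because every user's logical view includes the same tenant-layer object, the file becomes visible to all of the tenant's users the moment its entry is re-homed.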
20200320043 | LINKING OF TOKENS - An example operation may include one or more of sending, by a node A, a signed transaction Tr | 2020-10-08 |
20200320044 | SYSTEM AND METHOD FOR EXTRACTING A STAR SCHEMA FROM TABULAR DATA FOR USE IN A MULTIDIMENSIONAL DATABASE ENVIRONMENT - The application describes a system and method for extracting a star schema from tabular data for use in a multidimensional database. The system can receive tabular data including a plurality of columns, and determine a relationship between each pair of the plurality of columns by analyzing actual values in a plurality of rows for each pair of columns. Based on the determined relationships among the plurality of columns and a type of each column, the system can use a heuristic process to identify a plurality of cube elements from the plurality of columns to construct a star schema. A user interface can be provided to display potential problems of the star schema, and one or more alternative approaches for a user to select to extract a star schema from the tabular data. | 2020-10-08 |
20200320045 | SYSTEMS AND METHODS FOR CONTEXT-INDEPENDENT DATABASE SEARCH PATHS - The present disclosure provides a computer-implemented method for applying an analysis to a data model comprising data objects. The method may comprise receiving the analysis and the first data model each in semantic format. Next, the analysis and the data model may be computer processed to (i) identify one or more elements missing from the data model and (ii) determine that the analysis is not applicable to the data model upon identification of the one or more elements. The one or more elements may then be presented to a user for adjusting the data model. This may be repeated until the analysis is applicable to the data model. The analysis may then be performed on the data objects of the data model. | 2020-10-08 |
20200320046 | DEDUPLICATION OF ENCRYPTED DATA WITHIN A REMOTE DATA STORE - Techniques are provided for deduplicating encrypted data. For example, a device has data to store in an encrypted state within a remote data store. A key is used to encrypt the data to create encrypted data. The data is hashed to create hashed data and the encrypted data is hashed to create hashed encrypted data. A probabilistic data structure of the data is generated. The key is encrypted based upon the data to create an encrypted key. The encrypted data is transmitted to the remote data store, along with metadata comprising the hashed data, the hashed encrypted data, the probabilistic data structure, and the encrypted key. The metadata may be used to implement deduplication for subsequent requests, to store data within the remote data store, with respect to the encrypted data. | 2020-10-08 |
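The metadata pipeline of 20200320046 might be sketched as below. Two loud caveats: the XOR hash-counter "cipher" is a toy stand-in, not real cryptography, and wrapping the data key under a key derived from the plaintext (so any holder of the same data can recover it) is one plausible reading of "the key is encrypted based upon the data". The probabilistic data structure (e.g. a Bloom filter) is omitted here:

```python
import hashlib
import os

def _keystream(key: bytes, n: int) -> bytes:
    # toy hash-counter keystream -- a stand-in, NOT real cryptography
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def prepare_upload(data: bytes):
    key = os.urandom(32)                 # per-device random data key
    encrypted = xor_crypt(key, data)
    metadata = {
        "hashed_data": hashlib.sha256(data).hexdigest(),
        "hashed_encrypted": hashlib.sha256(encrypted).hexdigest(),
        # key wrapped under a data-derived key so holders of the same
        # plaintext can recover it for deduplication
        "encrypted_key": xor_crypt(hashlib.sha256(data).digest(), key).hex(),
    }
    return encrypted, metadata
```

The remote store can then match `hashed_data` across uploads to deduplicate without ever seeing the plaintext.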
20200320047 | SYSTEM AND METHOD FOR FINGERPRINTING-BASED CONVERSATION THREADING - Systems, methods, and computer readable media for staging a corpus of electronic communication documents for analysis, such as, for example, via a content analysis platform. The staging may include a staging platform accessing the corpus of electronic communication documents. For each electronic communication document within the corpus, the staging platform may generate a fingerprint based upon the output of a hash function executed upon a set of characteristics corresponding to each segment within the electronic communication document. The staging platform may analyze the generated fingerprints to generate a plurality of threaded conversations that do not include electronic communication documents that fail to convey any new information. The systems and methods may also include detecting and flagging any segments within an electronic communication document that may have been mutated by its author. | 2020-10-08 |
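A rough sketch of the per-segment fingerprinting idea; the choice of sender and normalized body text as the hashed characteristics, and the subset test for "conveys no new information" (e.g. a message fully quoted inside its reply), are this sketch's assumptions:

```python
import hashlib

def fingerprint(segments):
    """One hash per segment, over a normalized set of characteristics
    (sender and whitespace-collapsed, lowercased body assumed here)."""
    return frozenset(
        hashlib.sha256(
            f"{s['sender']}|{' '.join(s['body'].split()).lower()}".encode()
        ).hexdigest()
        for s in segments
    )

def adds_no_new_information(fp, other_fingerprints):
    # a document is redundant for threading if all of its segments
    # already appear inside some other document in the corpus
    return any(fp < other for other in other_fingerprints)
```

Threading can then drop redundant documents and chain the rest by shared segment hashes.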
20200320048 | Cost Heuristic For Filter Evaluation - A method, a system, and a computer program product for executing a query. A query plan for execution of a query is generated. The query requires access to at least one table stored in a database system. The query includes one or more filter predicates. A filter predicate in the one or more filter predicates is selected. For the selected filter predicate, a plurality of cost function values associated with executing a filter evaluation of the selected filter predicate are determined. Filter evaluation of the selected predicate is executed in accordance with at least one determined cost function value in the plurality of cost function values. | 2020-10-08 |
20200320049 | System and Method for Representing Media Assets - Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for representing media assets. The method includes receiving an original media asset and derivative versions of the original media asset and associated descriptors, determining a lineage to each derivative version that traces to the original media asset, generating a version history tree of the original media asset representing the lineage to each derivative version and associated descriptors from the original media asset, and presenting at least part of the version history tree to a user. In one aspect, the method further includes receiving a modification to one associated descriptor and updating associated descriptors for related derivative versions with the received modification. The original media asset and the derivative versions of the original media asset can share a common identifying mark. Descriptors can include legal documentation, licensing information, creation time, creation date, actors' names, director, producer, lens aperture, and position data. | 2020-10-08 |
20200320050 | EFFICIENT DATABASE SEARCH AND REPORTING, SUCH AS FOR ENTERPRISE CUSTOMERS HAVING LARGE AND/OR NUMEROUS FILES - This application discloses a server for handling data reporting requests in a system that also comprises storage managers, primary storage devices, and secondary storage devices connected over one or more networks. The server receives, from each storage manager, a copy of data associated with the storage manager, and stores the received copies in one or more local databases. The server builds offline one or more indices for part or all of the received copies to improve query processing against the one or more local databases. Next, the server receives a request over a network from one of the storage managers or a standalone console, which received the request from a user for a report of data associated with the storage managers. The server produces a data report in response to the request, using the one or more indices and without impacting performance of the storage managers. | 2020-10-08 |
20200320051 | SUPPORTING SCALABLE DISTRIBUTED SECONDARY INDEX USING REPLICATION ENGINE FOR HIGH-PERFORMANCE DISTRIBUTED DATABASE SYSTEMS - Implementations of the present disclosure include providing, at each node in a set of nodes of a database system, a table partition of a plurality of table partitions, the plurality of table partitions being provided by partitioning a table using a primary key, providing, at each node in the set of nodes of the database system, a secondary index partition of a plurality of secondary index partitions, each secondary index partition including a replicate table of at least a portion of the table, the plurality of secondary index partitions being provided by partitioning the table using one or more secondary keys, and for at least one operation executed on a table partition, executing a replication protocol to replicate the at least one operation on a secondary index partition that corresponds to the table partition. | 2020-10-08 |
20200320052 | FAST CIRCULAR DATABASE - A data management system and associated data management method is disclosed herein. An exemplary method for managing data includes receiving data records timestamped with times spanned by a defined time interval; generating a data cube that includes data planes, wherein each data plane contains a set of data records timestamped with times spanned by the defined time interval; generating an index hypercube for the data cube, wherein dimensions of the index hypercube represent hash values of index keys defined for accessing the data cube; and generating an indexed data cube for storing in a database, wherein the indexed data cube includes the data cube and the index hypercube. The index hypercube includes index hypercube elements, where each index hypercube element represents a unique combination of hashed index key values that map to a data plane in the data cube. | 2020-10-08 |
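The index hypercube of 20200320052 might be approximated as follows; mapping each index key to one coordinate via `hash mod dimension`, and keeping each data plane as a plain list, are simplifying assumptions:

```python
import hashlib

def _h(value) -> int:
    # deterministic 32-bit hash of an index key value
    return int.from_bytes(hashlib.md5(str(value).encode()).digest()[:4], "big")

def hypercube_coord(record: dict, dims: dict) -> tuple:
    # one coordinate per index key: hash of that key's value, modulo
    # the size of that dimension of the hypercube
    return tuple(_h(record[k]) % n for k, n in sorted(dims.items()))

class IndexedDataCube:
    def __init__(self, dims):
        self.dims = dims       # index key -> hypercube dimension size
        self.planes = {}       # coordinate -> data plane (list of records)

    def insert(self, record):
        self.planes.setdefault(hypercube_coord(record, self.dims), []).append(record)

    def lookup(self, keys):
        # a full combination of hashed index key values maps to exactly
        # one data plane of the cube
        return self.planes.get(hypercube_coord(keys, self.dims), [])
```

Lookups touch only the one plane the hashed key combination addresses; records that collide into the same plane would be filtered by a second exact pass in a fuller implementation.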
20200320053 | LEVERAGING A COLLECTION OF TRAINING TABLES TO ACCURATELY PREDICT ERRORS WITHIN A VARIETY OF TABLES - The present disclosure relates to systems, methods, and computer-readable media for using a variety of hypothesis tests to identify errors within tables and other structured datasets. For example, systems disclosed herein can generate a modified table from an input table by removing one or more entries from the input table. The systems disclosed herein can further leverage a collection of training tables to determine probabilities associated with whether the input table and modified table are drawn from the collection of training tables. The systems disclosed herein can additionally compare the probabilities to accurately determine whether the one or more entries include errors therein. The systems disclosed herein may apply to a variety of different sizes and types of tables to identify different types of common errors within input tables. | 2020-10-08 |
20200320054 | COMPUTER PROGRAM FOR PROVIDING DATABASE MANAGEMENT - An embodiment of the present disclosure discloses a computer program executable by one or more processors and stored in a computer readable medium. The computer program causes the one or more processors to perform operations for database management, the operations including: generating a sorted table by sorting a table including one or more records based on at least one column; identifying a current key corresponding to each of one or more rows included in the sorted table; sequentially recording a record corresponding to each of the one or more rows in a data structure based on a result of a comparison between the current key corresponding to each of the one or more rows and a previous key; and generating an integrated result value for a record recorded in the data structure and recording the generated integrated result value in an output table. | 2020-10-08 |
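The sort-then-compare aggregation in 20200320054 is essentially classic control-break processing; a sketch, with summation standing in for the unspecified "integrated result value":

```python
def group_integrate(rows, key_cols, value_col):
    """Sort the table, then compare each row's current key with the
    previous key; flush the accumulator whenever the key changes
    (single sequential pass, no hash table needed)."""
    output = []
    rows = sorted(rows, key=lambda r: tuple(r[c] for c in key_cols))
    prev_key, acc = None, 0
    for r in rows:
        cur_key = tuple(r[c] for c in key_cols)
        if prev_key is not None and cur_key != prev_key:
            output.append((prev_key, acc))   # record integrated result
            acc = 0
        acc += r[value_col]
        prev_key = cur_key
    if prev_key is not None:
        output.append((prev_key, acc))
    return output
```

Because the input is sorted, each group's rows are contiguous, so one current-key/previous-key comparison per row suffices.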
20200320055 | CONSTRUCTING BLOCKCHAIN WORLD STATE MERKLE PATRICIA TRIE SUBTREE - Implementations of this specification include traversing a world-state MPT in multiple iterations, and, at each iteration, for a current node of the world-state MPT, executing one of: marking the current node as an account node and storing an address of the current node in the address list; determining that the current node is an extension node, and moving to a next iteration of the traversal, setting the current node to a node referenced by the extension node; and marking the current node as a transition node, and storing an address of the current node in the address list; and creating a sub-tree of the world-state MPT based on the address list, a root node of the sub-tree including a root node of the world-state MPT, and one or more child nodes of the sub-tree corresponding to nodes of the world-state MPT having an address stored in the address list. | 2020-10-08 |
20200320056 | SYSTEMS AND METHODS FOR A REPUTATION-BASED CONSENSUS PROTOCOL - Systems and methods are described for a reputation-based consensus protocol. A reputation score of a first node of a plurality of nodes may be determined. A distributed ledger record associated with a second node of the plurality of nodes may be received. The distributed ledger record may be stored to a distributed ledger based on the first node validating the distributed ledger record and based on the reputation score. | 2020-10-08 |
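The commit rule in 20200320056 might be sketched as below; the threshold value and the digest check standing in for full record validation are assumptions made for illustration:

```python
import hashlib

def try_commit(record: dict, validator_reputation: float,
               ledger: list, threshold: float = 0.5) -> bool:
    """Append `record` to the distributed ledger only if it validates
    and the validating node's reputation score clears the threshold."""
    # an integrity digest check stands in for full record validation
    valid = record.get("digest") == hashlib.sha256(record["payload"]).hexdigest()
    if valid and validator_reputation >= threshold:
        ledger.append(record)
        return True
    return False
```

A low-reputation validator's vote is thus ignored even for a well-formed record, which is the protocol's defense against untrusted nodes.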
20200320057 | ASSET MANAGEMENT SYSTEM, METHOD, APPARATUS, AND ELECTRONIC DEVICE - This specification describes techniques for managing assets in a blockchain. One example method includes receiving, from a target user recorded in a distributed database of a blockchain network, a user input including a request to perform a contract operation on asset objects including digital assets corresponding to physical assets associated with the target user, in response to receiving the request, generating an asset container as an operation target of the contract operation, the asset container recording field information of the asset objects, generating an asset container group by dividing the asset container into the asset container group based on an association relationship between the asset objects, wherein the association relationship defines correspondences between each asset container in the asset container group and at least one other asset container in the asset container group, and performing the contract operation on the asset container group using a contract object. | 2020-10-08 |
20200320058 | ASSET MANAGEMENT SYSTEM, METHOD, APPARATUS, AND ELECTRONIC DEVICE - This specification describes techniques for managing assets in a blockchain. One example method includes receiving, from a target user recorded in a distributed database of a blockchain network, a user input including a request to perform a contract operation on asset objects including digital assets corresponding to physical assets associated with the target user, in response to receiving the request, generating an asset container as an operation target of the contract operation, the asset container recording field information of the asset objects, generating an asset container group by dividing the asset container into the asset container group based on an association relationship between the asset objects, wherein the association relationship defines correspondences between each asset container in the asset container group and at least one other asset container in the asset container group, and performing the contract operation on the asset container group using a contract object. | 2020-10-08 |
20200320059 | TRANSACTION CHANGE DATA REPLICATION - Transaction change data replication includes identifying changes being made to a source database as part of an ongoing transaction at a source. The identifying is performed as the changes are made to the source database and as the transaction remains ongoing prior to commit or rollback thereof at the source. The source and a target are in a replication relationship in which data of the source database at the source is replicated to destinations in a target database at the target. The indications of the changes being made to the source are forwarded, to the target, as the transaction remains ongoing prior to commit or rollback thereof, and based on ending the transaction at the source, an indication of the transaction end is sent to the target. | 2020-10-08 |
20200320060 | PARTITION MOVE IN CASE OF TABLE UPDATE - A system includes reception of a query to update a partition key value of a first set of rows of a database table, determination that the updated partition key value is associated with a first partition of the database table stored on a first database server node, fetching of row identifiers of each of the first set of rows from two or more database server nodes in which each of the first set of rows is respectively stored, determination, based on the row identifiers, of a first subset of the first rows which are not stored on the first database server node and a second subset of the first rows which are stored on the first database server node, fetching of the first subset of rows from the database server nodes in which each of the first set of rows is respectively stored, update of the partition key value of each row of the fetched first subset of rows, instructing of the first database server node to store the updated rows of the fetched first subset in the first partition stored on the first database server node, and instructing of the first database server node to update the partition key value of each of the second subset of rows of the partition stored on the first database server node. | 2020-10-08 |
20200320061 | MANAGING DATA OBJECTS FOR GRAPH-BASED DATA STRUCTURES - Various embodiments provide methods, systems, apparatus, computer program products, and/or the like for managing, ingesting, monitoring, updating, and/or extracting/retrieving information/data associated with an electronic record (ER) stored in an ER data store and/or accessing information/data from the ER data store, wherein the ERs are generated, updated/modified, and/or accessed via a graph-based domain ontology. | 2020-10-08 |
20200320062 | MANAGING DATA OBJECTS FOR GRAPH-BASED DATA STRUCTURES - Various embodiments provide methods, systems, apparatus, computer program products, and/or the like for managing, ingesting, monitoring, updating, and/or extracting/retrieving information/data associated with an electronic record (ER) stored in an ER data store and/or accessing information/data from the ER data store, wherein the ERs are generated, updated/modified, and/or accessed via a graph-based domain ontology. | 2020-10-08 |
20200320063 | METHOD AND A NODE FOR STORAGE OF DATA IN A NETWORK - Disclosed is a method for storing data from a remote device in a blockchain database, the method, performed by a network node | 2020-10-08 |
20200320064 | METHODS AND APPARATUS FOR A DISTRIBUTED DATABASE WITHIN A NETWORK - In some embodiments, an apparatus includes an instance of a distributed database at a first compute device configured to be included within a set of compute devices that implement the distributed database. The apparatus also includes a processor configured to define a first event linked to a first set of events. The processor is configured to receive, from a second compute device from the set of compute devices, a signal representing a second event (1) defined by the second compute device and (2) linked to a second set of events. The processor is configured to identify an order associated with a third set of events based at least in part on a result of a protocol. The processor is configured to store in the instance of the distributed database the order associated with the third set of events. | 2020-10-08 |
20200320065 | Method and Apparatus for Processing Write-Ahead Log - A method and apparatus for processing a write-ahead log (WAL) in a storage device that records a WAL set key-value pair and a status key-value pair include determining a status of the WAL set key-value pair based on the status key-value pair, replaying all uncompleted WALs in the WAL set key-value pair when the status of the WAL set key-value pair is in a sealed state, modifying, in the status key-value pair, the status of the WAL set key-value pair from the sealed state to a completed state, and deleting the WAL set key-value pair in the completed state. | 2020-10-08 |
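The sealed-to-completed recovery path can be sketched as a small state machine; representing the store as a dict and marking per-WAL completion with a `done` flag are illustrative simplifications:

```python
SEALED, COMPLETED = "sealed", "completed"

def recover_wal_set(store: dict, apply_fn):
    """Replay, then retire, a WAL set, driven by its status key-value pair."""
    if store.get("wal_set_status") == SEALED:
        for wal in store["wal_set"]:
            if not wal["done"]:              # replay only uncompleted WALs
                apply_fn(wal["key"], wal["value"])
                wal["done"] = True
        store["wal_set_status"] = COMPLETED  # sealed -> completed
    if store.get("wal_set_status") == COMPLETED:
        del store["wal_set"]                 # delete the completed WAL set
        del store["wal_set_status"]
```

Re-running the function after a crash at any point is safe: already-completed WALs are skipped and an already-completed set is simply deleted.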
20200320066 | METHOD AND DEVICE FOR INTERFACE OPERATION AND MAINTENANCE - The embodiments of present disclosure provide a method and device for interface operation and maintenance. The method includes: acquiring query condition parameters input by a user for querying interface log information of at least one system, wherein the query condition parameters at least include a message time parameter that uniquely marks log message time and a system name parameter that uniquely marks the system; invoking an application programming interface provided by the at least one system according to the system name parameter in the query condition parameters, and acquiring first interface log information according to a result of invoking the application programming interface; determining second interface log information in the first interface log information according to the message time parameter that uniquely marks the log message time; and displaying the second interface log information. | 2020-10-08 |
20200320067 | DISPLAYING MESSAGES RELEVANT TO SYSTEM ADMINISTRATION - Among other things, in an aspect, a method includes receiving an indication of a condition at a first component of a computer system. The method also includes performing a search of one or more archives that identifies, among artifacts of at least one of the archives, information relevant to the condition. The method also includes displaying, in a user interface, information determined based on at least one of the artifacts in association with a visual representation of the condition. | 2020-10-08 |
20200320068 | USER INTERFACE COMMANDS FOR REGULAR EXPRESSION GENERATION - Techniques for generating regular expressions are disclosed. In some embodiments, a regular expression generator may receive input data comprising one or more character sequences. The regular expression generator may convert character sequences into sets of regular expression codes and/or span data structures. The regular expression generator may identify a longest common subsequence shared by the sets of regular expression codes and/or spans, and may generate a regular expression based upon the longest common subsequence. Generation of the regular expressions can be implemented on an interactive user interface. Commands can be applied to the one or more character sequences and regular expressions are generated based on the applied commands. | 2020-10-08 |
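The codes-then-LCS pipeline above might be sketched as follows; the particular code classes and the decision to collapse repeated codes into `+`-quantified groups (a generalization beyond exact lengths) are this sketch's choices:

```python
import re

def codes(s):
    """Map each character of an example to a coarse regex code class."""
    return [r"\d" if ch.isdigit()
            else r"[A-Za-z]" if ch.isalpha()
            else re.escape(ch)
            for ch in s]

def lcs(a, b):
    """Classic dynamic-programming longest common subsequence."""
    dp = [[[] for _ in range(len(b) + 1)] for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x == y:
                dp[i + 1][j + 1] = dp[i][j] + [x]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[-1][-1]

def generalize(examples):
    """Regex from the longest common subsequence of the examples' codes."""
    common = codes(examples[0])
    for ex in examples[1:]:
        common = lcs(common, codes(ex))
    # collapse runs of an identical code into one quantified group
    pattern, i = "", 0
    while i < len(common):
        j = i
        while j < len(common) and common[j] == common[i]:
            j += 1
        pattern += common[i] + ("+" if j - i > 1 else "")
        i = j
    return pattern
```

An interactive UI would re-run `generalize` as the user adds or edits example sequences.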
20200320069 | HYBRID COMPILATION FRAMEWORK FOR ARBITRARY AD-HOC IMPERATIVE FUNCTIONS IN DATABASE QUERIES - Implementations of the present disclosure include providing a parse tree including a declarative portion and an imperative portion, dividing the parse tree to provide a first parse sub-tree and a second parse sub-tree, compiling the first parse sub-tree using a declarative compiler to provide a query execution plan (QEP) including an imperative script operator to prompt execution of the imperative portion, compiling the second parse sub-tree using an imperative compiler to provide one or more script execution plans, executing, by an execution engine, the QEP until encountering an imperative script operator, and, in response to encountering the imperative script operator, initiating execution of the one or more script execution plans to provide an imperative result, and providing a query result at least partially including the imperative result. | 2020-10-08 |
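Jumping back to 20200320073's unique-key-plus-filter evaluation, the at-most-one-row semantics reduce to a short probe; the dict-based table and index are illustrative stand-ins:

```python
def unique_key_lookup(table, unique_index, key_value, extra_filters):
    """Probe the unique index, then apply the remaining filter
    predicates to the (at most one) candidate row."""
    row_id = unique_index.get(key_value)
    if row_id is None:
        return []                       # no row carries the unique key
    row = table[row_id]
    if all(row.get(col) == val for col, val in extra_filters.items()):
        return [row]                    # the single matching row
    return []                           # row exists but fails a filter
```

The result is therefore either one row or empty, never more, which is what lets the engine skip a full scan.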
20200320070 | Iterative Multi-Attribute Index Selection for Large Database Systems - The inventors have implemented the approach in a columnar in-memory database and studied access patterns of a large production enterprise system. To obtain accurate cost estimates for a configuration, the inventors have used the what-if capabilities of modern query optimizers. What-if calls, however, are the major bottleneck for most index selection approaches. Hence, a major constraint is to limit the number of what-if optimizer calls. And even though the inventive approach does not limit the index candidate set, it decreases the number of what-if calls because in each iteration step the number of possible (index) extensions is comparably small, which results in a limited number of what-if calls. | 2020-10-08 |
20200320071 | TECHNIQUES FOR DATA RETENTION - Systems and techniques for managing data in a relational database environment and a non-relational database environment. Data in the relational database environment that is static and to be maintained beyond a preselected threshold length of time is identified. The data is copied from the relational database and stored in the non-relational database. Access to the data is provided from the non-relational database via a user interface that accesses both the relational database and the non-relational database. | 2020-10-08 |
20200320072 | SCALABLE MATRIX FACTORIZATION IN A DATABASE - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scalable matrix factorization. A method includes obtaining a Structured Query Language (SQL) query to create a matrix factorization model based on a set of training data, generating SQL sub-queries that don't include non-scalable functions, obtaining the set of training data, and generating a matrix factorization model based on the set of training data and the SQL sub-queries that don't include non-scalable functions. | 2020-10-08 |
20200320073 | UNIQUE KEY LOOKUP WITH ADDITIONAL FILTER - A method, a system, and a computer program product for executing a query. The query requiring access to one or more tables stored in a database system is executed. The query includes one or more filter predicates. Using a unique key value corresponding to a first predicate, at most one row in the tables including a portion of data matching the unique key value is identified. Using filter values corresponding to the filter predicates, another portion of data in the identified row is compared to the filter values to determine whether that portion of data matches the filter values. Based on the comparison, a result of the execution of the query is outputted. The result includes data stored in the identified row upon determination that the data matches the unique key value corresponding to the first filter predicate and the filter values corresponding to remaining filter predicates. Otherwise, the result is empty. | 2020-10-08 |
20200320074 | Filter Evaluation For Table Fragments - A method, a system, and a computer program product for analysis of query filtering mechanisms for table fragments. A query plan for execution of a query is generated. The query requires access to at least one table stored in a database system. The query includes one or more filter predicates. The table is partitioned into a plurality of fragments. A determination whether a fragment in the table is compressed and whether the fragment is associated with an index is made. A filter predicate is selected for processing the fragment. For the selected filter predicate, a filter evaluation of the selected filter predicate for the fragment is determined. The filter evaluation of the selected predicate is executed for the fragment. | 2020-10-08 |
20200320075 | Method of Extracting Relationships from a Nosql Database - Aspects of the present invention disclose a method for identifying a relationship between objects of a NoSQL database based on queries of an application programming interface (API) call. The method includes one or more processors identifying an API call that includes two or more NoSQL query requests. The method further includes determining a class for the two or more NoSQL query requests of the API call. The method further includes determining whether a query value of the first NoSQL query request of the API call is present in a second NoSQL query request. The method further includes determining a relationship between the first NoSQL query request and the second NoSQL query request of the API call. The method further includes creating a view in a relational model database based on the respective determined classes for the two or more NoSQL query requests of the API call and the determined relationship. | 2020-10-08 |
20200320076 | DETERMINATION OF QUERY OPERATOR EXECUTION LOCATION - A system includes determination of a first partition-wise operation on a first database table partition of a first table located at a first server node and a first database table partition of a second table located at a second server node, determination of a first cost to execute the first partition-wise operation on the first server node, and a second cost to execute the first partition-wise operation on the second server node, determination of a second partition-wise operation on a result of the first partition-wise operation, determination of a third cost to execute the second partition-wise operation on the first server node based on the first cost and the second cost, and a fourth cost to execute the second partition-wise operation on the second server node based on the first cost and the second cost, determination of one of the first server node and the second server node to execute the second partition-wise operation based on the third cost and the fourth cost, and determination of one of the first server node and the second server node to execute the first partition-wise operation based on the third cost and the fourth cost. | 2020-10-08 |
20200320077 | PARTITION-WISE PROCESSING DISTRIBUTION - A system includes determination, for a first partitioned physical query operator in a query operator tree, of a partition-wise placement cost based on a cost of each table partition associated with the first partitioned physical query operator and a partition-wise placement cost of any child physical query operator of the first partitioned physical query operator, determination of a placement cost for the first partitioned physical query operator for each of a plurality of operator execution locations based on the determined partition-wise placement cost, determination, for a logical query operator associated with the first partitioned physical query operator, of a merged placement cost for each of the plurality of operator execution locations, and determination of an execution location for the first partitioned physical query operator based on the determined partition-wise placement cost. | 2020-10-08 |
20200320078 | BENCHMARK FRAMEWORK FOR COST-MODEL CALIBRATION - In some aspects, there is provided a method including receiving an execution plan file, the execution plan file utilizing at least one operator of interest and further utilizing other actions separate from the at least one operator of interest. The method further includes forming an execution plan object by modifying the execution plan file by isolating the at least one operator of interest from the other actions. The method further includes performing a series of tests executing an extended execution plan object. The series of tests can include receiving the input data identified by the one or more pointers in the extended execution plan object, executing the extended execution plan object using the received input data, measuring, based on the execution of the extended execution plan object, at least one cost metric representative of execution of the at least one operator of interest, and outputting the measured cost metric. | 2020-10-08 |
20200320079 | ENHANCED SEARCH FUNCTIONS AGAINST CUSTOM INDEXES - A database query may be determined based on a database query definition. The database query definition may include a filter criterion that contains a wildcard match, which may include a first fixed portion and a second wildcard portion. The first fixed portion may include one or more combining characters. The database query may include a first query portion including a first canonical representation of the first fixed portion that omits the one or more combining characters. The database query may include a second query portion including a second canonical representation of the first fixed portion. The database query may be executed to select a result set that includes a plurality of query result values by applying the second query portion to filter values accessed by the first query portion. | 2020-10-08 |
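The two-stage filter described here can be sketched in a few lines. This is an assumed reading, not the patented representations: Unicode NFD/NFC normalization stands in for the two canonical forms, a coarse list scan stands in for the index probe, and the sample values are hypothetical.

```python
# Sketch of the two-stage wildcard filter above: a coarse probe on a
# canonical form that drops combining marks, then a refinement on a
# canonical form that keeps them. unicodedata is an assumed
# canonicalization; the patented representations may differ.
import unicodedata

def strip_combining(s):
    """Canonical form with combining characters omitted (first portion)."""
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def canonical(s):
    """Canonical form that preserves combining characters (second portion)."""
    return unicodedata.normalize("NFC", s)

values = ["résumé", "resume", "rose"]   # hypothetical indexed values
prefix = "ré"                            # fixed portion of a match like "ré*"

# First query portion: broad scan on the mark-free canonical form.
coarse = [v for v in values if strip_combining(v).startswith(strip_combining(prefix))]
# Second query portion: refine using the mark-preserving canonical form.
result = [v for v in coarse if canonical(v).startswith(canonical(prefix))]
print(coarse, result)  # ['résumé', 'resume'] ['résumé']
```

The coarse pass admits both "résumé" and "resume" (their mark-free forms share the prefix "re"), and the refinement keeps only the value whose accented prefix actually matches.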
20200320080 | DATA PROCESSING IN AN OPTIMIZED ANALYTICS ENVIRONMENT - Systems and methods for data processing in an optimized analytics environment are disclosed. The system may enable users to create data processing requests, interact with various data sources and datasets, and generate data processing outputs. The system may receive a data processing request from an audio-enabled input source or a UI-based input source. The system may determine whether the data processing request at least partially matches a stored data processing request. The system may receive a data processing request selection comprising the data processing request or the stored data processing request. The system may execute the data processing request selection on a data source. | 2020-10-08 |
20200320081 | CACHE FOR EFFICIENT RECORD LOOKUPS IN AN LSM DATA STRUCTURE - Techniques are disclosed relating to maintaining a cache usable to locate data stored in a data structure. A computer system, in various embodiments, maintains a data structure having a plurality of levels that store files for a database. The files may include one or more records that each have a key and corresponding data. The computer system may also maintain a cache for the database whose entries store, for a key, an indication of a location of a corresponding record in a file of the data structure. In some embodiments, the computer system receives a request to access a particular record stored in the data structure where the request specifies a key usable to locate the particular record. The computer system may retrieve, from the cache via the key, a particular indication of a location of the particular record and may use the particular indication to access the particular record. | 2020-10-08 |
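A minimal sketch of the cache described in this abstract follows. The design details are assumptions, not the patented implementation: entries map a key to a (level, file, offset) location triple, and a simple LRU bound stands in for whatever eviction policy the real system uses.

```python
# Minimal sketch (assumed design, not the patented one): a lookup cache
# mapping a record key to the location of its record inside a leveled,
# file-based structure, so point reads can skip a level-by-level search.
from collections import OrderedDict

class LocationCache:
    """Bounded key -> (level, file_id, offset) cache with LRU eviction."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._entries = OrderedDict()

    def put(self, key, level, file_id, offset):
        self._entries[key] = (level, file_id, offset)
        self._entries.move_to_end(key)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used

    def get(self, key):
        loc = self._entries.get(key)
        if loc is not None:
            self._entries.move_to_end(key)  # refresh recency on a hit
        return loc

cache = LocationCache(capacity=2)
cache.put("user:42", level=1, file_id=7, offset=4096)
print(cache.get("user:42"))  # (1, 7, 4096)
```

On a request for "user:42", the indication retrieved from the cache points straight at level 1, file 7, offset 4096, matching the abstract's use of the stored indication to access the record.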
20200320082 | ADVANCED MULTIPROVIDER OPTIMIZATION - A calculation engine of a database management system is described that determines a multiprovider includes a first data source and a second data source that each require different approaches for operation optimization. The calculation engine can split the multiprovider into a first node corresponding to a first operation compatible with the first data source and a second node corresponding to a second operation compatible with the second data source. The calculation engine can perform the first operation at the first data source to produce a first result and perform the second operation at the second data source to produce a second result. The calculation engine can then merge the first result and the second result according to a third operation, and perform such third operation at the first data source. | 2020-10-08 |
20200320083 | KEY-VALUE STORAGE USING A SKIP LIST - This disclosure provides various techniques that may allow for accessing values stored in a data structure that stores multiple values corresponding to database transactions using a skip list. A key may be used to traverse the skip list to access data associated with the key. The skip list maintains an ordering of multiple keys, each associated with a particular record in the data structure, using indirect links between data records in the data structure that reference buckets included in a hash table. Each bucket includes pointers to one or more records in the skip list. | 2020-10-08 |
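The indirection described here can be sketched compactly. This is illustrative only, not the patented implementation: a bisect-maintained sorted list stands in for the skip list's probabilistic levels, `hash()` stands in for the bucket assignment, and hash collisions between distinct keys are ignored for brevity.

```python
# Illustrative sketch of the indirection above (not the patented
# implementation): an ordered key index whose entries do not point at
# records directly but at buckets in a hash table; each bucket holds
# pointers to one or more records. A sorted list stands in for the
# skip list's ordered traversal; collisions are ignored for brevity.
import bisect

records = {}   # record_id -> value (the record store)
buckets = {}   # bucket_id -> list of record_ids (the hash table)
index = []     # sorted (key, bucket_id) pairs (skip-list stand-in)

def insert(key, record_id, value):
    records[record_id] = value
    bucket_id = hash(key)                       # hypothetical bucket assignment
    buckets.setdefault(bucket_id, []).append(record_id)
    entry = (key, bucket_id)
    pos = bisect.bisect_left(index, entry)
    if pos >= len(index) or index[pos] != entry:
        bisect.insort(index, entry)             # one index entry per key

def lookup(key):
    """Traverse the ordered index, follow the bucket, return its records."""
    pos = bisect.bisect_left(index, (key,))
    if pos < len(index) and index[pos][0] == key:
        bucket_id = index[pos][1]
        return [records[rid] for rid in buckets[bucket_id]]
    return []

insert("k1", 1, "v1")
insert("k1", 2, "v1-newer")   # second version of the same key
insert("k2", 3, "v2")
print(lookup("k1"))  # ['v1', 'v1-newer']
```

The ordered index stays one-entry-per-key while the bucket accumulates pointers to every record version for that key, which is the payoff of the indirect links the abstract describes.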
20200320084 | DATA RECORDING AND ANALYSIS SYSTEM - A system for recording and analyzing a data stream, a method for analyzing a data stream, and a computer readable memory that stores instructions that cause a computer to execute a method of analyzing a data stream are disclosed. The system includes an input port, output port, buffer, and controller. The controller identifies a segment of the data stream stored in a buffer, referred to as a new extracted data segment (EDS), the new EDS satisfying an extraction protocol. The controller compares the new EDS to each of a plurality of reference data segments (RDSs) using a similarity protocol. A new RDS is created if the new EDS is not similar to any existing RDS. If the new EDS is similar to an RDS, the RDS is updated to list that new EDS as being similar. | 2020-10-08 |
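The compare-and-register loop above can be sketched as follows. The similarity protocol used here (segment sums within a relative tolerance) is a stand-in for illustration, not the patented one, and the segment values are hypothetical.

```python
# Hedged sketch of the EDS/RDS loop described above; the similarity
# protocol (segment sums within a relative tolerance) is an assumed
# stand-in, not the patented protocol.

def similar(eds, rds, tolerance=0.1):
    """Toy similarity protocol: segment sums within a relative tolerance."""
    a, b = sum(eds), sum(rds)
    return abs(a - b) <= tolerance * max(abs(a), abs(b), 1)

def register(eds, reference_segments):
    """Match a new EDS against stored RDSs; create a new RDS on no match."""
    for rds in reference_segments:
        if similar(eds, rds["segment"]):
            rds["matches"].append(eds)   # update the RDS to list this EDS
            return rds
    new_rds = {"segment": eds, "matches": []}
    reference_segments.append(new_rds)
    return new_rds

refs = []
register([1, 2, 3], refs)        # no match -> becomes the first RDS
register([1, 2, 3.1], refs)      # similar -> recorded under that RDS
register([10, 20, 30], refs)     # dissimilar -> second RDS
print(len(refs), len(refs[0]["matches"]))  # 2 1
```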
20200320085 | BLOCKCHAIN BASED IOT DATA MANAGEMENT - Blockchain based IoT data management can include receiving IoT data with one or more processing units. At least one aggregation pattern of the IoT data can be determined by one or more processing units. The IoT data can be hashed, based upon the at least one aggregation pattern, to obtain hash values of the IoT data by one or more processing units. The hash values can be sent to a blockchain system for storing by one or more processing units. | 2020-10-08 |
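A minimal sketch of the aggregate-hash-submit flow follows. Everything here is an assumption for illustration: per-device batching stands in for the aggregation pattern, SHA-256 over a deterministic JSON serialization stands in for the hashing, and a plain list stands in for the blockchain system.

```python
# Minimal sketch of the flow above: aggregate IoT readings by a pattern
# (here, per-device batches - an assumed aggregation pattern), hash each
# aggregate, and hand the hashes to a ledger stub. A real system would
# submit to an actual blockchain.
import hashlib
import json

def aggregate_by_device(readings):
    """Group raw readings into per-device batches (the aggregation pattern)."""
    batches = {}
    for r in readings:
        batches.setdefault(r["device"], []).append(r)
    return batches

def hash_aggregate(batch):
    """Deterministically serialize a batch and return its SHA-256 hex digest."""
    payload = json.dumps(batch, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

ledger = []  # stand-in for the blockchain system

readings = [
    {"device": "sensor-a", "t": 1, "temp": 21.5},
    {"device": "sensor-a", "t": 2, "temp": 21.7},
    {"device": "sensor-b", "t": 1, "temp": 19.0},
]
for device, batch in aggregate_by_device(readings).items():
    ledger.append({"device": device, "hash": hash_aggregate(batch)})

print(len(ledger))  # 2
```

Only the fixed-size digests reach the ledger, not the raw readings, which is the storage-saving point of hashing per aggregation pattern.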
20200320086 | METHOD AND SYSTEM FOR CONTENT RECOMMENDATION - One embodiment provides a method and system for recommending content to users. During operation, the system can select a content piece from a content library and extract, by a computer using a natural language processing (NLP) technique, one or more keywords from the content piece. The system can determine a domain associated with the content piece based on the extracted keywords and obtain domain knowledge of the determined domain. The system can generate a feature tag for the content piece based on the extracted keywords and the obtained domain knowledge, and generate an attribute tag for a user based on historical data associated with the user. The system can then recommend one or more content pieces from the content library to the user based on feature tags associated with the one or more content pieces and the attribute tag for the user. | 2020-10-08 |
20200320087 | Data Transfer in a Computer-Implemented Database - Computer-implemented methods, systems and products, the method comprising receiving, at a data server associated with a database, a command for data transfer between a client machine and the data server over a communications network, the data being stored in at least a data table comprising one or more columns; in response to receiving the command for data transfer, determining whether one or more columns of the data table are designated; identifying the one or more designated columns, such that data associated with the one or more designated columns is either considered or not considered for purpose of the data transfer; and executing the command to transfer the data in the database according to the designated columns. | 2020-10-08 |
20200320088 | SYSTEM FOR EFFICIENT INFORMATION EXTRACTION FROM STREAMING DATA VIA EXPERIMENTAL DESIGNS - A system, method, and computer-readable medium for extracting samples from big data to extract the most information about the relationships of interest between dimensions and variables in the data repository. More specifically, extracting information from large data repositories follows an adaptive process that uses systematic sampling procedures derived from optimal experimental designs to target from a large data set specific observations with information value of interest for the analytic task under consideration. The application of adaptive optimal design to guide exploration of large data repositories provides advantages over known big data technologies. | 2020-10-08 |
20200320089 | INDEX COMPRESSION USING REORDERING AND SELF-UPDATES - The present disclosure involves systems, software, and computer implemented methods for compression operation combining duplicate index entries independent of a data model. One example method includes operations to identify an update to at least one entry in a compressed index that includes a plurality of entries, each associated with a unique entry ID. An entry ID of each entry associated with the update is identified. A self-update is performed for each entry not associated with the entry IDs associated with the update, which comprises inserting a value associated with those non-updated entries to an uncompressed index in connection with that entry's corresponding entry ID. For each entry associated with the update, a particular update value from the identified update is inserted into the uncompressed index associated with the particular entry ID. When completed, the uncompressed index is compressed into a new version of the compressed index. | 2020-10-08 |
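The self-update path can be sketched with a toy index. The "compression" here (grouping entry IDs by shared value) is an assumption for illustration only, not the patented encoding; the mechanics of carrying non-updated entries through self-updates and then recompressing follow the abstract.

```python
# Sketch of the update path above: non-updated entries are carried into an
# uncompressed index via self-updates, updated entries get their new
# values, and the whole index is recompressed. "Compression" here is a
# toy value -> sorted-entry-ID grouping, assumed for illustration.

def decompress(compressed):
    """compressed: value -> list of entry IDs; returns entry ID -> value."""
    return {eid: value for value, ids in compressed.items() for eid in ids}

def compress(uncompressed):
    """Group entry IDs by shared value (the toy compression)."""
    grouped = {}
    for eid, value in uncompressed.items():
        grouped.setdefault(value, []).append(eid)
    return {v: sorted(ids) for v, ids in grouped.items()}

def apply_update(compressed, updates):
    """updates: entry ID -> new value."""
    uncompressed = {}
    for eid, value in decompress(compressed).items():
        if eid not in updates:
            uncompressed[eid] = value      # self-update of an untouched entry
    uncompressed.update(updates)           # insert the updated values
    return compress(uncompressed)          # new version of the compressed index

index = {"red": [1, 3], "blue": [2]}
print(apply_update(index, {3: "blue"}))  # {'red': [1], 'blue': [2, 3]}
```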
20200320090 | METHOD AND DEVICE FOR DATA FUSION, NON-TRANSITORY STORAGE MEDIUM AND SERVER - A method and a device for data fusion, a non-transitory storage medium and a server are provided, wherein the method includes: performing a data structuring on an obtained set of data to obtain a structured data set including a plurality of structured data; selecting any two pieces of structured data in the structured data set to form a plurality of structured data pairs; performing a similarity calculation on each of the plurality of structured data pairs to obtain a similarity value for each structured data pair; and when the similarity value is greater than a predetermined similarity threshold, classifying structured data in the structured data pair into a same data subject. In embodiments of the present disclosure, whether the data belongs to the same data subject can be determined, which provides technical support for data fusion. | 2020-10-08 |
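The pairwise-similarity grouping can be sketched as below. The similarity measure (Jaccard overlap of field values), the threshold, and the sample records are assumptions; union-find is used so that transitively similar records land in the same data subject.

```python
# Sketch of the pairwise grouping described above. Jaccard overlap and
# the 0.5 threshold are assumed, not the patented similarity calculation.

def jaccard(a, b):
    """Similarity of two structured records as overlap of their field values."""
    sa, sb = set(a.items()), set(b.items())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def group_records(records, threshold=0.5):
    parent = list(range(len(records)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    # Form every pair; union a pair when similarity clears the threshold.
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if jaccard(records[i], records[j]) > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(records)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

records = [
    {"name": "Ada Lovelace", "city": "London", "country": "UK", "email": "ada@a"},
    {"name": "Ada Lovelace", "city": "London", "country": "UK", "email": "ada@b"},
    {"name": "Alan Turing", "city": "Wilmslow", "country": "UK", "email": "alan@c"},
]
print(group_records(records))  # [[0, 1], [2]]
```

The two near-duplicate records (3 of 4 fields shared, Jaccard 0.6) fall into one data subject; the third record stays in its own.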
20200320091 | SCHEMALESS TO RELATIONAL REPRESENTATION CONVERSION - A system is disclosed. The system includes a processor configured to: receive a set of data structured in a schemaless data representation; automatically translate the set of data into a relational representation by: translating an array map value in the set of data into an ordered multi-map; and converting the ordered multi-map to the relational representation. The processor is further configured to store the translated set of data in a key-value data store for a query-based retrieval. | 2020-10-08 |
20200320092 | REGULAR EXPRESSION GENERATION FOR NEGATIVE EXAMPLE USING CONTEXT - Techniques for generating regular expressions are disclosed. In some embodiments, a regular expression generator may receive input data comprising one or more character sequences. The regular expression generator may convert character sequences into sets of regular expression codes and/or span data structures. The regular expression generator may identify a longest common subsequence shared by the sets of regular expression codes and/or spans, and may generate a regular expression based upon the longest common subsequence. A negative example may be used to generate the regular expression. Context from the negative example may be determined in order to generate the regular expression. | 2020-10-08 |
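The code-sequence approach can be sketched for the positive-example path. The code alphabet (`\d`, `[A-Za-z]`, escaped literals) is an assumption; negative-example and context handling are omitted, so this is only the LCS-to-pattern core, not the patented generator.

```python
# Sketch of the approach above: map characters to coarse regex codes,
# take the longest common subsequence of the code sequences across the
# examples, and collapse the shared codes into a pattern. The code
# alphabet is assumed; negative-example handling is omitted.
import re

def to_codes(s):
    """Map each character to a regex code (assumed code alphabet)."""
    return [r"\d" if c.isdigit() else r"[A-Za-z]" if c.isalpha() else re.escape(c)
            for c in s]

def lcs(a, b):
    """Classic DP longest common subsequence of two code lists."""
    dp = [[[] for _ in range(len(b) + 1)] for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + [a[i - 1]]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[len(a)][len(b)]

def generate_regex(examples):
    codes = to_codes(examples[0])
    for ex in examples[1:]:
        codes = lcs(codes, to_codes(ex))
    # Collapse runs of the same code into a quantified form.
    pattern, run, count = "", None, 0
    for c in codes + [None]:
        if c == run:
            count += 1
        else:
            if run is not None:
                pattern += run + (f"{{{count}}}" if count > 1 else "")
            run, count = c, 1
    return pattern

pat = generate_regex(["AB-12", "XY-34"])
print(pat)                               # [A-Za-z]{2}\-\d{2}
print(bool(re.fullmatch(pat, "CD-56")))  # True
```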
20200320093 | Extensible Data Transformations - Methods, computer systems, computer-storage media, and graphical user interfaces are provided for facilitating data transformations, according to embodiments of the present invention. In one embodiment, a set of example values are received. A repository of transformation tools is searched to identify a new transformation tool as relevant to a data transformation associated with the received set of example values. The repository includes annotations associated with the new transformation tool. The new transformation tool is used to generate a transformation program that produces transformed output values. Additional annotations are generated for the new transformation tool based on the transformed output values. | 2020-10-08 |
20200320094 | OPTIMIZATION OF RELOCATED QUERIES IN FEDERATED DATABASES USING CROSS DATABASE TABLE REPLICAS - Disclosed herein are system, method, and computer program product embodiments for appropriately routing requests for data stored in multiple storage mediums. An embodiment operates by maintaining a first and second data stored on a first storage medium in communication with a second storage medium. Thereafter, a replicate of the first data stored in the first storage medium may be created for the second storage medium to store a replica data mirroring the first data. Subsequently, a request for retrieval of the first data may be received. Afterward, a previous update time of the second storage medium in receiving the replicate of the first data stored in the first storage medium may be determined. Lastly, based on the previous update time, the request may be forwarded to the first storage medium or second storage medium. | 2020-10-08 |
20200320095 | SUBSCRIPTION-BASED CHANGE DATA CAPTURE MECHANISM USING DATABASE TRIGGERS - Disclosed herein are system, method, and computer program product embodiments for replicating data from a source database table to a target database table. An embodiment operates by maintaining a master logging table in communication with a source database table and a subscriber logging table. Thereafter, a copy of a first modification of data of the source database table is provided to the master logging table as a record, where the first record includes the copy of the first modification of data. Subsequently, upon determining that the first record in the master logging table is committed, a copy of the first record is provided to the subscriber logging table. And after identifying a first target database associated with the master logging table, the first record is sent to the first target database. | 2020-10-08 |
20200320096 | RESOURCE PROVISIONING SYSTEMS AND METHODS - A method and apparatus for managing a set of processors for a set of queries is described. In an exemplary embodiment, a device receives a set of queries for a data warehouse, the set of queries including one or more queries to be processed by the data warehouse. The device further provisions a set of processors from a first plurality of processors, where the set of processors is to process the set of queries, and a set of storage resources to store data for the set of queries. In addition, the device monitors a utilization of the set of processors as the set of processors processes the set of queries. The device additionally updates a number of the processors in the set of processors provisioned based on the utilization. Furthermore, the device processes the set of queries using the updated set of processors. | 2020-10-08 |
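The monitor-and-update step can be sketched in a few lines. The scaling thresholds, step size, and bounds are assumptions for illustration; the point is only that the provisioned processor count follows measured utilization, as the abstract describes.

```python
# Sketch of the monitor-and-update step above. Thresholds, step size,
# and bounds are assumed values, not the patented policy.

def update_provisioned(count, utilization, low=0.3, high=0.8,
                       minimum=1, maximum=64):
    """Grow the pool when busy, shrink it when idle, within fixed bounds."""
    if utilization > high:
        count += 1
    elif utilization < low and count > minimum:
        count -= 1
    return max(minimum, min(count, maximum))

count = 4
for u in [0.9, 0.95, 0.5, 0.1]:    # hypothetical utilization samples
    count = update_provisioned(count, u)
print(count)  # 5
```

Two busy samples grow the pool from 4 to 6, a mid-range sample leaves it alone, and an idle sample shrinks it back to 5, after which the queries run on the updated set.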
20200320097 | Import and Export in Blockchain Environments - Importation and exportation allow software services in blockchain environments. Blockchains may import data and export data, thus allowing blockchains to offer software services to clients (such as other blockchains). Individual users, businesses, and governments may create their own blockchains and subcontract or outsource operations to other blockchains. Moreover, the software services provided by blockchains may be publicly ledgered by still other blockchains, thus providing two-way blockchain interactions and two-way ledgering for improved record keeping. | 2020-10-08 |
20200320098 | High Throughput Cross Database Table Synchronization and Transactional Replication in Federated Databases - Disclosed herein are system, method, and computer program product embodiments for providing a lock-free parallel log replay and synchronization scheme to support asynchronous table replication. By synchronizing a replica table with the server-side data and conducting subsequent updates using transaction logs via a replayer, locking of tables may be avoided. A consistent transactional state may be maintained by employing a replayer to mark the table as enabled instead of a synchronizer. The replayer may also deduce transitive closures among transactions and replay the transactions in parallel based on the deduced transitive closures to optimize playback. These techniques provide enhanced data availability and minimize database blocking and deadlocking while improving query performance. | 2020-10-08 |