37th week of 2021 patent application highlights part 45 |
Patent application number | Title | Published |
20210286714 | STRESS TEST IMPACT ISOLATION AND MAPPING - A method for testing a system under test (SUT) in an active environment to identify cause of a soft failure includes recording a first difference vector by executing a set of test cases on a baseline system and monitoring performance parameters of the baseline system before and after executing the test cases. Each performance record represents differences in the performance parameters of the baseline system from before and after the execution of a corresponding test case. The method further includes, similarly, recording a second difference vector by executing the test cases on the SUT and monitoring performance parameters of the SUT before and after executing the test cases. The method further includes identifying an outlier performance record from the second difference vector by comparing the difference vectors and further, determining a root cause of the soft failure by analyzing a test case corresponding to the outlier. | 2021-09-16 |
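The comparison described in 20210286714 can be sketched in a few lines: record per-test-case performance deltas on the baseline and on the SUT, then flag the test case whose SUT delta diverges most from its baseline delta. This is an illustrative sketch only; the function names, the `latency_ms` parameter, and the ratio threshold are assumptions, not from the patent.

```python
def difference_vector(before, after):
    """Per-test-case deltas of the monitored performance parameters."""
    return {k: after[k] - before[k] for k in before}

def find_outlier(baseline_vecs, sut_vecs, threshold=2.0):
    """Index of the test case whose SUT delta diverges most from the
    baseline delta, or None if no divergence exceeds the threshold."""
    worst_idx, worst_ratio = None, threshold
    for i, (b, s) in enumerate(zip(baseline_vecs, sut_vecs)):
        for key in b:
            base = abs(b[key]) or 1e-9          # avoid division by zero
            ratio = abs(s[key] - b[key]) / base
            if ratio > worst_ratio:
                worst_idx, worst_ratio = i, ratio
    return worst_idx

baseline = [difference_vector({"latency_ms": 10.0}, {"latency_ms": 12.0}),
            difference_vector({"latency_ms": 10.0}, {"latency_ms": 13.0})]
sut      = [difference_vector({"latency_ms": 10.0}, {"latency_ms": 12.1}),
            difference_vector({"latency_ms": 10.0}, {"latency_ms": 40.0})]
print(find_outlier(baseline, sut))  # -> 1 (the second test case regressed)
```

The outlier's index points back at the test case whose behavior would then be analyzed for the root cause of the soft failure.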
20210286715 | TEST DEVICE, TEST METHOD, AND COMPUTER READABLE MEDIUM - A test device ( | 2021-09-16 |
20210286716 | CASCADING PID CONTROLLER FOR METADATA PAGE EVICTION - In a storage system that implements metadata paging, the page free pool is replenished in the background to reduce foreground evictions and associated latency on page-in. A two-level page eviction controller with cascaded proportional, integral, derivative (PID) controllers optimizes the size of the free page pool and optimizes the rate at which pages are freed in the background. By optimizing these two parameters, the page eviction controller dynamically maximizes used pages (minimizing free pages) to increase the metadata cache hit ratio. Optimizing the parameters also reduces the chances of foreground page evictions, thereby reducing IO latency, during both steady state and burst page-in requests. | 2021-09-16 |
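The cascade in 20210286716 pairs an outer loop (free-pool size) with an inner loop (background eviction rate) whose setpoint is the outer loop's output. A minimal sketch, assuming simple textbook PID terms; the gains and setpoints below are illustrative, not from the patent.

```python
class PID:
    """Textbook proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Outer loop targets the free-page pool size; inner loop drives the
# background eviction rate needed to hold that target.
outer = PID(0.5, 0.05, 0.0, setpoint=1000)   # target free pages (assumed)
inner = PID(0.8, 0.10, 0.0, setpoint=0)

# One control step: the outer loop's output becomes the inner loop's setpoint.
inner.setpoint = outer.update(900)   # pool is 100 pages below target -> 55.0
```

Cascading this way lets the outer loop reason about pool size while the inner loop reacts to the faster eviction-rate dynamics.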
20210286717 | SOLID-STATE DRIVE PERFORMANCE AND LIFESPAN BASED ON DATA AFFINITY - The example embodiments disclose a system and method, a computer program product, and a computer system for improving solid-state drive performance. The example embodiments may include generating, by an affinity adapter located external to the solid-state drive, a plurality of affinities for each of a plurality of data to a respective plurality of subdivisions of data of a solid-state drive, wherein each of the plurality of data is associated with a logical block address (LBA) and each of the respective plurality of subdivisions has a physical block address (PBA). The example embodiments may also include receiving a request to write first data having a first LBA to the solid-state drive, determining by the solid-state drive, at a first time, that the first data has an affinity with a particular subdivision of data of a solid-state drive based on the generated plurality of affinities, and writing the first data to a memory location of the solid-state drive, wherein the PBA of the memory location has the determined affinity. | 2021-09-16 |
20210286718 | DATA STRUCTURE ALLOCATION INTO STORAGE CLASS MEMORY - A method, a computer program product, and a system for allocating a variable into storage class memory during compilation of a program. The method includes selecting a variable recorded in a symbol table during compilation and computing a variable size of the variable by analyzing attributes related to the variable. The method further includes computing additional attributes relating to the variable. The method also includes computing a control flow graph and analyzing the control flow graph and the additional attributes to determine an allocation location for the variable. The method further includes allocating the variable into a storage class memory based on the analysis performed. | 2021-09-16 |
20210286719 | MANAGING STORAGE SPACE FOR METADATA CONSISTENCY CHECKING - A method of managing storage space for a metadata consistency checking procedure (MCCP) is provided. The method includes (a) tracking an amount of metadata and an amount of user data organized by the metadata; (b) provisioning a quantity of storage dedicated to the MCCP based, at least in part, on a ratio of the amount of metadata to the amount of user data; and (c) upon initiation of the MCCP, building tracking structures within the provisioned storage dedicated to the MCCP. An apparatus, system, and computer program product for performing a similar method are also provided. | 2021-09-16 |
20210286720 | MANAGING SNAPSHOTS AND CLONES IN A SCALE OUT STORAGE SYSTEM - Methods, systems, and media for supporting snapshots and clones in a scale out storage system are disclosed. The system maintains first metadata that maps logical addresses of logical data blocks to corresponding content IDs, a distributed hash table that maps content IDs to corresponding node IDs, and second metadata that maps content IDs to corresponding physical addresses of physical data blocks. Clones are created by mapping each logical block address of each clone to the content ID associated with its corresponding logical block address of the original and incrementing the reference counts in the second metadata. The task of incrementing reference counts in the second metadata can be distributed across multiple storage nodes. A logical device can be designated as a golden image. Clones of a golden image are created by decrementing its clone credit without incrementing the reference counts in the second metadata. | 2021-09-16 |
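The two metadata maps in 20210286720 can be sketched as dictionaries: logical address to content ID, and content ID to physical block plus reference count. Cloning copies the logical map and bumps reference counts without touching data. A toy sketch under assumed names; `hash()` stands in for a real content hash.

```python
class ContentStore:
    def __init__(self):
        self.lba_to_cid = {}    # first metadata: logical address -> content ID
        self.cid_to_block = {}  # second metadata: content ID -> (PBA, refcount)

    def write(self, lba, data):
        cid = hash(data)        # stand-in for a content-derived identifier
        self.lba_to_cid[lba] = cid
        pba, refs = self.cid_to_block.get(cid, (len(self.cid_to_block), 0))
        self.cid_to_block[cid] = (pba, refs + 1)

    def clone(self, src_lbas, dst_base):
        # Map each clone LBA to the original's content ID and bump refcounts;
        # this per-ID increment is what can be distributed across nodes.
        for i, lba in enumerate(src_lbas):
            cid = self.lba_to_cid[lba]
            self.lba_to_cid[dst_base + i] = cid
            pba, refs = self.cid_to_block[cid]
            self.cid_to_block[cid] = (pba, refs + 1)

store = ContentStore()
store.write(0, b"hello")
store.clone([0], dst_base=100)   # clone of LBA 0 at LBA 100; refcount -> 2
```

The golden-image optimization in the abstract (decrementing a clone credit instead of incrementing refcounts) is not modeled here.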
20210286721 | CONTROLLER AND OPERATING METHOD THEREOF - A controller controls an operation of a semiconductor memory device. The controller includes a request analyzer, a storage, and a garbage collection controller. The request analyzer generates invalid data information, based on an erase request received from a host. The storage stores a garbage collection reference table representing memory blocks excluded from selection as a victim block on which a garbage collection operation is to be performed, based on the invalid data information. The garbage collection controller controls the garbage collection operation on the semiconductor memory device, based on exclusion block information generated according to the garbage collection reference table. | 2021-09-16 |
20210286722 | CONFIGURABLE TRIM SETTINGS ON A MEMORY DEVICE - The present disclosure includes apparatuses and methods related to configurable trim settings on a memory device. An example apparatus can include configuring a set of trim settings for an array of memory cells such that the array of memory cells have desired operational characteristics in response to being operated with the set of trim settings. | 2021-09-16 |
20210286723 | INDICATING EXTENTS OF TRACKS IN MIRRORING QUEUES BASED ON INFORMATION GATHERED ON TRACKS IN EXTENTS IN CACHE - Provided are a computer program product, system, and method for indicating extents of tracks in mirroring queues based on information gathered on tracks in extents in cache. Extent information on an extent of tracks in a cache indicated in an active cache list is processed in response to destaging a track from the active cache list to add to a demote list used to determine tracks to remove from the cache. The extent information is related to a number of modified tracks in an extent destaged from the active cache list. The extent information for the extent is used to determine one of a plurality of mirroring queues to indicate the extent including modified tracks. A mirroring queue having a higher priority than another mirroring queue is processed at a higher rate to determine extents of tracks to mirror from the cache to the secondary storage. | 2021-09-16 |
20210286724 | DATA CACHE WITH HYBRID WRITEBACK AND WRITETHROUGH - Described is a data cache implementing hybrid writebacks and writethroughs. A processing system includes a memory, a memory controller, and a processor. The processor includes a data cache including cache lines, a write buffer, and a store queue. The store queue writes data to a hit cache line and an allocated entry in the write buffer when the hit cache line is initially in at least a shared coherence state, resulting in the hit cache line being in a shared coherence state with data and the allocated entry being in a modified coherence state with data. The write buffer requests and the memory controller upgrades the hit cache line to a modified coherence state with data based on tracked coherence states. The write buffer retires the data upon upgrade. The data cache writebacks the data to memory for a defined event. | 2021-09-16 |
20210286725 | INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MEMORY CONTROL PROGRAM, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM - An information processing apparatus including: a first management data storing region that stores a plurality of first links being provided one for each of multiple calculating cores and representing an order of migration of pages of a page group allocated to the calculating core among a plurality of the pages; a second management data storing region that stores a second link being provided for an operating system and managing a plurality of pages selected in accordance with the order of migration among the page group of the plurality of first links as a group of candidate pages to be migrated to the second memory; and a migration processor that migrates data of a page selected from the group of the second link from the first memory to the second memory. With this configuration, occurrence of a spinlock is reduced, so that the load on the processor is reduced. | 2021-09-16 |
20210286726 | TECHNIQUES FOR DETERMINING AND USING CACHING SCORES FOR CACHED DATA - Techniques for cache management may include: receiving pages of data having page scores, wherein each of the pages of data is associated with a corresponding one of the page scores, wherein the corresponding page score associated with a page of data is determined in accordance with one or more criteria including one or more of a deduplication score, a compression score, and a neighbor score that uses a popularity metric based on deduplication related criteria of neighboring pages of data; and storing the pages of data in a cache in accordance with the page scores. The cache may include buckets of pages where each bucket is associated with a different page size and all pages in the bucket are the different page size. The one or more criteria may also include an access score. The page scores may be based on multiple criteria that are weighted. | 2021-09-16 |
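The weighted multi-criteria score in 20210286726 reduces to a dot product of per-criterion scores and weights. A minimal sketch; the criterion names match the abstract, but the weight values are assumptions for illustration.

```python
# Assumed weights; the patent only says the criteria "may be weighted".
WEIGHTS = {"dedup": 0.4, "compression": 0.2, "neighbor": 0.2, "access": 0.2}

def page_score(scores):
    """Weighted combination of the per-criterion scores for one page.
    Missing criteria contribute zero."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

def cache_insert(buckets, page_size, page, scores):
    """Place a page in the bucket for its size, keyed by its score;
    each bucket holds pages of a single size, as in the abstract."""
    buckets.setdefault(page_size, []).append((page_score(scores), page))
```

A page scoring high on deduplication and access popularity would then be preferred for retention over one scoring high on a single criterion.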
20210286727 | DYNAMIC RANDOM ACCESS MEMORY (DRAM) WITH SCALABLE META DATA - A memory is described. The memory includes row buffer circuitry to store a page. The page is divided into sections, wherein, at least one of the sections of the page is to be sequestered for the storage of meta data, and wherein, a first subset of column address bits is to: 1) define a particular section of the page, other than the at least one sequestered sections of the page, whose data is targeted by a burst access; and, 2) define a field within the at least one of the sequestered sections of the page that stores meta data for the particular section. | 2021-09-16 |
20210286728 | CACHE AND I/O MANAGEMENT FOR ANALYTICS OVER DISAGGREGATED STORES - Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment. | 2021-09-16 |
20210286729 | USING A MIRRORING CACHE LIST TO DEMOTE MODIFIED TRACKS FROM CACHE - Provided are a computer program product, system, and method for using a mirroring cache list to demote modified tracks from cache. A modified track for a primary storage stored in the cache to mirror to a secondary storage is indicated in a mirroring cache list. The mirroring cache list is processed to select modified tracks in the cache to transfer to the secondary storage that have not yet been transferred. The selected modified tracks in the cache are transferred to the secondary storage. The mirroring cache list is processed to determine modified tracks in the cache to demote from the cache. | 2021-09-16 |
20210286730 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR MANAGING CACHE - Techniques for cache management involve accessing, when a first data block to be accessed is missing in a first cache, the first data block from a storage device storing the first data block; selecting, when the first cache is full and based on a plurality of parameters associated with a plurality of eviction policies, an eviction policy for evicting a data block in the first cache from the plurality of eviction policies, the plurality of parameters indicating corresponding possibilities that the plurality of eviction policies are selected; evicting a second data block in the first cache to a second cache based on the selected eviction policy, the second cache being configured to record the data block evicted from the first cache; and caching the accessed first data block in the first cache. Such techniques can improve the cache hit rate, thereby improving the access performance of a system. | 2021-09-16 |
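The mechanism in 20210286730 is a two-level cache where the eviction policy itself is chosen probabilistically from a set of candidates. A toy sketch under assumed names; the single FIFO policy and the probability weights are illustrative stand-ins for the plurality of policies the abstract describes.

```python
import random

def access(key, l1, l2, backing, capacity, policies, weights):
    """Read `key` through L1; blocks evicted from L1 are recorded in L2."""
    if key in l1:
        return l1[key]                       # L1 hit
    value = l2.pop(key) if key in l2 else backing[key]
    if len(l1) >= capacity:
        # Select an eviction policy with probability given by its parameter.
        policy = random.choices(policies, weights=weights)[0]
        victim = policy(l1)                  # the policy names the victim key
        l2[victim] = l1.pop(victim)          # evict into the second cache
    l1[key] = value
    return value

fifo = lambda cache: next(iter(cache))       # evict the oldest-inserted key

backing = {i: i * 10 for i in range(5)}
l1, l2 = {}, {}
for k in (0, 1, 2, 0):
    access(k, l1, l2, backing, capacity=2, policies=[fifo], weights=[1.0])
# Block 0 was evicted to L2 on the miss for 2, then promoted back on re-access.
```

Tracking which policy's evictions get re-accessed from L2 is what would let the weights adapt toward the policy with the better hit rate.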
20210286731 | MEMORY ACCESS COLLISION MANAGEMENT ON A SHARED WORDLINE - A processing device in a memory sub-system sends a program command to the memory device to cause the memory device to initiate a program operation on a corresponding wordline and sub-block of a memory array of the memory device. The processing device further receives a request to perform a read operation on data stored on the wordline and sub-block of the memory array, sends a suspend command to the memory device to cause the memory device to suspend the program operation, reads data corresponding to the read operation from a page cache of the memory device, and sends a resume command to the memory device to cause the memory device to resume the program operation. | 2021-09-16 |
20210286732 | MULTI-WAY CACHE MEMORY ACCESS - A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges the CPU instructions are stored in. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array, and a plurality of tag peripheral circuits communicatively connected with the tag memory array. | 2021-09-16 |
20210286733 | Memory Sharing Via a Unified Memory Architecture - A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table. | 2021-09-16 |
20210286734 | PERSISTENT READ CACHE IN A SCALE OUT STORAGE SYSTEM - Methods, apparatuses, systems, and media for implementing a persistent read cache in a scale out storage system are disclosed to reduce access latency and achieve higher performance. Both the cached data blocks and distributed data placements are referenced by their unique content identifiers and are deduplicated. The persistent read cache spans across node reboots and is inherently coherent across all storage nodes without a distributed lock manager. The cached data blocks share the same storage pool as distributed data placements without costing storage capacity. A cached data block can become a distributed data placement or vice versa without moving the physical data block. Methods are also disclosed to reduce time to performance for logical device mobility. | 2021-09-16 |
20210286735 | INFORMATION PROCESSING APPARATUS - Each of a plurality of IC chips, which are connected in series, is configured such that each IC chip can access the entire memory space of each of the other IC chips. | 2021-09-16 |
20210286736 | SYSTEMS AND METHODS FOR SECURING PROTECTED ITEMS IN MEMORY - System, methods, and other embodiments described herein relate to improving security of protected values in a memory. In one embodiment, a method includes, in response to receiving a write request indicating at least an item and a write value to write into the memory, determining whether a protected items list (PIL) indicates that the item is protected. The method includes replacing the write value of the write request with a protected value from the PIL that corresponds with the item when the item is listed in the PIL as being protected. The method further includes executing the write request to the memory. | 2021-09-16 |
20210286737 | APPARATUSES AND METHODS FOR SECURING AN ACCESS PROTECTION SCHEME - A device includes a memory. The device also includes a controller. The controller includes a register configured to store an indication of whether an ability of a received command to alter an access protection scheme of the memory is enabled. The received command may alter the access protection scheme of the memory responsive to the indication. | 2021-09-16 |
20210286738 | SEMICONDUCTOR DEVICE WITH SECURE ACCESS KEY AND ASSOCIATED METHODS AND SYSTEMS - Memory devices, systems including memory devices, and methods of operating memory devices are described, in which security measures may be implemented to control access to a fuse array (or other secure features) of the memory devices based on a secure access key. In some cases, a customer may define and store a user-defined access key in the fuse array. In other cases, a manufacturer of the memory device may define a manufacturer-defined access key (e.g., an access key based on fuse identification (FID), a secret access key), where a host device coupled with the memory device may obtain the manufacturer-defined access key according to certain protocols. The memory device may compare an access key included in a command directed to the memory device with either the user-defined access key or the manufacturer-defined access key to determine whether to permit or prohibit execution of the command based on the comparison. | 2021-09-16 |
20210286739 | SEPARATE INTER-DIE CONNECTORS FOR DATA AND ERROR CORRECTION INFORMATION AND RELATED SYSTEMS, METHODS, AND APPARATUSES - Separate inter-die connectors for data and error correction information and related systems, methods, and devices are disclosed. An apparatus includes a master die, a target die including data storage elements, inter-die data connectors, and inter-die error correction connectors. The inter-die data connectors electrically couple the master die to the target die. The inter-die data connectors are configured to conduct data between the master die and the target die. The inter-die error correction connectors electrically couple the master die to the target die. The inter-die error correction connectors are separate from the inter-die data connectors. The inter-die error correction connectors are configured to conduct error correction information corresponding to the data between the master die and the target die. | 2021-09-16 |
20210286740 | IN-LINE MEMORY MODULE (IMM) COMPUTING NODE WITH AN EMBEDDED PROCESSOR(S) TO SUPPORT LOCAL PROCESSING OF MEMORY-BASED OPERATIONS FOR LOWER LATENCY AND REDUCED POWER CONSUMPTION - In-line memory module (IMM) computing nodes with an embedded processor(s) to support local processing of memory-based operations for lower latency and reduced power consumption, and related methods are disclosed. The IMM computing node includes one or more memory chips mounted on a circuit board. The IMM computing node also includes one or more embedded processor(s) on the circuit board that are each interfaced to at least one memory chip among the one or more memory chips. The processor(s) can be configured to access its interfaced memory chip(s) through an internal memory bus on the circuit board to perform processing onboard the IMM computing node in an offload computing access mode. The embedded processor(s) can also be configured to forward memory access requests received from an external processor to the memory chip(s) for data storage and retrieval in a transparent access mode without further local processing of the memory access requests. | 2021-09-16 |
20210286741 | SYMBOLIC NAMES FOR NON-VOLATILE MEMORY EXPRESS (NVME) ELEMENTS IN AN NVME-OVER-FABRICS (NVME-OF) SYSTEM - Presented herein are embodiments for providing and using a symbolic name for referencing an element of a non-volatile memory express (NVMe) entity in an NVMe-over-Fabric (NVMe-oF) environment. In one or more embodiments, the symbolic name may be used to identify an element of an NVMe host or NVM subsystem in one or more processes. In one or more embodiments, a symbolic name may be provided as part of a registration process. Symbolic names may be used for identifying elements when performing other processes, such as masking and zoning for granting access rights. In one or more embodiments, a symbolic name may be shared by two or more elements. | 2021-09-16 |
20210286742 | Single Command for Reading then Clearing a Memory Buffer - An example printing method can involve a memory buffer of a printing system containing image data, and the method can include (i) issuing, by an initiator of the printing system, a single read-then-clear memory command; (ii) receiving, by a memory controller of the printing system, the single read-then-clear memory command; and (iii) in response to receiving the single read-then-clear memory command, the memory controller both (a) reading the image data from the memory buffer of the printing system and (b) after reading the image data, clearing the image data from the memory buffer of the printing system. | 2021-09-16 |
20210286743 | MEMORY SYSTEM AND INFORMATION PROCESSING SYSTEM - A memory system includes a connector including a terminal, and a controller configured to perform a single-line bidirectional communication with a host via a signal line connected to the terminal. A format of a signal communicated via the single-line bidirectional communication includes a start pulse at a first level, a stop pulse at a second level, data pulses at the second level, and division pulses at the first level. The data pulses are after the start pulse but before the stop pulse. Each of the data pulses has a pulse width corresponding to a data value represented thereby. The division pulses have a uniform pulse width. A pulse width of the start pulse is greater than the uniform pulse width of the divisional pulses. A pulse width of the stop pulse is greater than any pulse width of the data pulses. | 2021-09-16 |
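The single-line format in 20210286743 can be modeled as a sequence of (level, width) pulses: a wide start pulse, data pulses whose width encodes their value, uniform division pulses between them, and a stop pulse wider than any data pulse. The concrete widths below are assumptions chosen to satisfy the abstract's constraints, not values from the patent.

```python
# Assumed widths: start > division width; stop > any data-pulse width.
START_W, DIV_W, STOP_W = 8, 1, 6

def encode_frame(values):
    """Encode data values (0..4 with these widths) as (level, width) pulses."""
    frame = [("first", START_W)]             # start pulse at the first level
    for v in values:
        frame.append(("second", v + 1))      # data pulse width encodes value
        frame.append(("first", DIV_W))       # uniform division pulse
    frame.append(("second", STOP_W))         # stop pulse at the second level
    return frame

def decode_frame(frame):
    """Recover the data values from the second-level pulses between
    the start and stop pulses."""
    return [w - 1 for level, w in frame[1:-1] if level == "second"]

print(decode_frame(encode_frame([0, 3, 4])))  # -> [0, 3, 4]
```

Because the stop pulse is wider than any data pulse and the start pulse wider than the divisions, a receiver can delimit frames from pulse widths alone, with no separate clock line.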
20210286744 | PROGRAMMABLE INPUT/OUTPUT PORT - A system manages communication between a host device and an end device. The system includes a programmable input/output (I/O) port associated with the host device. The host device is connectable through the programmable I/O port and a cable to a plurality of different types of end devices that are respectively associated with different types of protocols. The system further includes a port manager to detect a signal from an end device interface associated with the end device and determine a type of the end device based on the detected signal. The port manager directs the programmable I/O port to present signals that correspond to a protocol associated with the determined type of the end device to allow the host device to communicate with the end device. | 2021-09-16 |
20210286745 | DISCOVERY CONTROLLER REGISTRATION OF NON-VOLATILE MEMORY EXPRESS (NVMe) ELEMENTS IN AN NVME-OVER-FABRICS (NVMe-oF) SYSTEM - Presented herein are embodiments for registering elements of a non-volatile memory express (NVMe) entity in an NVMe-over-Fabric (NVMe-oF) environment. In embodiments, a method for registering with a centralized storage fabric service component via a discovery controller (DC) of the centralized service comprises transmitting a DC registration command to the DC. In embodiments, the DC registration command includes a number of registration entries that the NVMe entity will be submitting for registration. In embodiments, the identified number of NVMe registration entries are transmitted to the centralized service and are stored in a registry. The NVMe registration entry may include an entry type for indicating an NVMe registration entry type, an NVMe qualified name (NQN) for identifying the NVMe entity, and a transport address for specifying an address of the element of the NVMe entity. Other NVMe entities may query the registry to obtain information about NVMe elements in the system. | 2021-09-16 |
20210286746 | SEMICONDUCTOR MEMORY DEVICE - According to one embodiment, a semiconductor memory device includes a first string including a first memory cell transistor and a second memory cell transistor which are coupled in series, a first switch element, a first latch circuit coupled in series between a first end of the first string and a first end of the first switch element, and a second switch element and a third switch element coupled in parallel between a second end of the first switch element and a data bus. | 2021-09-16 |
20210286747 | SYSTEMS AND METHODS FOR SUPPORTING INTER-CHASSIS MANAGEABILITY OF NVME OVER FABRICS BASED SYSTEMS - A data storage system includes: a plurality of Ethernet solid-state drive (SSD) chassis including at least one switching Ethernet SSD chassis and one or more switchless Ethernet SSD chassis. The at least one switching Ethernet SSD chassis comprises an Ethernet switch, a first baseboard management controller (BMC), and a first management local area network (LAN) port. At least one of the one or more switchless Ethernet SSD chassis comprises an Ethernet repeater, a second BMC, and a second management LAN port. The first management LAN port of the at least one switching Ethernet SSD chassis and the second management LAN port are connected. The first BMC collects status of the at least one of the one or more switchless Ethernet SSD chassis from the second BMC via a connection between the first management LAN port and the second management LAN port and provides device information of the at least one of the one or more switchless Ethernet SSD chassis and the at least one switching Ethernet SSD chassis to a system administrator. | 2021-09-16 |
20210286748 | SINGLE-PAIR TO MULTI-PAIR ETHERNET CONVERTER - Disclosed are embodiments that provide digital data communication between a single-pair Ethernet and a multi-pair Ethernet. Some embodiments include a single-pair Ethernet interface that is configured to operate in at least two modes. In a first mode, the single-pair Ethernet interface operates in a conventional manner. In a second mode, alternate pin configurations are employed to provide a low-cost interoperability between a single-pair Ethernet interface and a multi-pair Ethernet interface. For example, in the second mode, the single-pair Ethernet interface receives, via a first receive data pin, from a first transmit data pin of the multi-pair Ethernet interface, a first data signal, and receives, via a second receive data pin, from a second transmit data pin of the multi-pair Ethernet interface, a second data signal. | 2021-09-16 |
20210286749 | METHODS WITH PLUGGABLE TIME SIGNAL ADAPTER MODULES FOR SELECTING A TIME REFERENCE - A small form-factor pluggable (SFP) time signal adapter module includes a printed circuit board, a cable connector mounted to the printed circuit board, and a differential receiver coupled to the cable connector, one or more of the plurality of wire traces, and an SFP edge connector. The printed circuit board has a plurality of wire traces and a plurality of pads of the SFP edge connector is at least coupled to two of the plurality of wire traces. The cable connector is coupled to at least one or more of the plurality of wire traces. The cable connector couples to a connector of a cable to receive a differential time reference signal. The differential receiver receives and differentiates the differential time reference signal to generate a single ended time reference signal that is coupled to a pad of the SFP edge connector. | 2021-09-16 |
20210286750 | READ OPERATION CIRCUIT, SEMICONDUCTOR MEMORY, AND READ OPERATION METHOD - Embodiments provide a read operation circuit, a semiconductor memory, and a read operation method. The read operation circuit includes: a data determination module configured to read read data from a memory bank, and determine whether to invert the read data according to the number of bits of low data in the read data to output global bus data for transmission through a global bus and inversion flag data for transmission through an inversion flag signal line; a data receiving module configured to determine whether to invert the global bus data according to the inversion flag data to output cache data; a parallel-to-serial conversion circuit configured to perform parallel-to-serial conversion on the cache data to generate output data of the DQ port; and a precharge module configured to set an initial state of the global bus to High. | 2021-09-16 |
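The inversion decision in 20210286750 is a data-bus-inversion scheme: since the global bus is precharged High, a read word carrying more low bits than half its width is cheaper to transmit inverted, with a flag telling the receiver to invert it back. A sketch assuming an 8-bit bus width; the width and the strict majority threshold are illustrative.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def encode(word):
    """Return (bus_data, inversion_flag) for one read word.
    Invert when low bits are in the strict majority."""
    word &= MASK
    zeros = WIDTH - bin(word).count("1")
    if zeros > WIDTH // 2:
        return (~word) & MASK, 1    # transmit inverted, flag set
    return word, 0

def decode(bus_data, inversion_flag):
    """Receiver side: undo the inversion when the flag is set."""
    return (~bus_data) & MASK if inversion_flag else bus_data

data, flag = encode(0b0000_0001)    # 7 low bits -> inverted: 0b1111_1110, flag 1
```

Fewer low bits on a precharged-High bus means fewer lines to discharge per transfer, which is the power saving the scheme targets.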
20210286751 | Method For The Assignment Of Addresses By A Master Unit To A Number Of Slave Units - A method assigns addresses from a master unit to a number of slave units. The slave units are connected to the master unit for the transmission of information. An address output of the master unit is connected to an address input of a first slave unit and the address output of an n-th slave unit is connected to the address input of an n+1-th slave unit. The slave units, when a first level is applied to their address input, set the level at their address output to the first level and change to the first, "non-addressed" state; in the event of a transition of the level at their address input from the first level to a second level, they change to the "addressable" state; upon receiving an address from the master unit, they check the received address for validity in the "addressable" state and acknowledge the reception to the master unit. | 2021-09-16 |
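The daisy-chain in 20210286751 can be simulated as a toy state machine: each slave becomes addressable only after the level edge on its address input, so the master addresses exactly one un-addressed slave at a time down the chain. Class and method names are illustrative, not from the patent.

```python
class Slave:
    """Toy model of one daisy-chained slave unit."""
    def __init__(self):
        self.state = "non-addressed"
        self.address = None

    def edge_on_input(self):
        # Transition of the address input from the first to the second
        # level makes this slave addressable.
        self.state = "addressable"

    def receive(self, address):
        # In the "addressable" state, accept a valid address and acknowledge.
        if self.state == "addressable" and address is not None:
            self.address = address
            self.state = "addressed"
            return True          # acknowledgement back to the master
        return False

def assign_addresses(slaves, start=1):
    """Master side: walk the chain, addressing one slave per edge."""
    for n, slave in enumerate(slaves):
        slave.edge_on_input()
        acked = slave.receive(start + n)
        assert acked             # master expects an acknowledgement
    return [s.address for s in slaves]

print(assign_addresses([Slave(), Slave(), Slave()]))  # -> [1, 2, 3]
```

Because each slave forwards the level to its successor only as the protocol progresses, no slave needs a pre-configured address or ID pins.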
20210286752 | TECHNIQUES TO TRANSFER DATA AMONG HARDWARE DEVICES - Apparatuses, systems, and techniques to route data transfers between hardware devices. In at least one embodiment, a path over which to transfer data from a first hardware component of a computer system to a second hardware component of a computer system is determined based, at least in part, on one or more characteristics of different paths usable to transfer the data. | 2021-09-16 |
20210286753 | IMAGE SENSOR - [Problem to be Solved] To provide a communication device and a communication system that each enable transmission of a command and data of I | 2021-09-16 |
20210286754 | Method, Apparatus And System For Dynamic Control Of Clock Signaling On A Bus - In an embodiment, a host controller includes a clock control circuit to cause the host controller to communicate a clock signal on a clock line of an interconnect, the clock control circuit to receive an indication that a first device is to send information to the host controller and to dynamically release control of the clock line of the interconnect to enable the first device to drive a second clock signal onto the clock line of the interconnect for communication with the information. Other embodiments are described and claimed. | 2021-09-16 |
20210286755 | HIGH PERFORMANCE PROCESSOR - Implementations relate to a data processor that includes a data processing unit having a plurality of processing elements and a cache hierarchy including a plurality of levels of data caches. The data caches include a first level data cache connected to a second level data cache, and a main memory connected to the highest level cache of the cache hierarchy. At least one of the first level data cache or second level data cache is divided into a plurality of cache segments, and during operation of the data processor, at least some of the plurality of cache segments are excluded from cache operation. Each of the excluded cache segments is dedicated to an associated processing element as tightly coupled local access memory. | 2021-09-16 |
20210286756 | EXECUTION ENGINE FOR EXECUTING SINGLE ASSIGNMENT PROGRAMS WITH AFFINE DEPENDENCIES - The execution engine is a new organization for a digital data processing apparatus, suitable for highly parallel execution of structured fine-grain parallel computations. The execution engine includes a memory for storing data and a domain flow program, a controller for requesting the domain flow program from the memory, and further for translating the program into programming information, a processor fabric for processing the domain flow programming information and a crossbar for sending tokens and the programming information to the processor fabric. | 2021-09-16 |
20210286757 | LOGICAL PATHS FOR UNIFIED FILE AND BLOCK DATA STORAGE - A file storage application that processes file operations is communicably connected with a block storage application that processes block operations by establishing multiple communication sessions between the file storage application and the block storage application. Multiple logical volumes provided by the block storage application are exposed to the file storage application over the multiple communication sessions established between the file storage application and the block storage application using a total number of logical paths to the logical volumes that is equivalent to the total number of the logical volumes provided by the block storage application to the file storage application. | 2021-09-16 |
20210286758 | DEVICE TO DEVICE MIGRATION IN A UNIFIED ENDPOINT MANAGEMENT SYSTEM - Described herein are example methods and systems for enrolling a user device with a unified endpoint management system (“UEMS”) directly from another user device. The examples describe a first user device that is already enrolled with the UEMS and a second user device that is seeking to be enrolled. The two user devices can establish a direct connection with each other. The second user device can be authenticated by a user inputting the same migration password or PIN at both user devices. The first user device can generate and send a migration data file to the second user device. The migration data file can include settings, policies, software packages, and files managed by the UEMS. The second user device can copy the settings, policies, and files, and install the applications from the migration data file. The second user device can notify a UEMS server of the device migration. | 2021-09-16 |
20210286759 | SYSTEMS AND METHODS FOR A SPECIALIZED COMPUTER FILE SYSTEM - A computer file system for managing data storage resources is provided. The system comprises a storage server configured to receive at least one data file from a client application, modify the file name to include an expiration stamp, upload the at least one data file to a data storage device, generate a file link associated with the at least one data file, and transmit the file link to the client application, wherein the at least one data file is retrievable by the end user via the file link. A maintenance server is communicatively coupled to the data storage device, the maintenance server configured to execute an erase operation to autonomously erase the at least one data file from the data storage device based on the expiration stamp. | 2021-09-16 |
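The filename-embedded expiration stamp and the maintenance server's erase pass could look roughly like the sketch below. The `.exp<epoch>` suffix format and all function names are illustrative assumptions, not details from the application:

```python
import re

def stamp_filename(name, ttl_seconds, now):
    """Embed an expiration timestamp in the stored file name."""
    return f"{name}.exp{now + ttl_seconds}"

def expired(stored_name, now):
    """Parse the stamp back out and compare it to the current time."""
    m = re.search(r"\.exp(\d+)$", stored_name)
    return bool(m) and int(m.group(1)) <= now

def sweep(stored_names, now):
    """Maintenance pass: autonomously drop files whose stamp has passed."""
    return [f for f in stored_names if not expired(f, now)]

files = [stamp_filename("report.pdf", 100, now=0),
         stamp_filename("draft.txt", 900, now=0)]
remaining = sweep(files, now=500)   # only the longer-lived file survives
```

Putting the stamp in the name itself means the sweep needs only a directory listing, not a separate metadata lookup per file.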
20210286760 | MANAGING SNAPSHOTS STORED LOCALLY IN A STORAGE SYSTEM AND IN CLOUD STORAGE UTILIZING POLICY-BASED SNAPSHOT LINEAGES - An apparatus includes a processing device configured to identify a snapshot policy for creating a snapshot lineage comprising snapshots of a storage volume comprising data stored on a storage system, the snapshot lineage comprising (i) a local snapshot lineage stored on the storage system and (ii) at least one cloud snapshot lineage stored on cloud storage. The processing device is also configured to generate snapshots of the storage volume in accordance with the snapshot policy, to store the snapshots in the local snapshot lineage, and to copy snapshots from the local snapshot lineage to the at least one cloud snapshot lineage in accordance with the at least one snapshot policy. The processing device is further configured to provide an interface for managing the snapshot lineage by accessing, from the storage system, snapshots of the storage volume in the local snapshot lineage and the at least one cloud snapshot lineage. | 2021-09-16 |
20210286761 | GENERATING CONFIGURATION DATA ENABLING REMOTE ACCESS TO PORTIONS OF A SNAPSHOT LINEAGE COPIED TO CLOUD STORAGE - An apparatus comprises at least one processing device configured to select a snapshot lineage comprising one or more snapshots of a storage volume comprising data stored on one or more storage devices of a storage system, the snapshot lineage comprising at least one cloud snapshot lineage, the at least one cloud snapshot lineage comprising at least a subset of the one or more snapshots of the storage volume that have been copied to cloud storage of at least one cloud external to the storage system. The at least one processing device is also configured to generate configuration data for accessing the at least one cloud snapshot lineage. The at least one processing device is further configured to transfer the configuration data to at least one additional processing device to enable the at least one additional processing device to access the at least one cloud snapshot lineage. | 2021-09-16 |
20210286762 | Snapshot Management in Partitioned Storage - The present disclosure generally relates to a storage snapshot management system. When updated data is written to the memory device, rather than rewriting all of the data, only the updated data is written to a new namespace. A snapshot of the new namespace indicates which LBAs in the new namespace contain data. New namespaces are added each time data is updated. When the updated data is to be read, the data storage device reads the updated LBA from the new namespace, and also gathers the non-updated data from the previous namespace. Eventually, the number of namespaces for the data reaches a threshold, and thus some namespaces need to be evicted. To evict a namespace, the updated data in the namespace is moved to a different namespace, or the non-updated data is moved to a namespace that contains updated data. In either case, the now unused namespaces are evicted. | 2021-09-16 |
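The read path and one of the two eviction moves described above can be sketched with plain dictionaries standing in for namespaces (each holding only the LBAs written in that update). This is a simplified model under that assumption, not the device's actual layout:

```python
def read_lba(lba, namespaces):
    """namespaces is ordered newest-first. A read takes the newest copy
    of the LBA and falls back to older namespaces for untouched LBAs."""
    for ns in namespaces:
        if lba in ns:
            return ns[lba]
    raise KeyError(lba)

def evict_namespace(namespaces):
    """Fold the newest update down into the previous namespace so the
    now unused namespace can be evicted (one of the two moves above)."""
    newest = namespaces.pop(0)
    namespaces[0].update(newest)
    return namespaces

ns_new = {2: "B2"}               # only LBA 2 was rewritten
ns_old = {1: "A1", 2: "B1"}      # original data
latest = read_lba(2, [ns_new, ns_old])     # newest copy of LBA 2
fallback = read_lba(1, [ns_new, ns_old])   # non-updated LBA from older namespace
merged = evict_namespace([{2: "B2"}, {1: "A1", 2: "B1"}])
```

The snapshot per namespace lets the read skip namespaces that never touched the requested LBA.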
20210286763 | SUGGESTING A DESTINATION FOLDER FOR A FILE TO BE SAVED - A computer-implemented method according to one embodiment includes determining a starting folder within a file system, computing, for each child folder of the starting folder, a similarity metric indicating a level of similarity to a file, selecting two child folders of the starting folder having greatest similarity metrics, comparing a difference between the greatest similarity metrics of the two child folders to a predetermined threshold, and conditionally selecting the starting folder as a recommended folder to which the file is saved, based on the comparing. | 2021-09-16 |
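The conditional selection step above reduces to comparing the gap between the two best child scores against a threshold. A minimal sketch, with the similarity metric supplied as a precomputed mapping (an assumption; the application computes it per child):

```python
def recommend_folder(starting_folder, child_scores, threshold):
    """child_scores maps each child folder to its similarity metric for
    the file. When the two best children are nearly tied, the choice is
    ambiguous, so the starting folder itself is recommended instead."""
    ranked = sorted(child_scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) < 2:
        return ranked[0][0] if ranked else starting_folder
    (best, s1), (_, s2) = ranked[:2]
    return starting_folder if s1 - s2 < threshold else best

clear = recommend_folder("Documents", {"Taxes": 0.9, "Recipes": 0.2}, 0.1)
tied = recommend_folder("Documents", {"Taxes": 0.9, "Work": 0.85}, 0.1)
```

In a full implementation the clear-winner case would presumably recurse into the best child, repeating the test one level down.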
20210286764 | TECHNOLOGIES FOR INTEGRATING CLOUD CONTENT ITEMS ACROSS PLATFORMS - An example method can include storing, on a CSM, a first content item and representations of second and third content items, the second content item having content/features enabled by a cloud service and designed for access through a native online application and the third content item having content/features supported by a local application and having additional features designed for access through a cloud service and native online application; when the first content item is invoked, presenting the content/features of the first content item; in response to a request to access the representation of the second or third content item, sending, to a cloud service, a request for the additional features of the third content item or the content/features of the second content item; and based on metadata received from a cloud service, providing the additional features/content of the third content item or the content/features of the second content item. | 2021-09-16 |
20210286765 | COMPUTER SYSTEM, FILE STORAGE AND DATA TRANSFER METHOD - To reduce the data transfer amount required for a byte-level transfer of difference data, and to avoid increases in management data and in the number of sessions at the time of a byte-level transfer of differences. | 2021-09-16 |
20210286766 | VALIDATING STORAGE VIRTUALIZATION METADATA SUPPORTING REDIRECTION - A technique for validating metadata includes creating log entries for virtualization structures pointed to by mapping pointers in a mapping tree and processing the log entries in multiple passes. A current pass validates a current level of redirection and creates new log entries to be processed during a next pass. The new log entries represent a next level of redirection, and as many next passes are processed in sequence as there are next levels of redirection. | 2021-09-16 |
20210286767 | ARCHITECTURE, METHOD AND APPARATUS FOR ENFORCING COLLECTION AND DISPLAY OF COMPUTER FILE METADATA - Disclosed is a system for displaying and capturing file metadata of an application data file stored on a computer; said computer including a processor and memory; said memory storing an operating system for managing operations of the computer; said memory further storing a kernel of the operating system; said computer further including a user interface and at least one installed application installed on said computer for interacting with a user which, when executed by the processor under the control of the operating system, processes said application data file. | 2021-09-16 |
20210286768 | TECHNIQUES FOR DATA DEDUPLICATION - Techniques for processing data may include: receiving a data block stored in a data set, wherein a hash value is derived from the data block; determining, in accordance with selection criteria, whether the hash value is included in a subset; responsive to determining the hash value is included in the subset, performing processing that updates a table in accordance with the hash value and the data set, and determining, in accordance with the information in the table, whether to perform deduplication processing for the data block to determine whether the data block is a duplicate of another stored data block. The table may include an entry for the hash value. The entry may include information identifying data sets referencing the data block and, for each of the data sets, may specify a reference count denoting a number of times the data set references the data block. | 2021-09-16 |
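The sampled-digest table above might look like the following sketch. The selection criteria here (a modulus on the digest's low bits) is purely a stand-in for whatever criteria the application actually uses, and the table layout is an assumption:

```python
from collections import defaultdict

SAMPLE_BITS = 4  # track roughly 1 in 2**SAMPLE_BITS digests

def in_subset(digest):
    """Stand-in selection criteria: keep digests whose low bits are zero."""
    return digest % (1 << SAMPLE_BITS) == 0

table = defaultdict(dict)  # digest -> {data set id: reference count}

def observe(digest, data_set):
    """Update the table only for digests in the sampled subset, recording
    which data sets reference the block and how many times."""
    if in_subset(digest):
        counts = table[digest]
        counts[data_set] = counts.get(data_set, 0) + 1

observe(32, "dsA"); observe(32, "dsA"); observe(32, "dsB"); observe(33, "dsA")
```

Restricting the table to a subset keeps its memory footprint bounded while still giving deduplication processing a signal about which blocks are widely shared.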
20210286769 | SYSTEM AND METHODS FOR IMPLEMENTING A SERVER-BASED HIERARCHICAL MASS STORAGE SYSTEM - Setting up and supporting the computer infrastructure for a remote satellite office is a difficult task for any information technology department. To simplify the task, an integrated server system with a hierarchical storage system is proposed. The hierarchical storage system includes the ability to store data at an off-site cloud storage service. The server system is remotely configurable and thus allows the server to be configured and populated with data from a remote location. | 2021-09-16 |
20210286770 | SYSTEM AND METHODS FOR IMPLEMENTING A SERVER-BASED HIERARCHICAL MASS STORAGE SYSTEM - Setting up and supporting the computer infrastructure for a remote satellite office is a difficult task for any information technology department. To simplify the task, an integrated server system with a hierarchical storage system is proposed. The hierarchical storage system includes the ability to store data at an off-site cloud storage service. The server system is remotely configurable and thus allows the server to be configured and populated with data from a remote location. | 2021-09-16 |
20210286771 | Dropsite for Shared Content - Embodiments are provided for a dropsite. In some embodiments, information is received on a creation location and a date and time of creation of a content item, and a determination is made if (i) the date and time of creation is within a predefined span of time, and (ii) the creation location is within a predefined geographical area to permit association of the content item with a shared folder whose inclusion criteria match said date and time and geographic location. | 2021-09-16 |
20210286772 | TAPE UNMOUNTING PROTOCOL - Described are techniques for a tape unmounting protocol. The techniques include selecting a tape for unmounting from a plurality of tape drives, where the tape for unmounting includes a remaining capacity below a first threshold and a number of migrated files below a second threshold. The techniques further include unmounting the tape for unmounting from a tape drive. | 2021-09-16 |
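The two-threshold selection rule above is simple to express directly. A sketch, with the tape records modeled as dictionaries whose field names are invented for illustration:

```python
def select_for_unmount(tapes, capacity_threshold, migrated_threshold):
    """Pick tapes whose remaining capacity is below the first threshold
    and whose number of migrated files is below the second threshold."""
    return [t["id"] for t in tapes
            if t["remaining"] < capacity_threshold
            and t["migrated_files"] < migrated_threshold]

tapes = [
    {"id": "T1", "remaining": 5,   "migrated_files": 2},    # candidate
    {"id": "T2", "remaining": 500, "migrated_files": 2},    # still has room
    {"id": "T3", "remaining": 5,   "migrated_files": 40},   # too active
]
candidates = select_for_unmount(tapes, capacity_threshold=10,
                                migrated_threshold=10)
```

The rule favors unmounting tapes that are both nearly full and quiet, so drives are freed without interrupting active migration work.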
20210286773 | COMMUNICATION MANAGEMENT APPARATUS, COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND NON-TRANSITORY RECORDING MEDIUM - A communication management apparatus includes a memory and circuitry. The memory stores a hierarchical data structure in which each page of a plurality of pages forming a display screen shared by a plurality of communication terminals is associated with one or more objects included in the each page. The circuitry receives, from one communication terminal of the plurality of communication terminals, an operation request that requests an operation on a particular object of the one or more objects. When editing of the particular object and editing of data in a lower layer associated with a higher layer of the particular object are both allowed, the circuitry transmits a success notification to the one communication terminal to notify success of the operation on the particular object. | 2021-09-16 |
20210286774 | SYSTEM AND METHOD FOR INFORMATION STORAGE USING BLOCKCHAIN DATABASES COMBINED WITH POINTER DATABASES - A system and method for information storage using blockchain and pointer databases, comprising a computer with a blockchain manager and datastore manager, and blockchain data input, which connects over a network to a distributed blockchain ledger containing information such as personally-identifying data and a datastore system containing searchable information such as a DNS system on the persons entered into the blockchain, the datastore system also containing reference numbers for each searchable block in the blockchain, such that verification or identification can both be accomplished swiftly and securely of data in the blockchain such as for data verification to verify or identify persons submitting data to such a system. | 2021-09-16 |
20210286775 | DYNAMIC SELECTION OF DATA APPLY STRATEGY DURING DATABASE REPLICATION - Approaches presented herein enable replicating data records between a source database and a target database. More specifically, for a batch of change records in a table received from the source database, a first estimated replication duration needed to apply the batch as a bulk change to the target is determined. For the same batch, a second estimated replication duration needed to apply a set of changes in a single row of the table to the target is determined based on time penalties for each column in the row. A threshold quantity of rows at which the first duration equals a summed total of second durations for the quantity is calculated. The bulk change is selected if a number of rows in the batch exceeds the threshold. Applying change records singly is selected if the number of rows in the batch is less than the threshold. | 2021-09-16 |
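The break-even calculation described above can be sketched as follows, assuming the per-column time penalties and the bulk duration are already estimated (how they are estimated is not modeled here):

```python
def choose_apply_strategy(bulk_secs, column_penalties, batch_rows):
    """Per-row cost is the summed per-column time penalty; the break-even
    row count is where the bulk cost equals the summed row-by-row cost.
    Bulk apply wins only when the batch exceeds that threshold."""
    row_secs = sum(column_penalties)
    threshold_rows = bulk_secs / row_secs
    return "bulk" if batch_rows > threshold_rows else "row-by-row"

# With a 10 s bulk cost and 1 s per row, the break-even point is 10 rows.
big_batch = choose_apply_strategy(10.0, [0.5, 0.3, 0.2], batch_rows=100)
small_batch = choose_apply_strategy(10.0, [0.5, 0.3, 0.2], batch_rows=5)
```

Recomputing the threshold per batch lets the replicator adapt as table shape and batch size vary.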
20210286776 | APPARATUS, SYSTEMS, AND METHODS FOR CROWDSOURCING DOMAIN SPECIFIC INTELLIGENCE - The present disclosure provides apparatus, systems, and methods for crowdsourcing domain specific intelligence. The disclosed crowdsourcing mechanism can receive domain specific intelligence as a data processing rule module. For example, a data analytics system can request a crowd of software developers to provide a data processing rule module tailored to process a particular type of information from a particular domain. When the data analytics system receives the data processing rule module from one of the software developers for the particular domain, the data analytics system can use the received data processing rule module to process information associated with the particular domain. | 2021-09-16 |
20210286777 | DATA ACCESS AND RECOMMENDATION SYSTEM - System, method, and various embodiments for providing a data access and recommendation system are described herein. An embodiment operates by identifying a column access of one or more data values of a first column of a plurality of columns of a table of a database during a sampling period. A count of how many of the one or more data values are accessed during the column access are recorded. A first counter, corresponding to the first column and stored in a distributed hash table, is incremented by the count. The sampling period is determined to have expired. A load recommendation on how to load data values into the first column based on the first counter is computed. The load recommendation for implementation into the database for one or more subsequent column accesses is provided. | 2021-09-16 |
20210286778 | AUTOMATIC DRIFT DETECTION AND HANDLING - In various example embodiments, a system, computer readable medium, and method for a schema update engine that dynamically updates a target data storage system. Incoming data records are received. A front-end schema of the incoming data records is identified. The front-end schema and the current target schema are compared. Based on identifying a difference between the front-end schema and the current target schema, the current target schema is updated in order to be identical to the front-end schema. The current target data file is closed and the incoming data records are stored in a new target data file according to the updated target schema. | 2021-09-16 |
20210286779 | ASYNCHRONOUS PROCESSING OF LARGE-SCALE DATA IN A CLOUD COMPUTING ENVIRONMENT - System and methods are described for asynchronously processing large-scale data in a cloud computing environment. In one implementation, a method comprises receiving data from a plurality of data sources; aggregating the data within a data structure configured for managing large-scale data; identifying a plurality of data portions within the data structure; asynchronously processing a selected data portion based on at least one sub-process to generate at least one processed data object; and transmitting the at least one processed data object to a downstream process. | 2021-09-16 |
20210286780 | AUTO REINFORCED ANOMALY DETECTION - Examples of a data anomaly detection system are provided. The system may obtain a query and target data associated with a data anomaly detection requirement. The system may sort the target data into a plurality of data wedges comprising a plurality of events. The system may create a data pattern model for each of the plurality of data wedges. The system may identify a data threshold value and identify a data probity score for each of the plurality of events. The system may create a data probity index and identify a data anomaly cluster for the data pattern model. The system may generate a data anomaly detection result and initiate anomaly detection corresponding to the data anomaly detection requirement. The data anomaly detection result may include the data pattern model deficient of the data anomaly cluster relevant for resolution to the query. | 2021-09-16 |
20210286781 | SEGMENTED INDEX FOR DATA DEDUPLICATION - A deduplication index is generated having multiple entries, each entry storing a digest of a data block that was previously stored in non-volatile data storage together with a pointer to the location in non-volatile storage at which the data block was previously stored. The entries of the disclosed deduplication index are divided into multiple deduplication index segments. A resident subset of the deduplication index segments is stored in memory of the data storage system. A non-resident subset of the deduplication index segments is stored in non-volatile data storage of the data storage system. Data deduplication is performed for each subsequently received data block for which a digest is generated that matches any one of the digests in the entries of the deduplication index segments that are contained in the resident subset of the deduplication index segments. | 2021-09-16 |
20210286782 | DATA COMPLEMENTING SYSTEM AND DATA COMPLEMENTING METHOD - A data complementing system stores cell-region characteristic data that includes values of a plurality of data items regarding a cell region that is a region obtained by dividing the region into a mesh, information indicating a missing data item that is the data item of missing data being data missed in the cell-region characteristic data, external region characteristic data that includes values of a plurality of data items regarding an external region that is different from the region, and an external cell-region characteristic data that includes values of a plurality of data items regarding an external cell region obtained by dividing the external region into a mesh, generates a complement model for generating complement data indicating a value of the missing data item based on the external region characteristic data and the external cell-region characteristic data, and generates the complement data based on the complement model. | 2021-09-16 |
20210286783 | DEDUPLICATING DATA AT SUB-BLOCK GRANULARITY - A technique for performing data deduplication operates at sub-block granularity by searching a deduplication database for a match between a candidate sub-block of a candidate block and a target sub-block of a previously-stored target block. When a match is found, the technique identifies a duplicate range shared between the candidate block and the target block and effects persistent storage of the duplicate range by configuring mapping metadata of the candidate block so that it points to the duplicate range in the target block. | 2021-09-16 |
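Once the deduplication database reports a matching sub-block pair, the widest shared range can be found by extending the match in both directions. In this sketch each list element stands for one sub-block, and the function name and half-open range convention are illustrative assumptions:

```python
def find_duplicate_range(candidate, target, c_idx, t_idx):
    """Given a matching sub-block anchor (candidate[c_idx] == target[t_idx]),
    extend backward and forward to the widest duplicate range shared by
    the candidate block and the previously stored target block."""
    lo_c, lo_t = c_idx, t_idx
    while lo_c > 0 and lo_t > 0 and candidate[lo_c - 1] == target[lo_t - 1]:
        lo_c -= 1
        lo_t -= 1
    hi_c, hi_t = c_idx + 1, t_idx + 1
    while (hi_c < len(candidate) and hi_t < len(target)
           and candidate[hi_c] == target[hi_t]):
        hi_c += 1
        hi_t += 1
    return (lo_c, hi_c), (lo_t, hi_t)   # half-open sub-block ranges

candidate = list("abcdefgh")   # 8 sub-blocks of the candidate block
target = list("xbcdezzz")      # previously stored target block
ranges = find_duplicate_range(candidate, target, c_idx=2, t_idx=2)
```

The mapping metadata of the candidate block would then point at the discovered range in the target block instead of storing those sub-blocks again.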
20210286784 | EVALUATING QUERY PERFORMANCE - An approach is provided for evaluating a performance of a query. A risk of selecting a low performance access path for a query is determined. The risk is determined to exceed a risk threshold. Based on the risk exceeding the risk threshold and using a machine learning optimizer, first costs of access paths for the query are determined. Using a cost-based database optimizer, second costs of the access paths are determined. Using a strong classifier operating on the first costs and the second costs, a final access path for the query is selected from the access paths. | 2021-09-16 |
20210286785 | GRAPH-BASED APPLICATION PERFORMANCE OPTIMIZATION PLATFORM FOR CLOUD COMPUTING ENVIRONMENT - Some embodiments are associated with application performance optimization in a cloud computing environment. A transaction observer platform may receive transaction information associated with execution of an application in the cloud computing environment. A classifier recorder and tagger platform, coupled to the transaction observer platform, may then automatically tag the transaction information. A graph engine relation builder platform, coupled to the transaction observer platform and the classifier recorder and tagger platform, may receive the tagged transaction information and automatically create graph information that represents execution of the application. A recommendation engine platform, coupled to the graph engine relation builder platform, may then receive the graph information and automatically generate and transmit an application performance optimization recommendation. | 2021-09-16 |
20210286786 | DATABASE PERFORMANCE TUNING METHOD, APPARATUS, AND SYSTEM, DEVICE, AND STORAGE MEDIUM - A database performance tuning method is provided, including: receiving a performance tuning request of tuning a configuration parameter of a target database; obtaining a status indicator of the target database; and inputting the status indicator of the target database into a deep reinforcement learning model, and outputting a recommended configuration parameter of the target database. The deep reinforcement learning model includes a first deep reinforcement learning network and a second deep reinforcement learning network. The first deep reinforcement learning network is configured to provide a recommendation policy for outputting a recommended configuration parameter according to a status indicator, and the second deep reinforcement learning network is configured to evaluate the recommendation policy provided by the first deep reinforcement learning network. | 2021-09-16 |
20210286787 | SYSTEM AND METHOD FOR SLOWLY CHANGING DIMENSION AND METADATA VERSIONING IN A MULTIDIMENSIONAL DATABASE ENVIRONMENT - In accordance with an embodiment, described herein are systems and methods for supporting slowly changing dimensions and metadata versioning in a multidimensional database, comprising. A system can comprise a computer that includes one or more microprocessors, and a multidimensional database server executing on the computer, wherein the multidimensional database server supports at least one hierarchical structure of data dimensions. A data dimension can slowly change over time. When such changes occur, metadata associated with the data dimension can be updated. Advantageously, a current snapshot of the data structure can allow searching of previous changes to the slowly changing dimension based upon the metadata. | 2021-09-16 |
20210286788 | SYSTEMS AND METHODS FOR SCALABLE DELOCALIZED INFORMATION GOVERNANCE - The invention relates to electronic indexing, and more particularly, to the indexing, in a cloud, of data held in a cloud. Systems and methods of the invention index data by accessing the data in place in the cloud and breaking a job into work items and sending the work items to multiple cloud processes that can each determine whether to index data associated with the work item or to create a new work item and have a different cloud process index the data. Each cloud process is proximal to an item that it indexes. This gives the system scale as well as internal load balancing. | 2021-09-16 |
20210286789 | AREA ALLOCATION DEVICE, AREA ALLOCATION METHOD, AND NON-VOLATILE RECORDING MEDIUM - Provided are an area allocation device and the like that can efficiently allocate memory volume for processing of matrix operations. The area allocation device specifies array identifiers representing positions of elements storing a value different from a predetermined value in each array of subarray information in array information, arrays consisting of a plurality of elements, the array information including a plurality of pieces of information representing the arrays, the subarray information corresponding to at least a part of the arrays; calculates a number of the specified array identifiers; and allocates a memory area having a memory volume depending on the calculated number. | 2021-09-16 |
20210286790 | FAST IN-MEMORY TECHNIQUE TO BUILD A REVERSE CSR GRAPH INDEX IN AN RDBMS - In an embodiment, a computer obtains a mapping of a relational schema of a database to a graph data model. The relational schema identifies vertex table(s) that correspond to vertex type(s) in the graph data model and edge table(s) that correspond to edge type(s) in the graph data model. Each edge type is associated with a source vertex type and a target vertex type. Based on that mapping, a forward compressed sparse row (CSR) representation is populated for forward traversal of edges of a same edge type. Each edge originates at a source vertex and terminates at a target vertex. Based on the forward CSR representation, a reverse CSR representation of the edge type is populated for reverse traversal of the edges of the edge type. Acceleration occurs in two ways. Values calculated for the forward CSR are reused for the reverse CSR. Elastic and inelastic scaling may occur. | 2021-09-16 |
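Reusing the forward CSR arrays to build the reverse CSR can be shown concretely. The sketch below uses the standard counting-sort construction (in-degree counts, prefix sums, then edge placement); whether this matches the patented technique exactly is not claimed:

```python
def build_reverse_csr(fwd_offsets, fwd_targets, n_vertices):
    """Derive the reverse CSR (incoming edges per vertex) from the forward
    CSR (outgoing edges) in two passes over the forward arrays."""
    # Pass 1: in-degree counts, then prefix sums -> reverse offsets.
    rev_offsets = [0] * (n_vertices + 1)
    for t in fwd_targets:
        rev_offsets[t + 1] += 1
    for v in range(n_vertices):
        rev_offsets[v + 1] += rev_offsets[v]
    # Pass 2: place each edge's source vertex under its target vertex.
    cursor = rev_offsets[:n_vertices]       # running write positions
    rev_sources = [0] * len(fwd_targets)
    for src in range(n_vertices):
        for e in range(fwd_offsets[src], fwd_offsets[src + 1]):
            t = fwd_targets[e]
            rev_sources[cursor[t]] = src
            cursor[t] += 1
    return rev_offsets, rev_sources

# Edges 0->1, 0->2, 1->2 expressed as a forward CSR for 3 vertices.
rev_off, rev_src = build_reverse_csr([0, 2, 3, 3], [1, 2, 2], 3)
```

Both passes are sequential scans of arrays already in memory, which is what makes the in-RDBMS build fast.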
20210286791 | METHOD AND APPARATUS FOR PROCESSING LABEL DATA, DEVICE, AND STORAGE MEDIUM - The present disclosure provides a method and an apparatus for processing label data, a device, and a storage medium, and relates to the field of big data processing technology. The technical solution includes determining a segment identifier of a user based on user identification information, determining a bucket identifier of the user based on the segment identifier, storing label data of the user into a data bucket associated with the bucket identifier, and aggregating the label data in the data bucket into bitmap data for storage. | 2021-09-16 |
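The segment/bucket/bitmap pipeline above can be sketched as follows. CRC32 as the segment mapping, the segment and bucket counts, and the bit-per-segment bitmap layout are all illustrative assumptions:

```python
import zlib

NUM_SEGMENTS, NUM_BUCKETS = 16, 4

def segment_id(user_id):
    """Segment identifier derived from user identification information
    (CRC32 is a stand-in for whatever mapping the method uses)."""
    return zlib.crc32(user_id.encode()) % NUM_SEGMENTS

def bucket_id(seg):
    """Bucket identifier derived from the segment identifier."""
    return seg % NUM_BUCKETS

def store_label(buckets, user_id, label):
    """Aggregate a bucket's label data into bitmaps: one integer per
    label, with the user's segment identifier as the bit position."""
    seg = segment_id(user_id)
    bitmaps = buckets.setdefault(bucket_id(seg), {})
    bitmaps[label] = bitmaps.get(label, 0) | (1 << seg)
    return seg

buckets = {}
alice_seg = store_label(buckets, "alice", "sports")
```

Storing each bucket's labels as bitmaps makes set operations over users (union, intersection) cheap bitwise operations.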
20210286792 | HASHED BALANCED TREE DATA STRUCTURE - Aspects create a tree data structure that indexes a collection of documents present in a data repository at a point in time. The tree data structure includes a plurality of nodes. For each such node, a respective root hash value of that node is determined. The root hash value of a leaf node is determined from hash value(s) for element(s) of that node that are keyed to documents in the collection. The root hash value of a parent node is determined from a root hash value for each of its child nodes. For a given document that is purported to be a target document present in the data repository at the point in time, processing is performed that uses the tree data structure in facilitating verification that the given document is the target document. This includes providing a cryptographic proof to demonstrate whether the given document is the target document. | 2021-09-16 |
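The bottom-up root-hash computation described above is the classic Merkle construction. A minimal sketch (SHA-256 and the odd-node convention are illustrative choices; the cryptographic proof machinery is not modeled):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(documents):
    """Root hash of the tree: leaf hashes are keyed to documents, and
    each parent's root hash is derived from its children's root hashes.
    An odd leftover node is re-hashed alone (one common convention)."""
    level = [h(doc) for doc in documents]
    while len(level) > 1:
        level = [h(b"".join(level[i:i + 2]))
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"doc1", b"doc2", b"doc3"])
```

Changing any document changes the root, so a stored root for a point in time suffices to verify whether a purported target document really was in the collection.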
20210286793 | INDEXING STORED DATA OBJECTS USING PROBABILISTIC FILTERS - A system and method index data objects in an object store according to structure found in the data records from which the objects are themselves formed. Embodiments order structured data records, aggregate slices of consecutively-ordered records into a single corresponding data object, store the data object in the object store, and place the associated object handle into a search tree with all the other such handles. The search tree is indexed using probabilistic set-inclusion filters, such as Bloom filters, not on the handles themselves but on the indexed fields of the records within each slice. For data sets having enough data records, and thus search trees that are deep enough, the aggregate false positive rate for depth-first searches on the tree becomes infinitesimal due to the multiplicative property of the false positive rates for independent Bloom filters. Searches are rapid on even moderately tuned filters. | 2021-09-16 |
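The filter-pruned depth-first search above can be sketched with a plain set standing in for each node's Bloom filter (so this toy has a zero false-positive rate; a real filter would occasionally descend into a subtree needlessly but never miss). Class and function names are invented for illustration:

```python
class Node:
    def __init__(self, children=(), handle=None):
        self.children = list(children)
        self.handle = handle      # object handle, set only at leaves
        self.keys = set()         # stand-in for a Bloom filter's bits

def build_index(node, fields_of):
    """Each node's filter covers all indexed field values of the record
    slices beneath it; parents take the union of their children."""
    if node.handle is not None:
        node.keys = set(fields_of[node.handle])
    else:
        for child in node.children:
            node.keys |= build_index(child, fields_of)
    return node.keys

def search(node, key):
    """Depth-first search that prunes subtrees whose filter excludes key."""
    if key not in node.keys:
        return []
    if node.handle is not None:
        return [node.handle]
    return [h for child in node.children for h in search(child, key)]

leaves = [Node(handle="obj1"), Node(handle="obj2")]
root = Node(children=leaves)
build_index(root, {"obj1": ["alice", "bob"], "obj2": ["carol"]})
```

Because a full-depth false positive requires every filter on the path to err, the per-path error rate multiplies down the tree, which is the multiplicative property the abstract relies on.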
20210286794 | DATA TREE CHECKPOINT AND RESTORATION SYSTEM AND METHOD - Systems and methods for storing nodes, preferably leaf nodes, of a data tree structure into storage are disclosed, and in one or more aspects for restoring the leaf nodes from storage, preferably to memory. Copying the nodes into storage includes, in an embodiment, share-latching a first node of a data tree to be copied; copying the first node that is share-latched into storage; determining whether there is a sibling second node linked to the first node; and following a link between the first copied node and the sibling second node, share-latching the sibling second node, unlatching the first copied node, and copying the sibling second node into storage. Restoring includes copying the leaf nodes from storage, updating the leaf nodes, and creating or recreating the data tree. | 2021-09-16 |
20210286795 | DATABASE INDEX AND DATABASE QUERY PROCESSING METHOD, APPARATUS, AND DEVICE - A method including determining a database table for which a database index is to be created, wherein the database table comprises a spatial field for storing spatial data; acquiring, for the database table, a spatial filter condition comprising a spatial field ID; and generating, according to the spatial filter condition, a tree-structured spatial index for the database table, wherein a leaf node of the tree-structured spatial index stores therein spatial data meeting the spatial filter condition and its primary key ID. The efficiency of queries related to spatial data is thus enhanced. | 2021-09-16 |
20210286796 | METHODS AND SYSTEMS FOR DATA STRUCTURE OPTIMIZATION - Methods and systems for optimizing a data structure are disclosed. An example method can comprise categorizing, based on travel information associated with a vehicle, locations according to at least one of a first category and a second category. An example method can comprise generating search criteria configured to select first data for locations categorized with the first category and second data for locations categorized with the second category. The first data can be more detailed than the second data. An example method can comprise receiving information based on the search criteria and providing the information to the vehicle. | 2021-09-16 |
20210286797 | PREVENTING UNNECESSARY UPLOAD - A computer-implemented method that can prevent an upload of a data set upon detection of a modification of the data set. The method includes storing a first portion of a file in a buffer while in a receiving mode. Upon determining that applying one of a set of predefined transformations to the first portion of an existing data set will reproduce the content of the buffer: generating a transformation notification signal; and, upon receiving a stop message, stopping the receiving mode, which results in using the existing file on the server. | 2021-09-16 |
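The core check (does some predefined transformation of the stored data reproduce the buffered upload prefix?) can be sketched like this. The transformation names and the prefix-comparison strategy are illustrative assumptions, not the application's actual set:

```python
# Hypothetical predefined transformations the server might try.
TRANSFORMS = {
    "identity": lambda b: b,
    "upper": lambda b: b.upper(),
    "strip": lambda b: b.strip(),
}

def matching_transform(existing: bytes, buffer: bytes):
    # If applying any predefined transformation to the existing data set
    # reproduces the buffered first portion, the upload is unnecessary
    # and the server can signal the client to stop.
    for name, fn in TRANSFORMS.items():
        if fn(existing)[:len(buffer)] == buffer:
            return name
    return None

existing = b"hello world\n"
incoming_prefix = b"HELLO WOR"  # client started uploading an upper-cased copy
print(matching_transform(existing, incoming_prefix))  # upper
```

On a match the server would keep the existing file and record which transformation to apply on read, trading a cheap prefix comparison against the cost of the remaining transfer.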
20210286798 | Method and System for Graph-Based Problem Diagnosis and Root Cause Analysis for IT Operation - A computer-implemented method, system, and non-transitory machine-readable medium for graph-based analysis of Information Technology (IT) operations includes generating a temporal graph by extracting one or more of operation objects, relations, and attributes from operation data of workloads distributed across a plurality of levels of the IT operation within a predetermined time window. Anomalies are detected from the extracted operation data, and the corresponding objects in the graph are annotated. A directional impact between corresponding objects on the temporal graph is determined, and the temporal graph is refined based on the determined directional impact. Accessible paths in the temporal graph indicating error propagation are searched, and potential causes for the detected anomalies in the temporal graph are identified. A list of the potential causes of the anomalies is generated, and a root cause is ranked for each of the corresponding objects in the temporal graph. | 2021-09-16 |
20210286799 | AUTOMATED TRANSACTION ENGINE - Various embodiments of the present technology generally relate to automated tools for tracking, recording, restoring and auditing transactions. In accordance with various embodiments, applications and servers can provide a variety of features including, but not limited to, behind the scene monitoring activity (transaction or business) and recording, persisting client activity, ubiquitous autosave, business workflow and approval lifecycles, error correction at the business level, management of the state of the transaction without the user having to manage the activity, tracking posted and unposted transactions (e.g., business state), and the like. The applications can communicate with a unified transaction engine that combines awareness of database transaction state along with business transaction states. As a result, the end-user and/or developer do not have to be concerned about the underlying differences between the database transaction state and business transaction states, and can focus on where their transaction is in its lifecycle. | 2021-09-16 |
20210286800 | DEVICE AND METHOD FOR ANOMALY DETECTION - A computer-implemented method for classifying whether an input signal, which comprises image and/or audio data, is anomalous or not with respect to a second data distribution using an anomaly classifier. The method includes: providing the input signal to the anomaly classifier; in the anomaly classifier, providing the input signal to a reference detector and a second detector; obtaining a reference value from the reference detector based on the input signal, the reference value characterizing the likelihood of the input signal to belong to a reference data distribution; obtaining a second value from the second detector based on the input signal, the second value characterizing the likelihood of the input signal to belong to the second data distribution; and providing an output signal, which characterizes a classification of the input signal as anomalous or not based on a comparison of the reference value and the second value. | 2021-09-16 |
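The two-detector comparison reduces to a likelihood-ratio test. A toy sketch with one-dimensional Gaussians standing in for the reference and second detectors (the distributions and threshold are assumptions; the application targets image/audio inputs):

```python
import math

def gaussian_loglik(x, mean, std):
    # Log-likelihood of x under a 1-D Gaussian.
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

def is_anomalous(x, ref=(0.0, 1.0), second=(5.0, 1.0)):
    # The input is anomalous with respect to the second data distribution
    # when the reference detector explains it better than the second one.
    ref_value = gaussian_loglik(x, *ref)
    second_value = gaussian_loglik(x, *second)
    return ref_value > second_value

print(is_anomalous(0.2))  # True: closer to the reference distribution
print(is_anomalous(4.8))  # False: well explained by the second distribution
```

In practice both detectors would be learned density or score models over image/audio features; the comparison of the two values is the part this sketch shows.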
20210286801 | RECORD TRANSMITTING METHOD AND DEVICE - The present application relates to a method and a device for transmitting records. The method comprises: selecting a target record from a current record according to a difference between the current record and a corresponding record in a historical record; and transmitting the target record to a receiving end. The target record to be transmitted is selected according to the difference between the current record and the corresponding record in the historical record, instead of directly transmitting a complete current record to the receiving end, such that the flexibility of transmission can be improved. | 2021-09-16 |
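Selecting only the differing fields is classic delta transmission. A minimal sketch, treating records as flat dictionaries (an assumption; the application does not fix a record format):

```python
def select_target(current: dict, historical: dict) -> dict:
    # The target record is the set of fields that differ from the
    # corresponding historical record.
    return {k: v for k, v in current.items()
            if historical.get(k) != v}

def apply_delta(historical: dict, delta: dict) -> dict:
    # The receiving end reconstructs the full current record.
    return {**historical, **delta}

historical = {"id": 7, "name": "pump-a", "rpm": 1200, "temp": 41}
current    = {"id": 7, "name": "pump-a", "rpm": 1350, "temp": 41}

delta = select_target(current, historical)
print(delta)  # {'rpm': 1350}
assert apply_delta(historical, delta) == current
```

Only one of four fields crosses the wire here; the receiving end needs the same historical record to reconstruct the current one, which is the usual cost of delta encoding.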
20210286802 | SCALABLE LOCKING TECHNIQUES - Systems and methods for scalable locking. A method includes adding a first lock entry representing a pending lock to a first tree, the first lock entry indicating a range to be locked; checking at least a portion of at least one second tree to determine whether a conflicting lock exists for the first lock entry among at least one second lock entry based on the range to be locked, wherein each of the first tree and the at least one second tree is a data structure including a plurality of nodes representing at least a plurality of attributes, wherein the plurality of attributes of the at least one second tree includes the at least one second lock entry; committing the pending lock when no conflicting lock exists; and resolving the pending lock based on a resolution of the conflicting lock when a conflicting lock exists. | 2021-09-16 |
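The conflict check at the heart of the method (does the pending entry's range overlap any held range in another tree?) can be sketched as follows. The flat-list "tree" and byte-range representation are simplifying assumptions; the application describes richer node/attribute trees:

```python
class LockTree:
    # Minimal stand-in for a lock-entry tree: a list of (start, end) ranges.
    def __init__(self):
        self.entries = []  # committed lock ranges

    def add(self, rng):
        self.entries.append(rng)

def overlaps(a, b):
    # Half-open ranges [start, end) conflict iff they intersect.
    return a[0] < b[1] and b[0] < a[1]

def try_lock(pending_range, own_tree, other_trees):
    # Commit the pending lock only if no other tree holds a conflicting
    # range; otherwise resolution waits on the conflicting lock.
    for tree in other_trees:
        if any(overlaps(pending_range, held) for held in tree.entries):
            return False
    own_tree.add(pending_range)
    return True

t1, t2 = LockTree(), LockTree()
t2.add((100, 200))
print(try_lock((150, 250), t1, [t2]))  # False: conflicts with (100, 200)
print(try_lock((200, 300), t1, [t2]))  # True: ranges only touch, no overlap
```

Using an interval tree instead of a flat list would make the overlap probe logarithmic, which is where the "scalable" part of the technique comes from.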
20210286803 | IMMUTABLE AND DECENTRALIZED STORAGE OF COMPUTER MODELS - The present disclosure relates generally to storing computer models, and more specifically to a platform for achieving replicability of a computer model (e.g., a trained machine-learning algorithm) by storing and providing access to data associated with the computer model using an immutable and decentralized ledger system (e.g., a blockchain ledger) and a distributed database. An exemplary computer-enabled method for storing a computer model comprises: receiving data associated with the computer model; generating one or more asset files based on the data associated with the computer model; generating one or more hash values corresponding to the one or more asset files; generating one or more location trackers corresponding to the one or more asset files; generating a ledger entry comprising the one or more hash values and the one or more location trackers; and adding the ledger entry to a blockchain ledger. | 2021-09-16 |
20210286804 | TARGETED SWEEP METHOD FOR KEY-VALUE DATA STORAGE - A computer-implemented method for targeted sweep of a key-value data storage is provided. The method comprises: before a write transaction to a database having a key-value store commits, and before each of one or more write commands of the write transaction is persisted to the key-value store, writing an entry for each of the one or more write commands to the end of a targeted sweep queue, the entry comprising metadata including: data identifying a cell to which the write command relates, a start timestamp of the write transaction, and information identifying a type of the write transaction. | 2021-09-16 |
20210286805 | GENERATION OF TEST DATASETS FOR GUARDED COMMANDS - Systems and techniques that facilitate automated generation of relevant and adequate test datasets based on guarded commands are provided. In various embodiments, a query generation component can generate a query language query based on a first guarded command. In various aspects, an execution component can execute the query language query on a data table to return one or more datasets for testing the first guarded command. In various embodiments, the query generation component can comprise an initialization component that can initialize conditions of a WHERE clause of the query language query based on the first guarded command. In various instances, the query generation component can further comprise a transformation component that can transform the conditions of the WHERE clause of the query language query based on a sequence of guarded commands on which the first guarded command depends. In various cases, the query generation component can further comprise a translation component that can convert the transformed conditions of the WHERE clause of the query language query into query language syntax. | 2021-09-16 |
20210286806 | PERSONAL INFORMATION INDEXING FOR COLUMNAR DATA STORAGE FORMAT - Techniques are described herein for indexing personal information in files stored in a columnar data storage format. In an embodiment, row groups of rows that comprise a plurality of columns are stored in a set of files. Each column of a row group is stored in a chunk of column pages in the set of files. A regular expression index that indexes a particular column in the set of files is stored for each row group. The regular expression index identifies column pages in the chunk of the particular column that include a particular column value that satisfies a regular expression specified in a query. The regular expression specified in the query is evaluated against the particular column using the regular expression index. | 2021-09-16 |
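The page-skipping effect can be sketched as follows. For simplicity this sketch builds the per-page index from the query's own expression; the application's index would be precomputed and consulted at query time, and the page layout shown is an assumption:

```python
import re

# Hypothetical layout: a column chunk is a list of pages of string values.
pages = [
    ["alice@example.com", "bob@example.com"],
    ["x-123", "y-456"],
    ["carol@example.com", "z-789"],
]

def build_page_index(pages, pattern):
    # Per-page flag: does any value on the page satisfy the expression?
    rx = re.compile(pattern)
    return [any(rx.search(v) for v in page) for page in pages]

def scan(pages, index, pattern):
    rx = re.compile(pattern)
    hits = []
    for page, may_match in zip(pages, index):
        if not may_match:
            continue  # skip pages the index rules out entirely
        hits.extend(v for v in page if rx.search(v))
    return hits

EMAIL = r"[\w.]+@[\w.]+"  # illustrative pattern for personal information
index = build_page_index(pages, EMAIL)
print(scan(pages, index, EMAIL))
```

Here the middle page is never decoded, which is the point of the index: only pages that can contain matching personal information are scanned.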
20210286807 | GATEWAY DEVICE AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - An in-vehicle gateway device includes a CPU and a memory, and the CPU includes an ID acquisitor configured to acquire a data ID associated with data to be received from an in-vehicle network, and a decider configured to derive a plurality of indices from the data ID, specify a reference destination in a reference table stored in the memory based on a plurality of derived indices, and decide a processing content related to data associated with the data ID based on information stored in a specified reference destination. | 2021-09-16 |
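Deriving a plurality of indices from a data ID and using them to address a reference table might look like the following. The 11-bit CAN-style ID, the bit split, and the table contents are all illustrative assumptions:

```python
# Hypothetical scheme: split an 11-bit data ID into two indices that
# address a small two-level reference table in memory.
def derive_indices(data_id: int):
    hi = (data_id >> 6) & 0x1F  # upper 5 bits
    lo = data_id & 0x3F         # lower 6 bits
    return hi, lo

# reference_table[hi][lo] -> processing content for that data ID
reference_table = {
    0x02: {0x11: "forward-to-body-bus", 0x12: "drop"},
}

def decide(data_id: int) -> str:
    # Specify the reference destination from the derived indices and
    # decide the processing content stored there.
    hi, lo = derive_indices(data_id)
    return reference_table.get(hi, {}).get(lo, "default-route")

print(decide(0x091))  # forward-to-body-bus
print(decide(0x7FF))  # default-route
```

Splitting the ID into table indices avoids a linear search over routing rules, which matters on a memory-constrained in-vehicle CPU.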
20210286808 | ELECTRONIC DEVICE AND METHOD FOR PROVIDING INFORMATION ON WORK AND PERSONAL LIFE - A method and an electronic device for providing information associated with work and personal life are provided. An electronic device according to various embodiments may include a user interface, a processor electrically connected to the user interface, and a memory electrically connected to the processor. The memory stores instructions which when executed by the processor cause the processor to determine a first zone and a second zone different from the first zone, based on location information and time information of the electronic device, obtain a first time during which the electronic device is located in the first zone and a second time during which the electronic device is located in the second zone, and display information associated with the first time and the second time on the user interface. | 2021-09-16 |
20210286809 | SYSTEM FOR GENERATING PREDICATE-WEIGHTED HISTOGRAMS - Embodiments of the present invention provide a method, computer program-product, and system for generating predicate-weighted histograms in a database management system. Further, the methods, computer program-products and systems in accordance with the present invention generate histograms that are biased towards the predicate literals of the queries that are submitted to the database management system. The resulting histograms will improve query performance by generating histograms with greater resolution near predicate literals that represent the queries submitted to the database management system. | 2021-09-16 |
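One way to bias bucket boundaries toward predicate literals is to up-weight values near the literals observed in the workload before cutting equi-weight buckets. This is a sketch of the idea only; the weighting rule and bucket count are assumptions, not the invention's actual algorithm:

```python
def weighted_histogram(values, predicate_literals, n_buckets=4, boost=5):
    # Give each value extra weight if it lies near a predicate literal from
    # the query workload, then cut equi-weight bucket boundaries: boundaries
    # cluster where queries actually probe, raising resolution there.
    values = sorted(values)
    weights = [boost if any(abs(v - p) <= 1 for p in predicate_literals) else 1
               for v in values]
    per_bucket = sum(weights) / n_buckets
    boundaries, acc = [], 0.0
    for v, w in zip(values, weights):
        acc += w
        if acc >= per_bucket and len(boundaries) < n_buckets - 1:
            boundaries.append(v)
            acc = 0.0
    return boundaries

values = list(range(100))
plain  = weighted_histogram(values, predicate_literals=[], n_buckets=4)
biased = weighted_histogram(values, predicate_literals=[10, 12], n_buckets=4)
print(plain)   # evenly spread boundaries
print(biased)  # first boundary pulled toward the literals 10 and 12
```

The optimizer then gets finer selectivity estimates exactly where the workload's predicates land, at the cost of coarser buckets elsewhere.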
20210286810 | Method And Apparatus For Generating Context Category Dataset - The present disclosure provides an apparatus for and method of generating a context category dataset. According to some embodiments, the present disclosure provides a context category dataset generating apparatus and method which predict a context category to which a user-inputted hashtag belongs, receive from the user the user's context category to which the hashtag belongs, and generate and update the context category dataset. | 2021-09-16 |
20210286811 | CONTINUOUS CLOUD-SCALE QUERY OPTIMIZATION AND PROCESSING - Runtime statistics from the actual performance of operations on a set of data are collected and utilized to dynamically modify the execution plan for processing a set of data. The operations performed are modified to include statistics collection operations, the statistics being tailored to the specific operations being quantified. Optimization policy defines how often optimization is attempted and how much more efficient an execution plan should be to justify transitioning from the current one. Optimization is based on the collected runtime statistics but also takes into account already materialized intermediate data to gain further optimization by avoiding reprocessing. | 2021-09-16 |
20210286812 | DISTRIBUTED JOIN FILTERS IN A DYNAMIC DISTRIBUTED DATA PROCESSING SERVICE - A method includes: a first node and a second node in a distributed computing system each creating a respective partial join filter; the first node and the second node each transmitting its respective partial join filter to a third node in the distributed computing system; the third node creating a final join filter by combining the respective partial join filters of the first node and the second node; the third node retrieving target data from a data source of the third node by applying the final join filter to the data source of the third node; and the third node transmitting the retrieved target data to a controlling node in the distributed computing system. | 2021-09-16 |
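The partial-filter/final-filter flow maps naturally onto Bloom filters combined by bitwise OR. A sketch under that assumption (the application says "join filter" without fixing the filter type; sizes and hashing here are illustrative):

```python
import hashlib

M, K = 128, 3  # illustrative filter size and hash count

def positions(key: str):
    for i in range(K):
        yield int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % M

def partial_filter(join_keys) -> int:
    # First and second nodes each build a filter over their local join keys.
    bits = 0
    for key in join_keys:
        for p in positions(key):
            bits |= 1 << p
    return bits

def combine(filters) -> int:
    # The third node ORs the partial filters into the final join filter.
    final = 0
    for f in filters:
        final |= f
    return final

def maybe_in(final: int, key: str) -> bool:
    return all(final >> p & 1 for p in positions(key))

node1 = partial_filter(["k1", "k2"])
node2 = partial_filter(["k3"])
final = combine([node1, node2])

# The third node applies the final filter to its local data source,
# retrieving only rows that can possibly join.
rows = [("k1", "a"), ("k9", "b"), ("k3", "c")]
target = [r for r in rows if maybe_in(final, r[0])]
print(target)  # expected [('k1', 'a'), ('k3', 'c')], barring false positives
```

Because ORing equal-sized Bloom filters yields the filter of the union of their key sets, the partial filters can be built fully in parallel and merged with no coordination beyond one message per node.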
20210286813 | AUTOMATED INFORMATION TECHNOLOGY SERVICES COMPOSITION - Computer-implemented methods and systems are provided for identifying IT service compositions corresponding to subsets of a set R of IT service requirements. Such a method includes providing a data structure including, for a set S of IT services, a master graph having master nodes representing respective subsets of like services in S, interconnected by master edges each representing an integration-need between the nodes interconnected by that edge. The method further comprises, for each service composition (a set of services in the composition subgraph, integrated by integration components and spanning all master nodes), comparing the composite attributes of the services and integration components in that composition with the requirements in a subset R′ of R to select at least one preferred service composition for R′, and outputting composition data defining each preferred service composition. | 2021-09-16 |