24th week of 2022 patent application highlights part 51 |
Patent application number | Title | Published |
20220188208 | METHODS FOR CONFIGURING SPAN OF CONTROL UNDER VARYING TEMPERATURE - A method may include, in response to a change in an operating parameter of a processing unit, modifying a signal pathway to a processing circuit component of the processing unit, and communicating with the processing circuit component via the signal pathway. | 2022-06-16 |
20220188209 | ANOMALY DETECTION METHOD, SYSTEM, AND PROGRAM - The present invention provides an anomaly detection method, an anomaly detection system, and an anomaly detection program that can detect an anomaly with high accuracy by using log output quantity distributions generated for different aggregation units and different devices. An anomaly detection system according to one example embodiment of the present invention has: an acquisition unit that acquires a plurality of time-series distributions of a log output quantity, generated for each device that outputs logs and for each unit of a time range in which logs are aggregated; and an anomaly detection unit that detects an anomaly by using the plurality of distributions. | 2022-06-16 |
20220188210 | Non-Intrusive Interaction Method and Electronic Device - A non-intrusive interaction method includes an electronic device that obtains a description file of an application, where the description file indicates a function to be implemented by the application and is defined using a non-intrusive protocol description; determines a first component based on the description file, where the first component is a component of the electronic device that can implement the function to be implemented by the application, and the component is configured based on a non-intrusive protocol to provide a device capability service and can implement an independent function; and runs, based on the description file, the first component to implement the function and to provide the device capability service for the application. | 2022-06-16 |
20220188211 | MANAGING COMPUTING CAPACITY IN RADIO-BASED NETWORKS - Disclosed are various embodiments for managing computing capacity in radio-based networks and associated core networks. In one embodiment, it is determined that a set of computing hardware implementing a radio-based network for a customer has an excess capacity. At least one action is implemented to reallocate the excess capacity of the computing hardware. | 2022-06-16 |
20220188212 | PREDICTIVE PERFORMANCE INDICATOR FOR STORAGE DEVICES - Systems and methods for predictive performance indicators for storage devices are described. The data storage device may process host storage operations and maintenance operations that impact real-time performance. A performance value and corresponding threshold may be determined. Increases in maintenance operations and resulting changes in the performance value may be predicted. When the predicted change in performance value crosses the performance threshold, the host device may be notified. | 2022-06-16 |
20220188213 | SELECTING AUTOMATION SCRIPTS USING REINFORCED LEARNING - A system can evaluate multiple candidate scripts. The system receives a problem statement and a sample solution script. The system selects an additional script based on the sample solution script, and compiles a list of candidates including the sample and additional scripts. Then, for each of the candidates, the system simulates execution of the script and scores performance of the script. The system then presents results of the execution. | 2022-06-16 |
20220188214 | DYNAMIC DISTRIBUTED TRACING INSTRUMENTATION IN A MICROSERVICE ARCHITECTURE - A tracing operation is initiated on a service, wherein the service comprises a plurality of method calls. A span is generated comprising timing information associated with the service, wherein the span comprises a plurality of nested spans associated with the plurality of method calls. A determination is made as to whether one or more method calls of the plurality of method calls are causing the service to underperform in view of the plurality of nested spans. In response to determining that the one or more method calls of the plurality of method calls are causing the service to underperform, a remedial action associated with the one or more method calls is performed. | 2022-06-16 |
20220188215 | Predictive Test Case Coverage - A code base is parsed to identify methods having changes in a code base since a last code commit. Thereafter, a call graph is traversed to identify test cases implicated by the identified methods having changes in the code base. The call graph can be a directed call graph comprising a plurality of connected nodes in which a first subset of the connected nodes are method nodes representing each method in the code base in which unidirectional edges connecting method nodes correspond to invocations by a calling method to a callee method, and in which a second subset of the connected nodes are test case nodes representing each of a plurality of available test cases to test the code base. The test case nodes are each coupled to one or more method nodes by unidirectional edges that correspond to the test case coverage of the method. | 2022-06-16 |
20220188216 | DEVICE AND METHODS FOR PROCESSING BIT STRINGS - A device for processing bit strings of a program flow, including a data memory, an interface that is designed to output a second bit string, and a bit string manipulator that is designed to analyze a first bit string at a predetermined bit string section for information that indicates a target state of the program flow, and to manipulate the first bit string in the bit string section to obtain the second bit string. | 2022-06-16 |
20220188217 | METHODS AND SYSTEMS FOR CONTENT MANAGEMENT AND TESTING - Computer-implemented systems and methods are disclosed for deploying documents in a live environment. The systems and methods can provide a configuration environment including a testing environment and a staging environment that can be used to configure documents that can implement software as a system. The documents can provide users with various services that can be accessed by the documents in a testing or staging environment and a live environment. The documents can be used to edit configuration files that can correspond with an entity or a patient. After a configuration file is edited, a diff utility can be used to calculate and provide the differences between the modified configuration file and the original configuration file. Non-transitory computer readable storage media for storing instructions that use the methods are also disclosed. | 2022-06-16 |
20220188218 | TESTING IN A DISASTER RECOVERY COMPUTER SYSTEM - According to an aspect, a computer-implemented method includes configuring a disaster recovery computer system as a test environment of a mainframe computer system, as a mirror image of a production environment, where the disaster recovery computer system is a backup of a primary production computer system. Test cases are executed in the test environment of the disaster recovery computer system. Stress and load impacts can be monitored on a plurality of computer system resources of the disaster recovery computer system based on execution of the test cases. The test environment can be disabled, and the disaster recovery computer system can be reconfigured as a production system based on a failure of the primary production computer system. | 2022-06-16 |
20220188219 | SYSTEM TESTING INFRASTRUCTURE WITH HIDDEN VARIABLE, HIDDEN ATTRIBUTE, AND HIDDEN VALUE DETECTION - Inputs to a system under test (SUT) are modeled as a collection of attribute-value pairs. A set of testcases is executed using an initial set of test vectors that provides complete n-wise coverage of the attribute-value pairs. For each execution of the testcases, for each attribute-value pair, a non-binary success rate (S | 2022-06-16 |
20220188220 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PREDICTIVE CONFIGURATION MANAGEMENT OF A SOFTWARE TESTING SYSTEM - Methods, apparatuses, systems, computing devices, computing entities, and/or the like are provided. An example method may include receiving a requirement request data object, generating at least one of a predicted complexity attribute or a predicted work track attribute corresponding to the requirement request data object, generating at least one predicted defect description attribute or at least one predicted test case description attribute corresponding to the requirement request data object, and transmitting a prediction data object that includes at least one of the predicted complexity attribute, the predicted work track attribute, the at least one predicted defect description attribute, or the at least one predicted test case description attribute. In some examples, the client device is configured to perform one or more software testing operations corresponding to the software testing task based at least in part on the prediction data object. | 2022-06-16 |
20220188221 | REGRESSION TESTING METHOD AND REGRESSION TESTING APPARATUS - A regression testing method and a regression testing apparatus in the software testing field are provided. In the technical solutions provided in this application, when regression testing is performed, the testing environment of the regression testing is stored, and subsequent regression testing is performed based on that testing environment. The technical solutions provided in this application ensure consistency of testing environments between earlier and later regression testing runs, thereby improving the accuracy of regression test results. | 2022-06-16 |
20220188222 | ELECTRONIC APPARATUS, METHOD, AND STORAGE MEDIUM - According to one embodiment, an electronic apparatus includes a controller. The controller includes an instruction executer configured to generate or acquire data, an issuer configured to accept a request and issue a time stamp, a first updater configured to update a first counter value according to a first operation, a second updater configured to update a second counter value in accordance with issuance of the time stamp, a first non-volatile memory to hold the first counter value and a secret key, and a volatile register to hold the second counter value. The time stamp is a message authentication code or a digital signature issued from the first and second counter values and the data. The second counter value is not stored in the first non-volatile memory. | 2022-06-16 |
20220188223 | MEMORY SUB-SYSTEM WRITE SEQUENCE TRACK - A system includes a memory device and a processing device communicatively coupled to the memory device. The processing device is to write data to a number of groups of memory cells of the memory device in a physically non-contiguous manner. The processing device is further to track a sequence in which the number of groups of memory cells were written with the data. In response to a trigger event, the processing device is further to identify at least a portion of the number of groups of memory cells having data received over a predefined period preceding the trigger event based at least in part on the tracked sequence. | 2022-06-16 |
20220188224 | CRYPTOGRAPHIC SEPARATION OF MMIO ON DEVICE - Technologies for cryptographic separation of MMIO operations with an accelerator device include a computing device having a processor and an accelerator. The processor establishes a trusted execution environment. The accelerator determines, based on a target memory address, a first memory address range associated with a memory-mapped I/O transaction, and generates a second authentication tag using a first cryptographic key from a set of cryptographic keys, wherein the first cryptographic key is uniquely associated with the first memory address range. An accelerator validator determines whether a first authentication tag matches the second authentication tag, and a memory mapper commits the memory-mapped I/O transaction in response to a determination that the first authentication tag matches the second authentication tag. Other embodiments are described and claimed. | 2022-06-16 |
20220188225 | LOW-COST ADDRESS MAPPING FOR STORAGE DEVICES WITH BUILT-IN TRANSPARENT COMPRESSION - An infrastructure for mapping between logical block addresses (LBAs) and physical block addresses (PBAs). A disclosed method includes: receiving a request that specifies an LBA; determining an applicable zone based on the LBA from a set of zones, wherein the set of zones expose an LBA address space of the storage device; identifying at least one tree from a set of trees having a root node associated with the applicable zone; traversing the at least one tree to identify a set of leaf nodes based on the LBA, wherein each leaf node points to an mpage; and determining corresponding PBA information for the LBA by examining mapping information contained in each mpage. | 2022-06-16 |
20220188226 | DYNAMIC PROGRAM-VERIFY VOLTAGE ADJUSTMENT FOR INTRA-BLOCK STORAGE CHARGE LOSS UNIFORMITY - An amount of threshold voltage distribution shift is determined. The threshold voltage distribution shift corresponds to an amount of time after programming of a reference page of a block of a memory device. A program-verify voltage is adjusted based on the amount of threshold voltage distribution shift to obtain an adjusted program-verify voltage. Using the adjusted program-verify voltage, a temporally subsequent page of the block is programmed at a time corresponding to the amount of time after the programming of the reference page. | 2022-06-16 |
20220188227 | SYSTEM AND METHOD FOR LOCAL CACHE SYNCHRONIZATION - A computer-implemented method for synchronizing local caches is disclosed. The method may include receiving a content update, which is an update to a data entry stored in the local caches of each of a plurality of remote servers. The method may include transmitting the content update to a first remote server to update a corresponding data entry in a local cache of the first remote server. Further, the method may include generating an invalidation command indicating the change in the corresponding data entry. The method may include transmitting the invalidation command from the first remote server to a message server. The method may include generating, by the message server, a plurality of partitions based on the received invalidation command. The method may include transmitting, from the message server to each of the remote servers, the plurality of partitions, so that the remote servers update their respective local caches. | 2022-06-16 |
20220188228 | CACHE EVICTIONS MANAGEMENT IN A TWO LEVEL MEMORY CONTROLLER MODE - Systems, apparatuses, and methods provide for a memory controller to manage cache evictions and/or insertions in a two level memory controller mode that uses a dynamic random access memory as a transparent cache for a persistent memory. For example, a memory controller includes logic to map cached data in the dynamic random access memory to an original address of copied data in the persistent memory. The cached data in the dynamic random access memory is tracked as to whether it is dirty data or clean data with respect to the copied data in the persistent memory. Upon eviction of the cached data from the dynamic random access memory, a writeback of the evicted cached data to the persistent memory is bypassed when the cached data is tracked as clean data. | 2022-06-16 |
20220188229 | CACHE GROUPING FOR INCREASING PERFORMANCE AND FAIRNESS IN SHARED CACHES - A method includes monitoring one or more metrics for each of a plurality of cache users sharing a cache, and assigning each of the plurality of cache users to one of a plurality of groups based on the monitored one or more metrics. | 2022-06-16 |
20220188230 | Cache Management Method and Apparatus - A data management method is applied to a computing system. The computing system includes a plurality of NUMA nodes, each NUMA node includes a processor and a memory, and each memory is used to store a data block. In the method, a processor in a NUMA node receives an operation request for a data block, and the processor processes the data block, and allocates a replacement priority of the data block in cache space of the NUMA node based on an access attribute of the data block, where the access attribute of the data block includes a distance between a home NUMA node of the data block and the NUMA node. | 2022-06-16 |
20220188231 | LOW-BIT DENSITY MEMORY CACHING OF PARALLEL INDEPENDENT THREADS - A first data item is programmed to a first memory page of a first block included in a cache that resides in a first portion of a memory device. The first data item is associated with a first processing thread. One or more second memory pages including a second data item associated with the first processing thread are identified. The one or more second memory pages are contained by a second block of the cache. The first data item and the second data item are copied to a second portion of the memory device. The first memory page and each of the one or more second memory pages are designated as invalid. | 2022-06-16 |
20220188232 | UNIFORM CACHE SYSTEM FOR FAST DATA ACCESS - A uniform cache system and method for fast data access are disclosed. The system and method include a plurality of compute units (CUs) and a plurality of L0 caches. The plurality of CUs and the plurality of L0 caches are arranged in a network configuration where each one of the plurality of CUs is surrounded by a first group of the plurality of L0 caches and each of the plurality of L0 caches is surrounded by an L0 cache group and a CU group. Operationally, each of the plurality of CUs, upon a request for data, queries the surrounding first group of the plurality of L0 caches to satisfy the request. On a condition that the first group of the plurality of L0 caches fails to satisfy the data request, each of the first group of the plurality of L0 caches queries a second group of adjacent L0 caches to satisfy the request. On a condition that the second group of adjacent L0 caches fails to satisfy the data request, each of the second group of adjacent L0 caches propagates the query to the next group of L0 caches. The system may iterate subsequent propagations of the query to subsequent next groups of L0 caches. | 2022-06-16 |
20220188233 | MANAGING CACHED DATA USED BY PROCESSING-IN-MEMORY INSTRUCTIONS - A system-on-chip configured for eager invalidation and flushing of cached data used by PIM (Processing-in-Memory) instructions includes: one or more processor cores; one or more caches; and an I/O (input/output) die comprising logic to: receive a cache probe request, wherein the cache probe request includes a physical memory address associated with a PIM instruction, and the PIM instruction is to be offloaded to a PIM device for execution; and issue, based on the physical memory address, a cache probe to one or more of the caches prior to receiving the PIM instruction for dispatch to the PIM device. | 2022-06-16 |
20220188234 | STORAGE DEVICE AND OPERATING METHOD THEREOF - A storage device includes: a memory device including a plurality of planes, a plurality of cache buffers, and a plurality of data buffers; and a memory controller for controlling the memory device to transmit first data and second data from a first plane and a second plane into a respective first cache buffer and second cache buffer, and to control the first cache buffer and the second cache buffer to transmit the first data and the second data to the memory controller. In response to a read request for third data from a host while the first data is being transmitted from the first cache buffer to the memory controller, the memory controller transmits a cache read command to the memory device such that the memory device reads the third data after the first data is completely transmitted to the memory controller, before the second data is transmitted from the second cache buffer. | 2022-06-16 |
20220188235 | DYNAMICALLY ADJUSTING PARTITIONED SCM CACHE MEMORY TO MAXIMIZE PERFORMANCE - A method for dynamically adjusting cache memory partition sizes within a storage system includes computing a read hit ratio for data accessed in each cache partition and an average read hit ratio for all the cache partitions over a time interval. The cache memory includes a higher performance portion (DRAM) and lower performance portion (SCM). The method increases or decreases the partition size for each cache partition by comparing the read hit ratio for the partition to the average read hit ratio for all the partitions. Each cache partition includes maximum and minimum partition sizes, and read hit and read access counters. The SCM portion of the cache memory includes cache partitions reserved for storing data of a specific type, or data used for a specific purpose or with a specific software application. A corresponding storage controller and computer program product are also disclosed. | 2022-06-16 |
20220188236 | STORAGE DEVICE USING BUFFER MEMORY IN READ RECLAIM OPERATION - A storage device includes a nonvolatile memory device, a memory controller, and a buffer memory. The memory controller determines a first memory block of the nonvolatile memory device, which is targeted for a read reclaim operation, and reads target data from a target area of the first memory block. The target data are stored in the buffer memory. The memory controller reads at least a portion of the target data stored in the buffer memory in response to a read request corresponding to at least a portion of the target area. | 2022-06-16 |
20220188237 | UNMAP OPERATION TECHNIQUES - Methods, systems, and devices for unmap operation techniques are described. A memory system may include a volatile memory device and a non-volatile memory device. The memory system may receive a set of unmap commands that each include a logical block address associated with unused data. The memory system may determine whether one or more parameters associated with the set of unmap commands satisfy a threshold. If the one or more parameters satisfy the threshold, the memory system may select a first procedure for performing the set of unmap commands different from a second procedure (e.g., a default procedure) for performing the set of unmap commands and may perform the set of unmap commands using the first procedure. If the one or more parameters do not satisfy the threshold, the memory system may perform the set of unmap commands using the second procedure. | 2022-06-16 |
20220188238 | FLASH MEMORY SYSTEM AND FLASH MEMORY DEVICE THEREOF - A flash memory system and a flash memory device thereof are provided. The flash memory device includes a NAND flash memory chip and a control circuit. The NAND flash memory chip includes a cache memory, a page buffer, and a NAND flash memory array. The NAND flash memory array includes a plurality of pages, wherein each page includes a plurality of sub-pages, and each sub-page has a sub-page length. The cache memory is composed of a plurality of sub caches, and each sub cache corresponds to a different page of the NAND flash memory array. The page buffer is composed of a plurality of sub-page buffers, and each sub-page buffer corresponds to a different page of the NAND flash memory array. The control circuit is coupled to a host and the NAND flash memory chip, and performs access operations in units of one sub-page. | 2022-06-16 |
20220188239 | MEMORY CLEANING METHOD, INTELLIGENT TERMINAL AND READABLE STORAGE MEDIUM - The present disclosure provides a memory cleaning method, a smart terminal, and a readable storage medium. When the smart terminal is switched from a first display state to a second display state, an application to be cleaned is determined. A space to be cleaned is determined from a running memory and cache space occupied during running of the application to be cleaned. Files are removed from each of the determined spaces to be cleaned. In this way, an application to be cleaned is determined when the smart terminal is switched from a first display state to a second display state, so that an application to be cleaned can be directly cleaned in the background, and applications can be cleaned in real time without affecting the user's normal operation, which contributes to more timely cleaning of applications and an improved user experience. | 2022-06-16 |
20220188240 | ELASTIC BUFFER IN A MEMORY SUB-SYSTEM FOR DEBUGGING INFORMATION - A processing device in a memory system determines to send system state information associated with the memory device to a host system and identifies a subset of a plurality of event entries from a staging buffer based on one or more filtering factors, the plurality of event entries corresponding to events associated with the memory device. The processing device further sends the subset of the plurality of event entries as the system state information to the host system over a communication pipe having limited bandwidth. | 2022-06-16 |
20220188241 | Ganaka: A Computer Operating on Models - This invention deals with a data modelling computer and memory system (extended to a database), which will be referred to as a Ganaka (computer in Sanskrit). Ganaka is especially useful in processing uncertain data and Big Data, both of which are major issues in data processing today and will be referred to as point data in all that follows. | 2022-06-16 |
20220188242 | MULTI-TIER CACHE FOR A MEMORY SYSTEM - Methods, systems, and devices for a multi-tier cache for a memory system are described. A memory device may include memory cells configured as cache storage and memory cells configured as main storage. The cache storage may be a multi-tier cache and may include sets of different types of memory cells or memory cells operated as different types of memory cells, with different latencies, storage densities, or other performance characteristics. The memory device or a controller or host system for the memory device may determine the set of memory cells within the multi-tier cache to which a set of data is to be written, or may move the set of data within the multi-tier cache or between the multi-tier cache and the main storage, based on one or more of a variety of performance considerations. | 2022-06-16 |
20220188243 | SEMICONDUCTOR MEMORY APPARATUS, MEMORY MODULE, AND MEMORY SYSTEM INCLUDING MEMORY MODULE - A memory module may include J memory chips configured to input/output data in response to each of a plurality of translated address signals; and an address remapping circuit configured to generate a plurality of preliminary translated address signals by adding first correction values to a target address signal provided from an exterior of the memory module, and to generate the plurality of translated address signals by shifting all bits of each of the plurality of preliminary translated address signals so that K bits included in a bit string of each of the plurality of preliminary translated address signals are moved to other positions of each bit string. | 2022-06-16 |
20220188244 | DYNAMIC LOGICAL PAGE SIZES FOR MEMORY DEVICES - Methods, systems, and devices for dynamic logical page sizes for memory devices are described. A memory device may use an initial set of logical pages each having a same size and one or more logical-to-physical (L2P) tables to map logical addresses of the logical pages to the physical addresses of corresponding physical pages. As commands are received from a host device, the memory device may dynamically split a logical page to introduce smaller logic pages if the host device accesses data in chunk sizes smaller than the size of the logical page that is split. The memory device may maintain one or more additional L2P tables for each smaller logical page size that is introduced, along with one or more pointer tables to map between L2P tables and entries for larger logical page sizes and L2P tables and entries associated with smaller logical page sizes. | 2022-06-16 |
20220188245 | PAGE TABLE STRUCTURE - A page table structure for address translation may include a relative type of page table entry, for which an address pointer to a next-level page table entry or a translated address may be specified using a relative offset value indicating an offset of the address pointer relative to a reference-point base address. | 2022-06-16 |
20220188246 | EXCLUSION REGIONS FOR HOST-SIDE MEMORY ADDRESS TRANSLATION - Methods, systems, and devices for exclusion regions for host-side memory address translation are described. In some examples, a host system may be configured to identify regions of logical addresses to be excluded from operating according to logical-to-physical (L2P) address mapping by the host system (e.g., for access commands), including techniques that may be associated with a host performance boosting (HPB) functionality. The host system may signal an indication for a memory system to inhibit communication of L2P mapping table information to the host system for the identified regions, which may inhibit, suppress, or exclude HPB functionality for those identified regions. In some examples, the memory system may continue to support HPB functionality by communicating L2P mapping table information for other regions, such as regions of logical addresses that may be read relatively frequently or may otherwise benefit from address translation at the host system. | 2022-06-16 |
20220188247 | STATUS CHECK USING CHIP ENABLE PIN - Methods, systems, and devices for status check using chip enable pin are described. An apparatus may include a memory device, a pin coupled with the memory device, and a driver coupled with the pin and configured to bias the pin to a first a voltage or a second voltage based on a status of the memory device. The status may indicate, for example, whether the memory device is available to receive a command. The driver may bias the pin to a first voltage based on a first status of the memory device indicating that the memory device is busy. Additionally, or alternatively, the driver may bias the pin to a second voltage based on a second status of the memory device indicating that the memory device is available to receive the command. In some cases, the pin may be an example of a chip enable pin. | 2022-06-16 |
20220188248 | IN-LINE DATA PACKET TRANSFORMATIONS - In-line data packet transformations. A transformation engine obtains data to be transformed and determines a transformation to be applied to the data. The determining uses an input/output control block that includes at least one field to be used in determining the transformation to be applied. Based on determining the transformation to be applied, the transformation is performed. | 2022-06-16 |
20220188249 | MEMORY APPLIANCE COUPLINGS AND OPERATIONS - System and method for improved transfer of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from a first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA. | 2022-06-16 |
20220188250 | MULTIPLE MEMORY TYPE SHARED MEMORY BUS SYSTEMS AND METHODS - Techniques for implementing and/or operating an apparatus, which includes a host system, a memory system, and a shared memory bus. The memory system includes a first memory type that is subject to a first memory type-specific timing constraint and a second memory type that is subject to a second memory type-specific timing constraint. Additionally, the shared memory bus is shared by the first memory type and the second memory type. Furthermore, the apparatus utilizes a first time period to communicate with the first memory type via the shared memory bus at least in part by enforcing the first memory type-specific timing constraint during the first time period and utilizes a second time period to communicate with the second memory type via the shared memory bus at least in part by enforcing the second memory type-specific timing constraint during the second time period. | 2022-06-16 |
20220188251 | OPERATING METHOD OF TRANSACTION ACCELERATOR, OPERATING METHOD OF COMPUTING DEVICE INCLUDING TRANSACTION ACCELERATOR, AND COMPUTING DEVICE INCLUDING TRANSACTION ACCELERATOR - A transaction accelerator may be connected between at least one host device and a bus, and a method of operating the transaction accelerator may include receiving a first transaction request from the at least one host device, transmitting the first transaction request to the bus, and transmitting a first transaction response corresponding to the first transaction request to the at least one host device, in response to the transmitting the first transaction request to the bus. | 2022-06-16 |
20220188252 | APPARATUS AND METHOD FOR INTERRUPT CONTROL - Provided are an apparatus and a method for controlling an interrupt rate for a processor based on processor utilization. Accordingly, it is possible to improve an I/O response latency and improve energy efficiency. | 2022-06-16 |
20220188253 | TRANSLATION SYSTEM FOR FINER GRAIN MEMORY ARCHITECTURES - Systems and techniques for a translation device that is configured to enable communication between a host device and a memory technology using different communication protocols (e.g., a communication protocol that is not preconfigured in the host device) are described herein. The translation device may be configured to receive signals from the host device using a first communication protocol and transmit signals to the memory device using a second communication protocol, or vice-versa. When converting signals between different communication protocols, the translation device may be configured to convert commands, map memory addresses to new addresses, map between channels having different characteristics, encode data using different modulation schemes, or a combination thereof. | 2022-06-16 |
20220188254 | METHODS FOR IDENTIFYING TARGET SLAVE ADDRESS FOR SERIAL COMMUNICATION INTERFACE - A method for programming and controlling of a plurality of slave devices serially connected in a daisy chain configuration using a master device is disclosed. The method includes broadcasting, from the master device, an initialization data packet to the plurality of slave devices to assign each slave device in the plurality of slave devices a slave address that is unique to said each slave device; storing, in each slave device, the assigned slave address; defining a data packet, wherein the data packet comprises a target slave address, a read/write command, a register address, an increment value, and a start address; and transmitting the data packet serially to one or more of the plurality of slave devices until the target slave address in the data packet matches the slave address stored in one of the plurality of slave devices. | 2022-06-16 |
20220188255 | METHODS FOR IDENTIFYING TARGET SLAVE ADDRESS FOR SERIAL COMMUNICATION INTERFACE - A method for programming and controlling of a plurality of slave devices serially connected in a daisy chain configuration using a master device includes assigning a unique slave address to each slave device in the plurality of slave devices by sending an initialization data packet from the master device serially through the plurality of slave devices; storing, in each of the plurality of slave devices, the assigned slave address; defining a data packet; and transmitting the data packet serially to one or more of the plurality of slave devices. The data packet has a target slave address, a read/write command, a start address, and optionally a register address and an increment value. | 2022-06-16 |
20220188256 | IDENTIFIERS FOR CONNECTIONS BETWEEN HOSTS AND STORAGE DEVICES - In some examples, an adapter device includes a bridge to determine that a storage device includes a plurality of bus controllers, where the plurality of bus controllers are communicatively coupled to respective adapter devices. The bridge determines a quantity of supported connections over the network to the storage device, and in response to determining that the storage device comprises the plurality of bus controllers, the bridge computes an identifier based on the quantity of supported connections and to which respective bus controller of the plurality of bus controllers the adapter device is connected, and assigns the identifier to a connection from the host to the storage device. | 2022-06-16 |
20220188257 | ENUMERATION OF PERIPHERAL DEVICES ON A SERIAL COMMUNICATION BUS - A controller enumerates a plurality of devices while operating in a daisy-chain mode of operation and then causes the devices to operate in a parallel mode of operation in which the devices are individually addressed. | 2022-06-16 |
20220188258 | ELECTRONIC DEVICE INCLUDING A STRUCTURE IN WHICH AN INSERTABLE DEVICE IS INSERTABLE AND METHOD FOR OPERATING THE SAME - An electronic device is provided. The electronic device includes a connector into which a first communication device can be inserted, a second communication device, a memory, and at least one processor, and the at least one processor may be configured to perform control such that first power is transferred to the first communication device connected through the connector, transmit and/or receive first data to and/or from a network by a use of the first communication device, obtain information related to an operation of the first communication device from the first communication device to store the obtained information related to the operation in the memory, and refrain from transferring the first power to the first communication device, transmit and/or receive second data by a use of the second communication device, refrain from using the second communication device, perform control such that second power is transferred to the first communication device, and transfer the obtained information related to the operation, stored in the memory, to the first communication device through the connector. | 2022-06-16 |
20220188259 | DATA TRANSFER SYSTEM AND SYSTEM HOST - A data transfer system, which comprises a system host and an adaptor including a local host, is provided. The adaptor is connectable to a local device inserted into the adaptor and includes a switch unit configured to perform address translation and Requestor (Req) ID translation for data transfers between the local device and the system host. The system host checks, when the local device is inserted or removed while the system host is in operation, a type of a protocol applied to the local device and reloads a device driver based on a result of the check. The device driver includes a pre-sleep state storage configured to store an insertion and removal state of the local device immediately before the system host and the adaptor enter a sleep state. | 2022-06-16 |
20220188260 | BUS ARRANGEMENT AND METHOD FOR OPERATING A BUS ARRANGEMENT - A bus arrangement includes a coordinator, a first subscriber having a first optical display, a second subscriber having a second optical display, a third subscriber having a third optical display, and a bus that couples the coordinator to the first, second, and third subscribers. In a standard operating phase, the first subscriber is configured to display first local information of the first subscriber on the first optical display, the second subscriber is configured to display second local information of the second subscriber on the second optical display, and the third subscriber is configured to display third local information of the third subscriber on the third optical display. The coordinator is configured to switch from a standard operating phase to a display operating phase based on detecting a fault in the first subscriber. | 2022-06-16 |
20220188261 | TRIGGER/ARRAY FOR USING MULTIPLE CAMERAS FOR A CINEMATIC EFFECT - An apparatus includes a plurality of output ports and a processor. The output ports may each be configured to connect to a respective trigger device and generate an output signal to activate the respective trigger device. The processor may be configured to determine a number of the trigger devices connected to the output ports, determine a timing between each of the number of the trigger devices connected, convert the timing for each of the trigger devices to fit a standard timing using offset values specific to each of the trigger devices and perform a trigger routine to trigger the output signal for each of the trigger devices connected. The trigger routine may activate each of the trigger devices connected according to an event. The offset values may delay triggering the trigger devices to ensure that the trigger devices are sequentially activated at intervals that correspond consistently with the standard timing. | 2022-06-16 |
20220188262 | AUTO-ENUMERATION OF PERIPHERAL DEVICES ON A SERIAL COMMUNICATION BUS - Each device on a bus auto-enumerates at power up or reset to assign a unique address to the device based on the resistance value of an external resistor. A current source supplies a current to a terminal to which a resistor is coupled. Each device has a resistor attached with a different resistance value. Each device senses the voltage at the terminal and the voltage corresponds to the unique device address on the bus. Following enumeration, the devices on the bus are individually addressable using their unique address. | 2022-06-16 |
20220188263 | TIME SENSITIVE NETWORKING DEVICE - The present disclosure generally relates to a device, method, or system for time sensitive networking. In an example, the device can include a time-sensitive networking controller and a scheduler. The device also includes an enhanced gate control list maintained on the time-sensitive networking controller to include a direct memory access address, a launch time, and a pre-fetch time for a data packet. The device may also include a transmitter of the time-sensitive networking controller to transmit the data packet retrieved using the direct memory access address at the launch time identified by the scheduler. | 2022-06-16 |
20220188264 | ENERGY EFFICIENT MICROPROCESSOR WITH INDEX SELECTED HARDWARE ARCHITECTURE - An SoC maintains the full flexibility of a general-purpose microprocessor while providing energy efficiency similar to an ASIC by implementing software-controlled virtual hardware architectures that enable the SoC to function as a virtual ASIC. The SoC comprises a plurality of "Stella" Reconfigurable Multiprocessors (SRMs) supported by a Network-on-a-Chip that provides efficient data transfer during program execution. A hierarchy of programmable switches interconnects the programmable elements of each of the SRMs at different levels to form their virtual architectures. Arithmetic, data flow, and interconnect operations are also rendered programmable. An "architecture index" points to a storage location where pre-determined hardware architectures are stored and extracted during program execution. The programmed architectures are able to mimic ASIC properties such as variable computation types, bit-resolutions, data flows, and amount and proportions of compute and data flow operations and sizes. Once established, each architecture remains in place as long as needed. | 2022-06-16 |
20220188265 | Loop Thread Order Execution Control of a Multi-Threaded, Self-Scheduling Reconfigurable Computing Fabric - Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array. A representative configurable circuit includes a configurable computation circuit and a configuration memory having a first, instruction memory storing a plurality of data path configuration instructions to configure a data path of the configurable computation circuit; and a second, instruction and instruction index memory storing a plurality of spoke instructions and data path configuration instruction indices for selection of a master synchronous input, a current data path configuration instruction, and a next data path configuration instruction for a next configurable computation circuit. | 2022-06-16 |
20220188266 | Systems and Methods for Dynamic Content Optimization at the Network Edge Using Shared Customizable Functions - Provided is an edge compute platform ("ECP") for serving optimized content from local cache or from output of a shared customizable function executed by a compute device at the network edge on behalf of different customer content such that the function is not redundantly deployed for different customer content, and is not executed each time the same variant of the optimized content is requested. The ECP may canonicalize first transformation parameters of a received original request according to a transformation parameter definition of a particular function that is implicated by the original request, may generate second transformation parameters with a different ordering than the first transformation parameters as a result of the canonicalization, may generate a variant of the original file by inputting the second transformation parameters to the particular function, and may provide the variant in response to the original request. | 2022-06-16 |
20220188267 | EMBEDDED REFERENCE COUNTS FOR FILE CLONES - Techniques for efficiently managing a file clone from a filesystem which supports efficient volume snapshots are provided. In some embodiments, a system may receive an instruction to remove the file clone from the filesystem. The file clone may be a point-in-time copy of metadata of an original file. The system may further—for a file map entry in a filesystem tree associated with the file clone, the file map entry indicating a data block—decrement a reference count in a reference count entry associated with the file map entry. The reference count entry may be stored in the filesystem tree according to a key and the key may comprise an identification of the original file. The system may further reclaim the data block in a storage system when the reference count is zero. | 2022-06-16 |
20220188268 | MANAGING NETWORK SHARES UTILIZING FILESYSTEM SNAPSHOTS COMPRISING METADATA CHARACTERIZING NETWORK SHARES - An apparatus comprises at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to identify one or more network shares of a filesystem. The at least one processing device is also configured to store, in the filesystem, at least one network share metadata file comprising metadata characterizing the identified one or more network shares of the filesystem. The at least one processing device is further configured to generate a snapshot of the filesystem, the generated snapshot comprising the at least one network share metadata file. The generated snapshot is utilizable for performing a recovery of the filesystem and the identified one or more network shares using at least a portion of the metadata from the at least one network share metadata file. | 2022-06-16 |
20220188269 | REORDERING FILES - A method includes, for files in a storage system requested in sequence by an application, identifying a pre-file and identifying a post-file requested after the pre-file. The method also includes incrementing a pre-read count for the pre-file in file attributes associated with the pre-file and incrementing a post-read count for the post-file in file attributes associated with the post-file. The method includes selecting a position in a save list for each file based on the pre-read and post-read counts and saving the files on tape media according to the relative positions of the files in the save list. A computer program product includes one or more computer readable storage media and program instructions collectively stored on the one or more computer readable storage media. The program instructions includes program instructions to perform foregoing method. A system includes a processor and logic configured to perform the foregoing method. | 2022-06-16 |
20220188270 | SYSTEMS AND METHODS FOR DATA TRACING - The disclosed embodiments include systems and methods for tracing data. A disclosed tracing system can receive a request to access original data. The request can be associated with access request metadata. The tracing system can generate a tracer based at least in part on the access request metadata and can generate an altered version of the first data by inserting the tracer into the original data. The tracing system can update a database or log file associated with the first data to include association information based on the access request metadata. The tracing system can then provide the altered version of the first data in response to the request. | 2022-06-16 |
20220188271 | Framework for allowing complementary workloads/processes to bring in heavy load on a file collaboration platform - A data processing system for processing requests for features at a file collaboration platform implements receiving, at the file collaboration platform, a request from a client device to invoke a requested service on one or more files, wherein the requested service is associated with a feature provided by the file collaboration platform; determining a current operating status of the file collaboration platform; obtaining a feature-specific policy associated with the feature associated with the request; determining whether the requested service is allowed by the file collaboration platform based on the current operating status of the file collaboration platform and the feature-specific policy associated with the feature; allocating capacity to the request at the file collaboration platform for performing the request responsive to determining that the requested service is allowed; and sending a first message to the client device indicating that the client device may invoke the requested service. | 2022-06-16 |
20220188272 | Tracking Users Modifying a File - Techniques are provided for tracking users modifying, writing, or editing a file. In an example, a file system maintains a first-in-first-out queue that logs a finite set of users that have most-recently modified a file. This queue can be maintained in an extended attribute of an inode that corresponds to a file. Where a user modifies a file, and the user is currently identified in the queue, the user can be removed from the queue. Where the user modifies a file, is not currently identified in the queue, and the queue is full, an oldest user in the queue can be removed from the queue. Then, the user can be added to the back of the queue. | 2022-06-16 |
20220188273 | PER-NODE METADATA FOR CUSTOM NODE BEHAVIORS ACROSS PLATFORMS - Technologies for implementing customized behaviors for content items are provided. An example method can include receiving, from a user account registered with a content management system, a request to access a content item managed by the content management system for the user account, the content item having one or more behaviors configured for an attribute associated with the content item and/or the content item associated with the attribute; obtaining, from a representation of a remote state of content items associated with the user account, metadata defining the attribute associated with the content item; based on the metadata, determining the one or more behaviors configured for the attribute and/or the content item associated with the attribute; and applying the one or more behaviors to the content item. | 2022-06-16 |
20220188274 | FILE PROCESSING METHOD AND APPARATUS BASED ON ONLINE WORKING SYSTEM, AND STORAGE MEDIUM - Disclosed are a file processing method and apparatus based on an online working system, and a storage medium. The method is applicable to a virtual application server, and includes: performing an interaction with a virtual application client of a terminal device and launching an application according to the interaction; and obtaining and opening, in the application, a file managed by the online working system based on browser/server architecture, and sending an interface in which the file is opened to the terminal device. The online working system is deployed on an online working system server. And the obtaining the file includes: transmitting the file from a first storage device related to the online working system server to a second storage device related to the virtual application server. | 2022-06-16 |
20220188275 | Flexible Permission Management Framework For Cloud Attached File Systems - A method of managing file permissions in a remote file storage system includes defining permissions for the remote file storage system and controlling access to objects on the remote file storage system according to the permissions of the remote file storage system. The permissions are transferred to a client file storage system remote from the remote file storage system, and access to objects on the client file storage system is controlled according to the permissions of the remote file storage system. A remote file storage system includes a permissions file generator operative to generate a permissions file, which is transmitted to a client file storage system for enforcement at the client file storage system. | 2022-06-16 |
20220188276 | METADATA JOURNAL IN A DISTRIBUTED STORAGE SYSTEM - A plurality of computing devices are communicatively coupled to each other via a network, and each of the plurality of computing devices is operably coupled to one or more of a plurality of storage devices. Each computing device is operable to compress one or more blocks of data and append a journal in front of the data. The journal and the data are written concurrently to flash memory. Each computing device is also operable to maintain a metadata registry that records changes in the flash memory. In the event of a power failure, the journal and previous journals may be used to verify the state of the metadata registry. | 2022-06-16 |
20220188277 | APPARATUS, SYSTEM, AND METHOD FOR MANAGING AN OBJECT-BASED FILE SYSTEM - An apparatus, system, and method for managing an object-based file system, and in particular for providing modifying access to an external application to a file system being managed in a user access restricted state, e.g., in the read-only state. The apparatus including a network interface configured to communicably connect to one or more client computers via a communication network, and a file system management section, implemented at least in part by a processor, configured to manage the object-based file system comprising plural file objects, each file object being associated with a respective file of the file system, the file system management section being configured to serve file access requests of a file serving protocol received via the interface. The file system management section is further configured to provide a block access object of the object-based file system being mountable as a block storage device. | 2022-06-16 |
20220188278 | AUTOMATED ONLINE UPGRADE OF DATABASE REPLICATION - An approach to improve online database replication by automating the upgrading of a database replication system online. Embodiments of the present invention stop an upgrade using a first incremental update strategy on data of a source database, identify an earliest open transaction from a first database to a second database, and identify a last committed log record identifier. Further, embodiments of the present invention execute an adaptive apply strategy on transactions including the earliest open transaction until the last committed log record identifier is reached by the adaptive apply strategy, and resume, by an upgrade controller, the upgrade with a second incremental update strategy. | 2022-06-16 |
20220188279 | SYSTEMS AND METHODS FOR CREATING AND TRACKING IMPLEMENTATION OF A CONSOLIDATION OF DATA DURING A MIGRATION FROM ONE OR MORE SOURCE SYSTEMS TO ONE TARGET SYSTEM - Systems and methods for creating and tracking implementation of a consolidation of data during a migration from one or more source systems to one target system are described herein. One system includes a computing device, having a processor and memory, and instructions stored in memory that are executable by the processor wherein a set of software objects and their attributes are identified based on metadata of the target system, a first subset of the software objects is selected to identify information needing to be migrated from a first source system of the one or more source systems to the target system, and a second subset of the software objects is selected to identify information needing to be migrated from a second source system of the one or more source systems to the target system, and wherein the first and second subsets are merged together to form a merged subset. | 2022-06-16 |
20220188280 | MACHINE LEARNING BASED PROCESS AND QUALITY MONITORING SYSTEM - This technical solution relates to the field of big data computer processing, in particular, to a system for automatic quality monitoring of data obtained from different sources in real time. | 2022-06-16 |
20220188281 | AUTOMATED TRANSFORMATION DOCUMENTATION OF MEDICAL DATA - Systems, methods, and storage media useful in a healthcare cloud computing platform to transform, deduplicate and store medical data from third-party databases to a patient's primary medical record in the healthcare cloud computing platform. Exemplary implementations may: load and read data from third-party databases, and determine if it is duplicative of what is in the patient's primary record. Other embodiments provide a method for ranking medical data from two different third-party databases to determine which medical data should be written to the patient's primary medical record in the healthcare computing platform. | 2022-06-16 |
20220188282 | TECHNICAL SYSTEM SETTINGS USING A SHARED DATABASE - In some implementations, there is provided a method including receiving, by a centralized controller, data from a plurality of database tables at a plurality of database instances at a cloud service, wherein the data is received via a plurality of database views on the plurality of database tables; in response to receiving the data, performing, by the centralized controller, a union view of the data obtained from the plurality of database views; storing, by the centralized controller, the union view of the data as configuration metadata; and performing, by the centralized controller, at least one calculation view to update a value of the configuration metadata and to provide the updated value to at least one of the plurality of database tables at the cloud service. Related systems and articles of manufacture are also disclosed. | 2022-06-16 |
20220188283 | AUTOMATIC DISCOVERY OF EXECUTED PROCESSES - Data is gathered from a log file on a first application server, a log file on a second application server, a database on a database server, or any suitable combination thereof. By correlating the data from different sources, XP-Functions that execute in sequence on a single application server are identified and combined into a sequence referred to as an executable process chain (XP-Chain). The automatic process discovery server reconstructs end-to-end processes out of XP-Chains, even when the XP-Chains are executed on different application servers, based on log files and database data. A test script may be generated for an identified end-to-end process. By running the test script, proper functioning of the end-to-end process may be confirmed. Existing test scripts may be disabled for a formerly identified end-to-end process that is no longer found to be executed. | 2022-06-16 |
20220188284 | SYSTEMS AND METHODS USING GENERIC DATABASE SEARCH MODELS - A computer system includes one or more database search models configured to search data contained in a plurality of database tables. The one or more database search models can include a plurality of structural containers and one or more search enabling containers. The plurality of structural containers can represent objects having a structural relationship and contain property data of the objects. The property data of the objects can be obtained from the plurality of database tables. The plurality of structural containers can be shared by the one or more database search models. The one or more search enabling containers can correspond to the one or more database search models and specify a scope for searching data and a format for presenting search results. | 2022-06-16 |
20220188285 | CONTAINER STORAGE MANAGEMENT SYSTEM - The present disclosure relates to computer-implemented methods, software, and systems for generating a hierarchy of metadata tables for a database comprising containers including tables. The tables are identified by table names and assigned to containers. A first table is assigned to two containers and may define two table instances of the first table. The hierarchy of metadata tables includes a first metadata table defining mappings between container identifiers, table names, table sections, and unique identifiers for corresponding data within table sections of table instances defined with the table names mapped to the container identifiers. In response to receiving a request to generate a replication of table content, a second metadata table is generated to identify a unique set of table instances from the set of the containers based on evaluating the first metadata table. The unique set of table instances comprises data from the database storage without repetition. | 2022-06-16 |
20220188286 | Data Catalog Providing Method and System for Providing Recommendation Information Using Artificial Intelligence Recommendation Model - A data catalog providing method configured to provide functions related to management and retrieval for data sets stored in a database is provided. The data catalog providing method provides recommendation information for a user by collecting log data of users querying a data set by using a data catalog, and using an AI (Artificial Intelligence) recommendation model based on the log data and/or data sets. The AI recommendation model, which is learned based on the collected log data, generates recommendation information by using different recommendation algorithms according to an amount of the accumulated log data. | 2022-06-16 |
20220188287 | TABLE DATA PROCESSING USING A CHANGE TRACKING STREAM - A system includes one or more processors and data storage containing instructions executable by the one or more processors to perform operations. The operations include storing table data in a plurality of partitions of a storage device. Metadata is retrieved from a first partition of the plurality of partitions. The metadata includes a plurality of change tracking entries stored as a change tracking stream. A lineage of modifications made to the table data is determined using the plurality of change tracking entries. A report of one or more transactions performed on the table data is generated. The one or more transactions are included in the lineage of modifications. | 2022-06-16 |
20220188288 | IDENTIFYING AND RESOLVING DIFFERENCES BETWEEN DATASTORES - Techniques for efficiently maintaining consistency of data items across storage partitions are disclosed using a hierarchical multi-level hash tree. Copies of a data item may be associated with corresponding attributes that are used to generate hash values for the data item. Hash values of the attributes may then be used to label nodes in a multi-level hash tree. Differences between the replicated copies of a data item may be quickly identified by comparing hash values associated with successively lower peer nodes in corresponding hash trees. Once identified, systems may update versions of a data item that are no longer current. | 2022-06-16 |
20220188289 | ONLINE FILE SYSTEM CONSISTENCY CHECK FOR CONTAINER DATA ON A CLUSTERED FILESYSTEM - Online file system consistency check for container data on a clustered file system is provided via identifying inodes (index nodes) of a group of files in a clustered file system based on a cyber-resiliency for the clustered file system; grouping the inodes based on a buffer size allocated to a FSCK (File System Consistency Check) operation; passing the inodes to the FSCK operation in a single iteration when a total size of the inodes is less than the buffer size; or, when the total size of the inodes is greater than the buffer size, identifying inodes that belong to a first container and inodes that belong to a second container; passing the inodes that belong to the first container to the FSCK operation in a first iteration; and passing, after the first iteration completes, the inodes that belong to the second container to the FSCK operation in a second iteration. | 2022-06-16 |
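The buffer-driven batching this abstract describes can be sketched as a small planning step; this is an assumption-laden illustration, not the patented FSCK logic. The inode representation (inode number mapped to container and size) and the one-iteration-per-container split are my own simplifications.

```python
def plan_fsck_iterations(inodes, buffer_size):
    """Group inodes into FSCK passes that fit the allocated buffer.

    `inodes` maps inode number -> (container_id, size_in_bytes). If all
    inodes fit in one buffer, a single iteration covers everything;
    otherwise the inodes are split into one iteration per container.
    """
    total = sum(size for _, size in inodes.values())
    if total <= buffer_size:
        return [sorted(inodes)]  # one pass over everything
    by_container = {}
    for ino, (container, _) in inodes.items():
        by_container.setdefault(container, []).append(ino)
    # one FSCK iteration per container, run in container order
    return [sorted(by_container[c]) for c in sorted(by_container)]
```

A large buffer yields one iteration; a small one yields separate first-container and second-container iterations, matching the two-phase flow in the abstract.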
20220188290 | ASSIGNING AN ANOMALY LEVEL TO A NON-INSTRUMENTED OBJECT - Examples described herein provide a computer-implemented method that includes defining a key performance indicator associated with a non-instrumented object of a processing system. The method further includes determining a current anomaly level of the key performance indicator for an instrumented object having a relationship with the non-instrumented object. The method further includes assigning an anomaly level to the non-instrumented object based on the current anomaly level. | 2022-06-16 |
20220188291 | VBLOCK METADATA MANAGEMENT - Various embodiments set forth techniques for managing metadata associated with a vblock. In some embodiments, one or more computer-readable media store instructions that, when executed by one or more processors, cause the one or more processors to perform steps including receiving a request to write data to a live vblock, wherein the request to write data is a first write request for the live vblock; accessing a merged metadata record associated with the live vblock, wherein the merged metadata record comprises metadata corresponding to metadata in metadata records for all but a last snapshot included in a set of snapshots having a metadata record; adding metadata associated with the request to write data to a metadata record for the live vblock; merging a metadata record for the last snapshot into the merged metadata record; and updating a first identifier of the merged metadata record to identify the live vblock. | 2022-06-16 |
20220188292 | DATA PROCESSING METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM - The present disclosure provides a data processing method, apparatus, electronic device and readable storage medium, and relates to the technical field of data processing, particularly the technical field of big data. In the present disclosure, data repetition judgment processing is performed on the raw data according to the address information of each of at least two objects in the raw data provided by the user, and the POI information of the electronic map is then obtained according to the result of the data repetition judgment processing, so that the repeated data in the raw data can be output according to the POI information of the electronic map and the data repetition judgment processing result. | 2022-06-16 |
20220188293 | SYSTEM AND METHOD FOR CONSISTENCY CHECKS IN CLOUD OBJECT STORES USING MICROSERVICES - A microservice or serverless process consistency check process comprising locating all the necessary metadata and data objects in the cloud by storing the data objects in the cloud and synchronously mirroring the metadata, which is separately stored in local storage, to the cloud. The process generates a list of data objects in the cloud as “Set A” and the list of metadata objects in the same prefix range as the data objects as “Set B.” The consistency check then verifies whether all objects in Set A are referred to by objects in Set B. In the case where there are gaps between the sets, non-existent objects are marked as missing, and unreferenced objects are marked as orphan objects. The list of missing and orphan objects is then sent back to the backup server for analysis and further processing. | 2022-06-16 |
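The Set A / Set B comparison in this abstract reduces to two set differences. The sketch below is a minimal illustration under my own assumptions about the inputs (plain lists of object names for the cloud data objects and the metadata references), not the patented microservice.

```python
def consistency_check(data_objects, metadata_refs):
    """Classify cloud objects by comparing two sets.

    Set A: data objects actually present in the cloud.
    Set B: objects referenced by the mirrored metadata.
    Objects referenced but absent are 'missing'; objects present but
    unreferenced are 'orphan'.
    """
    set_a, set_b = set(data_objects), set(metadata_refs)
    return {
        "missing": sorted(set_b - set_a),  # referenced, but no data object
        "orphan": sorted(set_a - set_b),   # data object with no reference
    }
```

The resulting report is what would be sent back to the backup server for further processing.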
20220188294 | Look-Ahead Staging for Accelerated Data Extraction - Disclosed herein are system, method, and computer program product embodiments for utilizing look-ahead-staging (LAS) to accelerate data extraction from a source system to a target system. An embodiment operates by receiving a data change for a data extraction from a producer job at the source system. The embodiment stores the data change in a staging area of a persistent storage together with a respective sequence identifier. The embodiment receives a request for a next package of data changes in the staging area from a consumer job at the target system. The embodiment generates the next package from the staging area. The embodiment transmits the next package to the consumer job. The embodiment receives a commit notification for the next package from the consumer job. The embodiment then removes the data changes in the next package from the staging area in response to receiving the commit notification. | 2022-06-16 |
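The stage → package → commit → remove cycle in this abstract can be sketched as a small in-memory class; this is a toy illustration of the described flow, not the patented implementation, and the class name, the fixed package size, and the use of an in-memory dict in place of persistent storage are all my own assumptions.

```python
from collections import OrderedDict


class StagingArea:
    """Look-ahead-staging sketch: a producer appends changes with sequence
    identifiers, a consumer pulls packages and commits them to free space."""

    def __init__(self, package_size=2):
        self._next_seq = 0
        self._next_pkg = 0
        self._staged = OrderedDict()  # sequence id -> data change
        self._in_flight = {}          # package id -> sequence ids
        self.package_size = package_size

    def stage(self, change):
        """Producer job: store a data change with its sequence identifier."""
        self._staged[self._next_seq] = change
        self._next_seq += 1

    def next_package(self):
        """Consumer request: build the next package of uncommitted changes."""
        in_flight = {s for seqs in self._in_flight.values() for s in seqs}
        seqs = [s for s in self._staged if s not in in_flight][: self.package_size]
        package_id = self._next_pkg
        self._next_pkg += 1
        self._in_flight[package_id] = seqs
        return package_id, [self._staged[s] for s in seqs]

    def commit(self, package_id):
        """Commit notification: remove the package's changes from staging."""
        for s in self._in_flight.pop(package_id):
            del self._staged[s]
```

Because removal happens only on commit, the consumer can safely re-request a package after a failure: uncommitted changes remain in the staging area.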
20220188295 | DYNAMIC MANAGEMENT OF BLOCKCHAIN RESOURCES - A processor may define an available resource set in the blockchain network. The available resource set may be the one or more peers. The processor may collect one or more metrics associated with the one or more peers in the blockchain network. The processor may analyze the one or more metrics and may identify a first workload level for the one or more peers. The processor may determine an optimal status for a first particular peer of the one or more peers, based in part on the available resource set and the first workload level. The processor may compare the optimal status to a current status of the first particular peer. The processor may determine if the optimal status and the current status are different. The processor may execute a status change of the first particular peer from the current status to the optimal status. | 2022-06-16 |
20220188296 | FLOW CONTROL FOR PROBABILISTIC RELAY IN A BLOCKCHAIN NETWORK - The invention relates to a method for adjusting the minimum and maximum number of peer nodes that a node on the blockchain network will connect with. The adjustment takes into account the bandwidth and processing capability of the node. Bandwidth capacity of a node is determined based on a maximum data amount processable by the node over a time period. Data is monitored passing through interfaces of the node, to and from peer nodes, and a profile factor of the node is determined from the difference between the input data and the output data. The data monitored over a plurality of time periods is analysed and used to set a minimum number of peer nodes and a maximum number of peer nodes connectable to the node. The method enables a node to adjust the number of connections according to performance limitation factors, such as bandwidth availability and processing performance. With the number of peer node connections determined, the node can further determine a correlation matrix between the interfaces and peer nodes to which it is connected. The matrix can be compiled with correlation coefficients representing the correlation between data processed at each interface of said node. The invention also resides in a corresponding computer readable storage medium, electronic device, node of a blockchain network, or blockchain network having such a node. | 2022-06-16 |
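One way the capacity-and-profile-factor idea in this abstract could translate into connection limits is sketched below. The formula, the hard floor, and the way the profile factor scales the minimum are purely illustrative assumptions of mine; the patent does not disclose these specifics in the abstract.

```python
def peer_limits(capacity_bytes, avg_peer_bytes, profile_factor, hard_floor=8):
    """Derive min/max peer-connection counts from bandwidth capacity.

    capacity_bytes: maximum data processable by the node per time period.
    avg_peer_bytes: observed average traffic per peer per time period.
    profile_factor: imbalance between input and output data, in [-1, 1]
                    (0 means a balanced relay; constants are illustrative).
    """
    # the node can sustain as many peers as its capacity allows, but never
    # fewer than a hard floor needed to stay well connected
    max_peers = max(int(capacity_bytes // avg_peer_bytes), hard_floor)
    # a balanced relay keeps more minimum connections than a node whose
    # traffic is heavily skewed toward input or output
    min_peers = max(int(max_peers * (1 - abs(profile_factor))), 1)
    return min_peers, max_peers
```

For example, a node with 10,000 bytes/period of capacity, 1,000 bytes/period per peer, and a profile factor of 0.5 would target between 5 and 10 peers under these assumptions.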
20220188297 | TASK SCHEDULING USING A STREAM OF COMMITTED TRANSACTIONS - A method includes generating a task using a plurality of logical statements embedded in a database, the plurality of logical statements corresponding to a data modification. Database data is ingested into a staging table that is configured within the database. The task is executed based on applying the data modification to a first set of partitions storing the database data and generating a second set of partitions. The second set of partitions store modified data corresponding to the database data. A stream of committed transactions is advanced at least in part by adding an entry into the stream. The entry corresponds to committed transactions performed on the first set of partitions during the data modification. A data processing task is scheduled for execution on the modified data based on the advancing of the stream. | 2022-06-16 |
20220188298 | SCALABLE, SECURE, EFFICIENT, AND ADAPTABLE DISTRIBUTED DIGITAL LEDGER TRANSACTION NETWORK - The present disclosure relates to systems, methods, and non-transitory computer readable storage media for implementing a scalable, secure, efficient, and adaptable distributed digital ledger transaction network. Indeed, the disclosed systems can reduce storage and processing requirements, improve security of implementing computing devices and underlying digital assets, accommodate a wide variety of different digital programs (or “smart contracts”), and scale to accommodate billions of users and associated digital transactions. For example, the disclosed systems can utilize a host of features that improve storage, account/address management, digital transaction execution, consensus, and synchronization processes. The disclosed systems can also utilize a new programming language that improves efficiency and security of the distributed digital ledger transaction network. | 2022-06-16 |
20220188299 | COMPOSITE VIEWS IN A MASTER DATA MANAGEMENT SYSTEM - A computer-implemented method includes determining, by a computer device, composite view rules for combining first data from a first data record and second data from a second data record to create a composite view of an entity in a master data management system; receiving, by the computer device, the first data; receiving, by the computer device, the second data; creating, by the computer device, the composite view from the first data and the second data based on the composite view rules; physically materializing the composite view on a storage device; preserving, by the computer device, the first data record; and preserving, by the computer device, the second data record. | 2022-06-16 |
20220188300 | IN-DOCUMENT SEARCH METHOD AND DEVICE FOR QUERY - The present invention relates to an in-document search method and device for a query vector, and an object of the present invention is to improve the accuracy of a response by generating sentence data corresponding to data in a table form stored in a database. The in-document search method for a query vector includes a step A of receiving a user query from a user terminal, a step B of generating a user query vector for the user query, a step C of extracting candidate table data based on the user query vector in a data storage module, a step D of searching for a response corresponding to the user query vector in the candidate table data, and a step E of providing the response to the user terminal. | 2022-06-16 |
20220188301 | PERMUTATION-BASED CLUSTERING OF COMPUTER-GENERATED DATA ENTRIES - A computer-generated data entry is received. The computer-generated data entry is segmented into a set of tokens. A plurality of different token permutation groupings are determined. Each of the different token permutation groupings includes a different subset of tokens from the set of tokens of the computer-generated data entry. For the computer-generated data entry, a corresponding token permutation grouping identifier is determined for each grouping of the plurality of different token permutation groupings. It is determined whether the computer-generated data entry belongs to any data entry cluster among a plurality of previously identified data entry clusters based on a search performed using the token permutation grouping identifiers of the computer-generated data entry. | 2022-06-16 |
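The token-permutation grouping identifiers this abstract describes can be illustrated with a small sketch: each entry yields one identifier per token subset, so two log entries that differ only in a variable token (such as a timestamp) share at least one identifier and can be assigned to the same cluster. The tokenization, the drop-one-token subset scheme, and the hash choice are my own illustrative assumptions, not the patented algorithm.

```python
import hashlib
from itertools import combinations


def permutation_ids(entry, drop=1):
    """Compute grouping identifiers for a computer-generated data entry.

    One identifier per subset obtained by dropping `drop` tokens, so
    entries differing only in a variable token share at least one id.
    """
    tokens = entry.split()
    groupings = combinations(tokens, len(tokens) - drop)
    return {hashlib.sha256(" ".join(g).encode()).hexdigest() for g in groupings}


def find_cluster(entry, clusters):
    """Return the id of a known cluster sharing a grouping id, else None.

    `clusters` maps a cluster id to the set of grouping identifiers of
    entries already assigned to that cluster.
    """
    ids = permutation_ids(entry)
    for cluster_id, cluster_ids in clusters.items():
        if ids & cluster_ids:
            return cluster_id
    return None
```

Two entries like "disk error on node7 at 12:01" and "disk error on node7 at 12:09" share the identifier for the subset that drops the timestamp token, so the second is matched to the first's cluster without any string-similarity computation.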
20220188302 | RETRIEVING CONTEXT FROM PREVIOUS SESSIONS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for retrieving and using contextual data from previous conversation sessions in conversational searches. In one aspect, a method includes receiving a first query for a first user session, determining that the first query refers to one or more tags in a first repository, the first repository associating respective identifiers to respective tags, each identifier representing a corresponding user session, determining one or more particular identifiers associated with the one or more tags in the first repository, retrieving particular contextual data associated with the determined particular identifiers in a second repository, the second repository associating respective identifiers to respective contextual data associated with corresponding user sessions represented by the respective identifiers, and performing an action responsive to the first query based on the retrieved particular contextual data. | 2022-06-16 |
20220188303 | DETERMINATION OF RESULT DATA FOR SMALL MULTIPLES BASED ON SUBSETS OF A DATA SET - According to examples, an apparatus may include a processor and a memory on which is stored machine-readable instructions that when executed by the processor, may cause the processor to receive a request for result data from a requestor and determine queries to create the result data. The processor may determine a subset of a data set based on the queries. The subset of the data set may be displayed in small multiples by the requestor. The processor may output the subset of the data set as the result data to the requestor. In some examples, the processor may receive a request for additional result data from the requestor. The processor may determine a second subset of the data set to be displayed in the small multiples and output the second subset of the data set as the additional result data to the requestor. | 2022-06-16 |
20220188304 | METHOD AND SYSTEM FOR HANDLING QUERY IN IOT NETWORK - A method for handling a query using a first Internet of Things (IoT) device is provided. The method includes retrieving, by a first IoT device, information related to events corresponding to second IoT devices, upon receiving a query from a user. The method includes modifying the query based on the information related to the events, and executing the modified query to provide a response to the user, upon not receiving a response for the query from the server. The method includes modifying a response received for the query based on the information related to the events, and delivering the modified response to the user, upon receiving the response for the query from the server. The method includes responding to follow-up queries received from the server for providing a response to the user, upon receiving the follow-up queries for the query from the server. | 2022-06-16 |
20220188305 | Machine Learning Systems And Methods For Interactive Concept Searching Using Attention Scoring - Machine learning systems and methods for interactive concept searching using attention scoring are provided. The system receives textual data. The system identifies one or more word representations of the textual data. The system further receives a concept. The system determines a score indicative of a likelihood of each of the one or more word representations being representative of the concept using an attention scoring process having a temperature variable. The system generates a dataset for training and evaluation based at least in part on the score. The dataset includes the one or more word representations and the concept. The system further processes the dataset to train one or more deep active learning models capable of the interactive concept search. | 2022-06-16 |
20220188306 | EXECUTING ONE QUERY BASED ON RESULTS OF ANOTHER QUERY - Systems and methods are disclosed for performing multiple queries in a single graphical user interface (GUI) displayed in a client browser. The client browser causes the display of a first user interface field in a first area of the GUI, where the first user interface field can be used to enter or edit a first query. The client browser also causes first query results generated by a data intake and query system executing the first query to be displayed in the first area. The client browser further causes the display of a second user interface field in a second area of the GUI, where the second user interface field can be used to enter or edit a second query. The client browser also causes second query results generated by the data intake and query system executing the second query to be displayed in the second area. | 2022-06-16 |
20220188307 | DATA ANALYSIS APPARATUS, METHOD AND SYSTEM - According to one embodiment, a data analysis apparatus includes a processor. The processor acquires, for a plurality of products as analysis targets, manufacturing data including at least one manufacturing condition for each product. The processor calculates, based on a bias of state data representing a degree to which the product is in a specific state across at least one value that one manufacturing condition extracted from the manufacturing data can take, an index value representing a degree to which the manufacturing condition is a cause of the specific state of the product. | 2022-06-16 |