50th week of 2021 patent application highlights part 41 |
Patent application number | Title | Published |
20210390013 | ALERTING SYSTEM HAVING A NETWORK OF STATEFUL TRANSFORMATION NODES - An alerting system is provided that includes a network of transformation nodes and state change processors. The transformation nodes include input transformation nodes, output transformation nodes, and intermediate nodes that connect the input and output transformation nodes. Each input transformation node can receive an event stream, and is coupled to one of the output transformation nodes by one or more intermediate transformation nodes. Each transformation node (except the input transformation nodes) can receive state updates from those transformation nodes that it subscribes to. Each output transformation node can generate a check result when stored state information for each of the transformation nodes that the output transformation node subscribes to collectively indicates that the check result should be generated. Each output transformation node is coupled to one of the state change processors, which can determine whether the check results should trigger an action and, if so, can then perform the action. | 2021-12-16 |
20210390014 | PARITY PROTECTION - A variety of applications can include apparatus and/or methods that provide parity data protection to data in a memory system for a limited period of time, rather than storing it as permanent parity data in a non-volatile memory. Parity data can be accumulated in a volatile memory for data programmed via a group of access lines having a specified number of access lines in the group. A read verify can be issued to selected pages after programming finishes at the end of programming via the access lines of the group. With the programming of the data determined to be acceptable at the end of programming via the last of the access lines of the group, the parity data in the volatile memory can be discarded and accumulation can begin for a next group having a specified number of access lines. Additional apparatus, systems, and methods are disclosed. | 2021-12-16 |
20210390015 | TECHNIQUES FOR CORRECTING ERRORS IN CACHED PAGES - A method of correcting errors in a data storage system including a first node, a second node, and shared persistent storage (the first and second nodes being configured to process data storage requests) is provided. The method includes (a) reading cached pages from a first cache disposed within the first node, the cached pages being cached versions of respective persistent pages stored in the shared persistent storage; (b) in response to determining that one of the cached pages is corrupted, requesting that the second node return to the first node a corresponding remote page from a second cache disposed within the second node, the cached page and the remote page each caching a same persistent page of the shared persistent storage; and (c) in response to determining that the remote page received from the second node by the first node is not corrupted, correcting the cached page using the remote page. | 2021-12-16 |
20210390016 | SELECTIVE SAMPLING OF A DATA UNIT DURING A PROGRAM ERASE CYCLE BASED ON ERROR RATE CHANGE PATTERNS - A processing device, operatively coupled with the memory device, is configured to determine a first error rate associated with a first set of pages of a plurality of pages of a data unit of a memory device, and a second error rate associated with a second set of pages of the plurality of pages of the data unit, determine a first pattern of error rate change for the data unit based on the first error rate and the second error rate, and, responsive to determining that the first pattern of error rate change corresponds to a predetermined second pattern of error rate change, perform an action pertaining to defect remediation with respect to the data unit. | 2021-12-16 |
20210390017 | Methods and Systems for Implementing Redundancy in Memory Controllers - The present disclosure relates to methods and systems for implementing redundancy in memory controllers. The disclosed systems and methods utilize a row of memory blocks, such that each memory block in the row is associated with an independent media unit. Failures of the media units are not correlated, and therefore, a failure in one unit does not affect the data stored in the other units. Parity information associated with the data stored in the memory blocks is stored in a separate memory block. If the data in a single memory block has been corrupted, the data stored in the remaining memory blocks and the parity information are used to recover the corrupted data. | 2021-12-16 |
20210390018 | STREAMING ENGINE WITH ERROR DETECTION, CORRECTION AND RESTART - Disclosed embodiments relate to a streaming engine employed in, for example, a digital signal processor. A fixed data stream sequence including plural nested loops is specified by a control register. The streaming engine includes an address generator producing addresses of data elements and a stream head register storing data elements next to be supplied as operands. The streaming engine fetches stream data ahead of use by the central processing unit core in a stream buffer. Parity bits are formed upon storage of data in the stream buffer and are stored with the corresponding data. Upon transfer to the stream head register, a second parity is calculated and compared with the stored parity. The streaming engine signals a parity fault if the parities do not match. The streaming engine preferably restarts fetching the data stream at the data element generating a parity fault. | 2021-12-16 |
20210390019 | DATA RECOVERY USING BITMAP DATA STRUCTURE - Examples of the present disclosure describe implementing bitmap-based data replication when a primary form of data replication between a source device and a target device cannot be used. According to one example, a temporal identifier may be received from the target device. If the source device determines that the primary replication method is unable to be used to replicate data associated with the temporal identifier, a secondary replication method may be initiated. The secondary replication method may utilize a recovery bitmap identifying data blocks that have changed on the source device since a previous event. | 2021-12-16 |
20210390020 | USER AUTHORIZATION FOR FILE LEVEL RESTORATION FROM IMAGE LEVEL BACKUPS - Embodiments provide systems, methods, and computer program products for enabling user authorization to perform a file level recovery from an image level backup of a virtual machine without the need for access control by an administrator. Specifically, embodiments enable an access control mechanism for controlling access to stored image level backups of a virtual machine. In an embodiment, the virtual machine includes a backup application user interface that can be used to send a restoration request to a backup server. The restoration request can include a machine identifier and a user identifier of the user logged onto the virtual machine. The backup server includes a backup application that determines whether or not the machine identifier contained in the restoration request can be matched to a machine identifier of a virtual machine present in one of the virtual machine backups stored on the backup server. | 2021-12-16 |
20210390021 | PREDICTIVE FOG COMPUTING IN AN EDGE DEVICE - A method includes: determining an amount of available storage in a user mobile device; predicting an amount of storage in the device that will be required for a future operation of the device; identifying an amount of data stored on the device that has not been previously backed up to an external storage device that is external to the device; backing up to an external backup device, a portion of the data that has not been previously backed up, the external backup device being external to the device; and deleting from the device the data that is backed up to the external backup device. A sum of an amount of the data deleted and the amount of available storage in the device is greater than the predicted amount of storage, and the backing up is performed after the predicting and automatically while the device is connected to a network. | 2021-12-16 |
20210390022 | SYSTEMS, METHODS, AND APPARATUS FOR CRASH RECOVERY IN STORAGE DEVICES - A method of operating a storage device may include establishing a connection between a host and the storage device, detecting a crash of the storage device, suspending, based on detecting the crash, processing commands from the host through the connection, recovering from the crash of the storage device, and resuming, based on recovering from the crash, processing commands from the host through the connection. The method may further include notifying the host of the crash based on detecting the crash of the storage device. Notifying the host may include sending an asynchronous event notification to the host through the connection. Notifying the host may include asserting a controller status indicator. The method may further include receiving a reset from the host, and resetting a storage interface based on receiving the reset from the host. The method may further include maintaining the connection through a communication interface. | 2021-12-16 |
20210390023 | ACTIVE-ACTIVE ENVIRONMENT CONTROL - The present disclosure provides a method, system, and device for security object synchronization at multiple nodes of an active-active environment. To illustrate, a source node may generate a corresponding security object sync request for each of multiple target nodes. The source node may send the security object sync request to the target nodes via a source queue and, for each target node, a corresponding distribution queue. A distribution queue may be closed based on an acknowledgement received from a corresponding target node, after a time period, or after a number of transmission attempts. A synchronization log may be maintained to indicate which security object sync requests have been delivered to which target nodes. In some implementations, the source node and the target nodes are part of an active-active environment that may be synchronized in time so the nodes resolve conflicts between received security object updates initiated from two different nodes. | 2021-12-16 |
20210390024 | AGGREGATE GHASH-BASED MESSAGE AUTHENTICATION CODE (MAC) OVER MULTIPLE CACHELINES WITH INCREMENTAL UPDATES - Embodiments are directed to aggregate GHASH-based message authentication code (MAC) over multiple cachelines with incremental updates. An embodiment of a system includes a controller comprising circuitry, the controller to generate an error correction code for a memory line, the memory line comprising a plurality of first data blocks, generate a metadata block corresponding to the memory line, the metadata block comprising the error correction code for the memory line and at least one metadata bit, generate an aggregate GHASH corresponding to a region of memory comprising a cacheline set comprising at least the memory line, encode the first data blocks and the metadata block, encrypt the aggregate GHASH as an aggregate message authentication code (AMAC), provide the encoded first data blocks and the encoded metadata block for storage on a memory module comprising the memory line, and provide the AMAC for storage on a device separate from the memory module. | 2021-12-16 |
20210390025 | Execution Sequence Integrity Monitoring System - A method of verifying execution sequence integrity of an execution flow includes receiving, by a local monitor of an automated device monitoring system from one or more sensors of an automated device, a unique identifier for each function in a subset of an execution flow for which the local monitor is responsible for monitoring. The method includes combining the received unique identifiers to generate a combination value, applying a hashing algorithm to the combination value to generate a temporary hash value, retrieving, from a data store, a true hash value, determining whether the temporary hash value matches the true hash value, and in response to the temporary hash value not matching the true hash value, generating a fault notification. The true hash value represents a result of applying the hashing algorithm to a combination of actual unique identifiers associated with each function in the subset. | 2021-12-16 |
20210390026 | METHOD AND DEVICE FOR PROCESSING INFORMATION, AND STORAGE MEDIUM - A method for processing information includes: detecting whether an application is launched; in response to detecting that the application is launched, determining a source for launching the application and detecting action information of the application; and displaying the source and the action information of the application. | 2021-12-16 |
20210390027 | HIERARCHICAL EVALUATION OF MULTIVARIATE ANOMALY LEVEL - Embodiments may include techniques for hierarchical evaluation of the anomaly level of a system and its sub-components using domain knowledge, so as to provide improved accuracy and explainability compared to conventional methods. For example, a method of anomaly detection in a hierarchical computer network may comprise defining a tree-like topological structure which describes how the hierarchical computer network comprises sub-components, wherein each node of the tree-like topological structure represents a sub-component of the hierarchical computer network, and wherein at least some of the sub-components are monitored to generate signals indicating an operational condition of each sub-component, collecting a plurality of time-series of maximum absolute anomaly scores for each monitored signal, and computing an anomaly score for a root node of the tree-like topological structure. | 2021-12-16 |
20210390028 | PERFORMANCE BENCHMARKING FOR REAL-TIME SOFTWARE AND HARDWARE - A system and method determines a unique performance benchmark for specific computer object code for a particular microprocessor. By generating multiple unique benchmarks for a single, same code module on multiple different processors, the method determines which processor is optimal for the code module. By generating for a single designated processor a performance benchmark for each code module of multiple modules, where the multiple modules have the same or similar functionality but variations in detailed code or algorithms, the system and method identifies code variation(s) which is/are optimal for the single designated processor. The system and method may entail first extracting selected features of object code (as actually executed) into a code profile, and then generating the performance benchmark based on the code profile and on machine-level timing data for the selected microprocessor. In this way, code security is achieved by fire-walling the object code from the second stage of the method. | 2021-12-16 |
20210390029 | METHOD AND APPARATUS FOR TESTING STRESS BASED ON CLOUD SERVICE - The present disclosure discloses a method and apparatus for testing a stress based on a cloud service, and relates to the field of cloud computing technology, and further to the field of cloud operation and maintenance technology. A particular implementation comprises: acquiring, based on first stress test information of a business system in a cloud service, a number of expected stress test nodes corresponding to the business system; creating edge computing nodes, a number of the edge computing nodes being identical to the number of the expected stress test nodes; and performing a stress test on the business system by using the edge computing nodes, to acquire second stress test information of the business system. | 2021-12-16 |
20210390030 | SYSTEM AND METHOD FOR OPTIMIZING TECHNOLOGY STACK ARCHITECTURE - A system is configured for determining a technology stack in a software application to perform a work project. The system receives and evaluates the work project based on its characteristics. A plurality of technology stacks is generated by implementing different combinations of technology stack components. The technology stack components include application servers and web servers. Each of the technology stacks is simulated performing the work project. Based on the simulation results of each technology stack, a performance of each technology stack is evaluated. The system identifies a first technology stack performing at a level higher than a performance threshold and at a highest performance level among the plurality of technology stacks. The system deploys the first technology stack in the software application to perform the work project. | 2021-12-16 |
20210390031 | TRACE CHAIN INFORMATION QUERY METHOD AND DEVICE - This application provides a trace chain information query method, including: receiving, by a trace chain server, first trace chain information sent by a first service node and second trace chain information sent by a second service node, where the first service node is a service node in a first trace chain, the second service node is a service node in a second trace chain, both the first trace chain and the second trace chain are generated as triggered by a same user operation, the first trace chain information includes a group identifier, the second trace chain information includes the group identifier, and the group identifier is used to indicate the user operation; and finding, by the trace chain server, the first trace chain information and the second trace chain information based on the group identifier. | 2021-12-16 |
20210390032 | SYSTEMS, METHODS AND COMPUTER READABLE MEDIUM FOR VISUAL SOFTWARE DEVELOPMENT QUALITY ASSURANCE - A computer-implemented method for identifying discrepancies between a design image of a user interface for an application and a screenshot of the user interface as displayed by the application includes performing a first comparison between the design image and the screenshot to identify one or more discrepancies between the images, excluding from the discrepancies those corresponding to visual elements on the screenshot that include dynamic content, and generating an image of the screenshot, wherein the image includes a visual indication of each of the identified discrepancies between the design image and the screenshot. | 2021-12-16 |
20210390033 | ACCELERATING DEVELOPMENT AND DEPLOYMENT OF ENTERPRISE APPLICATIONS IN DATA DRIVEN ENTERPRISE IT SYSTEMS - This disclosure relates generally to accelerating development and deployment of enterprise applications where the applications involve both data driven and task driven components in data driven enterprise information technology (IT) systems. The disclosed system is capable of determining components of the application that may be task-driven and/or those components which may be data-driven using inputs such as business use case, data sources and requirements specifications. The system is capable of determining the components that may be developed using task-driven and data-driven paradigms and enables migration of components from the task driven paradigm to the data driven paradigm. Also, the system trains a reinforcement learning (RL) model for facilitating migration of the identified components from the task driven paradigm to the data driven paradigm. The system is further capable of integrating the migrated and existing components to accelerate development and deployment of an integrated IT application. | 2021-12-16 |
20210390034 | INFORMATION PROCESSING DEVICE, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING APPLICATION STARTUP PROGRAM, AND APPLICATION STARTUP METHOD - A computer-based method of an application startup includes: in response to an instruction to perform a reading processing configured to load an application program, determining whether an analysis result of an annotation included in a source code of the application program is stored in a storage device being non-volatile; and in response to a determination that the analysis result is stored in the storage device, starting the application program by using the analysis result stored in the storage device without executing an analysis processing of the annotation. | 2021-12-16 |
20210390035 | TECHNIQUES FOR TRANSPARENTLY EMULATING NETWORK CONDITIONS - In various embodiments, a network emulation application emulates network conditions when testing a software application. In response to a request to emulate a first set of network conditions for a first client device that is executing the software application, the network emulation application causes a kernel to implement a first pipeline and to automatically input network traffic associated with the first client device to the first pipeline instead of a default bridge. In response to a request to emulate a second set of network conditions for a second client device that is executing the software application, the network emulation application causes the kernel to implement a second pipeline and to automatically input network traffic associated with the second client device to the second pipeline instead of the default bridge. Each of the pipelines performs one or more traffic shaping operations on at least a subset of the network traffic input into the pipeline. | 2021-12-16 |
20210390036 | SYSTEM AND METHOD FOR TEST IMPACT ANALYSIS OF COMPUTER PROGRAMS - System and method for testing changes to binary code of a computer program include: collecting test coverage data from an executed set of tests of an original computer program; calculating a baseline report containing correlations between the executed set of tests and blocks of binary code of the original computer program; determining binary code changes between the original computer program and a modified version of the computer program; identifying one or more tests to be executed for verifying the binary code changes. | 2021-12-16 |
20210390037 | TEST CASE GENERATION FOR SOFTWARE DEVELOPMENT USING MACHINE LEARNING - A device is configured to determine a location within a spatial domain for a first program. The device is further configured to determine a first distance threshold value that corresponds with a first distance away from the location of the first program within the spatial domain. The device is further configured to determine distances between the location of the first program and locations of other programs from the plurality of programs, and to identify one or more programs from the plurality of programs whose distances are less than the first distance threshold value. | 2021-12-16 |
20210390038 | AUTOMATIC EVALUATION OF TEST CODE QUALITY - Techniques and solutions are described for automatically evaluating test code. In one technique, test code quality is evaluated by comparing assertions in test code with output values in target code tested by the test code. Output values that are not associated with assertions, or an insufficient number or variety of assertions can indicate that a test can be improved. In another technique, test quality is assessed by dynamically changing target code or test data used with a test. Room for test improvement can be indicated if test code provides a passing result despite changes to test data used with the test or changes to target code executed in conducting the test. | 2021-12-16 |
20210390039 | SCHEDULED TESTS FOR ENDPOINT AGENTS - Techniques for scheduled tests for endpoint agents are disclosed. In some embodiments, a system/process/computer program product for providing scheduled tests for endpoint agents includes receiving a test configuration for scheduled tests that includes a set of conditions for dynamically selecting endpoint agents that match the set of conditions in the test configuration, wherein a plurality of endpoint agents are deployed to a plurality of endpoint devices; identifying one or more of the plurality of endpoint agents that match the set of conditions in the test configuration; assigning the scheduled tests associated with the test configuration to the matching endpoint agents for execution of the scheduled tests based on the test configuration, wherein test results are based on the scheduled tests executed on each of the matching endpoint agents for monitoring network activity; and receiving uploaded results of the scheduled tests executed on the matching endpoint agents, wherein the uploaded results of the scheduled tests executed on the matching endpoint agents are processed for generating graphical visualizations and/or alerts of the monitored network activity. | 2021-12-16 |
20210390040 | SYSTEMS AND METHODS FOR SOFTWARE INTEGRATION VALIDATION - A method and apparatus for providing a document-integrated software integration validation by a service provider system are described. The method includes serving an interactive integration guide user interface to a user system that displays information for an application programming interface (API) integration test scenario. The method also includes determining correctness of API usage of a software application that performs operations integrating services of a service provider system using APIs of the service provider system, the software application developed by the user system. Furthermore, the method includes serving an updated integration guide to the user system updating the display of the interactive integration guide UI indicating each operation in the test scenario that was performed correctly and indicating each operation in the test scenario that was not performed correctly. | 2021-12-16 |
20210390041 | MIDDLEWARE FOR TRANSPARENT USER INTERFACE TESTING - A method and apparatus for performing a user interface test by a middleware server including determining a state change of a portion of the user interface, receiving a test command indicative of a user interface functional test from a test interface, determining an auxiliary test associated with the test command, generating an altered test command requesting performance of the user interface functional test and the auxiliary test, transmitting the altered test command to the user interface, receiving a functional result from the user interface in response to the altered test command, generating an altered test result indicative of the functional result, and transmitting the altered test result to the test interface. | 2021-12-16 |
20210390042 | METHOD AND APPARATUS FOR TESTING DIALOGUE PLATFORM, AND STORAGE MEDIUM - A method and an apparatus for testing a dialogue platform, and a storage medium are proposed. The specific solution is: creating at least one simulation test instance, where the simulation test instance comprises a plurality of pieces of test task information, each comprising test numbers, ringing simulation data, and call simulation data; sending the test numbers to the dialogue platform to start a test; sending the ringing simulation data to the dialogue platform, to receive task states fed back by the dialogue platform; sending the call simulation data to the dialogue platform, to receive dialogue data fed back by the dialogue platform; and performing a dialogue test on the dialogue platform based on the test tasks, the task states corresponding to the test tasks, and the dialogue data. | 2021-12-16 |
20210390043 | Storage System and Method for Enabling a Software-Defined Dynamic Storage Response - A storage system and method for enabling a software-defined dynamic storage response are provided. In one embodiment, a controller of a storage system is configured to receive an expected response time from a host; in response to receiving the expected response time from the host, cache a logical-to-physical address table entry of a wordline; and store the cached logical-to-physical address table entry of the wordline as metadata in a next wordline along with host data. Other embodiments are provided. | 2021-12-16 |
20210390044 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system is provided to include memory devices and a controller including cores controlling the memory devices, respectively. The controller determines whether to perform a global wear-leveling operation based on a write count of the plurality of memory devices corresponding to each of the plurality of cores, performs a barrier operation for a request from a host when the global wear-leveling operation is determined to be performed, updates mapping information for mapping a core to memory device information by swapping the mapping information between different cores based on the write count of each of the plurality of cores, and closes an open block assigned to each of the plurality of cores and then assigns a new open block to each of the plurality of cores based on the updated mapping information. | 2021-12-16 |
20210390045 | Optimizing Garbage Collection Based On Survivor Lifetime Prediction - A predictive method for scheduling operations is described. The predictive method utilizes data generated from computing an expected lifetime of the individual files or objects within a container. The expected lifetime of individual files or objects can be generated based on machine learning techniques. Operations such as garbage collection are scheduled at an epoch where computational efficiencies are realized for performing the operation. | 2021-12-16 |
20210390046 | METHOD OF OPERATING A MEMORY WITH DYNAMICALLY CHANGEABLE ATTRIBUTES - A feature can be defined to allow data attributes to be dynamically assigned to data in a storage device. For example, a feature referred to as a “datagroup” is introduced. A datagroup is defined as a grouping of a range of logical block addresses. A storage device can be divided into a number of datagroups. Each datagroup can have its own data attributes configuration, which can have a specified number of bits. A new command is defined to allow a host to dynamically assign attributes of datagroups of a storage device. For example, the command can provide for dynamically assigning datagroup attributes by sending a byte-mapping table in the command from the host to the storage device. | 2021-12-16 |
20210390047 | METHOD FOR AI MODEL TRANSFERRING WITH ADDRESS RANDOMIZATION - A method to transfer an artificial intelligence (AI) model includes identifying a plurality of layers of an AI model, wherein each layer of the plurality of layers is associated with a memory address. The method further includes randomizing the memory address associated with each layer of the plurality of layers, and transferring the plurality of layers with the randomized memory addresses to a data processing accelerator to execute the AI model. | 2021-12-16 |
20210390048 | METHOD AND SYSTEM FOR FACILITATING LOG-STRUCTURE DATA ORGANIZATION - One embodiment provides a system which facilitates organization of data. During operation, the system identifies an original data chunk stored in a non-volatile memory of a storage device, wherein the original data chunk is a logical chunk which includes original logical block addresses. The system stores a first mapping of the original logical block addresses to original physical block addresses in a first data structure. The system assigns new logical block addresses to be included in a new data chunk. The system creates, in a second data structure based on an order of the assigned new logical block addresses, a mapping of the new logical block addresses to valid original logical block addresses. The system stores, based on the first data structure and the second data structure, a second mapping of the new logical block addresses to the original physical block addresses. | 2021-12-16 |
20210390049 | MEMORY MODULES AND METHODS OF OPERATING SAME - A memory module includes a first memory device, a second memory device, and a processing buffer circuit that is connected to the first memory device and the second memory device (independently of each other) and a host. A processing buffer circuit is provided, which includes a processing circuit and a buffer. The processing circuit processes at least one of data received from the host, data stored in the first memory device, or data stored in the second memory device based on a processing command received from the host. The buffer is configured to store data processed by the processing circuit. The processing buffer circuit is configured to communicate with the host in compliance with a DDR SDRAM standard. | 2021-12-16 |
20210390050 | SHADOW CACHES FOR LEVEL 2 CACHE CONTROLLER - An apparatus including a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache. | 2021-12-16 |
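The shadow-cache bookkeeping above can be sketched with two sets standing in for the shadow L1 main and victim caches. The class and method names here are illustrative assumptions, not taken from the application:

```python
class L2Controller:
    """Toy model of shadow-L1 bookkeeping inside an L2 cache subsystem."""

    def __init__(self):
        self.shadow_l1_main = set()    # lines believed to be in L1 main
        self.shadow_l1_victim = set()  # lines believed to be in L1 victim

    def on_l1_relocation(self, line):
        """L1 reported that `line` moved from L1 main to L1 victim cache."""
        self.shadow_l1_main.discard(line)   # no longer in L1 main
        self.shadow_l1_victim.add(line)     # now tracked in L1 victim

l2 = L2Controller()
l2.shadow_l1_main.add(0x1000)   # line initially shadowed as in L1 main
l2.on_l1_relocation(0x1000)     # L1 relocates the line
```

After the indication, the L2 controller's shadow state mirrors the new L1 placement without re-querying L1.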
20210390051 | HARDWARE COHERENCE FOR MEMORY CONTROLLER - A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component. | 2021-12-16 |
20210390052 | Memory Interface Having Multiple Snoop Processors - A memory interface for interfacing between a memory bus and a cache memory, comprising: a plurality of bus interfaces configured to transfer data between the memory bus and the cache memory; and a plurality of snoop processors configured to receive snoop requests from the memory bus; wherein each snoop processor is associated with a respective bus interface and each snoop processor is configured, on receiving a snoop request, to determine whether the snoop request relates to the bus interface associated with that snoop processor and to process the snoop request in dependence on that determination. | 2021-12-16 |
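The per-interface snoop filtering described above can be sketched as follows; each snoop processor sees every broadcast request but only processes the ones targeting its own bus interface. The dictionary request format and class names are illustrative assumptions:

```python
class SnoopProcessor:
    """Toy snoop processor tied to one bus interface."""

    def __init__(self, bus_interface_id):
        self.bus_interface_id = bus_interface_id
        self.handled = []   # addresses this processor actually serviced

    def on_snoop(self, request):
        # Process only snoops that relate to this processor's interface.
        if request["interface"] == self.bus_interface_id:
            self.handled.append(request["address"])
            return True
        return False

# Four bus interfaces, one snoop processor each.
processors = [SnoopProcessor(i) for i in range(4)]

def broadcast_snoop(request):
    """The bus delivers the snoop to all processors; each filters locally."""
    return [p.on_snoop(request) for p in processors]

results = broadcast_snoop({"interface": 2, "address": 0xBEEF})
```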
20210390053 | Host-Assisted Memory-Side Prefetcher - Methods, apparatuses, and techniques related to a host-assisted memory-side prefetcher are described herein. In general, prefetchers monitor the pattern of memory-address requests by a host device and use the pattern information to determine or predict future memory-address requests and fetch data associated with those predicted requests into a faster memory. In many cases, prefetchers that can make predictions with high performance use appreciable processing and computing resources, power, and cooling. Generally, however, producing a prefetching configuration that the prefetcher uses involves more resources than making predictions. The described host-assisted memory-side prefetcher uses the greater computing resources of the host device to produce at least an updated prefetching configuration. The memory-side prefetcher uses the prefetching configuration to predict the data to prefetch into the faster memory, which allows a higher-performance prefetcher to be implemented in the memory device with a reduced resource burden on the memory device. | 2021-12-16 |
20210390054 | CACHE MANAGEMENT CIRCUITS FOR PREDICTIVE ADJUSTMENT OF CACHE CONTROL POLICIES BASED ON PERSISTENT, HISTORY-BASED CACHE CONTROL INFORMATION - A cache management circuit that includes a predictive adjustment circuit configured to predictively generate cache control information based on a cache hit-miss indicator and the retention ranks of accessed cache lines to improve cache efficiency is disclosed. The predictive adjustment circuit stores the cache control information persistently, independent of whether the data remains in cache memory. The stored cache control information is indicative of prior cache access activity for data from a memory address, which is indicative of the data's “usefulness.” Based on the cache control information, the predictive adjustment circuit controls generation of retention ranks for data in the cache lines when the data is inserted, accessed, and evicted. After the data has been evicted from the cache memory and is later accessed by a subsequent memory request, the persistently stored cache control information corresponding to that memory address increases the information available for determining the usefulness of data. | 2021-12-16 |
20210390055 | STREAMING ENGINE WITH SEPARATELY SELECTABLE ELEMENT AND GROUP DUPLICATION - A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores data elements next to be supplied to functional units for use as operands. An element duplication unit optionally duplicates a data element an instruction-specified number of times. A vector masking unit limits data elements received from the element duplication unit to least significant bits within an instruction-specified vector length. If the vector length is less than a stream head register size, the vector masking unit stores all 0's in excess lanes of the stream head register (group duplication disabled) or stores duplicate copies of the least significant bits in excess lanes of the stream head register. | 2021-12-16 |
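The element-duplication and group-duplication behavior can be sketched as a pure function over lanes. This is a simplified model under assumed parameters (element lists instead of bit lanes; the function name is illustrative):

```python
def stream_head(elements, dup_count, vector_length, lane_count, group_dup):
    """Build stream-head lanes: duplicate elements, then mask or group-duplicate."""
    # Element duplication: repeat each element dup_count times.
    lanes = [e for e in elements for _ in range(dup_count)][:lane_count]
    # Vector masking: keep only the least-significant vector_length lanes.
    lanes = lanes[:vector_length]
    if len(lanes) < lane_count:
        if group_dup:
            # Group duplication enabled: tile the vector across excess lanes.
            reps = -(-lane_count // len(lanes))   # ceiling division
            lanes = (lanes * reps)[:lane_count]
        else:
            # Group duplication disabled: zero-fill the excess lanes.
            lanes += [0] * (lane_count - len(lanes))
    return lanes
```

For example, two elements duplicated twice into an 8-lane register with vector length 4 yield zero-filled excess lanes with group duplication off, and a repeated copy of the vector with it on.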
20210390056 | METHOD AND APPARATUS FOR MANAGING MEMORY IN MEMORY DISAGGREGATION SYSTEM - Disclosed herein is an apparatus for managing disaggregated memory, which is located in a virtual machine in a physical node. The apparatus is configured to select, depending on the proportion of valid pages, direct transfer between remote memory units or indirect transfer via local memory for each of the memory pages of the source remote memory to be migrated, among at least one remote memory unit used by the virtual machine, to transfer the memory pages of the source remote memory to target remote memory based on the direct transfer or the indirect transfer, and to release the source remote memory. | 2021-12-16 |
20210390057 | QUALITY OF SERVICE DIRTY LINE TRACKING - Systems, apparatuses, and methods for generating a measurement of write memory bandwidth are disclosed. A control unit monitors writes to a cache hierarchy. If a write to a cache line is a first time that the cache line is being modified since entering the cache hierarchy, then the control unit increments a write memory bandwidth counter. Otherwise, if the write is to a cache line that has already been modified since entering the cache hierarchy, then the write memory bandwidth counter is not incremented. The first write to a cache line is a proxy for write memory bandwidth since this will eventually cause a write to memory. The control unit uses the value of the write memory bandwidth counter to generate a measurement of the write memory bandwidth. Also, the control unit can maintain multiple counters for different thread classes to calculate the write memory bandwidth per thread class. | 2021-12-16 |
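The first-write counting scheme above can be sketched with a dirty-line set and per-thread-class counters. A minimal sketch, assuming a set suffices to stand in for the dirty-line tracking hardware:

```python
class DirtyLineTracker:
    """Count first-time writes to cache lines, per thread class."""

    def __init__(self):
        self.dirty = set()   # lines modified since entering the hierarchy
        self.counters = {}   # thread class -> first-write count

    def on_write(self, thread_class, line):
        if line not in self.dirty:
            # First modification since entering the hierarchy: this write
            # will eventually reach memory, so count it as write bandwidth.
            self.dirty.add(line)
            self.counters[thread_class] = self.counters.get(thread_class, 0) + 1
        # Writes to an already-dirty line are not counted again.

    def on_evict(self, line):
        # Line left the hierarchy; its next write counts as a first write.
        self.dirty.discard(line)

tracker = DirtyLineTracker()
tracker.on_write("A", 0x10)
tracker.on_write("A", 0x10)   # repeat write: not a first modification
tracker.on_write("B", 0x20)
tracker.on_evict(0x10)
tracker.on_write("A", 0x10)   # counts again after eviction
```

The counter values then serve as the per-class write memory bandwidth measurement.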
20210390058 | DYNAMIC CACHE CONTROL MECHANISM - An apparatus to facilitate dynamic cache control is disclosed. The apparatus includes one or more processors to profile execution characteristics of a graphics workload at a processing resource to generate profile data indicating a quantity of cache hits that occur at a cache memory and apply one or more cache settings to the cache memory based on the profile data. | 2021-12-16 |
20210390059 | Cache Memory Architecture - Various implementations described herein are directed to a device. The device may include a first tier having a processor and a first cache memory that are coupled together via control logic to operate as a computing architecture. The device may include a second tier having a second cache memory that is coupled to the first cache memory. Also, the first tier and the second tier may be integrated together with the computing architecture to operate as a stackable cache memory architecture. | 2021-12-16 |
20210390060 | HIERARCHICAL MEMORY SYSTEMS - Apparatuses, systems, and methods for hierarchical memory systems are described. A hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. An example method includes initiating a read request associated with an address from an input/output device, redirecting the read request to a hierarchical memory component, generating, by the hierarchical memory component, an interrupt message to send to a hypervisor, gathering, at the hypervisor, address register access information from the hierarchical memory component, and determining a physical location of data associated with the read request. | 2021-12-16 |
20210390061 | SEMICONDUCTOR DEVICE AND ARITHMETIC PROCESSING DEVICE - A semiconductor device includes an address translation device configured to identify a plurality of address translation tables which is used for address translation having a plurality of stages; and an adder configured to identify a stage in the address translation when executing the address translation, wherein the address translation device is configured to perform cache control for information of a first address translation table used in a last stage of the address translation when the stage is the final stage. | 2021-12-16 |
20210390062 | MANAGEMENT METHOD OF CACHE FILES IN STORAGE SPACE AND RECORDING DEVICE FOR STORING CACHE FILES - A management method of cache files in storage space, adapted to a storage space storing a plurality of cache files, the management method comprises: forming a cache file status list which records a plurality of file names and a plurality of file status; determining whether a storage condition of the storage space is in a healthy condition; assigning a plurality of corresponding tags to the plurality of file status when the storage condition is not in the healthy condition, and forming a sorted cache file list; and deleting the last file name from the sorted cache file list and the cache file from the storage space corresponding to the file name, wherein the sorted cache file list records the file names which are sorted from a file name of a cache file that should be kept most to another file name of another cache file that should be deleted most. | 2021-12-16 |
20210390063 | Technologies for Secure I/O with Accelerator Devices - Technologies for secure I/O data transfer with an accelerator device include a computing device having a processor and an accelerator. The processor establishes a trusted execution environment. The trusted execution environment may generate an authentication tag based on a memory-mapped I/O transaction, write the authentication tag to a register of the accelerator, and dispatch the transaction to the accelerator. The accelerator performs a cryptographic operation associated with the transaction, generates an authentication tag based on the transaction, and compares the generated authentication tag to the authentication tag received from the trusted execution environment. The accelerator device may initialize an authentication tag in response to a command from the trusted execution environment, transfer data between host memory and accelerator memory, perform a cryptographic operation in response to transferring the data, and update the authentication tag in response to transferring the data. Other embodiments are described and claimed. | 2021-12-16 |
20210390064 | TRACKING MOVEMENTS OF ENROLLED PERIPHERAL DEVICES - A peripheral device is tracked between connections to host devices. A peripheral driver is dynamically configured and associated with a peripheral of a host device. A current association between the peripheral device and a current host device is maintained for purposes of providing the peripheral driver of the peripheral device on the current host device to remotely executing applications. The association is dynamically changed/updated based on a connection between the peripheral device and a given host device. | 2021-12-16 |
20210390065 | MEMORY DEVICE AND METHOD OF OPERATING THE SAME - A memory device includes an input/output circuit configured to receive a status read command from a memory controller, a toggle counter configured to count a number of toggles of a signal received from the memory controller, and a status register configured to store status information of the memory device and configured to output the status information to the input/output circuit. The memory device also includes a status output controller configured to determine whether the number of toggles counted by the toggle counter corresponds to a reference number of toggles and configured to control the status register to transmit the status information to the memory controller through the input/output circuit, in response to the status read command. | 2021-12-16 |
20210390066 | REMOTE MEMORY SELECTION - A multi-path fabric interconnected system with many nodes and many communication paths from a given source node to a given destination node. A memory allocation device on an originating node (local node) requests an allocation of memory from a remote node (i.e., requests a remote allocation). The memory allocation device on the local node selects the remote node based on one or more performance indicators. The local memory allocation device may select the remote node to provide a remote allocation of memory based on one or more of: latency, availability, multi-path bandwidth, data access patterns (both local and remote), fabric congestion, allowed bandwidth limits, maximum latency limits, and, available memory on remote node. | 2021-12-16 |
20210390067 | SEMICONDUCTOR DEVICE - A semiconductor device is configured so that two or more master devices access a slave device via a bus. The semiconductor device includes: a priority generation circuit that generates a priority based on a transfer amount between a specific master device and a specific slave device; and an arbitration circuit that performs an arbitration based on the priority when competition of the accesses occurs. | 2021-12-16 |
20210390068 | DUAL-TREE BACKPLANE - An information handling system may include at least one processor; a first and a second backplane, wherein the first and second backplanes are Peripheral Component Interconnect Express (PCIe) backplanes; and a physical storage resource. The physical storage resource may be coupled to the at least one processor via a first port of the physical storage resource and via the first backplane, and the physical storage resource may be further coupled to the at least one processor via a second port of the physical storage resource and via the second backplane. | 2021-12-16 |
20210390069 | REMOTELY CONTROLLED TECHNICIAN SURROGATE DEVICE - A remote technical support system includes an edge device that operates as a highly secured conduit for a technician to view, access, and control a target device via a secure protocol over a connection medium between the edge device and the target device. The edge device's architecture allows it to selectively present numerous peripheral devices to the target device. The architectural components of the edge device can be controlled by a technician through a secure connection with a trusted server, which allows authorized technicians to access the edge device. The edge device also relays technician commands to and obtains diagnostic information from the target device and communicates feedback to the technician over the secure connection. The commands may be relayed to the target via the one or more selectively connected USB peripherals. | 2021-12-16 |
20210390070 | UNIVERSAL INDUSTRIAL I/O INTERFACE BRIDGE - A universal industrial I/O interface bridge is provided. The universal industrial I/O interface bridge may be placed between a host and I/O interface cards to translate and manage electronic communications from these and other sources. Embodiments of the application may include (1) an improved hardware module, (2) an I/O discovery process to dynamically reprogram the universal industrial I/O interface bridge depending on the attached I/O card, (3) an abstraction process to illustrate the universal industrial I/O interface bridge and the physical I/O interfaces, (4) an alert plane within the universal industrial I/O interface bridge to respond to I/O alert pins, and (5) a secure distribution process for a firmware update of the universal industrial I/O interface bridge. | 2021-12-16 |
20210390071 | DRAM COMMAND STREAK MANAGEMENT - A memory controller includes a command queue and an arbiter for selecting entries from the command queue for transmission to a DRAM. The arbiter transacts streaks of consecutive read commands and streaks of consecutive write commands. The arbiter has a current mode indicating the type of commands currently being transacted, and a cross mode indicating the other type. The arbiter is operable to monitor commands in the command queue for the current mode and the cross mode, and in response to designated conditions, send at least one cross-mode command to the memory interface queue while continuing to operate in the current mode. In response to an end streak condition, the arbiter swaps the current mode and the cross mode, and transacts the cross-mode command. | 2021-12-16 |
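The current-mode/cross-mode arbitration above can be sketched as a small state machine. This is a simplified model: the command-dictionary format, the `cross_urgent` trigger standing in for the "designated conditions," and the method names are all illustrative assumptions:

```python
class StreakArbiter:
    """Toy DRAM arbiter: streaks of one command type, with opportunistic
    cross-mode sends and a mode swap on an end-of-streak condition."""

    def __init__(self):
        self.current_mode = "read"   # type of commands currently transacted

    def cross_mode(self):
        return "write" if self.current_mode == "read" else "read"

    def pick(self, queue, cross_urgent=False, end_streak=False):
        if end_streak:
            # End-streak condition: swap current mode and cross mode.
            self.current_mode = self.cross_mode()
        # Designated conditions may send a cross-mode command while the
        # arbiter continues to operate in the current mode.
        want = self.cross_mode() if cross_urgent and not end_streak \
            else self.current_mode
        for i, cmd in enumerate(queue):
            if cmd["type"] == want:
                return queue.pop(i)
        return None

arb = StreakArbiter()
q = [{"type": "write", "id": 1}, {"type": "read", "id": 2}]
first = arb.pick(q)                      # continues the read streak
second = arb.pick(q, cross_urgent=True)  # one cross-mode write, mode kept
mode_kept = arb.current_mode
arb.pick([], end_streak=True)            # end of streak: modes swap
```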
20210390072 | ROTATABLE PORT UNITS - Example of systems with rotatable port units are described. In an example, a system includes a control unit, a first port unit with a first set of ports coupled to the control unit, and a second port unit with a second set of ports coupled to the control unit. The second port unit is mounted on the first port unit and is rotatable with respect to the first port unit. The control unit is to enable a subset of ports from the second set of ports and the first set of ports based on a rotational position of the second port unit with respect to the first port unit. | 2021-12-16 |
20210390073 | ELECTRONIC DEVICE, INFORMATION PROCESSING SYSTEM AND METHOD - According to one embodiment, in a first state, a control circuit determines, based on first information and second information, information on a request that includes a setting of a transmission circuit of a host to be set as an initial setting in a second state. The first state is a state of communicating with a host at a first communication speed conforming to a first specification. The second state is a state of communicating with the host at a second communication speed conforming to a second specification. The second communication speed is different from the first communication speed. The first information is information on a request of a setting of the transmission circuit of the host. The second information is information on a quality of a signal received by a reception circuit, which has been transmitted from the transmission circuit of the host. | 2021-12-16 |
20210390074 | INTERFACE DEVICE AND METHOD OF OPERATING THE SAME - A method of operating an interface device including a first elastic buffer is provided. The method of operating the interface device includes initializing one or more parameters associated with clock signals for a data transmission or reception of the interface device, checking whether the interface device is in a predetermined mode for adjusting the one or more parameters, adjusting, upon determination that the interface device is in the predetermined mode, the one or more parameters associated with the clock signals of the interface device based on how much of the first buffer or the second buffer is filled with data, and performing the data transmission or reception based on the adjusted one or more parameters associated with the clock signals. | 2021-12-16 |
20210390075 | EFFICIENT USAGE OF ONE-SIDED RDMA FOR LINEAR PROBING - Systems and methods for reducing latency of probing operations of remotely located linear hash tables are described herein. In an embodiment, a system receives a request to perform a probing operation on a remotely located linear hash table based on a key value. Prior to performing the probing operation, the system dynamically predicts a number of slots for a single read of the linear hash table to minimize total cost for an average probing operation. The system determines a hash value based on the key value and determines a slot of the linear hash table to which the hash value corresponds. After predicting the number of slots, the system issues an RDMA request to perform a read of the predicted number of slots from the linear hash table starting at the slot to which the hash value corresponds. | 2021-12-16 |
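The multi-slot probing read can be sketched as follows; each loop iteration models one RDMA read of `read_size` contiguous slots starting at the key's home slot. The slot layout (`(key, value)` tuples, `None` for empty) and the `key % num_slots` stand-in for the hash function are illustrative assumptions:

```python
def rdma_probe(table, key, read_size):
    """Probe a remote linear hash table, reading read_size slots per RDMA read."""
    num_slots = len(table)
    start = key % num_slots   # stand-in for the real hash function
    offset = 0
    while offset < num_slots:
        # One RDMA read fetches a contiguous (wrapping) window of slots.
        window = [table[(start + offset + i) % num_slots]
                  for i in range(read_size)]
        for slot in window:
            if slot is None:
                return None          # empty slot ends the probe sequence
            if slot[0] == key:
                return slot[1]       # key found in this window
        offset += read_size          # issue another read further along
    return None

table = [None] * 8
table[1] = (1, "a")   # key 1 hashes to slot 1
table[2] = (9, "b")   # key 9 also hashes to slot 1, probed into slot 2

value = rdma_probe(table, 9, read_size=4)
missing = rdma_probe(table, 17, read_size=4)
```

Predicting `read_size` trades larger single reads against the expected number of round trips, which is the cost the described system minimizes.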
20210390076 | APPARATUSES AND METHODS FOR MAP REDUCE - The present disclosure relates to a method and an apparatus for map reduce. In some embodiments, an exemplary processing unit includes: a 2-dimensional (2D) processing element (PE) array comprising a plurality of PEs, each PE comprising a first input and a second input, the first inputs of the PEs in a linear array in a first dimension of the PE array being connected in series and the second inputs of the PEs in a linear array in a second dimension of the PE array being connected in parallel, each PE being configured to perform an operation on data from the first input or second input; and a plurality of reduce tree units, each reduce tree unit being coupled with the PEs in a linear array in the first dimension or the second dimension of the PE array and configured to perform a first reduction operation. | 2021-12-16 |
20210390077 | SYSTEMS AND TOOLS FOR DATA ARCHIVING - Systems and methods to select an object instance from a database storage to archive to an external storage based on an archiving configuration and attribute values of the object instance, transmit the selection to an application associated with the object instance, determine, based on a response received from the application, to archive the object instance, mark the object instance as ready for archiving, identify the object instance as ready for archiving, convert the object instance to an object notation format, transmit the converted object instance to a cloud application for storage in an external storage, in response to a determination that the storage in the external storage is successful, create an index object in the database storage including a subset of fields of the object instance and a link to the converted object instance stored in the external storage, and mark the object instance in the database storage as archived. | 2021-12-16 |
20210390078 | METHODS, DEVICES AND SYSTEMS FOR MIGRATING AN ACTIVE FILESYSTEM - A computer-implemented method of migrating metadata from a donor filesystem D having a rooted tree structure to a beneficiary filesystem B while processing commands that operate on the metadata may comprise, while a command to operate on the metadata is not received, replicating the donor filesystem D at the beneficiary filesystem B by sequentially copying metadata of nodes of the donor filesystem D to the beneficiary filesystem B. When a command is received to operate on the metadata, the command may be executed at both the donor filesystem D and the beneficiary filesystem B when all arguments of the command are present in both the donor filesystem D and the beneficiary filesystem B. When none of the arguments are present in the beneficiary filesystem B, the command may be executed at the donor filesystem D only. When only some of the arguments are present in the beneficiary filesystem B, the command may be enqueued at least until all arguments of the command are present in the beneficiary filesystem B. When all arguments thereof are present in the beneficiary filesystem B, the enqueued commands may be dequeued and scheduled for execution. | 2021-12-16 |
20210390079 | SEARCH CAPACITY FOR LOCAL AND/OR REMOTE DOCKER SUB-SYSTEMS - A method is used in searching a docker system to understand the objects therein. An application executing on a computer system creates a snapshot of the docker system. The application receives search criteria for objects in the docker system. The application generates a recursive search based on the search criteria. The application applies the recursive search to the snapshot of the docker system. The application displays results of the recursive search. | 2021-12-16 |
20210390080 | ACTIONS BASED ON FILE TAGGING IN A DISTRIBUTED FILE SERVER VIRTUAL MACHINE (FSVM) ENVIRONMENT - An example system includes a plurality of FSVMs executing at two or more computing nodes configured to cooperatively manage a distributed VFS and a system manager configured to provide a tag based on a pattern and an action associated with the tag to the plurality of FSVMs. The plurality of FSVMs are further configured to scan files of the VFS to tag files including the pattern and tag and to take the action with respect to files in the VFS having the tag. | 2021-12-16 |
20210390081 | PER ROW DATABASE RESYNCHRONIZATION - A method of controlling resynchronization of a source database and a target database may comprise detecting that a connection between the source database and the target database has been restored. Based on the detecting, the method may also comprise identifying a first edit flag for a first row in a first table on the source database. Based on the identifying, the method may also comprise sending the first row from the source database to the target database. Based on the sending, the method may also comprise clearing the first edit flag for the first row. | 2021-12-16 |
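The edit-flag resynchronization loop above can be sketched in a few lines; rows carry a per-row `edited` flag that is set while the connection is down and cleared after the row is pushed. The dictionary row format is an illustrative assumption:

```python
def resynchronize(source_rows, target_rows):
    """On reconnect: send each flagged row to the target, then clear its flag.

    source_rows: {pk: {"data": ..., "edited": bool}}
    target_rows: {pk: data}
    """
    for pk, row in source_rows.items():
        if row["edited"]:                  # flagged while disconnected
            target_rows[pk] = row["data"]  # send the row to the target
            row["edited"] = False          # clear the flag after sending
    return target_rows

source = {
    1: {"data": "alice", "edited": True},   # changed while offline
    2: {"data": "bob", "edited": False},    # already in sync
}
target = {2: "bob"}
resynchronize(source, target)
```

Only flagged rows cross the wire, which is the point of tracking edits per row rather than re-shipping whole tables.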
20210390082 | DISTRIBUTED FILE SYSTEM AND METHOD FOR ACCESSING A FILE IN SUCH A SYSTEM - An aspect of the invention relates to a method for a plurality of clients to access a file in a distributed file system, the file being replicated on at least one other server, the method comprising the steps of: | 2021-12-16 |
20210390083 | SHARE REPLICATION BETWEEN REMOTE DEPLOYMENTS - Provided herein are systems and methods for an efficient method of replicating share objects to remote deployments. For example, the method may comprise modifying a share object of a first account of a data exchange into a global object wherein the share object includes grant metadata indicating share grants to a set of objects of a database. The method may further comprise creating, in a second account of the data exchange, a local replica of the share object on the remote deployment based on the global object, wherein the second account is located in a remote deployment. The set of objects of the database may be replicated to a local database replica on the remote deployment and the share grants may be replicated to the local replica of the share object. | 2021-12-16 |
20210390084 | Computer-Based Systems and Methods for Risk Detection, Visualization, and Resolution Using Modular Chainable Algorithms - Computer-based systems and methods for risk/reward detection, visualization, and resolution using modular chainable algorithms are provided. The system allows for computer-based modeling of large data sets with improving processing speed and utilizing fewer computational resources. The modular chainable algorithms included embedded program code executable by a processor for performing a data modeling or analytic function on source data, visualization code for visualizing output of the program code, and workflow code for automatically performing one or more actions relating to the data modeling or analytic function. | 2021-12-16 |
20210390085 | SYSTEMS AND METHODS OF DATA MIGRATION IN MULTI-LAYER MODEL-DRIVEN APPLICATIONS - Systems and methods for data migration in multi-layer model-driven applications are provided. Traditional systems and methods simply provide comparison-based migration approaches, and thus face severe challenges in the case of model-driven applications, where continuous capturing of transformations in model changes is required. Embodiments of the proposed disclosure provide a changelog-based data migration methodology by modelling a model-driven application conceptual model; generating a plurality of optimized data models from the modelling; extracting, from each of the plurality of optimized data models, at least one changelog capturing one or more model changes and transformations in each of the plurality of optimized data models; and executing the data migration using each changelog. | 2021-12-16 |
20210390086 | METHOD AND SYSTEM FOR LEXICAL DATA PROCESSING - There is disclosed a method and system to operate a software application entirely based on a unitary lexicon data structure (LDS) record comprising a plurality of data field definition blocks stored in memory, with one LDS record for each lexicon term. The LDS is used to develop computerized lexicons and deploy them for use to operate a lexical application with all data displayed for viewing and input by the user on a single screen to which all desired data items come, rather than the user navigating to fields statically located on a multitude of screens. Each LDS record contains a whole set of data in memory, with data duplicated across LDS records in order to bypass the need for the application to interoperate with a database to input and display related data. There is also a graphical icon of a unitary format into which all data is input and displayed. Input data items are related to one another in the icon, regardless of whether a relational database is configured to interoperate with the system. | 2021-12-16 |
20210390087 | APPLICATION SUGGESTION FEATURES - This application relates to features for a mobile device that allow the mobile device to assign utility values to applications and thereafter suggest applications for a user to execute. The suggested application can be derived from a list of applications that have been assigned a utility by software in the mobile device. The utility assignment of the individual applications from the list of applications can be performed based on the occurrence of an event, an environmental change, or a period of frequent application usage. A feedback mechanism is provided in some embodiments for more accurately assigning a utility to particular applications. The feedback mechanism can track what a user does during a period of suggestion for certain applications and thereafter modify the utility of applications based on what applications a user selects during the period of suggestion. | 2021-12-16 |
20210390088 | DIGITAL PROCESSING SYSTEMS AND METHODS FOR AUTOMATIC APPLICATION OF SUB-BOARD TEMPLATES IN COLLABORATIVE WORK SYSTEMS - Systems, methods, and computer-readable media for representing data via a multi-structured table are disclosed. The systems and methods may involve maintaining a main table having a first structure and containing a plurality of rows; receiving a first electronic request for establishment of a first sub-table associated with the main table, wherein the electronic request includes column heading definitions and wherein the column heading definitions constitute a second structure; storing the second structure in memory as a default sub-table structure; associating the first sub-table with a first row in the main table; receiving a second electronic request for association of a second sub-table with a second row of the main table; performing a lookup of the default sub-table structure following receipt of the second electronic request; applying the default sub-table structure to the second sub-table. | 2021-12-16 |
20210390089 | CODE DICTIONARY GENERATION BASED ON NON-BLOCKING OPERATIONS - Techniques related to code dictionary generation based on non-blocking operations are disclosed. In some embodiments, a column of tokens includes a first token and a second token that are stored in separate rows. The column of tokens is correlated with a set of row identifiers including a first row identifier and a second row identifier that is different from the first row identifier. Correlating the column of tokens with the set of row identifiers involves: storing a correlation between the first token and the first row identifier, storing a correlation between the second token and the second row identifier if the first token and the second token have different values, and storing a correlation between the second token and the first row identifier if the first token and the second token have identical values. After correlating the column of tokens with the set of row identifiers, duplicate correlations are removed. | 2021-12-16 |
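The correlation-then-dedupe flow above can be sketched as follows: every token row gets a correlation (reusing the row id of an earlier identical token), and duplicates are removed only at the end, so no step in the loop has to block on earlier rows. The function name and list-based representation are illustrative assumptions:

```python
def build_code_dictionary(tokens):
    """Correlate each token with a row id, then remove duplicate correlations."""
    correlations = []   # (token, row_id) pairs, duplicates allowed for now
    first_seen = {}     # token value -> row id of its first occurrence
    for row_id, token in enumerate(tokens):
        if token in first_seen:
            # Identical values: correlate with the earlier row's id.
            correlations.append((token, first_seen[token]))
        else:
            # Different value: correlate with this row's own id.
            first_seen[token] = row_id
            correlations.append((token, row_id))
    # Non-blocking accumulation can emit duplicates; dedupe afterwards.
    return sorted(set(correlations), key=lambda pair: pair[1])

dictionary = build_code_dictionary(["a", "b", "a"])
```

Deferring the deduplication is what keeps the per-row operations non-blocking: rows 0 and 2 both emit `("a", 0)`, and the duplicate is dropped in a single pass at the end.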
20210390090 | AUTOMATIC CREATION AND SYNCHRONIZATION OF GRAPH DATABASE OBJECTS - A request is received to create a graph database from one or more relational databases. For each relational database, data objects in the relational database are identified. For each data object, a graph data object corresponding to the data object is created. The graph data object is linked to the data object. A set of associated data objects in the relational database are determined, and for each associated data object, an associated graph data object is created if a graph data object corresponding to the data object does not exist. For each created graph data object, a graph data relation object is created that represents a relationship between the graph data object and the associated graph data object. Created graph data objects, associated graph data objects, and graph data relation objects are stored in the graph database. The graph database is provided to one or more applications. | 2021-12-16 |
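The creation flow above can be modeled compactly. In this sketch, a dictionary describes each relational table and plain tuples stand in for graph data relation objects; both are hypothetical simplifications of the filing's objects:

```python
def create_graph(tables):
    """tables: {name: {"rows": [...], "fks": {column: referenced table}}}.
    Create one graph data object per table, create associated graph data
    objects only when they do not already exist, and record a relation
    object for every foreign-key association."""
    graph_objects = {}   # table name -> graph data object (row list here)
    relations = []       # (source, column, target) relation objects
    for name, spec in tables.items():
        graph_objects.setdefault(name, list(spec["rows"]))
        for column, target in spec.get("fks", {}).items():
            if target not in graph_objects:        # associated object missing
                graph_objects[target] = list(tables[target]["rows"])
            relations.append((name, column, target))
    return graph_objects, relations
```

The `setdefault`/membership checks mirror the abstract's "created if a graph data object corresponding to the data object does not exist" condition.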
20210390091 | INTERACTIVE CONTINUOUS IN-DEVICE TRANSACTION PROCESSING USING KEY-VALUE (KV) SOLID STATE DRIVES (SSDS) - Various aspects include an interactive continuous in-device KV transaction processing system and method. The system includes a host device and a KV-SSD. The KV-SSD includes a command handler module to receive and process command packets from the host device, to identify KV input/output (I/O) requests associated with a KV transaction, and to prepare a per-transaction index structure. The method includes receiving a command packet from a host device, and determining, by the command handler module, whether a transaction tag associated with the KV transaction is embedded in the command packet. Based on determining that the transaction tag is not embedded in the command packet, the method includes processing one or more KV I/O requests using a main KV index structure. Based on determining that the transaction tag is embedded in the command packet, the method includes individually processing the one or more KV I/O requests using a per-transaction index structure. | 2021-12-16 |
20210390092 | Model File Management Method and Terminal Device - A model file management method includes receiving, by a terminal device, a storage address of a target model file package from a server, and obtaining, by the terminal device, the target model file package based on the storage address, where the target model file package is based on a parameter of a model file package locally stored in the terminal device and a parameter of a model file package managed by the server. In the artificial intelligence (AI) field, an application may implement a specific function by using an AI model file. Decoupling an application from its AI model file allows the terminal device to perform centralized management of general model files. | 2021-12-16 |
20210390093 | BLOCKCHAIN-BASED RECORDING AND QUERYING OPERATIONS - Implementations of this specification provide blockchain-based recording and querying methods and apparatuses. An example method includes operations performed by an access gateway, including receiving, from a first service system, user data including a user identifier of a user; transmitting, to an identifier hash system, a first hash request for the user identifier; receiving, from the identifier hash system, a hash digest of the user identifier; replacing the user identifier in the user data with the hash digest of the user identifier, and packaging the user data into a storage transaction; transmitting, to a blockchain, the storage transaction; receiving, from the blockchain, a result of the storage transaction having been performed by a smart contract published by the first service system on the blockchain; and providing, to the first service system, the result of the storage transaction. | 2021-12-16 |
20210390094 | TASK SCHEDULING AND QUERYING IN DATABASE SYSTEMS - Systems, methods, and devices for executing a task on database data in response to a trigger event are disclosed. A method includes executing a transaction on a table comprising database data, wherein executing the transaction comprises generating a new table version. The method includes, in response to the transaction being fully executed, generating a change tracking entry comprising an indication of one or more modifications made to the table by the transaction and storing the change tracking entry in a change tracking stream. The method includes executing a task on the new table version in response to a trigger event. | 2021-12-16 |
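The execute-then-track-then-trigger sequence can be illustrated with plain Python lists standing in for table versions and the change tracking stream; all names here are illustrative, not the filing's API:

```python
def execute_transaction(table_version, inserts, change_stream, tasks):
    """Produce a new table version, append a change tracking entry to
    the stream once the transaction is fully applied, and then fire
    tasks that treat the new entry as their trigger event."""
    new_version = list(table_version) + list(inserts)  # old version kept
    entry = {"added": list(inserts), "version": len(change_stream) + 1}
    change_stream.append(entry)       # change tracking stream
    for task in tasks:                # trigger event: new stream entry
        task(new_version, entry)
    return new_version
```

Note the ordering: the tracking entry is written only after the transaction is fully applied, and tasks run against the new table version, matching the abstract.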
20210390095 | NOTIFYING MODIFICATIONS TO EXTERNAL TABLES IN DATABASE SYSTEMS - The subject technology receives a notification that a modification has been made to an external table, the modification comprising inserting at least one row of new data to the external table, the at least one row corresponding to a first micro-partition that includes a first portion of data from the external table prior to the inserting. The subject technology, in response to the notification indicating the modification to the external table, generates a new micro-partition different from the first micro-partition, the new micro-partition including the inserted at least one row of new data and the first portion of data from the external table. The subject technology generates a refreshed materialized view based at least in part on the generated new micro-partition such that the refreshed materialized view comprises a representation of the external table after the modification has been made. | 2021-12-16 |
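A toy model of the refresh, with Python lists standing in for micro-partitions; the function below is a hypothetical simplification that only covers the single-partition insert case described in the abstract:

```python
def refresh_on_insert(partitions, new_rows):
    """On an insert notification, build a new micro-partition holding
    the affected partition's existing rows plus the inserted rows (the
    old partition object is never mutated), then rebuild the
    materialized view from the resulting partition list."""
    affected = partitions[-1]                 # partition receiving rows
    new_partition = affected + list(new_rows)
    refreshed_partitions = partitions[:-1] + [new_partition]
    refreshed_view = [row for part in refreshed_partitions for row in part]
    return refreshed_partitions, refreshed_view
```

Because micro-partitions are immutable, the insert is represented by replacing the old partition with a new one rather than editing it in place.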
20210390096 | METHOD AND SYSTEM FOR DATA CONVERSATIONS - A method for querying and analyzing datasets via natural language processing (NLP) with context propagation is disclosed. In one embodiment, a computer-implemented method includes receiving, by a user interface, at least one of an utterance or a structured query language statement. The method includes identifying zero or more previous data conversation steps indicated by the utterance. The method includes determining an effective schema targeted by the utterance. The method includes generating, based on the utterance and the effective schema, an intermediate structured query language statement that is representative of the utterance. The method includes generating an executable structured query language statement based on the intermediate structured query language statement and zero or more previous structured query language statements. The method includes executing the executable structured query language statement for the data query engine schema. The method includes communicating, via the user interface, a result set and metadata. | 2021-12-16 |
20210390097 | DETECTING RELATIONSHIPS ACROSS DATA COLUMNS - There is a need for more effective and efficient detection of cross-data-column relationships. This need can be addressed by, for example, techniques for detecting cross-data-column data relationships that utilize at least one of feature-based similarity models and deep-learning-based similarity models. The cross-data-column data relationships may be displayed to an end-user using a cross-column relationship detection user interface. | 2021-12-16 |
20210390098 | QUERY ENGINE IMPLEMENTING AUXILIARY COMMANDS VIA COMPUTERIZED TOOLS TO DEPLOY PREDICTIVE DATA MODELS IN-SITU IN A NETWORKED COMPUTING PLATFORM - Various embodiments relate generally to data science and data analysis, computer software and systems, and network communications to interface among repositories of disparate datasets and computing machine-based entities configured to access datasets, and, more specifically, to a computing and data storage platform configured to provide one or more computerized tools to deploy predictive data models based on in-situ auxiliary query commands implemented in a query, and configured to facilitate development and management of data projects by providing an interactive, project-centric workspace interface coupled to collaborative computing devices and user accounts. For example, a method may include activating a query engine, implementing a subset of auxiliary instructions, at least one auxiliary instruction being configured to access model data, receiving a query that causes the query engine to access the model data, receiving serialized model data, performing a function associated with the serialized model data, and generating resultant data. | 2021-12-16 |
20210390099 | METHOD AND SYSTEM FOR ADVANCED DATA CONVERSATIONS - A method for querying and analyzing datasets via natural language processing (NLP) that can maintain context is disclosed. According to one embodiment, a computer-implemented method includes receiving, by a user interface, at least one of an utterance or a structured query language statement. The method includes identifying zero or more previous data conversation steps indicated by the utterance. The method includes determining, based on the utterance and the zero or more previous data conversation steps, an effective schema targeted by the utterance. The method includes generating, based on the utterance and the effective schema, an intermediate structured query language statement that is representative of the utterance. The method includes generating an executable structured query language statement based on the intermediate structured query language statement. The method includes executing the executable structured query language statement for the data query engine schema. The method includes communicating a result set and metadata. | 2021-12-16 |
20210390100 | QUERY GENERATION FROM A NATURAL LANGUAGE INPUT - A query generation system receives, from a first device, a first input and a first project identifier and receives, from a second device, a second input and a second project identifier. The first and second inputs are the same and are in a natural language format that is not compatible with a downstream database management system. The system generates, based on the first input, a first database query. The system generates, based on the second input, a second database query. The first and second database queries are compatible with the downstream database management system. The system receives a first response to the first database query and a second response to the second database query from the downstream database management system. The system transmits the first response to the first device and the second response to the second device. | 2021-12-16 |
20210390101 | QUERY GENERATION FROM A NATURAL LANGUAGE INPUT - A query generation system receives, from a first device, a first input and a first project identifier and receives, from a second device, a second input and a second project identifier. The first and second inputs are the same and are in a natural language format that is not compatible with a downstream database management system. The system generates, based on the first input, a first database query. The system generates, based on the second input, a second database query. The first and second database queries are compatible with the downstream database management system. The system receives a first response to the first database query and a second response to the second database query from the downstream database management system. The system transmits the first response to the first device and the second response to the second device. | 2021-12-16 |
20210390102 | QUERY GENERATION FROM A NATURAL LANGUAGE INPUT - A query generation system receives, from a first device, a first input and a first project identifier and receives, from a second device, a second input and a second project identifier. The first and second inputs are the same and are in a natural language format that is not compatible with a downstream database management system. The system generates, based on the first input, a first database query. The system generates, based on the second input, a second database query. The first and second database queries are compatible with the downstream database management system. The system receives a first response to the first database query and a second response to the second database query from the downstream database management system. The system transmits the first response to the first device and the second response to the second device. | 2021-12-16 |
20210390103 | FEDERATED SEARCH OF HETEROGENEOUS DATA SOURCES - A method enables federated search of a plurality of heterogeneous external data sources from a data analytics tool. With a mapping of one or more identified data connectors, a client search call, as formulated in a first data model, is translated to one or more external search calls formulated in one or more alternate data models of the heterogeneous external data sources. With the mappings of the one or more identified connectors, each response to the one or more external search calls is reformulated from the one or more alternate data models to the first data model to yield one or more client search result objects. The client search result objects are merged to a data warehouse. The client search call, as formulated in the first data model, is executed against the data warehouse. Results of the executed client search call are sent to the data analytics tool. | 2021-12-16 |
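The translate/search/reformulate/merge loop can be sketched as below. The connector triple of callables is an assumption introduced for illustration; a list stands in for the data warehouse:

```python
def federated_search(client_call, connectors):
    """connectors: {source name: (translate, search, reformulate)}.
    Translate the client call (first data model) into each source's
    alternate data model, reformulate every response back into the
    first data model, and merge the results into the warehouse."""
    warehouse = []
    for translate, search, reformulate in connectors.values():
        external_call = translate(client_call)       # first -> alternate
        for response in search(external_call):
            warehouse.append(reformulate(response))  # alternate -> first
    return warehouse
```

In the filing, the original client call is then executed against the merged warehouse; here the merged list is simply returned.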
20210390104 | Optimal Admission Control For Caches - The technology is directed to cache admission control. One or more processors may categorize access requests for data items in a cache storage into a plurality of categories and collect information on the access requests over time. Based on the collected information, a utility value for caching data items in each category of the plurality of categories may be determined. Newly requested data items may be admitted into the cache storage in an order according to the corresponding utility values of their respective categories. | 2021-12-16 |
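The utility-ordered admission policy can be sketched as follows. The specific utility function (hits per byte observed so far) is an assumption; the abstract only says a utility value is determined per category from collected access information:

```python
def admit(candidates, stats, capacity):
    """Admit newly requested items in order of their category's utility
    until the cache capacity is exhausted.
    candidates: [(item, category, size)]
    stats: {category: (observed hits, observed bytes)}"""
    def utility(category):
        hits, size = stats.get(category, (0, 1))
        return hits / max(size, 1)       # assumed utility: hits per byte
    ranked = sorted(candidates, key=lambda c: utility(c[1]), reverse=True)
    admitted, used = [], 0
    for item, category, size in ranked:
        if used + size <= capacity:
            admitted.append(item)
            used += size
    return admitted
```

Since `sorted` is stable, candidates in equally useful categories keep their request order.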
20210390105 | Caching Techniques for a Database Change Stream - Techniques are disclosed relating to caching techniques for processing a database change stream. A computer system may receive change records from a change stream that includes a plurality of records indicating changes to a database table. The change stream may include change records for multiple shards and be accessible by providing one or more position indicators for one or more of the multiple shards to request one or more change records and an updated position indicator. The system may store, for changes to a set of one or more shards, one or more cache entries that include respective groups of change records. The system may request a portion of the change stream by providing a received position indicator. The system may provide one or more cached change records from a cache entry that matches the provided position indicator. | 2021-12-16 |
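The position-indicator-keyed cache can be modeled with a small class; the class and method names are illustrative, not from the filing:

```python
class ChangeStreamCache:
    """Cache entries keyed by the position indicator that requests
    them; each entry holds a group of change records plus the updated
    position indicator to use for the next request."""
    def __init__(self):
        self._entries = {}
    def put(self, position, records, next_position):
        self._entries[position] = (list(records), next_position)
    def get(self, position):
        # A matching position indicator serves the cached records; a
        # miss (None) means the change stream itself must be read.
        return self._entries.get(position)
```

Chaining the returned `next_position` into the following `get` call models walking the stream shard by shard.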
20210390106 | AUTOMATED ANNOTATION SYSTEM FOR ELECTRONIC LOGGING DEVICES - An automated annotation system to automatically designate annotations to records within a report, which may perform operations that include: designating an annotation to a location, the annotation comprising a text string; identifying a record of the location within a report; selecting the annotation based on the record of the location within the report; and applying the text string of the annotation to the record within the report, according to certain example embodiments. | 2021-12-16 |
20210390107 | CONTENT METADATA SERVICE FOR LIFECYCLE MANAGEMENT OF DIGITAL CONTENT - Methods, systems, and computer-readable storage media for receiving, by a content transfer service of a content management system and from a source system, a first content file comprising first content and first content metadata, the first content metadata being stored in a first format, processing the first content file using a set of metadata retrieval definitions to extract file-type-specific metadata from the first content metadata and map at least a portion of the file-type-specific metadata to a first uniform content metadata file having a second format that is different from the first format, each metadata retrieval definition comprising a computer-executable, declarative procedure, and transferring, by the content transfer service, the first content file and the first uniform content metadata file to a target system, the target system consuming the content at least partially based on the first uniform content metadata file. | 2021-12-16 |
20210390108 | SMART CONTENT RECOMMENDATIONS FOR CONTENT AUTHORS - Techniques describes herein include using software tools and feature vector comparisons to analyze and recommend images, text content, and other relevant media content from a content repository. A digital content recommendation tool may communicate with a number of back-end services and content repositories to analyze text and/or visual input, extract keywords or topics from the input, classify and tag the input content, and store the classified/tagged content in one or more content repositories. Input text and/or input images may be converted into vectors within a multi-dimensional vector space, and compared to a plurality of feature vectors within a vector space to identify relevant content items within a content repository. Such comparisons may include exhaustive deep searches and/or efficient tag-based filtered searches. Relevant content items (e.g., images, audio and/or video clips, links to related articles, etc.), may be retrieved and presented to a content author and embedded within original authored content. | 2021-12-16 |
20210390109 | EXTRACTING AND POSTING DATA FROM AN UNSTRUCTURED DATA FILE - Disclosed herein are system, method, and computer program product embodiments for extracting and posting data from an unstructured data file to a database table. In an embodiment, a server receives a request to extract and post data from an unstructured data file. The server extracts the data from the unstructured data file. The server identifies a set of columns from the structured format of the extracted data. Each column of the set of columns corresponds with a set of data elements from the extracted data. The server identifies a pattern of a set of possible patterns corresponding with each column of the set of columns. Furthermore, the server maps each column of the set of columns with a database column. The server stores each set of data elements of each respective column in the respective database column. | 2021-12-16 |
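The per-column pattern identification step can be sketched with regular expressions. The pattern set below is purely illustrative; the filing does not specify which patterns are used:

```python
import re

# Illustrative pattern set; the actual patterns are not given in the
# abstract. Each column is matched against these in order.
PATTERNS = [
    ("integer", re.compile(r"^-?\d+$")),
    ("date", re.compile(r"^\d{4}-\d{2}-\d{2}$")),
]

def identify_pattern(column_values):
    """Return the first pattern that matches every data element in the
    column, falling back to generic text when none does."""
    for name, rx in PATTERNS:
        if all(rx.match(v) for v in column_values):
            return name
    return "text"
```

The identified pattern name could then drive the mapping of each extracted column onto a compatible database column type.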
20210390110 | DATASET SIMPLIFICATION OF N-DIMENSIONAL SIGNALS CAPTURED FOR ASSET TRACKING - Methods, systems, and devices for dataset simplification of N-dimensional signals captured for asset tracking are provided. An example method involves obtaining raw data from a data source onboard an asset and determining whether obtainment of the raw data results in satisfaction of a data logging trigger. The method further involves, when the data logging trigger is satisfied, performing a dataset simplification algorithm on a target set of data within the raw data to generate a simplified set of data, wherein the target set of data contains a time-variant N-dimensional signal, N>=1, and the dataset simplification algorithm is generalized for all N>=1. The method further involves transmitting the simplified set of data to a server. | 2021-12-16 |
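A minimal stand-in for a simplification algorithm generalized over all N >= 1: drop a sample when it lies within a Euclidean tolerance of the last retained sample. The abstract does not name its algorithm, so this distance-threshold scheme is an assumption chosen only because it works unchanged for any dimension:

```python
def simplify(points, epsilon):
    """Simplify a time-variant N-dimensional signal (tuples of any
    fixed length N >= 1): keep a sample only if it is farther than
    epsilon (Euclidean distance) from the last kept sample; the first
    and last samples are always retained."""
    if not points:
        return []
    kept = [points[0]]
    for p in points[1:-1]:
        last = kept[-1]
        dist = sum((a - b) ** 2 for a, b in zip(p, last)) ** 0.5
        if dist > epsilon:
            kept.append(p)
    if len(points) > 1:
        kept.append(points[-1])       # endpoint always survives
    return kept
```

Because the distance is computed with `zip`, the same function handles 1-D scalar traces and higher-dimensional telemetry alike.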
20210390111 | MACHINE ASSISTED DATA AGGREGATION - Systems and methods for use in assisting a user in data aggregation tasks. A system determines the type of data needed by the user to complete the data aggregation task and, based on an indication of the data needed, queries multiple data sources. The results from the multiple data sources are then collated and aligned as necessary. Inconsistencies in the data are resolved or flagged to the user for attention. A completed form or a presentation set of data is then presented to the user for validation. | 2021-12-16 |
20210390112 | APPLICATION-BASED DATA TYPE SELECTION - Methods, systems, and apparatuses related to application-based data type selection are described. A processing device performs operations to monitor performance characteristics associated with various applications executed by a host computing device to determine that a threshold performance level has been reached or exceeded. Operations to convert a data type utilized by the various applications from a first format that supports arithmetic operations to a first level of precision to a second format that supports arithmetic operations to a second level of precision can be performed based, at least in part, on the determination. | 2021-12-16 |
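The threshold-gated format conversion can be sketched in Python, assuming the second format is narrower (e.g., IEEE-754 double to single precision); the abstract itself does not say which direction the conversion runs, so that choice is an assumption here:

```python
import struct

def to_float32(value):
    """Round-trip a Python float (IEEE-754 double) through single
    precision, modeling conversion to the lower-precision format."""
    return struct.unpack("<f", struct.pack("<f", value))[0]

def select_precision(observed, threshold, values):
    """Keep the first (higher-precision) format until the monitored
    performance characteristic reaches or exceeds the threshold, then
    convert the application's data to the second format."""
    if observed >= threshold:
        return [to_float32(v) for v in values]
    return list(values)
```

Values exactly representable in single precision (like 0.5) survive the conversion unchanged, while others pick up a small rounding error — the precision/performance trade the abstract describes.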