15th week of 2021 patent application highlights part 40 |
Patent application number | Title | Published |
20210109834 | INSERTING PROBABILISTIC MODELS IN DETERMINISTIC WORKFLOWS FOR ROBOTIC PROCESS AUTOMATION AND SUPERVISOR SYSTEM - Probabilistic models may be used in a deterministic workflow for robotic process automation (RPA). Machine learning (ML) introduces a probabilistic framework where the outcome is not deterministic, and therefore, the steps are not deterministic. Deterministic workflows may be mixed with probabilistic workflows, or probabilistic activities may be inserted into deterministic workflows, in order to create more dynamic workflows. A supervisor system may be used to monitor an ML model and raise an alarm, disable an RPA robot, bypass an RPA robot, or roll back to a previous version of the ML model when an error is detected by a data drift detector, a concept drift detector, or both. | 2021-04-15 |
20210109835 | ROOT CAUSE DISCOVERY SYSTEM - A system and method for determining enterprise metrics of an enterprise application is described. The system receives a root cause definition that identifies enterprise user metrics and predefined parameters for the enterprise user metrics. The enterprise user metrics identify operation metrics of the enterprise application by users of the enterprise. The system stores the root cause definition in a library of root cause definitions. The system receives a selection of a plan that identifies an operation attribute of the enterprise application. The system identifies a root cause from the library of root cause definitions based on the plan. The system generates a recommendation based on the identified root cause. | 2021-04-15 |
20210109836 | USER INTERFACES FOR CONTROLLING OR PRESENTING DEVICE USAGE ON AN ELECTRONIC DEVICE - In some embodiments, an electronic device presents indications of usage metrics for the device. In some embodiments, an electronic device sets, configures and/or enforces device usage limits. In some embodiments, an electronic device limits access to certain applications during certain periods of time. In some embodiments, an electronic device suppresses auxiliary functions of certain applications when an application usage limit or restriction criteria associated with those applications is reached. In some embodiments, an electronic device manages restriction settings with permission optionally provided by another electronic device. | 2021-04-15 |
20210109837 | DIGITAL TWIN WORKFLOW SIMULATION - Systems, methods and computer program products for simulating workflows and activities of physical assets using digital twin models. User-defined simulations are performed by selecting the digital twin components to be analyzed during the simulation, concentrating the analysis on the selectively defined components and bypassing components that will not be simulated. Users can design the digital twin simulation using one or more available digital twin models. The model can be the most current digital twin model, a previous version of a model, or a hybridized model comprising components or portions from multiple versions of the available digital twins. Users can further customize simulations by selecting components or sections of the digital twin model to selectively bypass during the simulation, or provide overriding values for non-simulated portions of the digital twin which can be used as entry criteria inputted into the next simulated section or component of the digital twin, to complete the simulation. | 2021-04-15 |
20210109838 | INFORMATION PROCESSING APPARATUS, METHOD, AND NON-TRANSITORY RECORDING MEDIUM - An information processing apparatus, a method, and a non-transitory recording medium. The information processing apparatus includes circuitry to acquire an operation log of a target device, generate a first vector of a distributed representation indicating the operation log, and calculate the first vector and a vector of a distributed representation based on another operation log to identify the first vector. | 2021-04-15 |
20210109839 | Technology For Dynamically Tuning Processor Features - A processor comprises a microarchitectural feature and dynamic tuning unit (DTU) circuitry. The processor executes a program for first and second execution windows with the microarchitectural feature disabled and enabled, respectively. The DTU circuitry automatically determines whether the processor achieved worse performance in the second execution window. In response to determining that the processor achieved worse performance in the second execution window, the DTU circuitry updates a usefulness state for a selected address of the program to denote worse performance. In response to multiple consecutive determinations that the processor achieved worse performance with the microarchitectural feature enabled, the DTU circuitry automatically updates the usefulness state to denote a confirmed bad state. In response to the usefulness state denoting the confirmed bad state, the DTU circuitry automatically disables the microarchitectural feature for the selected address for execution windows after the second execution window. Other embodiments are described and claimed. | 2021-04-15 |
20210109840 | WIRELESS DEBUGGER AND WIRELESS DEBUGGING SYSTEM - Embodiments of the present disclosure provide a wireless debugger and a wireless debugging system. The wireless debugger includes: a processor, a wireless communication module, and a first peripheral interface; the processor is electrically connected to the wireless communication module and the first peripheral interface, respectively; the processor is configured to receive debugging instructions through the wireless communication module, and the debugging instructions are used to start or stop debugging of a target board; the processor is further configured to parse the debugging instructions and convert the parsed debugging instructions so that the debugging instructions are adapted to a protocol of the first peripheral interface; and the processor is further configured to transmit the converted debugging instructions to the to-be-debugged target board through the first peripheral interface. Debugging control is convenient and reliable. | 2021-04-15 |
20210109841 | APPLICATION CONTAINERIZATION BASED ON TRACE INFORMATION - The present disclosure provides a computer-implemented method, computer system and computer program product for application containerization. According to the computer-implemented method, an application to be containerized can be traced. Information obtained in the tracing can be analyzed to determine one or more features of the application. An image template for the application can be created based on the one or more features. Then, a container image for the application can be built based on the image template. | 2021-04-15 |
20210109842 | GENERATION OF EXPLANATORY AND EXECUTABLE REPAIR EXAMPLES - A method may include obtaining a first violation in a first portion of a first software program and obtaining a first proposed patch to remediate the first violation. The method may include identifying a second software program with a second portion that includes a second violation. The method may include simplifying the second portion of the second software program by removing one or more elements in the second portion that are identified as extraneous. The method may include applying the first proposed patch for the first violation to the simplified second portion to generate a repaired simplified second portion. The method may include obtaining an executable repaired simplified second portion from the repaired simplified second portion. The method may include presenting the second violation and the executable repaired simplified second portion as an example of how the first proposed patch would affect the first violation and the first software program. | 2021-04-15 |
20210109843 | PREDICTIVE SOFTWARE FAILURE DISCOVERY TOOLS - A method for predicting software failure characteristics is discussed. The method includes accessing failure data and context data related to a failed software call into a software stack. The failure data indicates software call information. The context data indicates characteristics including functionality of the failed software call. The method includes accessing cluster data on previous software failures. The cluster data includes analyzed failure data and analyzed context data on software call traces. The analyzed failure data is provided by first and second failure tools. The cluster data for software calls is generated through respective instances of the software stack for each software call trace. The method also includes correlating the failed software call trace to a particular software call trace of the cluster data. The correlating is based at least on analysis of clusters indicated by the cluster data, the failure data, and the context data. | 2021-04-15 |
20210109844 | GENERATING AND ATTRIBUTING UNIQUE IDENTIFIERS REPRESENTING PERFORMANCE ISSUES WITHIN A CALL STACK - Various embodiments discussed herein enable unique identifiers or hash values to be generated that uniquely identify performance issues and associated call stack units, which may be attributed to a user or team of users. A performance issue for a currently running process can be detected. A particular location within a call stack of the process indicating where the performance issue was detected can be determined. A quantity of call stack frames within the particular location that account for a threshold proportion of the performance issue can be determined. A hash value that uniquely identifies the performance issue can be generated based at least in part on the particular location and the quantity of call stack frames within the particular location that account for the threshold proportion of the performance issue. | 2021-04-15 |
20210109845 | Automated Device Test Triaging System and Techniques - Methods and apparatus are provided for testing computing devices. A host computing device is provided for testing devices under test (DUTs) using a test suite that includes first and second tests. The DUTs can include a first group of DUTs with a first DUT and a second group of DUTs with a second DUT. The first and second groups of DUTs can share a common design. The host computing device can determine that the DUTs execute the first test before the second test. The host computing device can receive failing first test results for the first DUT. The host computing device can determine, based on the first test results and that the first and second DUT groups share a common design, to execute the second test before the first test and can subsequently instruct the second DUT to execute the second test before the first test. | 2021-04-15 |
20210109846 | End User Remote Enterprise Application Software Testing - A system and method for remote testing of enterprise software applications (ESA) allows one or more testers to remotely access an ESA and remotely test the ESA. In at least one embodiment, the ESA resides in a testing platform that includes one or more computers that are provisioned for testing. “Provisioning” a computer system (such as one or more servers) refers to preparing, configuring, and equipping the computer system to provide services to one or more users. In at least one embodiment, the computer system is provisioned to create an ESA operational environment in accordance with a virtual desktop infrastructure (VDI) template interacting with virtualization software. | 2021-04-15 |
20210109847 | SYSTEM FOR AUTOMATED ERROR ANALYSIS IN AN APPLICATION TESTING ENVIRONMENT USING ROBOTIC PROCESS AUTOMATION - Systems, computer program products, and methods are described herein for automated error analysis in an application testing environment using robotic process automation. The present invention is configured to electronically receive one or more exceptions from one or more automated test scripts; determine one or more exception types associated with the one or more exceptions; and initiate an exception handling bot configured to handle the one or more exceptions based on at least the one or more exception types. | 2021-04-15 |
20210109848 | SYSTEM AND METHOD FOR IMPLEMENTING AN AUTOMATED REGRESSION TESTING MODULE - Various methods, apparatuses/systems, and media for implementing an automated testing module are disclosed. A processor creates a draft test suite that incorporates a plurality of features, each feature including a test scenario that comprises steps that describe the test scenario in a human readable form. The processor also compiles the steps of the test scenario into a single step in a reusable format; receives a request to perform a testing for an application; de-compiles the single step, in response to the received request, to create a complete list of steps used in the scenario; generates a final test suite based on the de-compiled single step in response to the received request; and automatically executes the final test suite to test the application without rewriting code. | 2021-04-15 |
20210109849 | EXTENSIBLE MEMORY DUAL INLINE MEMORY MODULE - Disclosed herein is an extensible memory subsystem comprising a dual in-line memory module (DIMM) that includes a dynamic random-access memory (DRAM) having a basic memory space, a DIMM memory controller coupled to the DRAM, a memory interface configured to couple the DIMM to a DIMM connector of a computing device, and a first extension interface configured to couple the DIMM to a first remote memory module having a first remote memory space, wherein the DIMM memory controller is configured to map a DIMM memory space comprising the basic memory space of the DRAM and the first remote memory space of the first remote memory module, the DIMM memory space being accessible by the computing device upon the DIMM being coupled to the computing device via the memory interface, and a first remote memory module coupled to the DIMM via the first extension interface of the DIMM. | 2021-04-15 |
20210109850 | PROCESSING SYSTEM AND EXECUTE IN PLACE CONTROL METHOD - A processing system includes a memory, a processor circuit, and an execute-in-place (XIP) controller circuit. The processor circuit is configured to output a command. The XIP controller circuit is configured to determine a predicted address of the memory to be read by a next operation of the processor circuit in response to the command, in order to prefetch data from the memory according to the predicted address. | 2021-04-15 |
20210109851 | MODIFICATION-FREQUENCY-BASED TIERED DATA STORAGE AND GARBAGE COLLECTION SYSTEM - A modification-frequency-based tiered data storage garbage collection system includes a storage device coupled to a host engine. The storage device includes a data storage and garbage collection engine and storage subsystems. The data storage and garbage collection engine receives first modified data from the host engine that provides a modification to first current data stored in a first data storage element provided by one of the storage subsystems and grouped in a first superblock associated with a first data modification frequency range. The data storage and garbage collection engine then determines a first frequency of modification of the first current data and, based on that, writes the first modified data to a second data storage element provided by one of the storage subsystems and grouped in a second superblock associated with a second data modification frequency range that is different than the first data modification frequency range. | 2021-04-15 |
20210109852 | CONTROLLER AND DATA STORAGE SYSTEM HAVING THE SAME - Provided herein may be a controller and a data storage system having the controller. The controller may include a mapping time generator configured to generate a first mapping time at which a logical block address and a physical block address are mapped to each other, an internal memory configured to store first address mapping information including an address map, and the first mapping time, a host interface configured to transmit the first address mapping information to a host, and receive second address mapping information from the host, and a central processing unit configured to generate the address map, store the first address mapping information in the internal memory, compare a second mapping time included in the second address mapping information with the first mapping time, and select a read mode based on a result of the comparison. | 2021-04-15 |
20210109853 | CONTROLLER AND METHOD OF OPERATING THE SAME - Provided herein may be a controller and a method of operating the controller. The controller may include a central processing unit configured to generate a command, manage a logical address using a notation system, a radix of which is greater than that of a binary notation system, and output the command and the logical address, and a flash interface layer configured to queue the command depending on workloads of dies, translate the logical address into a physical address, and output the command and the physical address through a selected channel. | 2021-04-15 |
20210109854 | MEMORY DEVICE AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM - According to one embodiment, a memory device includes a nonvolatile memory, address translation unit, generation unit, and reception unit. The nonvolatile memory includes erase unit areas. Each of the erase unit areas includes write unit areas. The address translation unit generates address translation information relating a logical address of write data written to the nonvolatile memory to a physical address indicative of a write position of the write data in the nonvolatile memory. The generation unit generates valid/invalid information indicating whether data written to the erase unit areas is valid data or invalid data. The reception unit receives deletion information including a logical address indicative of data to be deleted in the erase unit area. | 2021-04-15 |
20210109855 | MANAGING GARBAGE COLLECTION FOR A STORAGE SYSTEM - The present disclosure relates to systems, methods, and computer readable media for reconfiguring garbage collection configurations and initiating garbage collection in accordance with the reconfigured garbage collection configuration. For example, systems disclosed herein may identify or receive workload data associated with write activity of an application. Systems disclosed herein may additionally reconfigure a garbage collection configuration by modifying storage threshold and corresponding garbage collection parameters associated with initiating and performing garbage collection on a storage system. Based on a comparison of a current state of a storage system and the received workload data, systems disclosed herein may implement an intelligent garbage collection process that reduces media wear and accommodates unique write-based needs for one or more applications. | 2021-04-15 |
20210109856 | STORAGE DEVICE AND A GARBAGE COLLECTION METHOD THEREOF - A memory management method of a storage device including: programming write-requested data in a memory block; counting an elapsed time from a time when a last page of the memory block was programmed with the write-requested data; triggering a garbage collection of the storage device when the elapsed time exceeds a threshold value; and programming valid data collected by the garbage collection at a first clean page of the memory block. | 2021-04-15 |
20210109857 | LOCK-FREE SHARING OF LIVE-RECORDED CIRCULAR BUFFER RESOURCES - Novel techniques are described for lock-free sharing of a circular buffer. Embodiments can provide shared, lock-free, constant-bitrate access by multiple consumer systems to a live stream of audiovisual information being recorded to a circular buffer by a producer. For example, when a producer system writes a data stream to the circular buffer, the producer system records shared metadata. When a consumer system desires to begin reading from the shared buffer at a particular time, the shared metadata is used to compute a predicted write pointer location and corresponding dirty region around the write pointer at the desired read time. A read pointer of the consumer system can be set to avoid the dirty region, thereby permitting read access to a stable region of the circular buffer without relying on a buffer lock. | 2021-04-15 |
20210109858 | TOKENS TO INDICATE COMPLETION OF DATA STORAGE - Systems, apparatuses, and methods related to tokens to indicate completion of data storage to memory are described. An example method may include storing a number of data values by a first page in a first row of an array of memory cells responsive to receipt of a first command from a host, where the first command is associated with an open transaction token, and receiving a second command from the host to store a number of data values by a second page in the first row. The method may further include sending a safety token to the host to indicate completion of storing the number of data values by the second page in the first row. | 2021-04-15 |
20210109859 | LIFETIME ADAPTIVE EFFICIENT PRE-FETCHING ON A STORAGE SYSTEM - Managing a cache memory in a storage system includes maintaining a queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system, receiving a read request for a particular page of the particular logical storage unit, and removing a number of elements in the queue and resizing the queue in response to the queue being full. Managing the cache memory also includes placing data indicative of the read request in the queue, determining a prefetch metric that varies according to a number of adjacent elements in a sorted version of the queue having a difference that is less than a predetermined value and greater than zero, and prefetching a plurality of pages that come after the particular page sequentially if the prefetch metric is greater than a predefined value. | 2021-04-15 |
20210109860 | EFFICIENT PRE-FETCHING ON A STORAGE SYSTEM - Managing a cache memory in a storage system includes maintaining a first queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system and maintaining a second queue that stores data indicative of the read requests for the particular logical storage unit in a sort order corresponding to page numbers of the read requests, the second queue persisting for a plurality of iterations of read requests. A read request is received and data indicative of the read request is placed in the first queue and in the second queue while maintaining the sort order of the second queue. The second queue is used to determine a prefetch metric that varies according to a number of adjacent elements in the second queue. | 2021-04-15 |
20210109861 | CACHE MANAGEMENT BASED ON REUSE DISTANCE - A cache of a processor includes a cache controller to implement a cache management policy for the insertion and replacement of cache lines of the cache. The cache management policy assigns replacement priority levels to each cache line of at least a subset of cache lines in a region of the cache based on a comparison of a number of accesses to a cache set having a way that stores a cache line since the cache line was last accessed to a reuse distance determined for the region of the cache, wherein the reuse distance represents an average number of accesses to a given cache set of the region between accesses to any given cache line of the cache set. | 2021-04-15 |
20210109862 | ROUTING TRAFFIC OF A LOGICAL UNIT TO MULTIPLE BACKEND DATA OBJECTS BASED ON METADATA MAPPING - The disclosure herein describes enabling use of a logical unit for data storage in a distributed storage system using a plurality of backend data objects. Based on receiving instructions to create a logical unit of a logical unit size, a target backend object size to be used with the logical unit is determined, and a plurality of backend objects for allocation to the logical unit is calculated. The backend objects are allocated to the logical unit and a metadata mapping associated with the logical unit is generated. The metadata mapping associates logical block addresses of the logical unit to the allocated backend objects. The logical unit is linked with the metadata mapping in an input/output (I/O) service and, based on the linked metadata mapping, I/O traffic is routed to and from the logical unit. Using multiple backend objects enhances flexibility and efficiency of data storage on the distributed storage system. | 2021-04-15 |
20210109863 | DYNAMICALLY JOINING AND SPLITTING DYNAMIC ADDRESS TRANSLATION (DAT) TABLES BASED ON OPERATIONAL CONTEXT - An aspect includes determining, via a processor, context attributes of a storage. Dynamic address translation (DAT) tables are created, via the processor, to map virtual addresses to real addresses within the storage. When a change to a context attribute of the storage is detected via the processor, the DAT tables are updated based at least in part on the changed context attributes of the storage. | 2021-04-15 |
20210109864 | Method and Apparatus for Monitoring Memory Access Behavior of Sample Process - A method for monitoring memory access behavior of a sample process is provided. A processing unit of a computer device determines a page table of the sample process based on a page directory base address of the sample process, where each entry of the page table includes first information, the first information indicates whether the entry has been assigned a guest physical address, the entry that has been assigned the guest physical address includes second information that is used to indicate an access permission of the assigned guest physical address; determines a target entry from the page table, the target entry has been assigned a guest physical address, and an access permission is execution allowed; determines a target host physical address corresponding to the target guest physical address that is assigned to the target entry; and monitors behavior of accessing memory space indicated by the target host physical address. | 2021-04-15 |
20210109865 | GLOBALLY OPTIMIZED PARTIAL DEDUPLICATION OF STORAGE OBJECTS - An aspect of implementing globally optimized partial deduplication of storage objects includes gathering pages that share a common feature and dividing the pages into groups based on commonality with corresponding representative pages, each of which is assigned as the representative dedupe page for its group. For each group in the groups of pages, an aspect also includes writing the pages to a corresponding physical container. | 2021-04-15 |
20210109866 | TRANSLATION LOOKASIDE BUFFER PREWARMING - A method includes executing, by a processor core, a first task; scheduling, by a scheduler, a second task to be executed by the processor core upon completion of executing the first task; responsive to scheduling the second task, providing, by the scheduler, a prewarming message to a memory management unit (MMU) coupled to the processor core; and responsive to receiving the prewarming message, fetching, by the MMU, a page table specified by a page table base of the prewarming message. | 2021-04-15 |
20210109867 | NON-STALLING, NON-BLOCKING TRANSLATION LOOKASIDE BUFFER INVALIDATION - A method includes receiving, by an MMU for a processor core, an address translation request from the processor core and providing the address translation request to a TLB of the MMU; generating, by matching logic of the TLB, an address transaction that indicates whether a virtual address specified by the address translation request hits the TLB; providing the address transaction to a general purpose transaction buffer; and receiving, by the MMU, an address invalidation request from the processor core and providing the address invalidation request to the TLB. The method also includes, responsive to a virtual address specified by the address invalidation request hitting the TLB, generating, by the matching logic, an invalidation match transaction and providing the invalidation match transaction to one of the general purpose transaction buffer or a dedicated invalidation buffer. | 2021-04-15 |
20210109868 | SOFTWARE-HARDWARE MEMORY MANAGEMENT MODES - A method includes receiving, by a memory management unit (MMU) comprising a translation lookaside buffer (TLB) and a configuration register, a request from a processor core to directly modify an entry in the TLB. The method also includes, responsive to the configuration register having a first value, operating the MMU in a software-managed mode by modifying the entry in the TLB according to the request. The method further includes, responsive to the configuration register having a second value, operating the MMU in a hardware-managed mode by denying the request. | 2021-04-15 |
20210109869 | DETERMINING CAPACITY IN A GLOBAL DEDUPLICATION SYSTEM - An aspect of determining per volume exclusive capacity in a deduplication system includes setting a percentage of a population of pages for selection. For each of the pages, an aspect includes selecting a page in the population, providing a data segment facilitating multiple references of the segment by at least one storage entity, maintaining counts corresponding with each segment in the page, and determining exclusive ownership of the page based on the counts and a key value of one of a plurality of storage entities. | 2021-04-15 |
20210109870 | ISOLATING MEMORY WITHIN TRUSTED EXECUTION ENVIRONMENTS - Example methods and systems are directed to isolating memory in trusted execution environments (TEEs). In function-as-a-service (FaaS) environments, a client makes use of a function executing within a TEE on a FaaS server. To minimize the trusted code base (TCB) for each function, each function may be placed in a separate TEE. However, this causes the overhead of creating a TEE to be incurred for each function. As discussed herein, multiple functions may be placed in a single TEE without compromising the data integrity of each function. For example, by using a different extended page table (EPT) for each function, the virtual address spaces of the functions are kept separate and map to different, non-overlapping physical address spaces. Partial overlap may be permitted to allow functions to share some data while protecting other data. Memory for each function may be encrypted using a different encryption key. | 2021-04-15 |
20210109871 | OPTIMIZING TIME-DEPENDENT SIMULATIONS OF QUANTUM COMPUTING ARCHITECTURES - A method is performed to compile input data including a plurality of pulse sequences, hardware parameters obtained from a computing device, and a mathematical model with time-dependent control parameters to decrease a computation time of the input data. The method also includes providing the input data to the computing device to allow the computing device to run a computation of the input data. The method further includes converting the pulse sequences into memory-aligned arrays to decrease the computation time of the input data. The method includes calculating optimized output data using an adaptive step size computation to decrease the computation time needed to compute the output data. | 2021-04-15 |
20210109872 | MEMORY COMPONENT WITH A VIRTUALIZED BUS AND INTERNAL LOGIC TO PERFORM A MACHINE LEARNING OPERATION - A memory component can include memory cells with a memory region to store a machine learning model and input data and another memory region to store host data from a host system. The memory component can include an in-memory logic, coupled to the memory cells, to perform a machine learning operation by applying the machine learning model to the input data to generate an output data. A bus can receive additional data from the host system and can provide the additional data to the other memory region or the in-memory logic based on a characteristic of the additional data. | 2021-04-15 |
20210109873 | MEMORY - A memory includes: a first data bus; a second data bus; and a plurality of bank groups. The bank groups output read data by alternately using the first data bus and the second data bus during read operations of the bank groups. | 2021-04-15 |
20210109874 | STORAGE SYSTEM AND METHOD FOR SWITCHING WORKING MODE OF STORAGE SYSTEM - A storage system comprises a host, a first control device, a second control device, and a storage drive that has a physical connector A and a physical connector B for connecting to the first control device and second control device, respectively. When the storage drive is configured to operate in a first working mode, it provides a shared storage space to be accessed by both the first control device and the second control device. When the storage drive is configured to operate in a second working mode, it provides a first storage space for access by the first control device and a second storage space for access by the second control device. | 2021-04-15 |
20210109875 | DATA STORAGE DEVICE, ELECTRONIC APPARATUS, AND SYSTEM CAPABLE OF REMOTELY CONTROLLING ELECTRONIC APPARATUS - The invention provides a system capable of remotely controlling an electronic apparatus, which includes a cloud management platform and at least one electronic apparatus. The electronic apparatus includes at least one operation element, and a data storage device having a network communication function. The data storage device includes a first transmission interface, a second transmission interface, a data storage unit, and an operation management unit. Via the first transmission interface, data stored in the data storage unit can be read or data can be written into the data storage unit. The operation management unit of the data storage device transmits a specific operation instruction to the operation element via the second transmission interface after receiving the specific operation instruction sent from the cloud management platform, such that the operation element can execute a corresponding operation according to the specific operation instruction. | 2021-04-15 |
20210109876 | REMOTELY-POWERED SENSING SYSTEM WITH MULTIPLE SENSING DEVICES - A sensing system including analyte sensing devices, an interface device, and a shared communication device. The interface device may be configured to receive a power signal and generate power for powering the sensing devices and to convey data signals generated by the sensing devices. The sensing system may be configured to receive addressed and unaddressed commands. The sensing devices may be configured to perform activities (e.g., measurement sequences) in parallel in response to the unaddressed commands (e.g., unaddressed measurement commands). The sensing devices may be configured to only perform activities (e.g., conveying measurement data) in response to addressed commands (e.g., addressed read measurement data commands) if the sensing devices determine that the addressed commands are addressed to them. The sensing devices may be configured to perform different measurement sequences in response to an unaddressed measurement command to minimize interference caused by the sensing devices performing the measurement sequences in parallel. | 2021-04-15 |
20210109877 | TRANSMISSION TERMINAL, NON-TRANSITORY RECORDING MEDIUM, TRANSMISSION METHOD, AND TRANSMISSION SYSTEM - A transmission terminal includes at least one processor configured to transmit, to a transmission management apparatus connected via a network, a terminal information request requesting the number of transmission terminals under transmission; and display, on a display device, image data received from one or more of the transmission terminals under transmission, and display, on the display device, the number of the transmission terminals under transmission received from the transmission management apparatus in response to the terminal information request. | 2021-04-15 |
20210109878 | ADAPTER, TERMINAL DEVICE AND ADAPTER SYSTEM - Provided are an adapter, a terminal device and an adapter system. The adapter includes: a universal serial bus type-C (USB-C) plug cooperatively connected to a USB-C interface of the terminal device, a USB socket cooperatively connected to a charging plug, and a headset socket cooperatively connected to a headset plug, where a first communication pin of the USB-C plug is connected to a first communication pin of the USB socket, a second communication pin of the USB-C plug is connected to a second communication pin of the USB socket, a first sound channel pin and a second sound channel pin of the USB-C plug are connected to a right sound channel signal pin and a left sound channel signal pin of the headset socket in one-to-one correspondence. | 2021-04-15 |
20210109879 | POOLED MEMORY ADDRESS TRANSLATION - A shared memory controller receives, from a computing node, a request associated with a memory transaction involving a particular line in a memory pool. The request includes a node address according to an address map of the computing node. An address translation structure is used to translate the first address into a corresponding second address according to a global address map for the memory pool, and the shared memory controller determines that a particular one of a plurality of shared memory controllers is associated with the second address in the global address map and causes the particular shared memory controller to handle the request. | 2021-04-15 |
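The two-step translation described above (node-local address to global address via a translation structure, then routing to the owning shared memory controller) might be sketched like this; the page size, address maps, and controller ranges are all made up for illustration:

```python
# Hypothetical sketch of pooled-memory address translation (all values invented).

PAGE = 4096

# Per-node address map: node page -> global page (the "address translation structure").
NODE_TO_GLOBAL = {
    "node-A": {0: 100, 1: 107},
    "node-B": {0: 107, 1: 205},   # node-B shares global page 107 with node-A
}

# Global address map: each shared memory controller owns a range of global pages.
CONTROLLER_RANGES = [("smc-0", 0, 128), ("smc-1", 128, 256)]

def translate(node, node_addr):
    """Translate a node-local address into (owning controller, global address)."""
    page, offset = divmod(node_addr, PAGE)
    global_page = NODE_TO_GLOBAL[node][page]   # first address -> second address
    global_addr = global_page * PAGE + offset
    for smc, lo, hi in CONTROLLER_RANGES:      # find the controller for this range
        if lo <= global_page < hi:
            return smc, global_addr
    raise ValueError("unmapped global page")
```

Both nodes can name the same pooled line through different node-local addresses, which is the point of translating into one global address map before routing.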
20210109880 | PIN CONTROL METHOD - An integrated circuit includes a plurality of peripheral input/output pins, a plurality of general-purpose input/output pins, a link network, and a network control circuit. The link network is coupled to the plurality of peripheral input/output pins and the plurality of general-purpose input/output pins. The network control circuit is coupled to the link network, and controls the respective connections between the plurality of peripheral input/output pins and the plurality of general-purpose input/output pins via the link network according to correspondence between the plurality of peripheral input/output pins and the plurality of general-purpose input/output pins. | 2021-04-15 |
20210109881 | DEVICE FOR A VEHICLE - A device for a vehicle may include a first wireline interface configured to receive a first data stream from a first sensor having a first sensor type for perceiving a surrounding of the vehicle, the first data stream including raw sensor data detected by the first sensor; a second wireline interface configured to receive a second data stream from a second sensor having a second sensor type for perceiving the surrounding of the vehicle, the second data stream including raw sensor data detected by the second sensor; one or more processors configured to generate a coded packet including the received first data stream and the received second data stream by employing vector packet coding on the first data stream and the second data stream; and an output wireline interface configured to transmit the generated coded packet to one or more target units of the vehicle. | 2021-04-15 |
20210109882 | MULTICHIP PACKAGE WITH PROTOCOL-CONFIGURABLE DATA PATHS - Integrated circuit packages with multiple integrated circuit dies are provided. A multichip package may include a substrate, a main die that is mounted on the substrate, and multiple transceiver daughter dies that are mounted on the substrate and that are coupled to the main die via corresponding Embedded Multi-die Interconnect Bridge (EMIB) interconnects formed in the substrate. Each of the main die and the daughter dies may include configurable adapter circuitry for interfacing with the EMIB interconnects. The adapter circuitry may include FIFO buffer circuits operable in a 1× mode or 2× mode and configurable in a phase-compensation mode, a clock-compensation mode, an elastic mode, and a register bypass mode to help support a variety of communications protocols with different data width and clocking requirements. The adapter circuitry may also include boundary alignment circuitry for reconstructing (de)compressed data streams. | 2021-04-15 |
20210109883 | INTERFACE BRIDGE BETWEEN INTEGRATED CIRCUIT DIE - An interface bridge to enable communication between a first integrated circuit die and a second integrated circuit die is disclosed. The two integrated circuit die may be connected via chip-to-chip interconnects. The first integrated circuit die may include programmable logic fabric. The second integrated circuit die may support the first integrated circuit die. The first integrated circuit die and the second integrated circuit die may communicate with one another via the chip-to-chip interconnects using an interface bridge. The first and second integrated circuit dies may include circuitry to implement the interface bridge, which may provide source-synchronous communication using a data receive clock from the second integrated circuit die to the first integrated circuit die. | 2021-04-15 |
20210109884 | CONFIGURATION PARAMETER TRANSFER - Examples relate to configuration parameter transfer. An apparatus may include a memory resource storing executable instructions. Instructions may include instructions to receive a first signal from a host computing device. Instructions may further include instructions to initiate communications with the host computing device in response to receiving the first signal. Instructions may further include instructions to receive a configuration parameter from the host computing device in response to initiation of communications with the host computing device. The apparatus may further include a processing resource to execute the instructions stored on the memory resource. | 2021-04-15 |
20210109885 | DEVICE FOR MANAGING HDD BACKPLANE - A device for managing an HDD backplane includes a mainboard and a backplane. The mainboard includes a first connector port and a second connector port. The backplane includes a first HDD interface, a second HDD interface, an I2C selector, and a CPLD. The first and second HDD interfaces are both electrically connected to the CPLD. The first and second connector ports are both electrically connected to the I2C selector. The I2C selector is electrically connected to the CPLD. The CPLD receives an identification signal from the first HDD interface or the second HDD interface, and determines a type of HDD inserted in the first HDD interface or in the second HDD interface, and outputs a controlling signal to the I2C selector according to the type of HDD which is identified. The I2C selector turns on the first connector port and the second connector port according to the controlling signal. | 2021-04-15 |
20210109886 | N-CHANNEL SERIAL PERIPHERAL COMMUNICATION, AND RELATED SYSTEMS, METHODS AND DEVICES - Embodiments of an N-channel serial peripheral interface are described, and N-channel serial communication links comprising the same. Also described are methods of communication using N-channel serial communication interfaces and links. | 2021-04-15 |
20210109887 | I3C PENDING READ WITH RETRANSMISSION - Embodiments of the present disclosure may relate to apparatus, process, or techniques in an I3C protocol environment that include identifying a pending read notification message by a slave device to be sent to a master device to indicate that data is available to be read by the master device from a buffer associated with the slave device. The pending read notification may be subsequently transmitted to the master device. Subsequently, until the data in the buffer has been read by the master device, the slave device may wait an identified amount of time that is less than a value of a timeout of the master device, and retransmit the pending read notification message to the master device. Other embodiments may be described and/or claimed. | 2021-04-15 |
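The retransmission loop in the abstract above, with the retry interval kept below the master's timeout, could be modeled roughly as follows; the timing constants and the bus callbacks are invented stand-ins, not the I3C wire protocol:

```python
# Toy model of the pending-read retransmission loop (timings and the
# notification/read callbacks are hypothetical, not real I3C signaling).

MASTER_TIMEOUT = 1.0    # seconds the master waits before giving up
RETRY_INTERVAL = 0.6    # deliberately less than the master's timeout

def notify_until_read(send_notification, buffer_read, max_retries=5):
    """Retransmit the pending-read notification until the buffer is drained."""
    sent = 0
    for _ in range(max_retries):
        send_notification()   # tell the master data is waiting
        sent += 1
        if buffer_read():     # master has read the buffer; stop retransmitting
            return sent
        # A real device would wait RETRY_INTERVAL (< MASTER_TIMEOUT) here.
    raise TimeoutError("master never read the buffer")

# Simulated master that only notices the third notification.
attempts = {"n": 0}
def fake_send(): attempts["n"] += 1
def fake_read(): return attempts["n"] >= 3

print(notify_until_read(fake_send, fake_read))  # -> 3
```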
20210109888 | PARALLEL PROCESSING BASED ON INJECTION NODE BANDWIDTH - A technique includes performing a collective operation among multiple nodes of a parallel processing computer system using multiple parallel processing stages. The technique includes regulating an ordering of the parallel processing stages so that an initial stage of the plurality of parallel processing stages is associated with a higher node injection bandwidth than a subsequent stage of the plurality of parallel processing stages. | 2021-04-15 |
20210109889 | TRANSPARENT NETWORK ACCESS CONTROL FOR SPATIAL ACCELERATOR DEVICE MULTI-TENANCY - An apparatus to facilitate transparent network access controls for spatial accelerator device multi-tenancy is disclosed. The apparatus includes a secure device manager (SDM) to: establish a network-on-chip (NoC) communication path in the apparatus, the NoC communication path comprising a plurality of NoC nodes for ingress and egress of communications on the NoC communication path; for each NoC node of the NoC communication path, configure a programmable register of the NoC node to indicate a node group that the NoC node is assigned, the node group corresponding to a persona configured on the apparatus; determine whether a prefix of received data at the NoC node matches the node group indicated by the programmable register of the NoC node; and responsive to determining that the prefix does not match the node group, discard the data from the NoC node. | 2021-04-15 |
20210109890 | SYSTEM AND METHOD FOR PLANNING AND CONFIGURING A FILE SYSTEM MIGRATION - A migration plan is created that is based at least in part on an operator input. The resources of a destination file system are provisioned based on the migration plan. One or more processes to migrate the source file system for the provisioned resources of the destination file system are then configured based on the migration plan. | 2021-04-15 |
20210109891 | MULTI-POLICY INTERLEAVED SNAPSHOT LINEAGE - Multi-policy interleaved snapshot lineage is described herein. A method can include assigning a virtual storage volume at a remote storage system to a local storage device according to first and second data retention policies for first and second storage groups, respectively, that comprise the local storage device; obtaining a first data snapshot of the local storage device at a first time according to the first data retention policy; in response to the obtaining the first data snapshot, transferring a first incremental representation of the first data snapshot to the virtual storage volume; obtaining a second data snapshot of the local storage device at a second time according to the second data retention policy; and in response to the obtaining the second data snapshot, transferring a second incremental representation of the second data snapshot to the virtual storage volume. | 2021-04-15 |
20210109892 | File Management Systems And Methods - Example file management systems and methods are described. In one implementation, a system detects a user entry in a document. The system then retrieves knowledge relevant to the user entry. The system also presents the knowledge to a user. | 2021-04-15 |
20210109893 | APPROACHES FOR MANAGING OBJECT DATA - Systems and methods are provided for determining multiple fragments of data to be imported, the multiple fragments of data corresponding to different instances of data obtained from one or more external data sources, the different instances of data each corresponding to duplicate content. The multiple fragments of data that each correspond to different instances of duplicate content can be ingested. The multiple fragments of data can be de-duplicated to determine one or more corresponding object data source records (DSRs). The one or more object DSRs can be imported within a data platform system. | 2021-04-15 |
20210109894 | AUTOMATED CUSTOMIZED MODELING OF DATASETS WITH INTUITIVE USER INTERFACES - A computer-implemented method for automatically determining data relationships includes generating a graphical user interface (GUI) that allows a user to intuitively form a customized model of data from different data sources. The GUI includes icons that represent data sources, data variable selection, data modeling, and data prediction. The icons can be logically arranged to form a customized model without any additional user input or knowledge of data modeling. A prediction GUI allows the user to set customized weights of data variables in the model to form predictive controls for data prediction such as in what-if scenarios. | 2021-04-15 |
20210109895 | DETERMINING USER INTERFACE CONTEXTS FOR REQUESTED RESOURCES - A computer system for inferring a context from which a resource was selected in a distributed file sharing system is provided. The system includes a processor that is configured to receive a first connection request from a client agent and transmit a response to the first connection request to the client agent, the response including a resource collection that includes at least one resource. The processor is further configured to receive a request for the at least one resource from the client agent and generate a context inference indicating the client agent selected the at least one resource from the resource collection. The processor is further configured to receive a second connection request from the client agent, select an updated resource collection for transmission to the client agent based on the context inference, and transmit the updated resource collection to the client agent. | 2021-04-15 |
20210109896 | Smart Filesystem Indexing For Any Point-in-Time Replication - Filesystem events that change a file system are detected, and information comprising metadata that describe each filesystem change event of a consecutive sequence of changes is created and associated with timestamps and point-in-time snapshots of the filesystem at the time of occurrence of the filesystem events. The information is entered into an event stream that is saved in a journal, and applied to a previously created full index of the filesystem structure in the journal to synthesize and replicate a filesystem index and structure as they existed at any desired point in time represented by the event stream. The reconstructed index and filesystem structure can be searched for a reference to an object of interest such as a filename or a directory, and the file or directory recovered and replicated using an associated PiT. | 2021-04-15 |
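The synthesis step described above, replaying a journaled event stream over a previously created full index to reconstruct the filesystem index at any point in time, could be sketched as follows; the index shape, event types, and data are illustrative assumptions:

```python
# Rough sketch of synthesizing a point-in-time filesystem index by replaying
# a journaled event stream on top of a full base index (all names invented).

base_index = {"/docs/a.txt": 100}   # full index taken at t=0 (path -> size)

journal = [                          # time-ordered change events with timestamps
    (1, "create", "/docs/b.txt", 10),
    (2, "modify", "/docs/a.txt", 120),
    (3, "delete", "/docs/b.txt", None),
    (4, "create", "/logs/c.log", 5),
]

def index_at(point_in_time):
    """Replay journal events up to point_in_time over a copy of the base index."""
    index = dict(base_index)
    for ts, op, path, size in journal:
        if ts > point_in_time:       # events after the desired PiT are ignored
            break
        if op in ("create", "modify"):
            index[path] = size
        elif op == "delete":
            index.pop(path, None)
    return index

assert "/docs/b.txt" in index_at(2)      # file existed at t=2
assert "/docs/b.txt" not in index_at(3)  # and was deleted by t=3
```

The reconstructed index can then be searched for a filename of interest exactly as the abstract describes, with the matching PiT snapshot used for recovery.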
20210109897 | DYNAMICALLY UPDATING DISTRIBUTED CONTENT OBJECTS - A document object may be transmitted to a plurality of user devices. The document object may include at least one field for display of a content object of a group of content objects. The field may be associated with an identifier corresponding to the group of content objects. When the document object is accessed, then the access may trigger a request to a server, which may select a content object of the group of content objects using a content object identification function. The content object identification function may be dynamically updated based at least in part on the document object being accessed by one or more of the plurality of user devices. Responsive to the request, a unique content object identifier corresponding to the selected content object may be transmitted to the user device and displayed at the accessed document object. | 2021-04-15 |
20210109898 | Multi-Component Content Asset Transfer - Methods, systems, and apparatuses are described for multi-component asset transfer. A plurality of references can be generated from a manifest of a content asset. A monitoring agent can determine when a content item for the content asset is received and modify the state of the corresponding reference. | 2021-04-15 |
20210109899 | SYSTEMS AND METHODS FOR DOCUMENT SEARCH AND AGGREGATION WITH REDUCED BANDWIDTH AND STORAGE DEMAND - Methods and systems comprising a gateway coordinator of a local system that receives a task comprising search criteria, crawls for files on a local data source of the local system, and encounters one or more files of interest. The one or more files of interest may be deNISTed and deduplicated and sent to an upload coordinator of a remote cloud facility. In one or more examples, the gateway coordinator may be a virtual machine. | 2021-04-15 |
20210109900 | INLINE AND POST-PROCESS DATA DEDUPLICATION FOR A FILE SYSTEM - Deduplication, including inline deduplication, of data for a file system can be implemented and managed. A data management component (DMC) can control inline and post-process deduplication of data during write and read operations associated with memory. DMC can determine whether inline data deduplication is to be performed to remove a data chunk from a write operation to prevent the data chunk from being written to a data store based on whether a hash associated with the data chunk matches a stored hash stored in a memory index and associated with a stored data chunk stored in a shadow store. If there is a match, DMC can perform a byte-by-byte comparison of the data chunk and stored data chunk to determine whether they match. If they match, DMC can perform inline data deduplication to remove the data chunk from the write operation. | 2021-04-15 |
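The inline decision above, a hash lookup followed by a byte-by-byte confirmation before skipping the write, reduces to a few lines; the shadow-store layout is a simplifying assumption:

```python
# Simplified sketch of the inline-deduplication decision: hash lookup first,
# then byte-by-byte confirmation (the dict-based shadow store is invented).

import hashlib

shadow_store = {}   # hash -> stored chunk bytes (the deduplicated store)

def write_chunk(chunk: bytes) -> bool:
    """Return True if the chunk was deduplicated (i.e., not written again)."""
    digest = hashlib.sha256(chunk).hexdigest()
    stored = shadow_store.get(digest)            # memory-index lookup by hash
    if stored is not None and stored == chunk:   # byte-by-byte confirmation
        return True                              # inline dedup: skip the write
    shadow_store[digest] = chunk                 # new (or colliding) data: store it
    return False

assert write_chunk(b"hello") is False   # first write lands in the shadow store
assert write_chunk(b"hello") is True    # identical chunk is deduplicated inline
```

The byte-by-byte check after the hash match is what guards against hash collisions silently aliasing two different chunks.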
20210109901 | A DATA MANAGEMENT SYSTEM AND METHOD - A data management system ( | 2021-04-15 |
20210109902 | SYSTEM AND METHOD FOR INFORMATION STORAGE USING BLOCKCHAIN DATABASES COMBINED WITH POINTER DATABASES - A system and method for information storage using blockchain and pointer databases, comprising a computer with a blockchain manager, a datastore manager, and a blockchain data input. The computer connects over a network to a distributed blockchain ledger containing information such as personally-identifying biometric data, and to a datastore system containing searchable information on the persons entered into the biometric blockchain, such as a DNS system. The datastore system also contains reference numbers for each searchable block in the blockchain, such that verification or identification of data in the blockchain can be accomplished swiftly and securely, for example biometric verification to verify or identify persons submitting biometric data to such a system. | 2021-04-15 |
20210109903 | UNIFIED FILE SYSTEM ON AIR-GAPPED ENDPOINTS - A system and method for providing a unified file system on an air-gapped endpoint are provided. The method includes monitoring the first and second security zones instantiated on the virtually air-gapped endpoint to intercept at least one file system operation to access files on the first security zone; determining if the detected file system operation triggers a display of a file system dialog window of the second security zone; and when the file system dialog window of the second security zone is determined to be triggered, preventing the display of a file system dialog window in the first security zone; and displaying the file system dialog window of the second security zone in the second security zone. | 2021-04-15 |
20210109904 | SYSTEM AND METHOD FOR PROCESS AND AUTOMATION FRAMEWORK TO ENABLE ACCEPTANCE TEST DRIVEN DEVELOPMENT - Systems and methods according to exemplary embodiments provide a process and automation framework enabling Acceptance Test Driven Development (ATDD) automation for Extract, Transform, and Load (ETL) and Big Data testing. Exemplary embodiments include a user interface for executing end to end tests as part of an ATDD process during ETL. The user interface may act as a shopping cart where the user only has to pick and choose the flavor of tests he/she desires to run (e.g., Pre-Ingestion, Post Ingestion, Data Reconciliation, etc.), and the feature files associated with the tests are dynamically generated. | 2021-04-15 |
20210109905 | PROCESSING METRICS DATA WITH GRAPH DATA CONTEXT ANALYSIS - A method, system and computer program product for processing metrics data with graph data context analysis. Graph data representing one or more devices or sensors is stored into a first database, and metrics data generated by the devices or sensors is stored in a second database. The metrics data is then applied to the graph data for the context analysis, wherein the context analysis reflects the relationships of the devices or sensors in the graph data to the metrics data generated by the devices or sensors. The graph data comprises nodes for representing the devices or sensors, edges for representing a topology of the devices or sensors, and properties for storing the metrics data associated with the nodes and edges; and the metrics data comprises time-series data that is logged by the devices and sensors. | 2021-04-15 |
20210109906 | CLUSTERING MODEL ANALYSIS FOR BIG DATA ENVIRONMENTS - A method of persisting and performing a clustering analysis through use of a large data electronic file system includes generating a job identifier and linking the job identifier with a configuration identifier, a plurality of model identifiers and a plurality of data regularization identifiers. Each of the configuration identifier, model identifiers and data regularization identifiers are stored in respective management tables of the file system along with meta-data indicating a physical location of an analysis configuration, a physical location of a data regularizer and a physical location of a clustering model, respectively. The method further includes specifying the job identifier to a clustering analysis application causing the analysis configuration, the clustering models and the data regularizers to load into the clustering analysis application and receiving a plurality of scores resulting from a cluster analysis performed by the clustering analysis application based on the job identifier. | 2021-04-15 |
20210109907 | VERSIONING SCHEMAS FOR HIERARCHICAL DATA STRUCTURES - Versions of a schema may be maintained for application to hierarchical data structures. Updates to include in a new version of a schema may be received. The updates may be evaluated for compatibility with a current version of the schema. Compatible updates may be included in the new version of the schema. Incompatible updates may not be included in the new version of the schema. The new version of the schema may be made available for application to hierarchical data structures inclusive of the compatible updates to the schema. | 2021-04-15 |
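The gate described above, where compatible updates enter the new schema version and incompatible ones are excluded, can be illustrated with a toy rule set; treating additions as compatible and removals as incompatible is an assumption for the sketch, not the application's actual criteria:

```python
# Illustrative sketch of a schema-version compatibility gate. The rule set
# (adding new fields is compatible; removing existing ones is not) is an
# assumption chosen for the example.

current_schema = {"name": "string", "age": "int"}

def apply_updates(schema, updates):
    """Return (new_version, rejected), keeping only compatible updates."""
    new_version = dict(schema)
    rejected = []
    for op, field, ftype in updates:
        if op == "add" and field not in schema:
            new_version[field] = ftype           # additive change: compatible
        else:
            rejected.append((op, field, ftype))  # removal/retype: incompatible
    return new_version, rejected

updates = [("add", "email", "string"), ("remove", "age", None)]
new_schema, rejected = apply_updates(current_schema, updates)
```

Note the current version is left untouched; the new version is a separate object that can be made available for application to hierarchical data structures independently.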
20210109908 | COMPUTER-IMPLEMENTED METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR PROCESSING A DATA SET - According to an aspect, there is provided a computer-implemented method for processing a data set, the data set comprising respective data subsets for a plurality of subjects, each data subset comprising a plurality of data entries, each entry comprising respective parameter values for each of a plurality of parameters at a respective time point, wherein for a first data subset relating to a first subject in the plurality of subjects, one or more parameter values for at least a first parameter in the plurality of parameters is missing from the first data subset, the method comprising, for a first missing parameter value in a first data entry in the first data subset (a) determining completeness scores for the first parameter, wherein each completeness score indicates a level of completeness of the data entries in the first data subset for the first parameter and a respective one of the other parameters in the plurality of parameters; (b) determining correlation scores for the first parameter, wherein each correlation score indicates a level of correlation between the parameter values in the data set for the first parameter and the parameter values in the data set for a respective one of the other parameters in the plurality of parameters; (c) determining a subset of the plurality of parameters to use to form regression trees based on the determined completeness scores and the determined correlation scores; (d) forming a plurality of regression trees, wherein each regression tree relates to a respective parameter combination of the first parameter and one or more of the other parameters in the determined subset, and each regression tree is trained to predict a parameter value for the first parameter based on input parameter values for the one or more other parameters in the parameter combination, wherein each regression tree is trained using training data comprising parameter values for the parameters in the respective parameter combination, wherein the training data includes the parameter values in any data entry in the first data subset for which a parameter value is present for all of the parameters in the respective parameter combination; (e) using each regression tree to predict a parameter value for the first parameter based on parameter values in the first data entry for the one or more other parameters in the parameter combination; and (f) combining the predicted parameter values to estimate the first missing parameter value. A corresponding apparatus and computer program product are also provided. | 2021-04-15 |
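Steps (a) through (c) above amount to scoring each candidate helper parameter by joint completeness and pairwise correlation before any regression trees are built. A plain-Python sketch of that scoring (the data, thresholds, and use of Pearson correlation are illustrative assumptions; tree steps (d) through (f) are omitted):

```python
# Pure-Python sketch of the completeness/correlation scoring used to choose
# helper parameters for imputing a missing value (data and thresholds invented).

entries = [  # one subject's data entries: parameter -> value (None = missing)
    {"hr": 60, "bp": 120, "temp": 36.5},
    {"hr": 62, "bp": None, "temp": 36.6},
    {"hr": None, "bp": 118, "temp": 36.7},
    {"hr": 66, "bp": 124, "temp": 36.8},
]

def completeness(target, other):
    """Fraction of entries where both the target and the other value are present."""
    both = sum(1 for e in entries if e[target] is not None and e[other] is not None)
    return both / len(entries)

def correlation(target, other):
    """Pearson correlation over entries where both values are present."""
    pairs = [(e[target], e[other]) for e in entries
             if e[target] is not None and e[other] is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Step (c): keep parameters that score well on both criteria.
helpers = [p for p in ("bp", "temp")
           if completeness("hr", p) >= 0.5 and abs(correlation("hr", p)) >= 0.5]
```

Each surviving helper (alone or in combination) would then seed one regression tree in step (d), trained only on entries where all parameters in that combination are present.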
20210109909 | METHOD FOR MANAGING A DATABASE SHARED BY A GROUP OF APPLICATIONS, RELATED COMPUTER PROGRAM AND ON-BOARD SYSTEM - The invention relates to a method for managing a database shared by a group of applications. The database comprises elements, each comprising a value and a version number. Each application comprises a replica of the database. | 2021-04-15 |
20210109910 | NODE, NETWORK SYSTEM AND METHOD OF DATA SYNCHRONISATION - The present disclosure provides a node, comprising: a data storage unit configured to store a plurality of data entries, each data entry comprising a data entry ID, a data entry version identifier, and a data payload representing operating information of the node or another node; a processing unit; a first interface for communicating with another node. The node is configured to transmit a first data packet comprising data entry ID and data entry version identifier of a data entry in the data storage unit, to other nodes; and to receive a second data packet transmitted by said another node, the second data packet comprising data entry ID and data entry version identifier of a data entry of said another node; and compare the received data entry version identifier of the second data packet, with a data entry version identifier of a corresponding data entry in the data storage unit having a same data entry ID as the second data packet. | 2021-04-15 |
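The comparison at the heart of the abstract above, matching a received (entry ID, version) pair against the local copy of the same entry, might look like this; the store layout and the three-way outcome labels are assumptions for illustration:

```python
# Minimal sketch of comparing a received (entry ID, version) packet against
# the local data storage unit (layout and conflict labels are hypothetical).

local_store = {"cfg": (3, "fan=auto"), "temp": (7, "42C")}  # id -> (version, payload)

def compare(packet):
    """Classify a received (entry_id, version) pair against the local copy."""
    entry_id, version = packet
    if entry_id not in local_store:
        return "unknown-entry"
    local_version, _ = local_store[entry_id]
    if version > local_version:
        return "remote-newer"   # request the remote payload to catch up
    if version < local_version:
        return "local-newer"    # push our payload to the peer node
    return "in-sync"
```

Because only IDs and version identifiers travel in the first packet, nodes can decide which direction the operating-information payload needs to flow before sending any of it.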
20210109911 | PERVASIVE SEARCH ARCHITECTURE - A pervasive search architecture that indexes personal content of a querying user, including content made accessible to the user by other users. A compute node of a personal content location facilitates generation and serving of the index. The index is generated for personal content stored at the personal content location. For a given content location, the index may encapsulate content stored in a set of locations with access permissions. The indexing application runs periodically at the personal content location and incrementally indexes content that is added to the shared locations. The same application allows the user to configure locations with the desired access permissions for participation in the search. | 2021-04-15 |
20210109912 | MULTI-LAYERED KEY-VALUE STORAGE - Systems and methods for multi-layered key-value storage are described. For example, methods may include receiving two or more put requests that each include a respective primary key and a corresponding respective value; storing the two or more put requests in a buffer in a first datastore; determining whether the buffer is storing put requests that collectively exceed a threshold; responsive to the determination that the threshold has been exceeded, transmitting a write request to a second datastore, including a subsidiary key and a corresponding data file that includes the respective values of the two or more put requests at respective offsets in the data file; for the two or more put requests, storing respective entries in an index in the first datastore that associate the respective primary keys with the subsidiary key and the respective offsets; and deleting the two or more put requests from the buffer. | 2021-04-15 |
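The buffered put/flush path in the abstract above can be condensed into a toy two-layer store; the threshold, datastore layout, and key names are invented for illustration:

```python
# Toy version of multi-layered key-value storage: puts accumulate in a first
# datastore until they collectively exceed a threshold, then get packed into
# one data file under a subsidiary key in a second datastore, with an index
# mapping each primary key to its offset (all structures are hypothetical).

import itertools

THRESHOLD = 2
buffer = []       # first datastore: pending put requests
index = {}        # first datastore: primary key -> (subsidiary key, offset, length)
blob_store = {}   # second datastore: subsidiary key -> packed data file
_sub_keys = itertools.count()

def put(primary_key, value: bytes):
    buffer.append((primary_key, value))
    if len(buffer) > THRESHOLD:              # buffered puts exceed the threshold
        sub_key = f"blob-{next(_sub_keys)}"
        data, offset = b"", 0
        for key, val in buffer:              # pack values at successive offsets
            index[key] = (sub_key, offset, len(val))
            data += val
            offset += len(val)
        blob_store[sub_key] = data           # one write request to the second store
        buffer.clear()                       # delete the flushed puts from the buffer

def get(primary_key) -> bytes:
    for key, val in reversed(buffer):        # unflushed puts win over the index
        if key == primary_key:
            return val
    sub_key, offset, length = index[primary_key]
    return blob_store[sub_key][offset:offset + length]

put("a", b"AA"); put("b", b"BBB"); put("c", b"C")   # third put triggers the flush
assert get("b") == b"BBB"                            # served from the packed blob
```

Batching many small values into one data file per subsidiary key is what keeps the write amplification on the second datastore low.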
20210109913 | OBJECT STORAGE METHOD AND OBJECT STORAGE GATEWAY - An object storage method includes: receiving an operation instruction, towards a target object, transmitted by a client terminal of a user; in response to the operation instruction, determining a storage area corresponding to the target object, where the storage area is located in a key-value storage database, the storage area is associated with a data table in the key-value storage database, and storage information of objects in the storage area is recorded in the data table; and processing the target object in the storage area of the key-value storage database according to the operation instruction, and modifying storage information of the target object in the data table according to a result of processing the target object. | 2021-04-15 |
20210109914 | TRANSIENT SOFTWARE ERROR HANDLING IN A DISTRIBUTED SYSTEM - A method for use in a storage system is disclosed, comprising: receiving, at a first server in the storage system, a given block layer request for reservation of a storage resource; generating, by the first server, an identifier corresponding to the given block layer request; performing a search of a database to detect whether the given block layer request has been completed, the search being performed by the first server based on the identifier corresponding to the given block layer request; when the database indicates that the given block layer request has not been completed: completing the given block layer request and transmitting a notification that the given block layer request is completed; and when the database indicates that the given block layer request has been completed, re-transmitting a notification that the given block layer request is completed. | 2021-04-15 |
20210109915 | AUTOMATED COMPUTING PLATFORM FOR AGGREGATING DATA FROM A PLURALITY OF INCONSISTENTLY CONFIGURED DATA SOURCES TO ENABLE GENERATION OF REPORTING RECOMMENDATIONS - Methods, apparatus, systems, computing devices, computing entities, and/or the like for generating medical research reports automatically collect data from a plurality of separate health data storage systems, standardize the received data to support at least a requested report type, apply one or more machine-learning quality control checks to identify potentially inaccurate data included within the received data, and generate the requested report based at least in part on the standardized, refined data. Moreover, one or more recommended additional reports supported by the refined data set are identified and recommended to a user based at least in part on user attributes and the reports initially requested. | 2021-04-15 |
20210109916 | RELATIONAL DATABASE BLOCKCHAIN ACCOUNTABILITY - The present invention provides for employing SQL to introduce blockchain technologies into a relational database, and thereby leverage the inherent tamper-resistant properties of blockchain, without the need to completely rewrite existing or legacy relational database software. The invention creates a relational database inside of a blockchain and uses a conventional SQL interface for standard database operations. This reduces the burden of introducing blockchain technologies, while providing the benefits of the intrinsic security and verification features of blockchain technology. The invention provides a rich historical record of every transaction thereby greatly reducing, if not eliminating, the relational database's susceptibility to tampering. This allows for temporal queries on arbitrary records within the database and the generation of reports and audits for any point in the history of the database. | 2021-04-15 |
20210109917 | System and Method for Processing a Database Query - A system and a method for processing a database query are provided. The system includes a server associated with one or more databases and a cryptographic structure storing one or more fingerprints in a plurality of nodes, each of the one or more fingerprints associated with a respective database of the one or more databases. The server includes at least one processor, and at least one memory including computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the server at least to receive an input requesting a database query result to the database query, determine the database query result based on the one or more databases in response to the input, and determine one or more fingerprints of the databases associated with the database query result, and a verifying value in response to the determined one or more fingerprints, the verifying value being one that is used to verify if the determined one or more fingerprints are part of the cryptographic structure. | 2021-04-15 |
20210109918 | INTELLIGENT READING SUPPORT - Embodiments for providing data content consumption support by a processor. Data from one or more data sources may be captured and received by one or more data capturing devices while a user is consuming the data on the one or more data sources. A domain knowledge base may be automatically updated with the data. A response may be provided to one or more queries based upon information accessed from the domain knowledge base. | 2021-04-15 |
20210109919 | DISTRIBUTED LEDGER BASED GENERATION OF ELECTRONIC DOCUMENTS - A data management system is provided. The data management system is communicatively coupled to a distributed ledger that stores electronic document information for users and smart contracts that control access to the document information. The data management system receives a user request, including a user identifier (ID), for an electronic document and identifies a domain associated with the user request. The data management system selects smart contract(s) for the domain from the stored smart contracts and extracts user-specific information for the electronic document from the distributed ledger based on the user ID and the selected smart contract(s). The data management system determines a content ID associated with template information for the electronic document and extracts the template information from the distributed ledger based on the determined content ID and the selected smart contract(s). Based on the user-specific information and the template information, the data management system generates the electronic document. | 2021-04-15 |
20210109920 | Method for Validating Transaction in Blockchain Network and Node for Configuring Same Network - In this method for validating a transaction in a blockchain network, validation speed is increased. A first node | 2021-04-15 |
20210109921 | MERKLE TREE STORAGE OF BIG DATA - A non-transitory computer tangible medium containing instructions for securing a large data set within a Merkle Tree structure is disclosed in the present specification. The instructions include storing each data object of a large data set within a separate node of a Merkle Tree including within a root node, leaf nodes, and nodes interconnecting the root node to the leaf nodes. The nodes of the Merkle Tree may be blockchained together with multiple blockchains that all have an initial blockchain block based on the root node of the Merkle Tree and a final blockchain block based on one of the different leaf nodes. The Merkle Tree may have an order “O” that remains constant for each level of the Merkle B-Tree, or have an order “O” that varies for at least one level of the Merkle B-Tree from the remaining levels. | 2021-04-15 |
20210109922 | DATABASE MIGRATION TECHNIQUE - One or more processors generate a view that identifies the data records of a first database having a back-level version. Instructions are received to migrate the data records from a back-level version to a new version of the data records. Responsive to receiving a query requesting data records of the first database, the version level of the requested data records is determined based on the generated view. Responsive to determining the requested data records are in the back-level version, a migration is performed on the requested data records including changes resulting in the new version of the requested data records. The requested data records are identified as changed to the new version of data records, and the new version of the requested data records are written to a pre-determined storage location and are provided to a requestor submitting the query. | 2021-04-15 |
20210109923 | CONTEXTUAL DATA VISUALIZATION - A method for contextual data visualization includes receiving data selected by a user and meta-data associated with the data. The data is analyzed, using a processor of a computing device, to determine content and structure attributes of the data that are relevant to visualization of the data. The meta-data is analyzed, using a processor of the computing device, to determine a context in which the visualization of the data will be used. A database comprising an aggregation of visualization records from a plurality of users is accessed and at least one template from the data visualization records that matches the data attributes and context is selected. A data visualization is created by applying at least one template to the data. | 2021-04-15 |
20210109924 | USER INTERFACE FOR SEARCHING - The present disclosure relates to search techniques. In one example process, the device concurrently displays remote search results and local search results. In another example process, the device provides previews of search results that include actionable user interface objects. In another example process, the device concurrently displays options for initiating a search using various search engines. | 2021-04-15 |
20210109925 | INFORMATION MANAGEMENT DEVICE, INFORMATION MANAGEMENT METHOD, AND INFORMATION MANAGEMENT PROGRAM - An information management device includes a memory, and processing circuitry coupled to the memory and configured to convert spatio-temporal information in storage object information into a one-dimensional bit string, split the converted one-dimensional bit string into an upper bit string and a lower bit string, and cause a storage target node to store at least the split upper bit string in a key and to store the split lower bit string and associated data in a value of that key; and to convert a range condition of spatio-temporal information of an object to be retrieved into a one-dimensional bit string, split the converted one-dimensional bit string into an upper bit string and a lower bit string, retrieve a key from a search target node using at least the split upper bit string, and retrieve a value corresponding to the split lower bit string from values of the retrieved key. | 2021-04-15 |
20210109926 | SYSTEMS AND METHODS FOR STORING AND QUERYING USER EVENT DATA - Systems and methods for storing and querying user event data record information about user events in data pairs that are tied to specific days of the year. Storing the user event data in this fashion makes it easy to conduct very rapid queries to identify those users who satisfy certain criteria. | 2021-04-15 |
20210109927 | SYSTEM AND METHOD FOR AUTOMATICALLY PROVIDING ALTERNATIVE POINTS OF VIEW FOR MULTIMEDIA CONTENT - A selection of content from a content presentation is received. At least one topic from the selected content is extracted using natural language processing (NLP). The at least one topic is representative of a subject conveyed within the selected content. At least one perspective associated with the at least one topic is extracted using NLP. The at least one perspective is representative of a point of view conveyed within the selected content regarding the at least one topic. A topic rating of the extracted topics and associated perspectives is determined based upon the extracted topics and associated perspectives. The topic rating is representative of a topic diversity among the extracted topics and associated perspectives. The topic rating is presented within a graphical user interface (GUI). | 2021-04-15 |
20210109928 | INTERACTIVE TABLE-BASED QUERY CONSTRUCTION USING CONTEXTUAL FORMS - A method includes causing display of events that correspond to search results of a search query in a table. The table includes rows representing events comprising data items of event attributes, columns forming cells with the rows, the columns representing respective event attributes, and interactive regions corresponding to one or more of the displayed data items. The method also includes, in response to the user selecting a designated interactive region, causing display of a list of options, each displayed option corresponding to an interface template for composing query commands, and, based on the user selecting an option in the displayed list, causing one or more commands to be added to the search query, the one or more commands composed based on the one or more data items that correspond to the designated interactive region according to instructions of the interface template of the selected option. | 2021-04-15 |
20210109929 | PERFORMANCE OPTIMIZATION OF HYBRID SHARING MODEL QUERIES - Systems and methods for processing requests for shared records are described. A server computing system receives a data access request associated with a user. In response, the server determines shared records granted by a first sharing rule associated with the user. The server processes the data access request based on the shared records granted by the first sharing rule and shared records granted by a second sharing rule associated with the user. The shared records granted by the second sharing rule were determined prior to receiving the data access request; both the first sharing rule and the second sharing rule were generated prior to receiving the data access request. | 2021-04-15 |
20210109930 | BITMAP-BASED COUNT DISTINCT QUERY REWRITE IN A RELATIONAL SQL ALGEBRA - Techniques are described for storing and maintaining, in a materialized view, bitmap data that represents a bitmap of each possible distinct value of an expression, and rewriting a query for a count of distinct values of the expression using the materialized view. The materialized view contains bitmap data that represents a bitmap of each possible distinct value of a first expression, and aggregate values of additional expressions, and is stored in memory or on disk by a database system. The database system receives a query that requests a number of distinct values of the first expression and an aggregate value for an additional expression. In response, the database system rewrites the query to compute the number of distinct values by counting the set bits in the bitmap data of the materialized view, and to obtain the aggregate value for the additional expression from the materialized view. | 2021-04-15 |
20210109931 | DATA SECURITY THROUGH QUERY REFINEMENT - Systems, methods, and computer media for securing data accessible through software applications are provided herein. By capturing path data such as returned results for a query and displayed results provided by an application (e.g., to or by a web browser) for an operation, it can be determined if the query returned more data than was needed for what was displayed. The query can be refined to limit the data returned and reduce the security risk of such over-provisioning of data. | 2021-04-15 |
20210109932 | SELECTING AN OPTIMAL COMBINATION OF SYSTEMS FOR QUERY PROCESSING - A method is provided for generating a classification model configured to select an optimal execution combination for query processing. The method provides, to a processor, training queries and different execution combinations for executing the training queries. Each execution combination involves a respective query engine and a respective runtime. The method extracts, from a set of Directed Acyclic Graphs (DAGs) using a set of Cost-Based Optimizers (CBOs), a set of feature vectors for each of the training queries. The method adds, by the processor, to each of the merged feature vectors a respective label indicative of the optimal execution combination based on actual execution times of the different execution combinations, to obtain a set of labels. The method trains, by the processor, the classification model by learning the set of merged feature vectors with the set of labels. | 2021-04-15 |
20210109933 | LINKING DATA SETS - Linking data sets, including receiving a selection of a first column of a first data set related to a second column of a second data set; in response to the selection, generating a query based on a relationship between the first column of the first data set and the second column of the second data set; and presenting a third data set based on a response to the query. | 2021-04-15 |
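Application 20210109921 above stores data within a Merkle Tree structure; the underlying hash-tree construction is standard. A minimal sketch of a conventional binary Merkle root over a set of data blocks (not the blockchained, multi-chain variant the application claims) might look like:

```python
import hashlib

def _h(data: bytes) -> bytes:
    # SHA-256 is a common choice of node hash; the application does not specify one
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the root hash of a binary Merkle tree over the given leaf blocks."""
    if not leaves:
        raise ValueError("at least one leaf required")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        # each parent hashes the concatenation of its two children
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Because every parent commits to its children's hashes, altering any leaf changes the root, which is what gives the structure its tamper-evidence.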
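The bitmap-based count-distinct rewrite of application 20210109930 reduces `COUNT(DISTINCT expr)` to a population count over a maintained bitmap. A minimal sketch, assuming values are small non-negative integers usable directly as bit positions (a real system would first map values to a dense domain):

```python
def set_bit(bitmap: bytearray, value: int) -> None:
    """Mark a value as seen by setting its bit in the bitmap."""
    bitmap[value // 8] |= 1 << (value % 8)

def count_distinct(bitmap: bytearray) -> int:
    """The distinct count is simply the number of set bits."""
    return sum(bin(byte).count("1") for byte in bitmap)
```

Maintaining the bitmap in a materialized view means the query-time cost is a bit count rather than a scan-and-deduplicate over the base table.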
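The multi-layered key-value storage of application 20210109912 buffers put requests in a first datastore and, past a threshold, flushes them as a single data file (under a subsidiary key) to a second datastore, with an index mapping each primary key to an offset in that file. A minimal in-memory sketch, assuming the index also records each value's length (the application only mentions offsets):

```python
import uuid

class LayeredStore:
    """Buffers puts; flushes each full batch as one data file keyed by a subsidiary key."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.buffer = []   # pending (primary key, value) put requests
        self.index = {}    # primary key -> (subsidiary key, offset, length)
        self.files = {}    # stand-in for the second datastore: subsidiary key -> bytes

    def put(self, key: str, value: bytes) -> None:
        self.buffer.append((key, value))
        if len(self.buffer) >= self.threshold:
            self._flush()

    def _flush(self) -> None:
        sub_key = uuid.uuid4().hex          # subsidiary key for this batch's file
        blob, offset = bytearray(), 0
        for key, value in self.buffer:
            self.index[key] = (sub_key, offset, len(value))
            blob += value                   # values packed at successive offsets
            offset += len(value)
        self.files[sub_key] = bytes(blob)   # one write request to the second datastore
        self.buffer.clear()                 # delete the flushed puts from the buffer

    def get(self, key: str) -> bytes:
        for k, v in reversed(self.buffer):  # unflushed writes take precedence
            if k == key:
                return v
        sub_key, offset, length = self.index[key]
        return self.files[sub_key][offset:offset + length]
```

Batching this way trades a small read-path indirection for far fewer (and larger) writes to the slower second datastore.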