49th week of 2020 patent application highlights, part 49
Patent application number | Title | Published |
20200379897 | SYSTEMS AND METHODS FOR MANAGING AN ARTIFICIALLY LIMITED LOGICAL SPACE OF NON-VOLATILE MEMORY - Systems and methods for managing non-volatile memory devices are provided. Embodiments discussed herein define a native logical space to manage relatively high volume data write operations and define an artificially limited logical space to manage relatively low volume data write operations. The native logical space may include native logical bands that are mapped to a native number of physical blocks to enable high volume, high data transfer of data. The artificially limited logical space may include artificially limited logical bands that are mapped to an artificially limited number of available physical blocks. The artificially limited logical bands are better suited for low volume, low data transfer of data and do not unnecessarily tie up a native number of physical blocks. | 2020-12-03 |
20200379898 | METHOD FOR PERFORMING ACCESS MANAGEMENT OF MEMORY DEVICE WITH AID OF INFORMATION ARRANGEMENT, ASSOCIATED MEMORY DEVICE AND CONTROLLER THEREOF, ASSOCIATED ELECTRONIC DEVICE - A method for performing access management of a memory device with aid of information arrangement and associated apparatus (e.g. the memory device and controller thereof, and an associated electronic device) are provided. The method may include: when the host device sends a write command to the memory device, utilizing the memory controller to generate a plurality of ECC chunks respectively corresponding to a plurality of sets of memory cells of the NV memory according to data, for establishing one-to-one mapping between the plurality of ECC chunks and the plurality of sets of memory cells; and utilizing the memory controller to store the plurality of ECC chunks into the plurality of sets of memory cells, respectively, to prevent any two ECC chunks of the ECC chunks from sharing a same set of memory cells of the sets of memory cells, to enhance read performance of the memory controller regarding the data. | 2020-12-03 |
20200379899 | METHODS FOR FACILITATING EFFICIENT STORAGE OPERATIONS USING HOST-MANAGED SOLID-STATE DISKS AND DEVICES THEREOF - Methods, non-transitory machine readable media, and computing devices that facilitate efficient storage operations using host-managed solid-state disks (SSDs) are disclosed. With this technology, a direct memory access (DMA) transfer is initiated of a data block from a location indicated in an application write request to a write buffer in a device memory of an SSD. A determination is made when write rule(s) are satisfied based on content of the write buffer including at least the data block and other data block(s) previously transferred to the write buffer. A copy request is issued to transfer a portion of the content to flash media of the SSD, when the write rule(s) are satisfied. This technology does not require host memory for write buffering or processor cycles for copying data from application data buffers to a write buffer in host memory, and thereby significantly improves resource utilization of host devices managing SSDs. | 2020-12-03 |
20200379900 | CONFIGURABLE MEMORY DEVICE CONNECTED TO A MICROPROCESSOR - The present memory restoration system enables a collection of computing systems to prepare inactive rewritable memory for reserve and future replacement of other memory while the other memory is active and available for access by a user of the computing system. The preparation of the reserved memory part is performed off-line in a manner that is isolated from the current user of the active memory part. Preparation of memory includes erasure of data, reconfiguration, etc. The memory restoration system allows for simple exchange of the reserved memory part, once the active memory part is returned. The previously active memory may be concurrently recycled for future reuse in this same manner to become a reserved memory. This enables the computing collection infrastructure to “swap” to what was previously the inactive memory part when a user vacates a server, speeding up the server wipe process. | 2020-12-03 |
20200379901 | CONTROLLER, DATA STORAGE DEVICE, AND PROGRAM PRODUCT - According to one embodiment, a write instructing unit instructs a data access unit to write, in a storage area of a data storage unit indicated by a first physical address, write object data, instructs a management information access unit to update address conversion information, and instructs a first access unit to update the first physical address. A compaction unit extracts a physical address of compaction object data, instructs the data access unit to read the compaction object data stored in a storage area of the data storage unit indicated by the physical address, instructs the data access unit to write the compaction object data in a storage area of the data storage unit indicated by a second physical address, instructs the management information access unit to update the address conversion information, and instructs a second access unit to update the second physical address. | 2020-12-03 |
20200379902 | SECURITY CHECK SYSTEMS AND METHODS FOR MEMORY ALLOCATIONS - A memory controller is to store a unique tag at the mid-point address within each of allocated memory portions. In addition to the tag data, additional metadata may be stored at the mid-point address of the memory allocation. For each memory access operation, an encoded pointer contains information indicative of a size of the memory allocation as well as its own tag data. The processor circuitry compares the tag data included in the encoded pointer with the tag data stored in the memory allocation. If the tag data included in the encoded pointer matches the tag data stored in the memory allocation, the memory operation proceeds. If the tag data included in the encoded pointer fails to match the tag data stored in the memory allocation, an error or exception is generated. | 2020-12-03 |
20200379903 | STAGGERED GARBAGE COLLECTION UNIT (GCU) ALLOCATION ACROSS DIES - Apparatus and method for managing a non-volatile memory (NVM) such as a flash memory in a solid-state drive (SSD). In some embodiments, the NVM is arranged as a plurality of semiconductor memory dies coupled to a controller circuit using a plurality of channels. The controller circuit divides the plurality of dies into a succession of garbage collection units (GCUs). Each GCU is independently erasable and allocatable for storage of user data. The GCUs are staggered so that each GCU is formed from a different subset of the dies in the NVM. In further embodiments, the dies are arranged into NVM sets in accordance with the NVMe (Non-Volatile Memory Express) specification with each NVM set addressable by a different user for storage of data in a separate set of staggered GCUs. | 2020-12-03 |
20200379904 | OPTIMIZING GARBAGE COLLECTION USING CHECK POINTED DATA SETS - A determination is made as to whether a section of a storage device of a plurality of storage devices of the storage system corresponds to one or more check-pointed data sets of a plurality of check-pointed data sets that identify one or more regions of the section having overwritten data. A garbage collection process is performed on the one or more regions of the section having overwritten data upon determining that the section corresponds to the one or more check-pointed data sets. | 2020-12-03 |
20200379905 | STORAGE DEVICE AND METHOD OF OPERATING THE SAME - A storage device having enhanced operating efficiency may include: a memory device including a plurality of memory blocks; and a memory controller configured to perform, using an identical random seed, an operation of de-randomizing data stored in different memory blocks among the plurality of memory blocks. | 2020-12-03 |
20200379906 | SELECTIVE RELEASE-BEHIND OF FILE PAGES - Disclosed is a computer implemented method to manage a cache, the method comprising determining that a primary application opens a first file, wherein opening the first file includes reading the first file into a file cache from a storage. The method also includes setting a first monitoring variable in the primary application process proc structure, wherein the first monitoring variable is set in response to the primary application opening the first file, and the first monitoring variable records a set of operations completed on the first file by the primary application. The method further includes determining that a first read of the first file occurs at the beginning of the first file, identifying that the first file is read according to a pattern that includes reading the first file sequentially and in its entirety, and removing the first file from the file cache. | 2020-12-03 |
20200379907 | REDUCING CACHE INTERFERENCE BASED ON FORECASTED PROCESSOR USE - In various embodiments, a predictive assignment application computes a forecasted amount of processor use for each workload included in a set of workloads using a trained machine-learning model. Based on the forecasted amounts of processor use, the predictive assignment application computes a performance cost estimate associated with an estimated level of cache interference arising from executing the set of workloads on a set of processors. Subsequently, the predictive assignment application determines processor assignment(s) based on the performance cost estimate. At least one processor included in the set of processors is subsequently configured to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s). Advantageously, because the predictive assignment application generates the processor assignment(s) based on the forecasted amounts of processor use, it can reduce interference in a non-uniform memory access (NUMA) microprocessor instance. | 2020-12-03 |
20200379908 | Intelligent Content Migration with Borrowed Memory - Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after the prediction of a time period during which a network connection between computing devices having borrowed memory degrades, the computing devices can make a migration decision for content of a virtual memory address region, based at least in part on a predicted usage of content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc. using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices. | 2020-12-03 |
20200379909 | CACHE ARRANGEMENT FOR GRAPHICS PROCESSING SYSTEMS - A graphics processing system is disclosed having a cache system. | 2020-12-03 |
20200379910 | Host and Method for Storage System Calibration - A storage system, host, and method for storage system calibration are provided. In one embodiment, a storage system is provided comprising a memory and a controller. The controller is configured to: determine a pattern of host writes to the memory; determine whether the pattern of host writes matches a granularity of a logical-to-physical address map used by the storage system; and in response to determining that the pattern of host writes does not match the granularity of the logical-to-physical address map used by the storage system, change the granularity of the logical-to-physical address map used by the storage system. In another embodiment, the storage system calibration is done by host directive. Other embodiments are provided. | 2020-12-03 |
20200379911 | ALLOCATION OF MACHINE LEARNING TASKS INTO A SHARED CACHE - The subject technology receives code corresponding to a neural network (NN) model, the code including particular operations that are performed by the NN model. The subject technology determines, among the particular operations, a set of operations that are to be allocated to a cache of the electronic device that is to execute the NN model. The subject technology generates a set of cache indicators corresponding to the determined set of operations. The subject technology compiles the code and the generated set of cache indicators to provide a compiled binary for the NN model to execute on a target device. | 2020-12-03 |
20200379912 | SINGLE PRODUCER SINGLE CONSUMER BUFFERING IN DATABASE SYSTEMS - A method for execution by a virtual machine core includes retrieving a first pointer by accessing a first buffer of a plurality of buffers stored in allocated memory of a main memory based on assignment of the virtual machine core as a single consumer of the first buffer. First intermediate data in the allocated memory is accessed by utilizing the first pointer. Second intermediate data is generated by executing one of an ordered set of operations on the first intermediate data. The second intermediate data is written to the allocated memory. A second pointer is written to a second buffer of the plurality of buffers based on assignment of the virtual machine core as a single producer of the second buffer. | 2020-12-03 |
20200379913 | Distributed Computing based on Memory as a Service - Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time. | 2020-12-03 |
20200379914 | Fine Grain Data Migration to or from Borrowed Memory - Systems, methods and apparatuses of fine grain data migration in using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to make memory access. Otherwise, the sub-region is cached from the borrowed memory region to the local memory, before the physical memory address is used. | 2020-12-03 |
20200379915 | PERSISTENT LOGICAL TO VIRTUAL TABLE - Techniques for persisting a logical address-to-virtual address table in a solid state storage device are presented. An example method includes receiving a request to write data to a logical block address (LBA) in a memory component of the solid state storage device. The data is written to a location identified by a virtual block address (VBA) in the solid state storage device. The VBA is stored in a rotating dump table in a reserved logical unit of the solid state storage device. A mapping between the LBA and the VBA is stored in a rotating journal table located in the reserved logical unit. The rotating journal table is buffered such that a number of journal entries are stored in a buffer until a threshold number of journal entries are committed to the rotating journal table. A pointer to a current address in the rotating journal is stored in the buffer. | 2020-12-03 |
20200379916 | SUPPORTING A VIRTUAL MEMORY AREA AT A REMOTE COMPUTING MACHINE - Systems and methods for operating a virtual memory area are provided. A dynamic address translation table for the virtual memory area is generated. A program is operated at a first computing machine until insufficient local real memory is available to complete operation. A request for real memory space is transmitted from the first computing machine to an additional computing machine. A location of a segment of the local real memory of the additional computing machine is received at the first computing machine and the dynamic address translation table is updated to associate a virtual address with the received location. | 2020-12-03 |
20200379917 | DEFINING VIRTUALIZED PAGE ATTRIBUTES BASED ON GUEST PAGE ATTRIBUTES - A processing system includes a processing core to execute a virtual machine (VM) comprising a guest operating system (OS) and a memory management unit, communicatively coupled to the processing core, comprising a storage device to store an extended page table entry (EPTE) comprising a mapping from a guest physical address (GPA) associated with the guest OS to an identifier of a memory frame, a first plurality of access right flags associated with accessing the memory frame in a first page mode referenced by an attribute of a memory page identified by the GPA, and a second plurality of access right flags associated with accessing the memory frame in a second page mode referenced by the attribute of the memory page identified by the GPA. | 2020-12-03 |
20200379918 | COMPRESSION FOR FLASH TRANSLATION LAYER - A device compresses a mapping table in a flash translation layer of an SSD. The mapping table includes mappings between Logical Page Numbers (LPNs) and Physical Page Numbers (PPNs). A base PPN table stores at least one entry including a base PPN common to multiple LPNs. A PPN offset table stores an offset for each mapping. A set of hash functions are duplicated for each entry in the base PPN table. A bit extension unit adds bits to the respective offset in the PPN offset table to provide an extended offset bit. A hash calculator calculates a hash value using the base PPN and one of the hash functions corresponding to the base PPN. An exclusive OR unit outputs a new PPN for each of different LPNs, including the multiple LPNs, by applying an exclusive OR operation to the hash value and the extended offset bit. | 2020-12-03 |
20200379919 | Memory Management Unit (MMU) for Accessing Borrowed Memory - Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory. | 2020-12-03 |
20200379920 | Power Aware Translation Lookaside Buffer Invalidation Optimization - One disclosed embodiment includes a method for memory management. The method includes receiving a first request to clear one or more entries of a translation lookaside buffer (TLB), receiving a second request to clear one or more entries of the TLB, bundling the first request with the second request, determining that a processor associated with the TLB transitioned to an inactive mode, and dropping the bundled first and second requests based on the determination. | 2020-12-03 |
20200379921 | STORAGE SYSTEM AND METHOD FOR PERFORMING AND AUTHENTICATING WRITE-PROTECTION THEREOF - In one embodiment, the method includes receiving, at a storage device, a request. The request includes a request message authentication code and write protect information. The write protect information includes at least one of start address information and length information. The start address information indicates a logical block address at which a memory area in a non-volatile memory of the storage device starts, and the length information indicates a length of the memory area. The method also includes generating, at the storage device, a message authentication code based on (1) at least one of the start address information and the length information, and (2) a key stored at the storage device; authenticating, at the storage device, the request based on the generated message authentication code and the request message authentication code; and processing, at the storage device, the request based on a result of the authenticating. | 2020-12-03 |
20200379922 | ADAPTIVE ROUTING FOR POOLED AND TIERED DATA ARCHITECTURES - Examples described herein relate to a network device apparatus that includes a packet processing circuitry configured to determine if target data associated with a memory access request is stored in a different device than that identified in the memory access request and based on the target data associated with the memory access request identified as stored in a different device than that identified in the memory access request, cause transmission of the memory access request to the different device. In some examples, the memory access request comprises an identifier of a requester of the memory access request and the identifier comprises a Process Address Space identifier (PASID) and wherein the configuration that a redirection operation is permitted to be performed for a memory access request is based at least on the identifier. In some examples, the packet processing circuitry is to: based on configuration of a redirection operation not to be performed for the memory access request, cause transmission of the memory access request to a device identified in the memory access request. | 2020-12-03 |
20200379923 | GRANULAR ACCESS CONTROL FOR SECURE MEMORY - A secure processing system includes a memory having a secure partition and a non-secure partition, a neural network processing unit (NPU) configured to initiate transactions with the memory, and a memory protection unit (MPU) configured to filter the transactions. Each of the transactions includes at least an address of the memory to be accessed, one of a plurality of first master identifiers (IDs) associated with the NPU, and security information indicating whether the NPU is in a secure state or a non-secure state when the transaction is initiated. The MPU is to selectively deny access to the secure partition of the memory based at least in part on the memory address, the first master ID, and the security information associated with each of the transactions. | 2020-12-03 |
20200379924 | FUNCTIONAL SAFETY METHOD, CORRESPONDING SYSTEM-ON-CHIP, DEVICE AND VEHICLE - A method is provided to access a data storage memory that stores data signals in a plurality of indexed memory locations. An access control circuit receives memory access request signals from a processing circuit. The method includes replicating the respective memory access request signals to provide for each a respective replicated memory access request signal, accessing indexed internal memory locations to retrieve a first data signal retrieved as a function of the respective memory access request signal and a second data signal retrieved as a function of the respective replicated memory access request signal, and checking the first data signal and the at least one second data signal for identity. The access control circuit transmits to the processing circuit a data signal or an integrity error flag signal as a result of the identity check. | 2020-12-03 |
20200379925 | EXECUTION SPACE AGNOSTIC DEVICE DRIVERS - Embodiments described herein provide techniques to manage drivers in a user space in a data processing system. One embodiment provides a data processing system configured to perform operations, comprising discovering a hardware device communicatively coupled to the communication bus, launching a user space driver daemon, establishing an inter-process communication (IPC) link between a first proxy interface for the user space driver daemon and a second proxy interface for a server process in a kernel space, receiving, at the first proxy interface, an access right to enable access to a memory buffer in the kernel space, and relaying an access request for the memory buffer from the user space driver daemon via a third-party proxy interface to enable the user space driver daemon to access the memory buffer, the access request based on the access right. | 2020-12-03 |
20200379926 | EXTERNAL BLOCK TRANSFER IN A MEMORY SYSTEM - A memory system includes a dynamic random access memory (DRAM) device, a second memory device, and a memory controller circuit. The memory controller circuit is coupled to the DRAM device by a first data channel configured to transfer first data between the memory controller circuit and the DRAM device on behalf of a host, and is also coupled to the DRAM device by a second data channel configured to transfer second data between the memory controller circuit and the DRAM device on behalf of the second memory device while the first data is being transferred across the first data channel. | 2020-12-03 |
20200379927 | Providing Copies of Input-Output Memory Management Unit Registers to Guest Operating Systems - An electronic device includes a processor that executes a guest operating system, an input-output memory management unit (IOMMU), and a main memory that stores an IOMMU backing store. The IOMMU backing store includes a separate copy of a set of IOMMU memory-mapped input-output (MMIO) registers for each guest operating system in a set of supported guest operating systems. The IOMMU receives, from the guest operating system, a communication that accesses data in a given IOMMU MMIO register. The IOMMU then performs a corresponding access of the data in a copy of the given IOMMU MMIO register in the IOMMU backing store associated with the guest operating system. | 2020-12-03 |
20200379928 | IMAGE PROCESSING ACCELERATOR - A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator. | 2020-12-03 |
20200379929 | Memory Access System - A memory access system includes a memory that is abstracted into data structures. The memory access system further includes a processor that generates an access request for accessing the abstracted memory by way of a structure access circuit of the memory access system. As the memory is abstracted into the data structures and the processor accesses the abstracted memory using the data structures, an addressing capability of the processor is extended. Further, the computing overhead of the processor is reduced, as the processor performs various memory operations by accessing the memory by way of the structure access circuit. | 2020-12-03 |
20200379930 | SYSTEM AND METHOD FOR TRANSFORMING LEGACY SR-IOV DEVICES TO APPEAR AS SIOV QUEUE PAIRS USING A MANAGEMENT CONTROLLER - Methods and systems support bridging between end devices conforming to a legacy bus specification and a host processor using an updated bus specification, for example the latest PCIe specification or Compute Express Link (CXL). A hardware bridge can serve as an intermediary between the legacy I/O devices and the host processor. The hardware bridge has a hardware infrastructure and performs a hardware virtualization of the legacy I/O devices such that their legacy hardware is emulated by a virtual interface. The hardware bridge can surface the virtual interface to the host processor, enabling these I/O devices to appear to the host processor as an end device communicating in accordance with the updated bus specification. The hardware virtualization can involve emulating the I/O devices using scalable I/O Virtualization (SIOV) queue pairs, providing flexible and efficient translation between the legacy and updated specifications. | 2020-12-03 |
20200379931 | SYSTEM ARCHITECTURE WITH SECURE DATA EXCHANGE - In an embodiment, a system comprises: a first bus; a second bus; a first peripheral coupled to the first bus and the second bus, the first peripheral configured to receive a command from the first bus and to generate data in response to the first command; and a second peripheral coupled to the first bus and the second bus, the second peripheral configured to initiate transfer of the generated data from the first peripheral to the second peripheral over the second bus such that access to the generated data through the first bus is prevented. | 2020-12-03 |
20200379932 | APPLICATION PROCESSOR FOR LOW POWER OPERATION, ELECTRONIC DEVICE INCLUDING THE SAME AND METHOD OF OPERATING THE SAME - An application processor includes a system bus, a host processor, a voice trigger system and an audio subsystem that are electrically connected to the system bus. The voice trigger system performs a voice trigger operation and issues a trigger event based on a trigger input signal that is provided through a trigger interface. The audio subsystem includes an audio interface and processes audio streams through the audio interface. The voice trigger system is disposed in an always-powered domain where power is supplied in both of an active mode and a standby mode. The host processor and the audio subsystem are disposed in a power-save domain where power is blocked in the standby mode. The host processor launches into the active mode when the voice trigger system issues the trigger event. | 2020-12-03 |
20200379933 | MULTI-PROTOCOL IO INFRASTRUCTURE FOR A FLEXIBLE STORAGE PLATFORM - A flexible storage system. A storage motherboard accommodates, on a suitable connector, a storage adapter circuit that provides protocol translation between a host bus interface and a storage interface, and that provides routing, to accommodate a plurality of mass storage devices that may be connected to the storage adapter circuit through the storage motherboard. The storage adapter circuit may be replaced with a circuit supporting a different host interface or a different storage interface. | 2020-12-03 |
20200379934 | MODE SWITCHING SYSTEM AND MODE SWITCHING METHOD USING THE SAME - A mode switching system including a first electronic device and the second electronic device is provided. The first electronic device includes a main control unit, a USB Type-C interface controller and a USB hub. The interface controller is coupled to the main control unit. The USB hub is coupled to the interface controller. The second electronic device is coupled to the interface controller of the first electronic device. The main control unit is configured to: (1) disable the USB hub in response to a mode switching instruction; (2) switch the mode of the interface controller from a first mode to a second mode; (3) command the interface controller to re-communicate with the second electronic device. | 2020-12-03 |
20200379935 | DYNAMIC ALLOCATION OF RESOURCES OF A STORAGE SYSTEM UTILIZING SINGLE ROOT INPUT/OUTPUT VIRTUALIZATION - A peripheral component interconnect express (PCIe) physical function is coupled to a controller. The controller is configured to allocate a first portion of resources for use by the PCIe physical function. A PCIe virtual function is coupled to the controller. The controller is configured to allocate a second portion of resources for use by the PCIe virtual function based, at least in part, on a total number of PCIe physical functions and a total number of PCIe virtual functions associated with the apparatus. | 2020-12-03 |
20200379936 | RECONFIGURABLE CHANNEL INTERFACES FOR MEMORY DEVICES - Methods, systems, and devices for reconfigurable channel interfaces for memory devices are described. A memory device may be split into multiple logical channels, where each logical channel is associated with a memory array and a command/address (CA) interface. In some cases, the memory device may configure a first CA interface associated with a first channel to forward commands to a first memory array associated with the first channel and a second memory array associated with a second channel. The configuring may include isolating a second CA interface associated with the second channel from the second array and coupling the first CA interface with the second memory array. | 2020-12-03 |
20200379937 | SEMICONDUCTOR SYSTEM AND SEMICONDUCTOR DEVICE - A semiconductor system capable of reducing processing time in connection processing to a USB port is provided. The semiconductor system comprises a TCPM and a TCPC. The TCPM and the TCPC are communicably connected via the I2C bus. The TCPM has a connection detector. The TCPC includes a CC logic and a controller. The CC logic embodies a state machine. The controller controls transitions in the state machine. The controller outputs a connected state transition notification when the state machine transitions to the connected state. The connection detector receives the connected state transition notification and detects the connection of the USB port. The TCPM performs a process corresponding to the connection detection by the connection detector. | 2020-12-03 |
20200379938 | SYSTEMS AND METHODS FOR DOOR AND DOCK EQUIPMENT SERVICING - A method for monitoring automatic mechanical devices selected from at least one of automatic doors and automatic dock equipment located at a commercial site. The method may include installing, at the commercial site, a plurality of internet-of-things (IoT) monitoring devices. Each of the IoT monitoring devices may include a plurality of connectors corresponding to respective data communication standards, and a wireless transceiver configured to transmit operational information. Electronic communication between each automatic mechanical device and one of the IoT monitoring devices may be established via one of the connectors. A device profile may be assigned for each of the automatic mechanical devices. Each device profile defines a respective connector and combination of manufacturer and device model. Data reflecting operational events and states of the automatic mechanical devices is received over the connectors, and corresponding operational information relating to the automatic mechanical devices is transmitted for analysis. | 2020-12-03 |
20200379939 | SYSTEMS, COMPUTER-READABLE MEDIA AND COMPUTER-IMPLEMENTED METHODS FOR NETWORK ADAPTER ACTIVATION IN CONNECTION WITH FIBRE CHANNEL UPLINK MAPPING - A system, computer-readable media and computer-implemented method for automated network adapter activation in connection with fibre channel uplink mapping. The system includes a non-virtualized storage area network switch having a plurality of fibre channel ports. Each of the fibre channel ports is coupled to a corresponding cable to at least partly define a fibre channel uplink. The system also includes a plurality of client devices. Each client device has a network adapter. The system also includes a processing element and non-transitory computer-readable media having computer-readable instructions instructing the processing element to complete the following steps: (1) automatically execute an algorithm to determine a sequence for mapping the network adapters to respective fibre channel uplinks; (2) automatically determine a network adapter activation pattern based on the sequence to include a time delay between the network adapters; (3) automatically map the network adapters to respective fibre channel uplinks according to the sequence; and (4) automatically activate the network adapters based on the network adapter activation pattern. | 2020-12-03 |
20200379940 | INPUT DATA SWITCHES - An example method comprises establishing, via a first data communication interface of a first computing device, a physical connection between the first computing device and a second computing device. The method also includes receiving, at the first computing device, input data via a second data communication interface of the first computing device. The method further includes controlling an operation at the first computing device based on the input data when the input data is received prior to establishing the connection. The method further includes routing the input data to the second computing device via an input data switch of the first computing device when the input data is received after establishing the connection. | 2020-12-03 |
20200379941 | COMMUNICATION SYSTEM AND COMMUNICATION CONTROL METHOD - Provided is a communication system including: a first communication bus available for communication of at least a first communication scheme; a second communication bus available for both communication of the first communication scheme and communication of a second communication scheme having a lower processing load than the first communication scheme; a plurality of first communication devices connected to both the first communication bus and the second communication bus; a plurality of second communication devices, connected to the second communication bus, which perform communication through the second communication scheme using the second communication bus; and a processor that detects an abnormality of the first communication bus, wherein each of the plurality of first communication devices performs communication through the first communication scheme using the first communication bus in a case where the abnormality of the first communication bus is not detected by the processor, and performs communication through the first communication scheme using the second communication bus in a case where the abnormality of the first communication bus is detected by the processor. | 2020-12-03 |
20200379942 | SYNCHRONIZATION OF AUDIO ACROSS MULTIPLE DEVICES - Methods and devices for synchronizing audio among a plurality of display devices in communication with a computer device may include determining a plurality of audio data subsets with audio data from an audio stream to transmit to a plurality of display devices in communication with the computer device via a universal serial bus (USB) connection. The methods and devices may include obtaining a current frame number of a display device render buffer from a first display device of the plurality of display devices. The methods and devices may include determining an updated frame number by adding a constant to the current frame number; and generating a plurality of USB request blocks with the updated frame number and packets with the plurality of audio data subsets. The methods and devices may include sending the USB request blocks to a corresponding display device of the plurality of display devices. | 2020-12-03 |
20200379943 | PROVIDING A CONTINUATION POINT FOR A USER TO RECOMMENCE CONSUMING CONTENT - A pause point during consumption of media data is identified. The pause point is a point at which a user stops the consumption of the media data. A portion of content preceding the identified pause point is determined. The portion of content is analyzed to identify changes in content concepts in the portion of content. One or more continuation points for the user to return to the content based on changes in the content concepts in the portion of content are identified. The one or more continuation points are indicated to the user. | 2020-12-03 |
20200379944 | Shared Memory Structure for Reconfigurable Parallel Processor - Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of processing elements (PEs) each having a plurality of arithmetic logic units (ALUs) that are configured to execute a same instruction in parallel threads and a plurality of memory ports (MPs) for the plurality of PEs to access a memory unit. Each of the plurality of MPs may comprise an address calculation unit configured to generate respective memory addresses for each thread to access a common area in the memory unit. | 2020-12-03 |
20200379945 | CIRCULAR RECONFIGURATION FOR RECONFIGURABLE PARALLEL PROCESSOR - Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of reconfigurable units that may include a plurality of processing elements (PEs) and a plurality of memory ports (MPs) for the plurality of PEs to access a memory unit. Each of the plurality of reconfigurable units may comprise a configuration buffer and a reconfiguration counter. The processor may further comprise a sequencer coupled to the configuration buffer of each of the plurality of reconfigurable units and configured to distribute a plurality of configurations to the plurality of reconfigurable units for the plurality of PEs and the plurality of MPs to execute a sequence of instructions. | 2020-12-03 |
20200379946 | DEVICE, METHOD, AND GRAPHICAL USER INTERFACE FOR MIGRATING DATA TO A FIRST DEVICE DURING A NEW DEVICE SET-UP WORKFLOW - A method includes: detecting, via the one or more input devices, a first input that corresponds to migrating data to set-up the first device during a new device set-up workflow; and, in response to detecting the first input, displaying, via the display device, a data migration user interface that includes concurrently displaying: a selectable direct transfer option that corresponds to initiating a direct transfer of the data to the first device from a second device within a predefined proximity range of the first device, wherein the selectable direct transfer option includes an estimated time for completion of the direct transfer; and a selectable remote transfer option that corresponds to initiating a remote transfer of the data to the first device from a remote storage device, wherein the selectable remote transfer option includes an estimated time for completion of the remote transfer. | 2020-12-03 |
20200379947 | SYSTEMS AND METHODS FOR COLLECTING, ANALYZING, BILLING, AND REPORTING DATA FROM INTELLIGENT ELECTRONIC DEVICES - Systems and methods for collecting, analyzing, billing and reporting data from intelligent electronic devices are provided. Also, systems and methods for managing sensor data are provided. In some embodiments, a system for managing sensor data may include intelligent electronic devices, a server, a plurality of client devices, and a network. Each of the intelligent electronic devices is configured to obtain sensor data related to power parameters distributed to a load. The server is configured to receive the sensor data from the plurality of intelligent electronic devices and store the sensor data in a database. Each client device is configured to retrieve the sensor data from the database. The network enables communication among the server, the plurality of intelligent electronic devices, and the plurality of client devices. | 2020-12-03 |
20200379948 | INDEXES AND QUERIES FOR FILES BY INDEXING FILE DIRECTORIES - The described technology is generally directed towards improving indexes and queries for files by indexing file directories. According to an embodiment, a system can comprise a memory and a processor that can execute the components stored in the memory. The components can comprise a data interface to couple to a database system comprising a database storing metadata describing a file system, wherein the database comprises records that correspond to ones of directories of the file system, and wherein the records comprise a field that corresponds to files logically stored in the directories of the file system. The system can further comprise an indexing component that creates an index for the records based on an index key and an analysis of the ones of the files and the directories to which the records correspond, wherein the index comprises links between instances of the index key and ones of the directories. Further, the system can comprise a query component that queries the database for a file of the file system by employing a search key and the index. | 2020-12-03 |
20200379949 | MODIFICATION AND PERIODIC CURATION OF METADATA COLLECTED FROM A FILE SYSTEM - The described technology is generally directed towards reducing the amount of data stored in a sequence of data blocks by combining deduplication and compression. According to an embodiment, a system can comprise a memory that can store computer executable components, and a processor that can execute the components stored in the memory. The components can comprise a receiver component to receive metadata describing directories in a data store, wherein the metadata comprises, for the respective ones of the directories, a descendant directory. The system can further comprise a data structure component to create a tree data structure, comprising nodes corresponding to the directories, and comprising links corresponding to the metadata of the respective ones of the directories. Further, the system, can comprise a curation component to cull non-useful portions of the metadata from the tree data structure periodically. | 2020-12-03 |
20200379950 | SYSTEMS AND METHODS FOR UTILIZING MACHINE LEARNING AND NATURAL LANGUAGE PROCESSING TO PROVIDE A DUAL-PANEL USER INTERFACE - A device provides content to a client device via a dual-panel user interface that includes a first panel and a second panel. The device receives, from the client device, information indicating a user interaction, and processes the information, with a first model, to determine a question based on the user interaction. The device utilizes natural language processing with the question to determine an intent of the question, and processes the intent of the question, with a second model, to map the intent of the question to a content answer to the question. The device adds additional user information, associated with a user of the client device, to the content answer to generate a personalized content response, and updates the first panel with the personalized content response to generate an updated first panel. The device provides the updated first panel, via the dual-panel user interface, to the client device. | 2020-12-03 |
20200379951 | VISUALIZATION AND INTERACTION WITH COMPACT REPRESENTATIONS OF DECISION TREES - A decision tree model is generated from sample data. A visualization system may automatically prune the decision tree model based on characteristics of nodes or branches in the decision tree or based on artifacts associated with model generation. For example, only nodes or questions in the decision tree receiving a largest amount of the sample data may be displayed in the decision tree. The nodes also may be displayed in a manner to more readily identify associated fields or metrics. For example, the nodes may be displayed in different colors and the colors may be associated with different node questions or answers. | 2020-12-03 |
20200379952 | SELF-HEALING DATA SYNCHRONIZATION - A self-healing data synchronization process includes an initial stage in which a collection of data change events is received, a set of data record(s) corresponding to the data change event(s) is identified, and a syncing of the set of data record(s) is initiated. Data that indicates which data record(s) successfully synced and which failed is stored. During a subsequent stage of the self-healing process, data change events that occurred during a preceding time horizon are identified, a corresponding first set of data record(s) are identified, a difference between the first set and a second set of data record(s) that successfully synced during the time horizon is determined as a third set of data record(s), and any data record that was attempted to be synced during the time horizon but failed is excluded from the third set. A sync of any data record remaining in the third set is then initiated. | 2020-12-03 |
20200379953 | DATA COMPRESSION BY USING COGNITIVE CREATED DICTIONARIES - A compression method, system, and computer program product include creating compressed data via a first system from input data, sending information to a second system detailing a compression strategy for the compressed data, and learning, via the second system, from the information how to recreate the input to the first system using the compressed data. | 2020-12-03 |
20200379954 | FILE PROCESSING METHOD FOR VEHICLE MOUNTED MONITORING DEVICE AND VEHICLE MOUNTED MONITORING DEVICE - The present disclosure relates to a file processing method for a vehicle-mounted monitoring device and a vehicle-mounted monitoring device. The file processing method includes: acquiring locking weights of files in a local storage space if it is detected that a size of an occupied space in the local storage space is greater than or equal to a first storage threshold; and processing the files according to the locking weights. | 2020-12-03 |
20200379955 | INCREMENTAL METADATA AGGREGATION FOR A FILE STORAGE SYSTEM - The described technology is generally directed towards incremental aggregation of metadata for a file storage system. According to an embodiment, a system can comprise a memory and a processor that can execute the components stored in the memory. The components can comprise a scanner component that can access a data structure storage component that can store a first data structure, and a branch of the first data structure can comprise a node that comprises at least one descendent link to a descendant node. The scanner component can further traverse from a first node to a second node by employing a first descendent link. The components can further comprise a data collector that can collect node data from the first node and the second node. The system can further comprise a rollup data generator to aggregate, upon occurrence of a condition, the node data, resulting in aggregated node data. | 2020-12-03 |
20200379956 | VERSION CONTROL OF ELECTRONIC FILES DEFINING A MODEL OF A SYSTEM OR COMPONENT OF A SYSTEM - A method for controlling versions of a model file includes determining a format of a particular element of a plurality of elements of the model file. The model file defines a model of a system. The method also includes converting the particular element of the plurality of elements from a non-text format to a text-based format in response to the particular element including data in the non-text format. The method further includes writing the particular element converted to the text-based format to a model text file. The model text file is a text-based version of the model file and the model text file is used to detect changes in the model file and to control different versions of the model file. | 2020-12-03 |
20200379957 | EFFICIENT CLUSTERED PERSISTENCE - The systems and methods disclosed herein relate to using the clusters of a file to store versioning of a dataset. When the dataset is initially stored, a file is created that is twice the size of the dataset. The file may include one cluster (or a first set of clusters) that is marked as active and a second cluster (or a second set of clusters) that are marked inactive. The dataset is initially saved to the active cluster(s), and a version number is stored with the dataset. When the dataset is next saved, an application scans the file to determine whether there is (or are) an inactive cluster(s). If there is an inactive cluster(s) the second version of the dataset is saved to the inactive clusters. Both clusters are then marked active. | 2020-12-03 |
20200379958 | DYNAMIC SYNTACTIC AFFINITY GROUP FORMATION IN A HIGH-DIMENSIONAL FUNCTIONAL INFORMATION SYSTEM - The invention includes methods for algorithmically modifying a representation of a functional system based on functional trajectory signals by electronically representing a system's syntax, wherein the system's syntax comprises a logical data model, electronically constructing a representation of the functional system comprising a graph, based on an input signal algorithmically computing a functional trajectory that assesses magnitude, distance, or paths among at least two nodes, and updating the functional trajectory representing a set of paths through functional locations over time. | 2020-12-03 |
20200379959 | NESTED MEDIA CONTAINER, PANEL AND ORGANIZER - A method for the organizing, managing, mapping, distributing, transportation and displaying of multi-layered content and/or data in a tactile volumetric (three-dimensional), flat (two-dimensional) and/or multi-dimensional container and/or panel which functions as a macro controller through tactile, sensatory, audible and/or other forms of user control. This includes the means to manipulate content and/or data through a visual and/or multi-sensatory interface that stores content and media in a nested and sub-nested hierarchical container and sub-container array which can give real-time feedback to any involved party. These containers and/or panels provide a means to permanently move and validate content between servers, devices and/or users, while giving a real-time visual and/or multi-sensatory response and representation to that user. This system also provides a means to ingest and convert legacy media formats. | 2020-12-03 |
20200379960 | INGESTING AND PROCESSING CONTENT TYPES - The present disclosure generally relates to systems, methods, and computer-readable media for developing and implementing workflows for a variety of data types. For example, systems disclosed herein may receive or otherwise generate a schema object on a schema system including a plurality of schema objects associated with different workflows. The schema object may include user interface behavior data indicating a content type and associated control type. The schema object may further include application programming interface (API) behavior data indicating a binding between a user interface engine and an API engine. The schema object may also include workflow behavior data indicating one or more services for processing the schema object. Moreover, systems described herein may deploy a plurality of parsers on a plurality of processing engines to enable flexibility and dynamic updates to content ingestion lifecycles. | 2020-12-03 |
20200379961 | METHOD AND SYSTEM FOR DATA QUALITY DELTA ANALYSIS ON A DATASET - The present disclosure relates to a method for data quality delta analysis on a dataset. The method provides a set of data quality rules for the dataset. At least one delta rule of a set of data quality rules is defined as relevant for delta analysis of at least part of the dataset, the delta rule being a delta analysis quality rule. Data changes on the dataset are tracked. In response to determining that a number of modified records of the at least part of the dataset is higher than a predefined insert modification threshold, a data quality score may be determined for said modified records using the delta rule. | 2020-12-03 |
20200379962 | QUALITY CHECK APPARATUS, QUALITY CHECK METHOD, AND PROGRAM - A quality check apparatus, a quality check method, and a quality check program can check the quality of input data output to a processing module. A device outputs the input data and first metadata indicating an attribute regarding the quality of the input data to the processing module. The quality check apparatus includes a first obtaining unit and a check unit. The first obtaining unit obtains the first metadata. The check unit checks the quality of the input data based on the first metadata. | 2020-12-03 |
20200379963 | SYSTEM AND METHOD FOR CARDINALITY ESTIMATION FEEDBACK LOOPS IN QUERY PROCESSING - Methods for cardinality estimation feedback loops in query processing are performed by systems and devices. A query host executes queries against data sources via an engine based on estimated cardinalities, and query monitors generate event signals during and at completion of execution. Event signals include indicia of actual data cardinality, runtime statistics, and query parameters in query plans, and are routed to analyzers of a feedback optimizer where event signal information is analyzed. The feedback optimizer utilizes analysis results to generate change recommendations as feedback for later executions of the queries, or similar queries, performed by a query optimizer of the query host. The query host stores change recommendations, and subsequent queries are monitored for the same or similar queries to which change recommendations are applied to query plans for execution and observance by the query monitors. Change recommendations are optionally viewed and selected via a user interface. | 2020-12-03 |
20200379964 | Automatically Defining Arrival Rate Meters - A determination is made that a database system is resource bound resulting in a resource bound condition. Signals for the resources being bound in the database system are identified. Events associated with the signals are extracted. Events are correlated temporally to identify a time interval for which an arrival rate meter (ARM) is helpful. Database system segments are selected that effect key performance indicators associated with the identified time interval. Parameters for the selected database system segments to be deferred by the database system are estimated. The estimated parameters are incorporated into an arrival rate meter (ARM). The ARM is put into effect. | 2020-12-03 |
20200379965 | MECHANISM FOR A SYSTEM WHERE DATA AND METADATA ARE LOCATED CLOSELY TOGETHER - A processor-based method for locating data and metadata closely together in a storage system is provided. The method includes writing a first range of a file and a first metadata relating to attributes of the file into at least one segment controlled by a first authority of the file. The method includes delegating, by the first authority, a second authority for a second range of the file, and writing the second range of the file and second metadata relating to the attributes of the file into at least one segment controlled by the second authority. | 2020-12-03 |
20200379966 | METHOD AND SYSTEM FOR IMPLEMENTING A DECENTRALIZED STORAGE POOL FOR AUTONOMOUS VEHICLE NAVIGATION GUIDANCE INFORMATION - A method and system for implementing a decentralized storage pool for autonomous vehicle navigation guidance information. Specifically, the method and system disclosed herein entail creating a decentralized storage pool aggregated and virtualized from disparate physical storage resources across autonomous vehicles, edge clusters, and/or the cloud to retain ever increasing amounts of data generated and/or employed through autonomous vehicle map-based localization. Further, a content-addressable, peer-to-peer (P2P) distributed file system may be employed to manage the organization of information on, and coordinate filesystem operations pertinent to, the decentralized storage pool. | 2020-12-03 |
20200379967 | DATA MANAGEMENT APPARATUS, METHOD AND NON-TRANSITORY TANGIBLE MACHINE-READABLE MEDIUM THEREOF - A data management apparatus, method, and non-transitory tangible machine-readable medium thereof are provided. The data management apparatus includes a storage and a processor, wherein the processor is electrically connected to the storage. The storage stores a dimension table, wherein the dimension table is defined by a plurality of attributes, and a subset of the attributes is set to be an index attribute. The dimension table includes a plurality of members, and each of the members includes an index datum corresponding to the index attribute. The processor creates a last index for each distinct index datum among the plurality of index data, wherein each of the last indexes points to a latest-stored location of the corresponding index datum in the dimension table. | 2020-12-03 |
20200379968 | Selecting Interfaces for Device-Group Identifiers - In one embodiment, a computer networking device calculates a first hash value for an identifier of a group of computing devices, as well as a second hash value for the identifier of the group of computing devices, with each hash value being based at least in part on the identifier of the group of computing devices and an identifier of the respective interface. The computer networking device may also analyze the first hash value with respect to the second hash value and select the first interface for association with the identifier of the group of computing devices based at least in part on the analyzing. The computer networking device may further store an indication that the identifier of the group of computing devices is associated with the first interface. | 2020-12-03 |
20200379969 | CONTENT DATA HOLDING SYSTEM, STORAGE MEDIUM, CONTENT DATA HOLDING SERVER, AND DATA MANAGEMENT METHOD - A server includes a content storage medium configured to store content data of content usable in different types of games. The server, upon a transmission request, sends content data to an information-processing device, and retains the sent content data in the content storage medium such that sending the content data again is prohibited. The server, when the content data is sent from the information-processing device, receives the content data, assigns a new ID to the received content data in case the received content data lacks an ID, and stores the received content data in the content storage medium such that sending the content data is allowed. | 2020-12-03 |
20200379970 | SYSTEMS AND METHODS FOR PROVIDING CUSTOM OBJECTS FOR A MULTI-TENANT PLATFORM WITH MICROSERVICES ARCHITECTURE - A multi-tenant system, comprises a main storage system including: a monolithic database storing global records associated with global objects, each global object including global fields common for all tenants; a monolithic application configured to process a particular global record storage request by instructing the monolithic database to store particular global field values of the particular global record for a particular tenant, and to process a particular global record fetch request by instructing the monolithic database to retrieve the one or more particular global field values; a custom object storage system including: a custom object database configured to store custom records associated with one or more custom objects, each custom object including one or more custom fields for a tenant; a custom object record service configured to process a particular custom record storage request by instructing the custom object database to store one or more particular custom field values for the tenant, and to process a particular custom record fetch request by instructing the custom object database to retrieve the one or more particular custom field values; and a query engine configured to receive a query, fetch relevant global records from the monolithic database, fetch relevant custom records from the custom object database, and generate a query response. | 2020-12-03 |
20200379971 | SYSTEM AND METHOD FOR AGGREGATING AND UPDATING HETEROGENEOUS DATA OBJECTS - A method of aggregating and updating heterogeneous data objects for a client subsystem includes: storing a set of data object definitions, each defining a mapping between an aggregated data object format and a plurality of supplier data object formats; storing a set of update definitions, each defining a mapping between an aggregated update operation and a plurality of supplier update mechanisms; receiving a data object in one of the supplier data object formats; selecting, based on the supplier data object format of the received data object, one of the data object definitions and generating an aggregated data object according to the selected definition; presenting the generated aggregated data object to the client subsystem; receiving an aggregated update operation from the client subsystem for updating the aggregated data object; and selecting, based on the received aggregated update operation, one of the update definitions and initiating a supplier update mechanism according to the selected update definition. | 2020-12-03 |
20200379972 | SYSTEM AND METHOD FOR INTEGRATING HETEROGENEOUS DATA OBJECTS - A method of integrating data objects includes: storing (i) an originating record containing a first unique identifier and a first set of data fields defining a first item supplied by a first provider, and (ii) a destination record containing a second unique identifier and a second set of data fields defining a second item supplied by a second provider; receiving an instruction to merge the originating record into the destination record, the instruction containing the first and second unique identifiers; in response to receiving the instruction, updating the destination record by: comparing the first set of data fields with the second set of data fields; and for each data field of the first set that matches a corresponding data field of the second set, marking the corresponding data field of the second set as a shared field; and sending the updated destination record to a client device for display. | 2020-12-03 |
20200379973 | SYSTEMS AND METHODS FOR PROVIDING TENANT-DEFINED EVENT NOTIFICATIONS IN A MULTI-TENANT DATABASE SYSTEM - Systems and methods for providing tenant-defined event notifications in a multi-tenant database system are provided. The method may include receiving a first event definition from a first tenant, defining a first business event trigger based on one or more first business object changes occurring to the tenant data of the first tenant; receiving particular database change events from a change data capture service, wherein each of the particular database change events represents a particular change to the tenant data for the plurality of tenants in the database; identifying one or more particular business object changes based on the particular database change events; comparing the first event definition against the one or more particular business object changes to determine whether the first business event trigger has been satisfied; and when the first event trigger has been satisfied, emitting a first business event. | 2020-12-03 |
20200379974 | System and Method to Prevent Formation of Dark Data - A method is provided for preventing dark data in a data set. At a time t | 2020-12-03 |
20200379975 | GLOBAL TRANSACTION SERIALIZATION - A method, computer program product, and a system to globally serialize transactions where a processor(s) establishes a communications connection with a (serialization) resource and a resource manager for a distributed computing system. The processor(s) obtains a first request from an application executing on the resource for access to a global resource managed by the resource manager, for executing a transaction. The processor(s) implements a lock for the global resource in an object store of the resource manager over the communications connection. The processor(s) communicates the lock to the application, which executes the transaction, and the processor(s) updates a memory with a record comprising attributes of the lock. The processor(s) obtains a second request from the application to terminate the lock, identifies the lock for the transaction in the object store, and updates the object store to delete the lock. | 2020-12-03 |
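The serialization flow in 20200379975 records a lock in the resource manager's object store and deletes it on a second request. A minimal in-memory sketch, with the `ObjectStore` class and its method names being assumptions, not the patented implementation:

```python
# Minimal sketch of the serialization flow described above: a lock for
# a global resource is recorded in an object store; a competing request
# for the same resource is refused until the lock is deleted.

class ObjectStore:
    def __init__(self):
        self._locks = {}  # resource name -> lock attributes

    def acquire(self, resource, owner):
        if resource in self._locks:
            return None  # already locked by another transaction
        lock = {"resource": resource, "owner": owner}
        self._locks[resource] = lock
        return lock

    def release(self, resource, owner):
        lock = self._locks.get(resource)
        if lock and lock["owner"] == owner:
            del self._locks[resource]
            return True
        return False
```

The owner check on release corresponds to the abstract's lock-attribute record: only the application that holds the lock may terminate it.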
20200379976 | TRANSACTIONS ON NON-TRANSACTIONAL DATABASE - Disclosed are systems, methods, and non-transitory computer-readable media for an improved database management system that provides database transactions on a non-transactional database. The database management system executes garbage collection on data stored in a database to remove data values written to the database as part of uncommitted transactions. Each uncommitted transaction is associated with a respective transaction identifier that is not included in a list of committed transaction identifiers. The list of committed transaction identifiers lists, in sequential order, transaction identifiers for committed transactions. After removing each data value written to the database as part of an uncommitted transaction, the database management system modifies the list of committed transaction identifiers to include the transaction identifier for the uncommitted transaction. | 2020-12-03 |
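The garbage-collection pass in 20200379976 drops values from uncommitted transactions and then appends their identifiers to the committed list so they are never replayed. A sketch assuming a simple per-key version-list layout (the data layout is an assumption):

```python
# Sketch of the garbage-collection pass described above: values written
# under transaction ids missing from the committed list are removed,
# after which the removed ids are appended to the committed list so
# later reads treat them consistently.

def garbage_collect(rows, committed_ids):
    """rows: key -> list of (txn_id, value) versions."""
    removed = set()
    for key, versions in rows.items():
        kept = [(txn, val) for txn, val in versions if txn in committed_ids]
        removed.update(txn for txn, _ in versions if txn not in committed_ids)
        rows[key] = kept
    # Record the collected ids as committed so they are never replayed.
    committed_ids.extend(sorted(removed))
    return rows, committed_ids
```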
20200379977 | ANONYMOUS DATABASE RATING UPDATE - An example operation may include one or more of generating, by an executing client, a blockchain transaction comprising an anonymous rating, a proof, a nullifier, and a root node value, receiving, by a smart contract, the blockchain transaction, the anonymous rating related to an authorizing client, verifying the proof with the root node value and the nullifier, verifying that the root node value is a current or a previous Merkle tree root node value, adding the anonymous rating to a shared ledger, marking the nullifier as used, and storing the marked nullifier to the shared ledger. | 2020-12-03 |
20200379978 | SYSTEM AND METHOD OF PROCESSING LATE ARRIVING AND OUT OF ORDER DATA - Systems and methods for processing out of order data incrementally are provided. A database is maintained containing rows of data, each row of data having a timestamp and pertaining to a transaction, for example in an e-commerce platform. New data for new rows of data is received. At least some of the data is out of order. Each new row of data is processed in the same manner irrespective of whether the row is out of order or in order using a computation graph including at least one execution node configured to perform out-of-order incremental processing. A processing result is output based on the processing, wherein the result is up to date based on data that has been received. | 2020-12-03 |
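A key property claimed in 20200379978 is that late rows go through the same code path as in-order rows. One common way to achieve that (sketched here under an invented node interface) is to keep per-timestamp partial aggregates, so a late row simply updates its bucket:

```python
# Sketch of an execution node that processes rows identically whether
# they arrive in or out of timestamp order: it keeps per-timestamp
# partial aggregates, so a late row updates its bucket and the
# result stays correct.

class SumNode:
    def __init__(self):
        self.buckets = {}  # timestamp -> running sum for that timestamp

    def ingest(self, timestamp, amount):
        self.buckets[timestamp] = self.buckets.get(timestamp, 0) + amount

    def result_up_to(self, timestamp):
        """Total over all rows received so far with ts <= timestamp."""
        return sum(v for t, v in self.buckets.items() if t <= timestamp)
```

Note that `ingest` contains no in-order/out-of-order branch: ordering simply never matters to the bucketed state.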
20200379979 | SYSTEMS AND METHODS FOR RECORDING DATA REPRESENTING MULTIPLE INTERACTIONS - A method for combining multiple interactions into a single record entry is disclosed. A data package can be created that represents a set of interactions, and each entity associated with an interaction can review the data package. Each entity can indicate agreement with the interactions by digitally signing the data package. Once signed by each involved entity, the data package can be stored in a record such as a blockchain. | 2020-12-03 |
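The flow in 20200379979 finalizes a data package only after every involved entity signs it. A toy sketch of that all-signed check, in which a keyed hash stands in for a real digital signature (the stand-in, function names, and key format are all assumptions for brevity):

```python
import hashlib

# Toy sketch of the signing flow described above. Real digital
# signatures are replaced by a keyed-hash stand-in; the record is
# "finalized" only once every involved entity has signed the package.

def sign(entity_key, package_bytes):
    return hashlib.sha256(entity_key + package_bytes).hexdigest()

def fully_signed(package_bytes, required_entities, signatures):
    """signatures: entity name -> signature string."""
    return all(
        signatures.get(name) == sign(key, package_bytes)
        for name, key in required_entities.items()
    )
```

Only once `fully_signed` returns true would the package be appended to the record (e.g. a blockchain) as a single entry.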
20200379980 | BLOCKCHAIN-BASED COMPUTING SYSTEM AND METHOD FOR MANAGING TRANSACTION THEREOF - A method for managing transactions is performed in a blockchain-based computing system and includes receiving a request for processing a first individual transaction from a client terminal, generating a batch transaction by aggregating a plurality of individual transactions including the first individual transaction, processing the generated batch transaction via a blockchain network, such that a status record associated with the batch transaction is recorded in the blockchain, and providing the client terminal with an identifier of the batch transaction and index information on the first individual transaction, wherein the status record associated with the batch transaction includes a first status record associated with the first individual transaction, and wherein the index information on the first individual transaction is determined based on a location of the first status record in the status record. | 2020-12-03 |
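The indexing scheme in 20200379980 lets each client locate its own status record inside a shared batch via a (batch id, index) receipt. A sketch with invented names and record shapes:

```python
# Sketch of the batching scheme above: individual transactions are
# aggregated into one batch; each client gets the batch identifier plus
# the index of its own status record inside the batch's status list.

def build_batch(batch_id, individual_txns):
    receipts = {}
    status_records = []
    for index, (client, txn) in enumerate(individual_txns):
        status_records.append({"txn": txn, "status": "recorded"})
        receipts[client] = {"batch_id": batch_id, "index": index}
    batch = {"id": batch_id, "status_records": status_records}
    return batch, receipts

def lookup_status(batch, receipt):
    return batch["status_records"][receipt["index"]]["status"]
```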
20200379981 | ACCELERATED PROCESSING APPARATUS FOR TRANSACTION AND METHOD THEREOF - An accelerated transaction processing apparatus includes a memory for storing one or more instructions, a communication interface for communicating with a blockchain network, and a processor. The processor is configured to determine whether the blockchain network is in a congested state based on monitoring information about the blockchain network, adjust a batch size based on a result of the determination, and perform batch processing for one or more individual transactions using the adjusted batch size. | 2020-12-03 |
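20200379981 adjusts the batch size based on whether the blockchain network appears congested. One plausible adjustment policy (the doubling/halving rule and the bounds are assumptions, not taken from the application) can be sketched as:

```python
# Sketch of the adjustment loop described above: when monitoring
# indicates congestion the batch size grows (fewer, larger submissions);
# otherwise it shrinks back toward a floor. Thresholds are assumptions.

def adjust_batch_size(current_size, pending_txns, congested,
                      min_size=1, max_size=64):
    if congested:
        new_size = min(current_size * 2, max_size)
    else:
        new_size = max(current_size // 2, min_size)
    # Split pending transactions into batches of the adjusted size.
    batches = [pending_txns[i:i + new_size]
               for i in range(0, len(pending_txns), new_size)]
    return new_size, batches
```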
20200379982 | INFORMATION PROCESSING SYSTEM AND METHOD OF CONTROLLING INFORMATION PROCESSING SYSTEM - An information processing system includes a plurality of distributed ledger nodes provided to a plurality of organizations, and configured to verify a content of a transaction with each other and hold a history of the transaction in distributed ledgers provided to the organizations, a client node configured to transmit the transaction to the distributed ledger nodes, and a plurality of processing nodes provided to the organizations, and configured to execute verification target processing being processing that is targeted for verification on a virtual platform. The client node selects the distributed ledger nodes to perform the verification of the verification target processing and transmits the transaction including a request to execute the verification target processing to each of the selected distributed ledger nodes. When a selected distributed ledger node receives the transaction, it causes the processing node in its organization to perform the verification and records the verification result. | 2020-12-03 |
20200379983 | STRUCTURED QUERY FACILITATION APPARATUS AND METHOD - A control circuit receives from a message source via a network interface a database-update message having at least one data object in a data-independent data format comprising human-readable text. The control circuit then automatically converts that data object into a structured query language (SQL) message. Upon then selecting at least one of a plurality of candidate SQL databases to provide a selected SQL database, the control circuit transmits the SQL message to the selected SQL database. | 2020-12-03 |
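The conversion step in 20200379983 turns a human-readable, data-independent object into an SQL message before routing it to a selected candidate database. A sketch assuming JSON input and a parameterized INSERT (the table/column handling and message shape are simplified assumptions):

```python
import json

# Sketch of the conversion step described above: a human-readable,
# data-independent object (JSON here) is turned into a parameterized
# SQL statement before routing to a selected candidate database.

def to_sql(message_text):
    obj = json.loads(message_text)
    table = obj["table"]
    cols = sorted(obj["values"])
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    params = [obj["values"][c] for c in cols]
    return sql, params
```

Keeping the values as bound parameters, rather than splicing them into the SQL text, is the standard way to route untrusted message content to any of several candidate databases safely.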
20200379984 | METHODS AND SYSTEM FOR COLLECTION VIEW UPDATE HANDLING USING A DIFFABLE DATA SOURCE - This application relates to updating collection views in a computing device. A method includes receiving a first data array of a current view of a data collection and receiving a second data array of a future view of the data collection. The method also includes generating a difference data array that, based on a determination that a first data array element is equal to a second data array element, includes the second data array element. The method also includes, based on whether the first data array element is not included in the second data array and/or the second data array element is not included in the first data array, indicating, in the difference data array, that the first data array element is not in the future view or that the second data array element is not in the current view. | 2020-12-03 |
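The difference computation in 20200379984 classifies elements into carried-over, removed-from-future-view, and absent-from-current-view. A sketch with an invented marker format:

```python
# Sketch of the difference computation above: elements present in both
# arrays are carried over, elements only in the current view are marked
# as deletions, and elements only in the future view as insertions.

def diff_arrays(current, future):
    current_set, future_set = set(current), set(future)
    difference = []
    for elem in current:
        if elem not in future_set:
            difference.append(("delete", elem))  # not in the future view
    for elem in future:
        if elem in current_set:
            difference.append(("keep", elem))
        else:
            difference.append(("insert", elem))  # not in the current view
    return difference
```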
20200379985 | SYSTEMS AND METHODS OF METADATA MONITORING AND ANALYSIS - A system and method of generating platform-dependent queries from a platform-agnostic query are disclosed. A data pipeline comprising a plurality of events is implemented. Each event in the plurality of events has a set of platform-dependent metadata associated therewith and each of the plurality of events is processed by one of a plurality of ingestion platforms. Metadata associated with each of the plurality of events is stored in a combined metadata repository. The combined metadata repository stores metadata extracted from two or more platforms in a first repository. A platform-agnostic query configured to obtain one or more metadata search results from the platform-dependent metadata is received and deployed to the first repository within the combined metadata repository. The platform-agnostic query is configured to return a result set including metadata obtained from each of the two or more platforms. | 2020-12-03 |
20200379986 | CONVERSATIONAL AGENT FOR HEALTHCARE CONTENT - Various implementations disclosed herein include devices, systems, and methods for indicating whether a conversation regarding a subject satisfies a boundary condition associated with the subject. In various implementations, a device includes a display, a processor and a non-transitory memory. In some implementations, the method includes detecting a conversation in which a person is conveying information regarding a subject. In some implementations, the method includes determining whether the information satisfies a boundary condition associated with the subject. In some implementations, the boundary condition is defined by a set of one or more content items related to the subject. In some implementations, the method includes displaying an indicator that indicates whether or not the information being conveyed by the person satisfies the boundary condition associated with the subject. | 2020-12-03 |
20200379987 | SYSTEMS AND METHODS OF PLATFORM-AGNOSTIC METADATA ANALYSIS - A system and method of generating platform-dependent queries from a platform-agnostic query are disclosed. A data pipeline including a plurality of events having a set of platform-dependent metadata associated therewith is implemented. Each of the plurality of events is processed by one of a plurality of ingestion platforms. A platform-agnostic query configured to obtain one or more metadata search results from the platform-dependent metadata is received and a first platform-dependent query is generated from the platform-agnostic query. The first platform-dependent query is configured to be implemented by at least one target ingestion platform. | 2020-12-03 |
20200379988 | ROW-LEVEL WORKSHEET SECURITY - Row-level worksheet security may include creating a referencing worksheet from a data source worksheet, wherein the data source worksheet comprises a function configured to select, based on one or more user-relative functions, at least a subset of a plurality of rows from a data set in a database for presentation; presenting the at least a subset of the plurality of rows by: evaluating the one or more user-relative functions; and selecting, based on the resulting filter, the at least a subset of the plurality of rows. | 2020-12-03 |
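The row-level selection in 20200379988 evaluates user-relative functions at presentation time and filters the data set accordingly. A sketch in which the user-relative function is simply the viewer's identity (field names are illustrative):

```python
# Sketch of the row-level selection described above: a user-relative
# function (here, the viewer's identity) is evaluated at presentation
# time and used to filter the data set down to the rows that user may
# see.

def current_user(context):
    return context["user"]  # a user-relative function

def present_rows(rows, context):
    """Select only rows whose owner matches the viewing user."""
    viewer = current_user(context)
    return [row for row in rows if row["owner"] == viewer]
```

Because the filter is re-evaluated per viewer, the same referencing worksheet presents a different row subset to each user.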
20200379989 | METHOD AND SYSTEM FOR PRESENTING A USER SELECTABLE INTERFACE IN RESPONSE TO A NATURAL LANGUAGE REQUEST - The present invention discloses numerous implementations of a system and method which receives a user request and, using methods of natural language processing including part-of-speech tagging, analyzes the user request to generate a query to a database of information. Based on the machine understanding, the system presents an interactive representation of the uttered request back to the user. This provides context to the user, which explains the machine understanding of the request and acts as an interface to iteratively refine or adjust the machine understanding by altering specific elements of the uttered language. The methods of altering specific elements of the uttered language may vary depending on the element, and a variety of user selectable interfaces may be used to display one or more queried elements along with alternative elements pertaining to the queried element. The user can select an alternative element and change the database query. | 2020-12-03 |
20200379990 | GRAPHICAL QUERY BUILDER FOR MULTI-MODAL SEARCH - A system may involve persistent storage containing a configuration management database (CMDB) and a non-CMDB table, wherein the CMDB contains configuration items that represent software, devices, or services deployed within a network, and wherein the non-CMDB table contains entries related to operation of the network. The system may also involve one or more processors configured to provide a representation of a graphical user interface (GUI), wherein the GUI contains a first selectable tab that displays classes of the configuration items, a second selectable tab that displays the non-CMDB table, and a canvas for visually depicting query expressions, wherein the classes are selectable to place class GUI elements thereof onto the canvas, wherein the non-CMDB table is selectable to place a table GUI element thereof onto the canvas, and wherein the table GUI element and a particular class GUI element are connectable by a link on the canvas. | 2020-12-03 |
20200379991 | Monitoring Subsystem for Computer Systems - Techniques are provided for a monitoring subsystem for computer systems. In an example, a plurality of time series databases (TSDBs) can determine monitoring information for a plurality of computing nodes. A metrics reporting server can maintain an availability history for each TSDB that it communicates with. The metrics reporting server can implement a greedy heuristic to determine which TSDBs to query for a given time window. The metrics reporting server can use the responses from these queries to assemble monitoring information for the time window. | 2020-12-03 |
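The greedy heuristic in 20200379991 picks which TSDBs to query so their availability histories cover the requested time window. A sketch of one standard greedy interval-cover formulation (the interval representation and selection rule are assumptions):

```python
# Sketch of the greedy heuristic described above: given each TSDB's
# availability interval, repeatedly pick the database whose interval
# extends coverage of the query window furthest, then query that set.

def choose_tsdbs(window, availability):
    """availability: tsdb name -> (start, end) it can serve."""
    start, end = window
    chosen, cursor = [], start
    while cursor < end:
        best = None
        for name, (a, b) in availability.items():
            if a <= cursor < b and (best is None or b > availability[best][1]):
                best = name
        if best is None:
            break  # gap: no database covers this part of the window
        chosen.append(best)
        cursor = availability[best][1]
    return chosen
```

The metrics reporting server would then issue queries to the chosen databases and stitch their responses into monitoring data for the full window.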
20200379992 | PROVIDING ACCESS TO STATE INFORMATION ASSOCIATED WITH OPERATORS IN A DATA PROCESSING SYSTEM - A data processing system that provides access to operator state information includes a plurality of operators that are configured to perform a computation with respect to data received from data sources. State information is associated with at least one of the plurality of operators. The data processing system also includes an object graph that comprises a representation of the computation, and that may dynamically change at runtime. The data processing system also includes an interface that provides access to the state information via the object graph. The data processing system also includes a query manager that is executable to process a graph query to retrieve the state information by traversing a plurality of nodes within the object graph. Temporal navigation is also supported. Thus, processing a graph query may involve navigating to a node in the object graph at a certain point in time. | 2020-12-03 |
20200379993 | Data Sharing And Materialized Views In Multiple Tenant Database Systems - Systems, methods, and devices for generating and updating cross-account materialized views in multiple tenant database systems. A method includes defining a share object in a first account wherein the share object includes data associated with the first account. The method includes granting cross-account access rights to the share object to a second account such that the second account has access to the share object without copying the share object. The method includes generating a materialized view over the share object. The method includes updating the data associated with the first account. The method includes identifying whether the materialized view is stale with respect to the share object by merging the materialized view and the share object. | 2020-12-03 |
20200379994 | Sharing Materialized Views In Multiple Tenant Database Systems - Systems, methods, and devices for sharing materialized views in multiple tenant database systems. A method includes defining a materialized view over a source table that is associated with a first account of a multiple tenant database. The method includes defining cross-account access rights to the materialized view to a second account such that the second account can read the materialized view without copying the materialized view. The method includes modifying the source table for the materialized view. The method includes identifying whether the materialized view is stale with respect to the source table by merging the materialized view and the source table. | 2020-12-03 |
20200379995 | SHARING MATERIALIZED VIEWS IN MULTIPLE TENANT DATABASE SYSTEMS - Systems, methods, and devices for sharing materialized views in multiple tenant database systems. A method includes defining a materialized view over a source table that is associated with a first account of a multiple tenant database. The method includes defining cross-account access rights to the materialized view to a second account such that the second account can read the materialized view without copying the materialized view. The method includes modifying the source table for the materialized view. The method includes identifying whether the materialized view is stale with respect to the source table by merging the materialized view and the source table. | 2020-12-03 |
20200379996 | DATA SHARING AND MATERIALIZED VIEWS IN MULTIPLE TENANT DATABASE SYSTEMS - Systems, methods, and devices for generating and updating cross-account materialized views in multiple tenant database systems. A method includes defining a share object in a first account wherein the share object includes data associated with the first account. The method includes granting cross-account access rights to the share object to a second account such that the second account has access to the share object without copying the share object. The method includes generating a materialized view over the share object. The method includes updating the data associated with the first account. The method includes identifying whether the materialized view is stale with respect to the share object by merging the materialized view and the share object. | 2020-12-03 |
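The four related applications 20200379993-20200379996 share one mechanism: staleness is identified by merging the materialized view with its source (a table or share object) and checking for divergence. A minimal sketch of that check, with the row layout an invented example:

```python
# Sketch of the staleness check shared by the applications above:
# the materialized view is merged against the current source rows;
# any key whose value differs, or that exists on only one side,
# marks the view as stale.

def is_stale(materialized_view, source_table):
    """Both arguments map a row key to its value."""
    if materialized_view.keys() != source_table.keys():
        return True
    return any(materialized_view[k] != source_table[k]
               for k in source_table)
```

A stale result would trigger a refresh of the view, after which reads in the consuming account again match the source without any data having been copied across accounts.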