4th week of 2020 patent application highlights, part 44 |
Patent application number | Title | Published |
20200026608 | PLUGGABLE DATABASE ARCHIVE - Techniques herein make and use a pluggable database archive file (AF). In an embodiment, a source database server of a source container database (SCD) inserts contents into an AF from a source pluggable database (SPD). The contents include data files from the SPD, a listing of the data files, rollback scripts, and a list of patches applied to the SPD. A target database server (TDS) of a target container database (TCD) creates a target pluggable database (TPD) based on the AF. If a patch on the list of patches does not exist in the TCD, the TDS executes the rollback scripts to adjust the TPD. In an embodiment, the TDS receives a request to access a block of a particular data file. The TDS detects, based on the listing of the data files, a position of the block within the AF. The TDS retrieves the block based on the position. | 2020-01-23 |
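The block-position lookup described in 20200026608 reduces to an offset calculation over the archive's file listing. A minimal sketch, assuming a hypothetical layout in which each data file occupies a contiguous region of the archive file and the listing records each file's starting offset:

```python
def locate_block(listing, file_name, block_index, block_size=8192):
    """Return the byte position of a block inside an archive file.

    `listing` maps each data file name to its starting offset within
    the archive -- a hypothetical layout for illustration only.
    """
    start = listing[file_name]
    return start + block_index * block_size

# Archive listing: data file name -> offset of that file in the archive.
listing = {"users.dbf": 4096, "orders.dbf": 1052672}
pos = locate_block(listing, "orders.dbf", 3)   # byte position of block 3
```

With the listing in hand, the target database server can serve a block request by seeking directly to `pos` rather than extracting the whole data file.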
20200026609 | Transportable Backups for Pluggable Database Relocation - Techniques are provided for creating a backup of a source pluggable database (SPD) of a source container database and porting the backup for recovery into a different target container database. In an embodiment, a source database server retrieves metadata that describes backups of the SPD. The source database server inserts, into a unplugged pluggable database of the SPD, the metadata that describes each of the backups. For example, unplugging the SPD may automatically create the unplugged pluggable database. Eventually, the unplugged pluggable database may be plugged into the target container database. A target database server transfers the metadata that describes each of the backups from the unplugged pluggable database and into the target container database. Based on at least one backup and the metadata that describes backups of the SPD, the target database server restores a target pluggable database within the target container database. | 2020-01-23 |
20200026610 | MULTI-SECTION FULL VOLUME BACKUPS - A method for backing up data is disclosed. In one embodiment, such a method includes identifying a volume of data to back up, and determining a number of backup tasks that can operate in parallel to back up data in the volume. The number of backup tasks may be based on an amount of memory available, a fragmentation level of a target storage area, a number of tape mounts that are available, or the like. The method then divides the volume into a number of sections corresponding to the number of backup tasks. Each section is associated with a particular backup task. The method then initiates the backup tasks to back up their corresponding sections in parallel. In certain embodiments, each backup task generates a data set storing backup data from its corresponding section. A corresponding system and computer program product are also disclosed. | 2020-01-23 |
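The section-splitting step in 20200026610 can be sketched in a few lines. The even contiguous split below is an assumption: the abstract derives the task count from available memory, fragmentation, or tape mounts, but does not specify a division scheme.

```python
from concurrent.futures import ThreadPoolExecutor

def split_sections(volume_size, num_tasks):
    """Divide a volume into `num_tasks` contiguous (start, end) sections."""
    base, extra = divmod(volume_size, num_tasks)
    sections, start = [], 0
    for i in range(num_tasks):
        end = start + base + (1 if i < extra else 0)
        sections.append((start, end))
        start = end
    return sections

def backup_section(section):
    start, end = section
    return f"dataset[{start}:{end}]"   # stand-in for writing one backup data set

# Three backup tasks, each backing up its own section in parallel.
sections = split_sections(1000, 3)
with ThreadPoolExecutor(max_workers=3) as pool:
    datasets = list(pool.map(backup_section, sections))
```

Each task produces its own data set, matching the abstract's one-data-set-per-section behavior.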
20200026611 | UPDATING A SNAPSHOT OF A FULLY ALLOCATED STORAGE SEGMENT BASED UPON DATA THEREWITHIN - A computer includes a storage segment fully allocated to an application. The storage segment initially includes a repeating initialization data pattern there within. After the application begins its workload, the application writes application data to a portion of the storage segment. A snapshot application takes a snapshot of the storage segment. After the snapshot, the application generates a post-snapshot-write to the storage segment. The snapshot application determines whether the post-snapshot-write modifies application data or modifies the repeating initialization data pattern. If the post-snapshot-write modifies the repeating initialization data pattern within the storage segment, the snapshot application blocks the repeating initialization data pattern from being copied and moved which resultantly blocks modification of the snapshot. If the post-snapshot-write modifies application data, the snapshot application copies and moves the application data to a destination storage location which resultantly modifies the snapshot to identify the destination storage location of the moved application data. | 2020-01-23 |
20200026612 | STORING A POINT IN TIME COHERENTLY FOR A DISTRIBUTED STORAGE SYSTEM - A plurality of computing devices are communicatively coupled to each other via a network, and each of the plurality of computing devices is operably coupled to one or more of a plurality of storage devices. The computing devices may take snapshots to store points in time coherently for a distributed storage system. | 2020-01-23 |
20200026613 | HISTORY MANAGEMENT METHOD AND HISTORY MANAGEMENT APPARATUS - A history management method for managing history information of a vehicle using a blockchain is provided. The history management method performed by at least one processor includes generating a block for being connected to the blockchain from the history information collected in the vehicle, per block, setting a storage destination of a backup of the generated block from among nodes communicable with the vehicle, and sending the backup of the block to the node that is set as the storage destination. | 2020-01-23 |
20200026614 | METHOD AND SYSTEM FOR DYNAMIC DATA PROTECTION - A method and system for dynamic data protection. Specifically, the method and system disclosed herein provide and manage tiers of licensed storage capacity on which backup data may be consolidated. The particular tier of licensed storage capacity on which backup data may be consolidated may be dependent on the characteristics of the backup data. In cases where there is insufficient available capacity in a licensed storage capacity tier to consolidate the backup data, the capacity shortfall may be overdrawn from another licensed storage capacity tier, provided that the latter tier has enough unallocated capacity to cover the shortfall. | 2020-01-23 |
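The overdraw rule in 20200026614 can be sketched as: place the backup in its designated tier if it fits, otherwise borrow the shortfall from another tier that has enough unallocated capacity. Tier names and sizes below are hypothetical.

```python
def consolidate(tiers, tier, size):
    """Try to place `size` units of backup data in `tier`, overdrawing
    any shortfall from another tier with enough unallocated capacity.

    `tiers` maps tier name -> free capacity; returns True on success.
    """
    free = tiers[tier]
    if free >= size:
        tiers[tier] = free - size
        return True
    shortfall = size - free
    for other, other_free in tiers.items():
        if other != tier and other_free >= shortfall:
            tiers[tier] = 0                       # designated tier exhausted
            tiers[other] = other_free - shortfall  # overdraw the remainder
            return True
    return False  # no tier can subsume the lacking capacity

tiers = {"standard": 50, "premium": 200}
ok = consolidate(tiers, "standard", 80)   # overdraws 30 units from "premium"
```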
20200026615 | INTELLIGENT LOG GAP DETECTION TO PREVENT UNNECESSARY BACKUP PROMOTION - An intelligent log gap detection to prevent unnecessary backup promotion. Specifically, the method and system disclosed herein entail determining whether to pursue a requested database backup type or to promote the requested database backup type to another database backup type, in order to preclude data loss across high availability databases. When a decision is made to pursue the requested database backup type, storage space, intended for backup consolidation on a backup system or media, is saved for future backup requests rather than being consumed as would have been the case had the requested database backup type been promoted. | 2020-01-23 |
20200026616 | STORAGE SYSTEM WITH MULTIPLE WRITE JOURNALS SUPPORTING SYNCHRONOUS REPLICATION FAILURE RECOVERY - A storage system in one embodiment is configured to participate as a source storage system in a synchronous replication process with a target storage system. In conjunction with the synchronous replication process, the source storage system receives write requests from at least one host device. Responsive to a given write request being a multi-page write request, an entry is created in a first journal, where the first journal is utilized to ensure that the given write request is completed for all of the pages or for none of the pages. Responsive to the write request being a single-page write request, an entry is created in a second journal different than the first journal. An address-to-signature table is updated utilizing write data of the write request, and if the corresponding entry for the write request was created in the first journal, the entry is swapped from the first journal into the second journal, and the write data of the write request is sent to the target storage system. | 2020-01-23 |
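The journal-routing logic in 20200026616 can be sketched as: a multi-page write gets an entry in a first, all-or-nothing journal; a single-page write goes straight into a second journal; and once the address-to-signature update completes, a multi-page entry is swapped from the first journal into the second. The plain-list journals below stand in for the system's persistent journal structures.

```python
def handle_write(pages, first_journal, second_journal):
    """Route a write request to the appropriate journal.

    `pages` is the list of pages in the request; the journals are plain
    lists here, a stand-in for persistent journal structures.
    """
    entry = {"pages": pages}
    if len(pages) > 1:
        first_journal.append(entry)   # ensures all-pages-or-none completion
    else:
        second_journal.append(entry)
    # ... update the address-to-signature table with the write data ...
    if entry in first_journal:
        first_journal.remove(entry)   # swap the completed multi-page entry
        second_journal.append(entry)
    return entry

fj, sj = [], []
handle_write(["p1", "p2"], fj, sj)   # multi-page: swapped into second journal
handle_write(["p3"], fj, sj)         # single-page: goes directly to second journal
```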
20200026617 | FREEZE AND UNFREEZE UPSTREAM AND DOWNSTREAM VOLUMES - According to examples, a system may include an upstream volume controller having: a processor and a non-transitory machine-readable storage medium. The storage medium may include instructions executable by the processor to freeze an upstream volume, the upstream volume being in a replication set with a downstream volume, receive a snapshot creation request, create a snapshot of the upstream volume, and send one of a snapshot permit message or a snapshot abort message to a downstream volume processor. The instructions may also be executable by the processor to unfreeze the upstream volume responsive to at least one of the sending of the one of the snapshot permit message or the snapshot abort message or expiration of a timeout corresponding to a maximum time period during which the upstream volume is to remain frozen. | 2020-01-23 |
20200026618 | EFFICIENT RESTORE OF SYNTHETIC FULL BACKUP BASED VIRTUAL MACHINES THAT INCLUDE USER CHECKPOINTS - A method and system for efficiently restoring synthetic full backup based virtual machines that include user checkpoints. Specifically, the method and system disclosed herein overcome a behavioral limitation exhibited in present virtual machine backup methodologies, where said methodologies ignore the presence of user checkpoints storing state for a virtual machine. In accounting for the user checkpoints while recovering a virtual machine, embodiments of the invention maintain restoration points for virtual machine state instantiated by the user, in addition to those instantiated by the system. | 2020-01-23 |
20200026619 | HISTORY MANAGEMENT METHOD, HISTORY MANAGEMENT APPARATUS AND HISTORY MANAGEMENT SYSTEM - A history management method for managing history information of multiple vehicles using blockchains is provided. The history management method includes generating a master block from history information collected in a vehicle, setting a node serving as a storage destination of a backup block of the master block per block, storing, together with the master block, backup blocks that are different in history information collecting vehicle from the master block in a block storage unit, and sending the backup block for a particular vehicle requested in a recovery request. | 2020-01-23 |
20200026620 | BACKUP CLIENT AGENT - In one example, a method includes receiving, by a cloud service, a register call for authorization to access one or more other cloud services, and the register call is received from a backup client agent and includes a registration code, and registering, by the cloud service, the backup client agent. The cloud service implements an authentication process that includes evaluating the registration code, and when the backup client agent is not authenticated, access by the backup client agent to one or more of the other cloud services is prevented, and when the backup client agent is authenticated, a token is transmitted to the backup client agent. | 2020-01-23 |
20200026621 | BACKUP CLIENT AGENT - One example method includes registering with a long poll cloud service, receiving a notification from the long poll cloud service, and the notification includes information about a restore command, acknowledging receipt of the notification, downloading a restore description to a cloud restore service, performing the restore command, receiving file information in response to performance of the restore command, creating a restore job using the file information, and signaling that the restore command is complete. | 2020-01-23 |
20200026622 | INTELLIGENT LOG GAP DETECTION TO ENSURE NECESSARY BACKUP PROMOTION - An intelligent log gap detection to ensure necessary backup promotion. Specifically, a method and system are disclosed, which entail determining whether to pursue a differential database backup or promote the differential database backup to a full database backup, in order to preclude data loss across high availability databases. The deduction pivots on a matching or mismatching between log sequence numbers (LSNs). | 2020-01-23 |
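The LSN-based decision in 20200026622 pivots on whether log sequence numbers match. A minimal sketch, assuming (as the abstract implies but does not state) that a mismatch between the last backed-up LSN and the current differential base LSN signals a log gap requiring promotion:

```python
def choose_backup_type(last_backup_lsn, current_base_lsn):
    """Decide between a differential backup and promotion to a full backup.

    Matching LSNs mean the backup chain is intact, so the cheaper
    differential suffices; a mismatch indicates a log gap, so the
    request is promoted to a full backup to preclude data loss.
    """
    if last_backup_lsn == current_base_lsn:
        return "differential"
    return "full"

decision = choose_backup_type(last_backup_lsn=17_000, current_base_lsn=17_000)
```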
20200026623 | DISTRIBUTED STORAGE ACCESS USING VIRTUAL TARGET PORTAL GROUPS - The technology disclosed herein enables a group of clients to concurrently access data of a distributed storage system over multiple paths without including a client portion of the distributed storage system. An example method may include: determining, by a processing device, a portal group comprising a plurality of network portals for accessing a storage unit; transmitting data of the portal group to a first client and to a second client, wherein data transmitted to the first client indicates a first network portal is preferred and wherein data transmitted to the second client indicates a second network portal is preferred; and providing access for the first client to the storage unit using a storage session, the storage session providing the first client multiple paths to access the storage unit, wherein one of the multiple paths comprises the first network portal. | 2020-01-23 |
20200026624 | EXECUTING RESOURCE MANAGEMENT OPERATIONS IN DISTRIBUTED COMPUTING SYSTEMS - Computing cluster system management. Embodiments implement fine-grained rule-based approaches to error recovery. A service dispatches tasks to components of the computing cluster. At the time of task dispatching, entries are made into a write-ahead log. The write-ahead log entries serve to record task and component attributes. A monitor detects a failure event raised by one or more of the components of the computing cluster. Responses to the failure event include determining a set of conditions that are present in the computing cluster at the time of the detection, and then using the failure event and the determined conditions in combination with a set of fine-grained failure processing rules to determine one or more recovery actions to take. Recovery actions include redistributing the failed task to a different node or to a different service. Certain conditions and rules initiate actions that roll back the state of a component to a previous success point. | 2020-01-23 |
20200026625 | TWO NODE CLUSTERS RECOVERY ON A FAILURE - Systems and methods for high availability computing systems. Systems and methods include disaster recovery of two-node computing clusters. A method embodiment commences upon identifying a computing cluster having two nodes, the two nodes corresponding to a first node and a second node that each send and receive heartbeat indications periodically while performing storage I/O operations. One or both of the two nodes detect a heartbeat failure between the two nodes, and in response to detecting the heartbeat failure, one or both of the nodes temporarily cease storage I/O operations. A witness node is accessed in an on-demand basis as a result of detecting the heartbeat failure. The witness performs a leadership election operation to provide a leadership lock to only one requestor. The leader then resumes storage I/O operations and performs one or more disaster remediation operations. After remediation, the computing cluster is restored to a configuration having two nodes. | 2020-01-23 |
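The witness's leadership election in 20200026625 grants a lock to only one requestor. The in-process lock below is a stand-in for the witness node's distributed lock service, sketched under that assumption:

```python
import threading

class WitnessNode:
    """Grants a leadership lock to exactly one requesting cluster node."""

    def __init__(self):
        self._lock = threading.Lock()
        self.leader = None

    def request_leadership(self, node_id):
        with self._lock:
            if self.leader is None:
                self.leader = node_id
                return True    # winner resumes storage I/O and remediates
            return False       # the other node already holds the lock

# Both surviving nodes race to the witness after a heartbeat failure.
witness = WitnessNode()
first = witness.request_leadership("node-a")
second = witness.request_leadership("node-b")
```

Only the node holding the lock resumes storage I/O; the loser waits until remediation restores the two-node configuration.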
20200026626 | MODIFYING JOURNALING ASSOCIATED WITH DATA MIRRORING WITHIN A STORAGE SYSTEM - A method for modifying a configuration of a storage system. The method includes one or more computer processors identifying data received at a logical partition (LPAR) of a storage system, wherein a copy program associated with a process for data mirroring executes within the LPAR. The method further includes determining a first rate based on analyzing a quantity of data received at the LPAR during the process of data mirroring. The method further includes creating a journal file from a set of records within the received data. The method further includes determining a second rate related to migrating the journal file from the LPAR to intermediate storage included in the storage system. The method further includes determining to modify a set of configuration information associated with the process of data mirroring by the storage system based, at least in part, on the first rate and the second rate. | 2020-01-23 |
20200026627 | METHOD TO SUPPORT SYNCHRONOUS REPLICATION FAILOVER - In one aspect, synchronous replication failover support is provided for a storage system that includes a source site and a target site. The failover support includes locating a recovery snap set on the source site. The source site is identified as a subject of a failover event, and the recovery snap set includes a snap set that contains a subset of data content that is also stored at the target site. The recovery snap set also has a time of creation that is equal to or greater than a timeout value for serving input/outputs (IOs) to the target site. The failover support further includes sending a difference between volumes of the source site and the recovery snap set to the target site. The difference is configured to enable in-sync status between the source site and the target site. | 2020-01-23 |
20200026628 | SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device has a memory cell array area including a normal area including memory blocks and a redundant memory area including a redundant block which is a replacement target of a defective block among the memory blocks; a storage unit storing address information indicating a position of the defective block in the normal area and address information indicating a position of the redundant block being the replacement target of the defective block, both being in association with each other as first information; and an output circuit outputting a data row exhibiting a positional relation between the defective block and a memory block other than the defective block in the normal area based on the first information stored in the storage unit in response to a data read signal. | 2020-01-23 |
20200026629 | DATA PROCESSING - Data processing apparatus comprises a processing module to initiate data handling transactions, for transmission to a data handling module by a transaction interface, in response to successive processing instructions; a verification module connectable to the transaction interface and configured to detect test data, representing an ordered series of communications via the transaction interface generated in response to a test series of processing instructions; in which the verification module is configured to compare two or more instances of the test data generated in response to the same test series of processing instructions and to detect whether the two or more instances of the test data are identical. | 2020-01-23 |
20200026630 | ACCELERATOR MONITORING AND TESTING - An accelerator manager monitors and logs performance of multiple accelerators, analyzes the logged performance, determines from the logged performance of a selected accelerator a desired programmable device for the selected accelerator, and specifies the desired programmable device to one or more accelerator developers. The accelerator manager can further analyze the logged performance of the accelerators, and generate from the analyzed logged performance an ordered list of test cases, ordered from fastest to slowest. A test case is selected, and when the estimated simulation time for the selected test case is less than the estimated synthesis time for the test case, the test case is simulated and run. When the estimated simulation time for the selected test case is greater than the estimated synthesis time for the test case, the selected test case is synthesized and run. | 2020-01-23 |
20200026631 | DYNAMIC I/O MONITORING AND TUNING - A method for dynamically tuning I/O performance is disclosed. In one embodiment, such a method includes identifying various stages of an I/O process. The method further monitors progress of an I/O operation as it advances through the stages of the I/O process. The method records, in a data structure associated with the I/O operation, timing information indicating time spent in each of the stages. This timing information may include, for example, entry and exit times of the I/O operation relative to each of the stages. In the event the I/O operation exceeds a maximum allowable time spent in one or more of the stages, the method automatically adjusts an allocation of computing resources to one or more stages of the I/O process. A corresponding system and computer program product are also disclosed. | 2020-01-23 |
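The per-stage bookkeeping described in 20200026631 amounts to attaching entry and exit timestamps to each I/O operation as it moves through the stages. A minimal sketch; the stage name and threshold below are hypothetical:

```python
import time

class IOOperation:
    """Records entry/exit times for each stage an I/O operation passes through."""

    def __init__(self):
        self.timings = {}   # stage name -> (entry time, exit time)

    def enter(self, stage):
        self.timings[stage] = (time.monotonic(), None)

    def exit(self, stage):
        entry, _ = self.timings[stage]
        self.timings[stage] = (entry, time.monotonic())

    def over_limit(self, stage, max_seconds):
        """True if the operation exceeded its allowed time in `stage`."""
        entry, exit_ = self.timings[stage]
        return (exit_ - entry) > max_seconds

op = IOOperation()
op.enter("queue")
op.exit("queue")
slow = op.over_limit("queue", max_seconds=0.5)
```

When `over_limit` fires for a stage, the method would automatically shift computing resources toward that stage of the I/O process.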
20200026632 | RECORD-BASED PLANNING IN OPERATIONAL MAINTENANCE AND SERVICE - In some implementations, there is provided a method, which includes receiving, at a recommendation system, a failure notice representing a failure detected by a sensor monitoring an object; comparing, by the recommendation system, the failure notice to a plurality of reference failure notices to identify at least one matching failure notice, the comparing based on a domain-by-domain scoring between the failure notice and the plurality of reference failure notices; and generating, by the recommendation system, a message including a suggested solution including at least one task to remedy the failure, the suggested solution obtained from the at least one matching failure notice. Related systems, methods, and articles of manufacture are also disclosed. | 2020-01-23 |
20200026633 | DETECTION OF RESOURCE BOTTLENECKS IN EXECUTION OF WORKFLOW TASKS USING PROVENANCE DATA - Techniques are provided for detecting resource bottlenecks in workflow task executions using provenance data. An exemplary method comprises: obtaining a state of multiple workflow executions of multiple concurrent workflows performed with different resource allocation configurations in a shared infrastructure environment; obtaining first and second signature execution traces of a task representing first and second resource allocation configurations, respectively; identifying first and second corresponding sequences of time intervals in the first and second signature execution traces for the task, respectively, based on a similarity metric; and identifying a given time interval as a resource bottleneck of a resource that differs between the first and second resource allocation configurations based on a change in execution time for the given time interval between the first and second signature execution traces. The first signature execution trace is optionally obtained by disaggregating data related to batches of workflow executions. | 2020-01-23 |
20200026634 | TIMELINE DISPLAYS OF EVENT DATA WITH START AND END TIMES - Techniques and mechanisms are disclosed that enable a data intake and query system to generate and cause display of circular timelines of timestamped event data. As used herein, a circular timeline generally refers to a graphical display of timestamped events stored by a data intake and query system, wherein the timestamped events may be displayed as arcs of one or more concentric circles and located in a circular timeline area according to a chronological ordering associated with the events. One or more display attributes of each arc may further depend on other data associated with the corresponding events. For example, each arc of a circular time may be displayed at a particular radial distance, with a particular thickness, using a particular shading and/or color, etc., depending on various data values associated with the one or more events represented by the arc. | 2020-01-23 |
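The core geometry behind the circular timeline of 20200026634 is a mapping from an event's start/end timestamps to arc angles within the displayed time window; a minimal sketch of that mapping (the 24-hour window is an illustrative choice):

```python
def event_arc(start, end, window_start, window_end):
    """Map an event's start and end timestamps to arc angles (degrees)
    on a circular timeline covering [window_start, window_end].
    """
    span = window_end - window_start

    def to_angle(t):
        return 360.0 * (t - window_start) / span

    return to_angle(start), to_angle(end)

# An event spanning the first quarter of a 24-hour window:
arc = event_arc(start=0, end=6, window_start=0, window_end=24)
```

Radial distance, thickness, and color of each arc would then be driven by the other data values associated with the event, as the abstract describes.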
20200026635 | System Operational Analytics Using Additional Features for Health Score Computation - Techniques are provided for system operational analytics using additional features over time-series counters for health score computation. An exemplary method comprises: obtaining log data from data sources of a monitored system; applying a counting function to the log data to obtain time-series counters for a plurality of distinct features within the log data; applying an additional function to the time-series counters for the plurality of distinct features; and processing an output of the additional function using a machine learning model to obtain a health score for the monitored system based on the output of the additional function. The additional function comprises, for example, an entropy function representing a load balancing of a plurality of devices in the monitored system; one or more clustered counts for a plurality of entities in the monitored system; a number of unique values; and/or one or more modeled operations based on correlations between a plurality of different operations in the monitored system. | 2020-01-23 |
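The entropy feature mentioned in 20200026635 (representing load balancing across devices) can be computed directly from per-device request counts: near-uniform counts give entropy close to log2(n), a skewed distribution gives less. The counts below are hypothetical.

```python
import math

def load_balance_entropy(counts):
    """Shannon entropy of a per-device request-count distribution.

    Well-balanced load (near-uniform counts) yields entropy close to
    log2(len(counts)); a skewed distribution yields a lower value.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

balanced = load_balance_entropy([100, 100, 100, 100])   # maximum for 4 devices
skewed = load_balance_entropy([370, 10, 10, 10])        # degraded balancing
```

A machine learning model would then consume this value, alongside the other derived features, to produce the health score.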
20200026636 | CLASSIFYING WARNING MESSAGES GENERATED BY SOFTWARE DEVELOPER TOOLS - A method for classifying warning messages generated by software developer tools includes receiving a first data set. The first data set includes a first plurality of data entries, where each data entry is associated with a warning message generated based on a first set of software codes, includes indications for a plurality of features, and is associated with one of a plurality of class labels. A second data set is generated by sampling the first data set. Based on the second data set, at least one feature is selected from the plurality of features. A third data set is generated by filtering the second data set with the selected at least one feature. A machine learning classifier is determined based on the third data set. The machine learning classifier is used to classify a second warning message generated based on a second set of software codes to one of the plurality of class labels. | 2020-01-23 |
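The pipeline in 20200026636 (sample the data set, select features, filter, then train a classifier) can be sketched end to end. The feature names, class labels, and the trivial majority-class "model" below are stand-ins for illustration, not the patent's actual classifier.

```python
import random

def build_classifier(data, labels, keep_features):
    """Sketch of the warning-classification pipeline: sample the first
    data set, filter to the selected features, then fit a classifier.

    `data` is a list of per-warning feature dicts; `keep_features` is
    the output of the feature-selection step (assumed here).
    """
    # Generate the second data set by sampling the first.
    sampled = random.sample(list(zip(data, labels)), k=min(len(data), 100))
    # Generate the third data set by filtering to the selected features.
    filtered = [({f: row[f] for f in keep_features}, y) for row, y in sampled]
    # Stand-in training step: a majority-class classifier.
    ys = [y for _, y in filtered]
    majority = max(set(ys), key=ys.count)
    return lambda entry: majority

data = [
    {"lines": 10, "nesting": 2},
    {"lines": 12, "nesting": 1},
    {"lines": 300, "nesting": 7},
]
labels = ["false-positive", "false-positive", "actionable"]
classify = build_classifier(data, labels, ["lines", "nesting"])
label = classify({"lines": 42, "nesting": 3})
```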
20200026637 | MULTI-LANGUAGE HEAP ANALYZER - A method for analyzing memory may include obtaining, from a heap snapshot, host objects each represented in a host format for a host language. The host objects may include a first host object and a second host object. The method may further include translating, using a first guest format for a first guest language, the first host object to a first guest object, and translating, using a second guest format for a second guest language, the second host object to a second guest object. | 2020-01-23 |
20200026638 | Method and System for Implementing Data Flow or Code Analysis Using Code Division - Novel tools and techniques are provided for implementing data flow or code analysis, and, more particularly, to methods, systems, and apparatuses for implementing analysis of data flow or code using code division. In various embodiments, a computing system might receive a software code for testing, might identify at least one divisible point for each of one or more portions of the received software code, and might divide the software code into the one or more portions based on the identified divisible points. Each of the one or more portions, after being divided, is an atomic element of the software code that is capable of execution independent of other portions of the software code. The computing system might analyze at least one portion of the one or more portions of the received software code, each portion being analyzed separately from analysis of other portions of the received software code. | 2020-01-23 |
20200026639 | LOGGING TRACE DATA FOR PROGRAM CODE EXECUTION AT AN INSTRUCTION LEVEL - Methods and systems are disclosed for logging trace data generated by executing program code at an instruction level. In aspects, high volumes of trace data are generated during certain time periods, e.g., immediately following a start of the tracing. Processors operating at normal speeds are often unable to log such high volumes of trace data. The issue of such high volumes of trace data may be addressed by selectively and dynamically controlling logging of outstanding trace data. For example, a rate of generating the trace may be reduced by slowing processor speeds, logging of outstanding trace data may be suspended for a period, and logging of non-urgent trace data may be selectively delayed. | 2020-01-23 |
20200026640 | SYSTEMS AND METHODS FOR MODULAR TEST PLATFORM FOR APPLICATIONS - A test platform provides modular test automation. The modular test automation may include defining modular segments of a test configuration, and testing an Application Under Test (“AUT”) based on the modular segments. The test configuration may define a specific flow or execution ordering for the modular segments. The flow may be changed by reordering the segments, modularly adding new segments anywhere in the flow, removing segments, modifying individual segments without affecting other segments, defining a particular segment once and reusing the particular segment in two or more different test configurations, and/or carrying over a change made to the particular segment in a first test configuration to the particular segment of a second test configuration automatically. The modular test automation may also include modularly selecting a set of nodes for testing the AUT according to the modular segments of the flow. | 2020-01-23 |
20200026641 | AUTOMATED TEST INPUT GENERATION FOR INTEGRATION TESTING OF MICROSERVICE-BASED WEB APPLICATIONS - Techniques for automated generation of inputs for testing microservice-based applications are provided. In one example, a computer-implemented method comprises: traversing, by a system operatively coupled to a processor, a user interface of a microservices-based application by performing actions on user interface elements of the user interface; and generating, by the system, an aggregated log of user interface event sequences and application program interface call sets based on the traversing. The computer-implemented method also comprises: determining, by the system, respective user interface event sequences that invoke application program interface call sets; and generating, by the system, respective test inputs based on the user interface event sequences that invoke the application program interface call sets. | 2020-01-23 |
20200026642 | MODEL INTEGRATION TOOL - Certain aspects involve building and debugging models for generating source code executed on data-processing platforms. One example involves receiving an electronic data-processing model, which generates an analytical output from input attributes weighted with respective modeling coefficients. A target data-processing platform is identified that requires bin ranges for the modeling coefficients and reason codes for the input attributes. Bin ranges and reason codes are identified. Modeling code is generated that implements the electronic data-processing model with the bin ranges and the reason codes. The processor outputs source code, which is generated from the modeling code, in a programming language used by the target data-processing platform. | 2020-01-23 |
20200026643 | BIASED SAMPLING METHODOLOGY FOR WEAR LEVELING - First data units can be sampled from a set of data units of a memory component. The first data units can be a subset of the set of data units. An initial data unit is determined from the first data units as a first candidate data unit based on a wear metric associated with the first data units. The wear metric is indicative of a level of physical wear of the first data units. A wear leveling operation can be performed in view of the first candidate data unit. | 2020-01-23 |
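The candidate selection in 20200026643 can be sketched as: sample a subset of the data units, then pick the candidate by wear metric within the sample. Using erase count as the wear metric and taking the least-worn unit are assumptions for illustration.

```python
import random

def pick_wear_level_candidate(erase_counts, sample_size, rng=random):
    """Sample `sample_size` data units and return the index of the
    least-worn unit in the sample (erase count as the wear metric).
    """
    sampled = rng.sample(range(len(erase_counts)), sample_size)
    return min(sampled, key=lambda i: erase_counts[i])

erase_counts = [120, 5, 98, 300, 42, 7]
candidate = pick_wear_level_candidate(erase_counts, sample_size=3)
```

Sampling keeps the scan cheap on large memory components: only the sampled subset is inspected, at the cost of possibly missing the global minimum.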
20200026644 | RECONSTRUCT DRIVE FOR DYNAMIC RESIZING - A solid-state drive (SSD) is configured for dynamic resizing. When the SSD approaches the end of its useful life because the over-provisioning amount is nearing the minimum threshold as a result of an increasing number of bad blocks, the SSD is reformatted with a reduced logical capacity so that the over-provisioning amount may be maintained above the minimum threshold. | 2020-01-23 |
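The resizing rule in 20200026644 is simple arithmetic: when accumulating bad blocks push over-provisioning below the minimum, reformat with a logical capacity small enough to restore it. A sketch with illustrative block counts:

```python
def resized_logical_capacity(physical_blocks, bad_blocks, logical_blocks,
                             min_overprovision):
    """Return the SSD's logical capacity (in blocks) after a dynamic resize.

    Over-provisioning = usable physical blocks - logical blocks.  If it
    falls below `min_overprovision`, shrink the logical capacity just
    enough to restore the minimum; otherwise keep the current size.
    """
    usable = physical_blocks - bad_blocks
    if usable - logical_blocks >= min_overprovision:
        return logical_blocks              # spare area still sufficient
    return usable - min_overprovision      # reformat with reduced capacity

# 1000 physical blocks, 60 now bad, 900 logical, 50-block minimum spare:
new_size = resized_logical_capacity(1000, 60, 900, 50)   # -> 890
```

Trading logical capacity for spare area this way extends the drive's useful life instead of retiring it at the first over-provisioning shortfall.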
20200026645 | System and Method of Data Writes and Mapping of Data for Multiple Sub-Drives - A system and method is disclosed for managing data in a non-volatile memory. The system may include a non-volatile memory having multiple non-volatile memory sub-drives. A controller of the memory system is configured to route incoming host data to a desired sub-drive, keep data within the same sub-drive as its source during a garbage collection operation, and re-map data between sub-drives, separate from any garbage collection operation, when a sub-drive overflows its designated amount of logical address space. The method may include initial data sorting of host writes into sub-drives based on any number of hot/cold sorting functions. In one implementation, the initial host write data sorting may be based on a host list of recently written blocks for each sub-drive and a second write to a logical address encompassed by the list may trigger routing the host write to a hotter sub-drive than the current sub-drive. | 2020-01-23 |
20200026646 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes a memory device including a plurality of dies each including one or more flush blocks and one or more multi-level cell blocks; a controller write buffer; a controller buffer manager configured to buffer host data into the controller write buffer; a flush block manager configured to control, when a flush command is received, the memory device to perform an interleaved program operation of programming the buffered host data into the flush blocks respectively included in the dies; and a processor configured to control, when a size of the buffered host data reaches a threshold value, the memory device to perform the interleaved program operation of programming the buffered host data into the multi-level cell blocks respectively included in the dies. | 2020-01-23 |
20200026647 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING CACHE - Techniques manage a cache. Such techniques involve creating a primary cache by a cache management module in a storage system. Such techniques further involve: in response to the primary cache being created, sending a first request to a hardware management module to obtain first information about a first virtual disk. Such techniques further involve: in response to receiving the first information from the hardware management module, creating a secondary cache using the first virtual disk. Such techniques further involve: in response to an available capacity of the primary cache being below a predetermined threshold, flushing at least one cache page in the primary cache to the secondary cache. In certain techniques, it is possible to use spare extents in the disk array to create the secondary cache to increase a total capacity of the cache in the system, thereby improving the access performance of the system. | 2020-01-23 |
20200026648 | Memory Circuit and Cache Circuit Configuration - A memory circuit includes a first memory circuit formed of a first die or a set of stacked dies. The memory circuit further includes a second memory circuit formed of a second die, the second memory circuit comprising one or more sets of memory cells of a second type and each set of the memory cells of the second type comprising multiple cache sections. The first die or the set of stacked dies are stacked over the second die, wherein the second die further includes a first plurality of I/O terminals and a second plurality of I/O terminals, the first plurality of I/O terminals being electrically coupled to the first memory circuit, and the second plurality of I/O terminals being electrically isolated from the first memory circuit. | 2020-01-23 |
20200026649 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes: a memory device including a first memory buffer and a second memory buffer; a controller write buffer; a memory buffer manager suitable for controlling the memory device to buffer first data stored in the first memory buffer into the second memory buffer while the memory device programs, in a program operation, the first data into a memory block; a controller buffer manager suitable for deleting the first data stored in the controller write buffer after the memory device buffers the first data into the second memory buffer; and a failure processor suitable for controlling the memory device to perform a reprogram operation of reprogramming the first data, when the program operation fails. | 2020-01-23 |
20200026650 | ARITHMETIC PROCESSING DEVICE AND ARITHMETIC PROCESSING METHOD - An arithmetic processing device includes circuitry configured to add an identifier of a request source that generates a prefetch request into the prefetch request, and output, in response to detecting a certain number of cache hits less than a first threshold, each of the cache hits occurring in a first cache memory provided at a lower hierarchical level than a second cache memory by each prefetch request into which a first identifier is added, a notification for suppressing a prefetch request issued for the lower hierarchical level of the first cache memory from a first request source identified by the first identifier. | 2020-01-23 |
20200026651 | PREFETCH PROTOCOL FOR TRANSACTIONAL MEMORY - Providing control over processing of a prefetch request in response to conditions in a receiver of the prefetch request and to conditions in a source of the prefetch request. A processor generates a prefetch request and a tag that dictates processing of the prefetch request. A processor sends the prefetch request and the tag to a second processor. A processor generates a conflict indication based on whether a concurrent processing of the prefetch request and an atomic transaction by the second processor would generate a conflict with a memory access that is associated with the atomic transaction. Based on an analysis of the conflict indication and the tag, a processor processes (i) either the prefetch request or the atomic transaction, or (ii) both the prefetch request and the atomic transaction. | 2020-01-23 |
20200026652 | PROCESS DATA CACHING THROUGH ITERATIVE FEEDBACK - Systems and methods for improved process caching through iterative feedback are disclosed. In embodiments, a computer implemented method comprises retrieving updated metadata of a process to be executed, wherein the updated metadata includes information regarding cache misses from a prior execution of the process; automatically modifying a setting of a data stream control register based on the updated metadata; automatically setting a hint at a data cache block touch module; performing an initial execution of the process after the steps of retrieving the updated metadata, automatically modifying the setting of the data stream control register, and automatically setting the hint at the data cache block touch module; and modifying the updated metadata of the process after the execution of the process based on cache miss statistical data gathered during the execution of the process, to produce newly updated metadata. | 2020-01-23 |
20200026653 | METADATA LOADING IN STORAGE SYSTEMS - During a restart process in which metadata is loaded from at least one of a plurality of storage devices into a cache, a storage controller is configured to generate an IO thread in response to the receipt of an IO request, identify at least one metadata page of the metadata that is used to fulfill the IO request, and generate a loading thread in association with the received IO thread that is configured to cause the storage controller to perform prioritized loading of the identified at least one page of the metadata into the cache. The loading thread is detachable from the IO thread such that, in response to an expiration of the IO thread, the loading thread continues to cause the storage controller to perform the prioritized loading until the loading of the at least one page of the metadata into the cache is complete. | 2020-01-23 |
20200026654 | In-Memory Dataflow Execution with Dynamic Placement of Cache Operations - A dataflow execution environment is provided with dynamic placement of cache operations. An exemplary method comprises: obtaining a first cache placement plan for a dataflow comprised of multiple operations; executing operations of the dataflow and updating a number of references to the executed operations to reflect remaining executions of the executed operations; determining a current cache gain by updating an estimated reduction in the total execution cost for the dataflow of the first cache placement plan; determining an alternative cache placement plan for the dataflow following the execution; and implementing the alternative cache placement plan based on a predefined threshold criteria. A cost model is optionally updated for the executed operations using an actual execution time of the executed operations. A cached dataset can be removed from memory based on the number of references to the operations that generated the cached datasets. | 2020-01-23 |
20200026655 | DIRECT MAPPED CACHING SCHEME FOR A MEMORY SIDE CACHE THAT EXHIBITS ASSOCIATIVITY IN RESPONSE TO BLOCKING FROM PINNING - An apparatus is described. The apparatus includes a memory controller to interface with a multi-level memory, where an upper level of the multi-level memory is to act as a cache for a lower level of the multi-level memory. The memory controller has circuitry to determine: i) an original address of a slot in the upper level of memory from an address of a memory request in a direct mapped fashion; ii) a miss in the cache for the request because the slot is pinned with data from another address that competes with the address; iii) a partner slot of the slot in the cache in response to the miss; iv) whether there is a hit or miss in the partner slot in the cache for the request. | 2020-01-23 |
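A rough sketch of the pinned-slot fallback in this entry (hypothetical throughout: mapping to a partner slot by flipping the top index bit is an assumption made for illustration, not the patent's actual scheme):

```python
def lookup(cache, pinned, addr, num_slots):
    """Direct-mapped lookup that falls back to a partner slot when the
    primary slot is pinned by data from a competing address."""
    slot = addr % num_slots  # direct-mapped slot selection
    if cache.get(slot) == addr:
        return slot, "hit"
    if slot in pinned:
        # Primary slot blocked by pinned data: probe a partner slot,
        # here chosen by flipping the top index bit (an assumption).
        partner = slot ^ (num_slots // 2)
        if cache.get(partner) == addr:
            return partner, "partner-hit"
        return partner, "partner-miss"
    return slot, "miss"

cache = {1: 9, 5: 17}   # slot -> cached address
pinned = {1}            # slot 1 is pinned with address 9
print(lookup(cache, pinned, 17, 8))  # (5, 'partner-hit')
```

The partner probe gives the otherwise direct-mapped cache a limited, two-way form of associativity, but only for addresses blocked by pinning.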
20200026656 | EFFICIENT SILENT DATA TRANSMISSION BETWEEN COMPUTER SERVERS - Aspects of the invention include receiving a request to transfer data from a first storage device, coupled to a sending server, to a second storage device, coupled to a receiving server. The data is transferred from the first storage device to the second storage device in response to the request. The transferring includes allocating a first temporary memory on the sending server and moving the data from the first storage device to the first temporary memory. The transferring also includes initiating a remote direct memory access (RDMA) between the first temporary memory and a second temporary memory on the second server. The RDMA causes the data to be transferred from the first temporary memory to the second temporary memory independently of an operating system executing on a processor of the sending server or the receiving server. The transferring further includes receiving a notification that the transfer completed. | 2020-01-23 |
20200026657 | HYBRID MEMORY ACCESS FREQUENCY - Techniques that facilitate hybrid memory access frequency are provided. In one example, a system stores access frequency data for storage class memory and volatile memory in a translation lookaside buffer. The access frequency data is indicative of a frequency of access to the storage class memory and the volatile memory. The system also determines whether to store data in the storage class memory or the volatile memory based on the access frequency data stored in the translation lookaside buffer. | 2020-01-23 |
20200026658 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING ADDRESS IN STORAGE SYSTEM - Techniques manage addresses in a storage system. In such techniques, an address page of an address pointing to target data in the storage system is determined in response to receiving an access request for accessing data in the storage system. A transaction for managing the address page is generated on the basis of the address page, here the transaction at least comprises an indicator of the address page and a state of the transaction. A counter describing how many times the address page is referenced is set. The transaction is executed at a control node of the storage system on the basis of the counter. With such techniques, the access speed for addresses in the storage system can be accelerated, and then the overall response speed of the storage system can be increased. | 2020-01-23 |
20200026659 | VIRTUALIZED MEMORY PAGING USING RANDOM ACCESS PERSISTENT MEMORY DEVICES - Systems for virtual memory computing systems. A set of hardware or software operational elements of a computing system performs virtualized memory paging. The operational elements serve to identify a random access memory device and at least one random access persistent memory device (RAPM) in a computing system. The random access persistent memory device is configured as a swap device that is apportioned as having at least some address space for swap. At least some of the swap address space is assigned to one or more virtualized entities in the computing system. When a page swap event is detected by the computing system, one or more of the operational elements execute one or more paging operations based on characteristics of the page swap event. The paging operations perform swap-in or swap-out of at least one page between the random access memory device and the random access persistent memory device. | 2020-01-23 |
20200026660 | DATA PROCESSING - Data processing apparatus comprises one or more processing elements to execute processing instructions; address translation circuitry to perform address translations between a virtual address space and a physical address space, the address translations being defined by a current hierarchical set of address translation tables selected from two or more hierarchical sets of address translation tables, the address translation circuitry being responsive to current table definition data providing at least a pointer to a memory location of the current hierarchical set of address translation tables; the one or more processing elements being configured to overwrite the current table definition data with second table definition data providing at least a pointer to a memory location of a second, different, hierarchical set of address translation tables of the two or more hierarchical sets of address translation tables; the one or more processing elements being configured to execute test instructions requiring address translation before and after the overwriting of the current table definition data by the control circuitry and to detect whether the address translations required by the test instructions are correctly performed. | 2020-01-23 |
20200026661 | SECURE ADDRESS TRANSLATION SERVICES USING MESSAGE AUTHENTICATION CODES AND INVALIDATION TRACKING - Embodiments are directed to providing a secure address translation service. An embodiment of a system includes a memory for storage of data, an Input/Output Memory Management Unit (IOMMU) coupled to the memory via a host-to-device link the IOMMU to perform operations, comprising receiving a memory access request from a remote device via a host-to-device link, wherein the memory access request comprises a host physical address (HPA) that identifies a physical address within the memory pertaining to the memory access request and a first message authentication code (MAC), generating a second message authentication code (MAC) using the host physical address received with the memory access request and a private key associated with the remote device, and performing at least one of allowing the memory access to proceed when the first MAC and the second MAC match and the HPA is not in an invalidation tracking table (ITT) maintained by the IOMMU; or blocking the memory operation when the first MAC and the second MAC do not match. | 2020-01-23 |
20200026662 | DIRECT MEMORY ACCESS - A system includes a direct memory access controller and a memory coupled to the direct memory access controller. The memory stores a linked list of records. Each record contains a first field determining the number of fields of a next record. For example, each record can be representative of parameters of execution of a data transfer by the direct memory access controller. | 2020-01-23 |
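One way to picture such self-describing records, using an invented word layout (the `[len_of_next, ptr_to_next, params...]` format and the separate first-record length are assumptions, not the patent's):

```python
def walk_records(memory, start, first_len):
    """Walk a linked list of variable-length DMA records in which each
    record's first field gives the number of fields of the next record."""
    records = []
    addr, length = start, first_len
    while addr != 0:                      # a null pointer terminates the list
        record = memory[addr:addr + length]
        records.append(record)
        next_len, next_ptr = record[0], record[1]
        addr, length = next_ptr, next_len
    return records

# Memory modeled as a flat list of words; assumed record layout:
# [len_of_next, ptr_to_next, transfer parameters...]
mem = [0] * 32
mem[4:7]   = [4, 10, 111]       # 3-field record; next record has 4 fields
mem[10:14] = [0, 0, 222, 333]   # 4-field record; null pointer ends the list
print(walk_records(mem, 4, 3))  # [[4, 10, 111], [0, 0, 222, 333]]
```

Letting each record declare the size of the next one allows transfers with different parameter counts to share a single chained descriptor list.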
20200026663 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE SYSTEM - Techniques manage a storage system. In accordance with such techniques, an access request for target data is received; a storage position of the target data is determined, the storage position indicating one of a storage device and a cache; a target element corresponding to the target data is determined from a first replacement list and a second replacement list associated with the first replacement list based on the storage position, the first replacement list including at least a counting element, the counting element indicating an access count of data in the storage device, the second replacement list including a low-frequency access element, the low-frequency access element indicating a cache page with a low access frequency in the cache; and a position of the target element in the replacement list where the target element exists is updated. Therefore, the overall performance of the storage system can be improved. | 2020-01-23 |
20200026664 | CACHE MEMORY, MEMORY SYSTEM INCLUDING THE SAME, AND EVICTION METHOD OF CACHE MEMORY - In a cache memory used for communication between a host and a memory, the cache memory may include a plurality of cache sets, each comprising: a valid bit; N dirty bits; a tag; and N data sets respectively corresponding to the N dirty bits and each including data of a data chunk size substantially identical to a data chunk size of the host, wherein a data chunk size of the memory is N times as large as the data chunk size of the host, where N is an integer greater than or equal to 2. | 2020-01-23 |
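A toy model of such a cache set with per-data-set dirty bits (the class and field names are invented for illustration; the patent defines only the bit layout, not this behavior):

```python
class CacheSet:
    """Cache set with one valid bit, a tag, and N per-data-set dirty bits,
    where the memory's data chunk is N host-sized data chunks."""
    def __init__(self, n):
        self.valid, self.tag = False, None
        self.dirty = [False] * n
        self.data = [None] * n

    def write(self, tag, idx, chunk):
        self.valid, self.tag = True, tag
        self.data[idx] = chunk
        self.dirty[idx] = True          # mark only the written sub-chunk

    def evict(self):
        """Return only the dirty sub-chunks that need writing back."""
        out = [(i, self.data[i]) for i in range(len(self.data)) if self.dirty[i]]
        self.valid, self.dirty = False, [False] * len(self.dirty)
        return out

s = CacheSet(n=4)
s.write(tag=0x1A, idx=2, chunk=b"host-chunk")
print(s.evict())  # [(2, b'host-chunk')] -- clean sub-chunks are skipped
```

Tracking N dirty bits per set lets eviction avoid rewriting the whole memory-sized chunk when the host has touched only one host-sized piece of it.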
20200026665 | SYSTEMS AND METHODS FOR MEMORY SAFETY WITH RANDOM EMBEDDED SECRET TOKENS - Disclosed are devices, systems, apparatus, circuits, methods, products, and other implementations, including a method that includes obtaining, during execution of a process associated with a particular privilege level, data content from a memory location, and determining by a hardware-based detection circuit whether the data content matches at least one of one or more token values, with the one or more token values stored in one or more pre-determined memory locations, and with access of any of the pre-determined one or more memory locations indicating a potential anomalous condition. The method further includes triggering, in response to a determination that the data content matches the at least one of the one or more token values, another process with a higher or same privilege level as the particular privilege level associated with the process, to handle occurrence of a potential system violation condition. | 2020-01-23 |
20200026666 | FLASH MEMORY SYSTEM AND METHOD OF GENERATING QUANTIZED SIGNAL THEREOF - A flash memory system according to an embodiment of the present invention includes a flash memory that, in a quantized-signal generating operation, programs a selected page, provides a reference read voltage to a selected word line connected to the selected page, and generates a quantized signal; and a memory controller that receives the quantized signal from the flash memory and generates a response using the quantized signal, wherein the memory controller receives a challenge from a host and the flash memory performs the quantized-signal generation. | 2020-01-23 |
20200026667 | WRITE ACCESS CONTROL FOR DOUBLE DATA RATE WRITE-X/DATACOPY0 COMMANDS - In conventional memory systems, no access control is performed when write-x and datacopy0 are issued. To address this issue, it is proposed to provide access control to these commands by leveraging the mechanism to enforce access control to normal write commands so that the mechanism is also applied to the write-x and datacopy0 commands. | 2020-01-23 |
20200026668 | METHODS AND APPARATUS FOR REDUCED OVERHEAD DATA TRANSFER WITH A SHARED RING BUFFER - Methods and apparatus for reducing bus overhead with virtualized transfer rings. The Inter-Processor Communications (IPC) bus uses a ring buffer (e.g., a so-called Transfer Ring (TR)) to provide Direct Memory Access (DMA)-like memory access between processors. However, performing small transactions within the TR inefficiently uses bus overhead. A Virtualized Transfer Ring (VTR) is a null data structure that doesn't require any backing memory allocation. A processor servicing a VTR data transfer includes the data payload as part of an optional header/footer data structure within a completion ring (CR). | 2020-01-23 |
20200026669 | MEMORY SYSTEM - A memory system is disclosed, which relates to technology for an accelerator of a high-capacity memory device. The memory system includes a plurality of memories configured to store data therein, and a pooled memory controller (PMC) configured to perform map computation by reading the data stored in the plurality of memories and storing resultant data produced by the map computation in the plurality of memories. | 2020-01-23 |
20200026670 | INFORMATION PROCESSING DEVICE - In an information processing device serving as a PCIe system including a host device and a plurality of memory devices, one of the plurality of memory devices is defined as a master memory. The other memory devices are defined as slave memories, and are logically coupled to the master memory. The plurality of memory devices thus constitute a single virtual storage. When accessing is performed from a root complex to the plurality of memory devices constituting the single virtual storage, the root complex hands over the bus master role to the master memory. The master memory receives a command regarding the accessing from the root complex, changes address information used for the accessing in the command, based on a logical relationship with the slave memories, and sends the changed command to the slave memories. | 2020-01-23 |
20200026671 | CIRCUITRY SYSTEM AND METHOD FOR PROCESSING INTERRUPT PRIORITY - The disclosure is related to a circuitry system and a method for processing interrupt priority. The circuitry system is such as a system-on-chip that operates the method. The high-priority interrupt is configured to be always on and prohibited from accessing a critical section. When a high-priority interrupt occurs as a processor of the system is in operation, the processor sets a low-priority interrupt to access the critical section where the high-priority interrupt accessed previously. When the low-priority interrupt is terminated, the processor determines whether or not to wake up the unfinished task that was previously set for the high-priority interrupt. The processor continues processing the task since the task has not been finished. The circuitry system can therefore retain the characteristics of all disabled interrupts and also maintain an instantaneity for the important tasks of the system. | 2020-01-23 |
20200026672 | DIRECT MEMORY ACCESS - A memory contains a linked list of records representative of a plurality of data transfers via a direct memory access control circuit. Each record is representative of parameters of an associated data transfer of the plurality of data transfers. The parameters of each record include a transfer start condition of the associated data transfer and a transfer end event of the associated data transfer. | 2020-01-23 |
20200026673 | Systems And Methods For Device Communications - Systems and methods for improvement in bus communications with daisy-chained connected devices are described herein. In some embodiments, a bus communication system comprises a master chain controller, a first peripheral device, and a second peripheral device. A first communication bus couples a master interface port of the master chain controller to a slave interface port of the first peripheral device, and a second communication bus couples a master interface port of the first peripheral device to a slave interface port of the second peripheral device. The first peripheral device is configured to receive a communication packet via the first communication bus and to send a copy of the communication packet to the second peripheral device during transmission of the communication packet to the first peripheral device. The first peripheral device is also configured to send an idle state signal to the master chain controller. | 2020-01-23 |
20200026674 | ARBITRATION CIRCUITRY - Arbitration circuitry is provided for allocating up to M resources to N requesters, where M≥2. The arbitration circuitry comprises group allocation circuitry to control a group allocation in which the N requesters are allocated to M groups of requesters, with each requester allocated to one of the groups; and M arbiters each corresponding to a respective one of the M groups. Each arbiter selects a winning requester from the corresponding group, which is to be allocated a corresponding resource of the M resources. In response to a given requester being selected as the winning requester by the arbiter for a given group, the group allocation is changed so that in a subsequent arbitration cycle the given requester is in a different group to the given group. | 2020-01-23 |
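The group-rotation idea can be sketched like this (picking the first requester as each group's winner is a placeholder policy; the patent leaves the per-group arbiter's selection rule unspecified):

```python
def arbitrate(groups):
    """One arbitration cycle: each of the M arbiters picks a winner from
    its group (here, simply the first requester), then each winner is
    moved to a different group for the next cycle."""
    winners = [g[0] for g in groups if g]
    for i, g in enumerate(groups):
        if g:
            w = g.pop(0)
            groups[(i + 1) % len(groups)].append(w)  # rotate winner onward
    return winners

# N = 4 requesters allocated across M = 2 groups (one resource per group)
groups = [["req0", "req1"], ["req2", "req3"]]
print(arbitrate(groups))  # ['req0', 'req2']
print(groups)             # [['req1', 'req2'], ['req3', 'req0']]
```

Moving each winner to a different group prevents one requester from monopolizing a resource, since it must next compete in another arbiter's pool.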
20200026675 | Switching Method and Related Electronic System - A switching method is applied for a display device. The display device is coupled to a first computer device, an input device and a storage device, wherein the input device controls the first computer device through the display device, and the storage device exchanges data with the first computer device through the display device. The switching method comprises steps of when a user instructs to switch the first computer device through an input signal generated by the input device, switching the input device from controlling the first computer device to controlling a second computer device, and determining whether the storage device is exchanging data with the first computer device; and when the storage device is not exchanging data with the first computer device, switching the storage device from connecting with the first computer device to connecting with the second computer device. | 2020-01-23 |
20200026676 | USB EXPANSION FUNCTION DEVICE - The present invention discloses a USB expansion function device. When the USB expansion function device with batteries is connected to a mobile terminal device that needs to be charged, by limiting a charging current through a current limiter, a data communication function of the USB expansion function device connected to the mobile terminal device in a USB host mode is implemented while low power consumption of the batteries inside the USB expansion function device is ensured. Thus, the USB expansion function device with batteries can operate continuously for a long time, and its compatibility with data connection of the mobile terminal device can be improved. | 2020-01-23 |
20200026677 | FOLDED MEMORY MODULES - A memory module comprises a data interface including a plurality of data lines and a plurality of configurable switches coupled between the data interface and a data path to one or more memories. The effective width of the memory module can be configured by enabling or disabling different subsets of the configurable switches. The configurable switches may be controlled by manual switches, by a buffer on the memory module, by an external memory controller, or by the memories on the memory module. | 2020-01-23 |
20200026678 | Apparatus and Method to Provide a Multi-Segment I2C Bus Exerciser/Analyzer/Fault Injector and Debug Port System - A baseboard management controller (BMC) includes a plurality of device I2C interfaces. Each device I2C interface provides a device I2C bus that is ported externally to the BMC. The BMC further includes a plurality of device buffer/switch circuits. Each device buffer/switch circuit is connected to a respective device I2C bus, and is configured to selectably connect to the respective I2C bus in a high-impedance mode, an open-drain mode, and a FET switch mode. The BMC further includes a multiplexor/driver circuit that has a multiplexor I2C interface that provides a multiplexor I2C bus that is ported externally to the BMC. The multiplexor/driver circuit is coupled to each device I2C bus via the respective buffer/switch circuit, and is configured to selectively couple one of the device I2C busses to the multiplexor I2C bus, and to select one of the high-impedance mode, the open-drain mode, or the FET switch mode for the selected buffer/switch circuit. | 2020-01-23 |
20200026679 | TRANSACTION ROUTING FOR SYSTEM ON CHIP - A system on chip includes an interconnect circuit including at least p input interfaces and at least k output interfaces, p source devices respectively coupled to the p input interfaces and k access ports respectively coupled to the k output interfaces and belonging to a target that includes one or more target devices. Each source device is configured to deliver transactions to the target via one of the access ports. An associated memory of each access port is configured to temporarily store the transactions received by the access port. The target is configured to deliver, for each access port, a fill signal representative of a current fill level of its associated memory. A control circuit is configured to receive the fill signals from the access ports and select the access ports eligible to receive a transaction depending on the current fill levels. | 2020-01-23 |
20200026680 | SECURE CRYPTO MODULE INCLUDING ELECTRICAL SHORTING SECURITY LAYERS - A security matrix layer between first and second conductive shorting layers is located within a printed circuit board (PCB). The security matrix layer includes at least two types of microcapsules, with each type of microcapsule containing a different reactant. When the security matrix layer is accessed, drilled, or otherwise damaged, the microcapsules rupture and the reactants react to form at least an electrically conductive material. The electrically conductive material may contact and short the first and second conductive shorting layers. | 2020-01-23 |
20200026681 | MEMORY PACKAGE INCLUDING BUFFER, EXPANSION MEMORY MODULE, AND MULTI-MODULE MEMORY SYSTEM - Provided are a memory package, an expansion memory module, and a multi-module memory system. A base memory module, to/from which an expansion memory module is capable of being attached/detached, includes a module board, a plurality of module terminals arranged on the module board to be connected to a slot, and a plurality of memory packages, each of which including a first surface to be attached to the module board and a second surface opposite to the first surface facing away from the module board, wherein each of the plurality of memory packages includes a plurality of package terminals exposed on the second surface of the memory package to be connected to the expansion memory module. | 2020-01-23 |
20200026682 | TECHNIQUES OF ACCESSING SERIAL CONSOLE OF BMC USING HOST SERIAL PORT - In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be an embedded-system device. The embedded-system device provides to a host of the embedded-system device control of a first serial port controller of the embedded-system device. The embedded-system device further connects a serial port with the first serial port controller. The embedded-system device also determines whether the embedded-system device is in a predetermined condition. The embedded-system device disconnects the serial port from the first serial port controller and connects the serial port with a second serial port controller when the embedded-system device is in the predetermined condition. | 2020-01-23 |
20200026683 | SFF-TA-100X BASED MULTI-MODE PROTOCOLS SOLID STATE DEVICES - A system includes a storage device; a storage device controller; a first interface configured to connect the storage device controller to the storage device; and a second interface configured to connect the storage device controller to a host device, wherein the storage device is configured to operate in a first mode or a second mode based on a status of a signal at the second interface, the status being set according to instructions received from the host device. | 2020-01-23 |
20200026684 | CONFIGURABLE NETWORK-ON-CHIP FOR A PROGRAMMABLE DEVICE - An example programmable integrated circuit (IC) includes a processor, a plurality of endpoint circuits, a network-on-chip (NoC) having NoC master units (NMUs), NoC slave units (NSUs), NoC programmable switches (NPSs), a plurality of registers, and a NoC programming interface (NPI). The processor is coupled to the NPI and is configured to program the NPSs by loading an image to the registers through the NPI for providing physical channels between NMUs to the NSUs and providing data paths between the plurality of endpoint circuits. | 2020-01-23 |
20200026685 | PIPELINED CONFIGURABLE PROCESSOR - A configurable processing circuit capable of handling multiple threads simultaneously, the circuit comprising a thread data store, a plurality of configurable execution units, a configurable routing network for connecting locations in the thread data store to the execution units, a configuration data store for storing configuration instances that each define a configuration of the routing network and a configuration of one or more of the plurality of execution units, and a pipeline formed from the execution units, the routing network and the thread data store that comprises a plurality of pipeline sections configured such that each thread propagates from one pipeline section to the next at each clock cycle, the circuit being configured to: (i) associate each thread with a configuration instance; and (ii) configure each of the plurality of pipeline sections for each clock cycle to be in accordance with the configuration instance associated with the respective thread that will propagate through that pipeline section during the clock cycle. | 2020-01-23 |
20200026686 | SUPERSEDING OBJECTS IN A RETENTION SYSTEM - Superseding a prior version of a document, to which prior version a retention policy or other requirement has been applied, is disclosed. In some embodiments, an attribute of a retention policy indicates whether a document to which the retention policy has been applied is to be superseded by a subsequently created and/or saved version of the document. In some embodiments, the attribute is set by a logic or process configured to apply the retention policy to the document. If the retention policy indicates that supersede is enabled, in various embodiments when a subsequent version is created and/or saved, the prior version is promoted to the final phase of the retention policy that has been applied to it and automatically “qualified” for disposition as indicated in the final phase of the retention policy, without regard to intervening requirements, processes, phases, approvals, retention, waiting, or other periods, etc. | 2020-01-23 |
20200026687 | PUSHING A POINT IN TIME TO A BACKEND OBJECT STORAGE FOR A DISTRIBUTED STORAGE SYSTEM - A plurality of computing devices are communicatively coupled to each other via a network, and each of the plurality of computing devices is operably coupled to one or more of a plurality of storage devices. The computing devices may push a point in time to a backend for a distributed storage system. | 2020-01-23 |
20200026688 | FILE SHARING METHOD BASED ON TWO-DIMENSIONAL CODE, SERVER AND TERMINAL DEVICE - The present application relates to the technical field of data processing, providing a file sharing method based on two-dimensional code, a server, a terminal device and a computer readable storage medium. The method comprises: obtaining a user mail sent by a user, wherein the user mail contains a mail subject, a user address and a shared file; storing the shared file and obtaining a storage address; generating a two-dimensional code corresponding to the storage address; and returning the two-dimensional code to the user, wherein the two-dimensional code serves as an access entry of the shared file and is used for accessing the shared file by the user. The embodiments of the present application simplify the operational process of file sharing and improve the processing efficiency of file sharing. | 2020-01-23 |
20200026689 | LIMITED DEDUPLICATION SCOPE FOR DISTRIBUTED FILE SYSTEMS - A method, article of manufacture, and apparatus for limited deduplication scope on a distributed file system is discussed. A write request is received from a client at the metadata server (“MDS”), where the write request comprises a data object identifier and a preferred object store identifier. The MDS determines whether a preferred object store associated with the preferred object store identifier contains a copy of a data object associated with the data object identifier. A write URL comprising the data object identifier and an object store location associated with the preferred object store is transmitted to the client when the preferred object store does not contain the copy of the data object. | 2020-01-23 |
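The write-routing decision above can be sketched in a few lines. The class name, URL format, and in-memory store map are illustrative assumptions, not details from the application:

```python
# Sketch of the metadata server's (MDS) write-routing decision: a write URL
# pointing at the preferred object store is returned only when that store
# does not already hold a copy of the data object (limited-scope dedup).
class MetadataServer:
    def __init__(self, store_contents):
        # store_contents: maps object-store id -> set of data-object ids it holds
        self.store_contents = store_contents

    def handle_write(self, object_id, preferred_store):
        held = self.store_contents.get(preferred_store, set())
        if object_id in held:
            return None  # duplicate within the preferred store: no upload needed
        # Otherwise direct the client to write to the preferred store.
        return f"https://{preferred_store}/objects/{object_id}"

mds = MetadataServer({"store-a": {"obj1"}})
assert mds.handle_write("obj1", "store-a") is None
assert mds.handle_write("obj2", "store-a") == "https://store-a/objects/obj2"
```

Note that deduplication here is checked only against the preferred store, which is what limits the deduplication scope.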
20200026690 | GLOBAL DATA DEDUPLICATION ACROSS MULTIPLE DISTRIBUTED FILE SYSTEMS - A write request is transmitted from a client to a metadata server (“MDS”), wherein the write request comprises an object identifier associated with a data object. An object store location is received for an object store from the MDS. A metadata request is transmitted to the object store using the object store location, wherein the metadata request includes the object identifier. A metadata response is received from the object store. It is determined that the metadata response contains an object designator. A count associated with a mapping between the object identifier and the object designator is incremented, wherein the mapping resides on an object version manager shared with a second MDS. | 2020-01-23 |
20200026691 | BLOCKCHAIN-BASED DATA PROCESSING METHOD AND DEVICE - Techniques for processing blockchain data are described. A node in a blockchain network receives service data generated by a first service, wherein the service data comprises a data structure having a field a value of which indicates that the first service is associated with a first processing level. The node stores, based on the value of the field, the service data in a first data processing queue selected from a plurality of data processing queues, wherein the first data processing queue corresponds to the first processing level, and each of the plurality of data processing queues corresponds to a different processing level. The node generates a new block that stores the service data read from the first data processing queue, and additional service data read from one or more of the plurality of data processing queues. | 2020-01-23 |
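The level-based queueing scheme above can be sketched as follows; the field names and the drain-highest-level-first block-building order are illustrative assumptions:

```python
from collections import deque

# Sketch of level-based queueing: service data carries a processing level,
# is routed to the matching queue, and block building drains the highest
# level first so higher-priority services reach a block sooner.
class BlockBuilder:
    def __init__(self, levels):
        self.queues = {level: deque() for level in levels}

    def enqueue(self, service_data):
        # Route by the processing-level field in the service data structure.
        self.queues[service_data["level"]].append(service_data)

    def build_block(self, max_items):
        block = []
        for level in sorted(self.queues, reverse=True):  # highest level first
            q = self.queues[level]
            while q and len(block) < max_items:
                block.append(q.popleft())
        return block

b = BlockBuilder(levels=[1, 2, 3])
b.enqueue({"level": 1, "tx": "low"})
b.enqueue({"level": 3, "tx": "high"})
block = b.build_block(max_items=2)
assert [d["tx"] for d in block] == ["high", "low"]
```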
20200026692 | SYSTEM AND METHOD FOR BATCH DATABASE MODIFICATION - Altering a database structure based on software updates in a distributed computing system can include identifying a plurality of software updates that include alterations to structural elements in the database structure and identifying, for the plurality of software updates, a plurality of alterations corresponding to a first structural element of the structural elements. A combined alteration can be generated by combining the plurality of alterations. A database statement can be generated for altering the first structural element according to the combined alteration. The database structure can then be updated using the database statement. The structural elements can define logical relationships between data stored in the database structure. The alterations can be expressed using a markup language and the database statement can be expressed using a query language. | 2020-01-23 |
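A minimal sketch of the combining step, assuming the per-update alterations have already been extracted from the markup and reduced to DDL clauses (the clause format and function name are hypothetical):

```python
# Sketch of combining the alterations that several software updates make to
# the same structural element (here, a table) into one database statement,
# so the structure is altered once rather than once per update.
def combine_alterations(table, alterations):
    # alterations: list of clauses such as "ADD COLUMN price INT",
    # collected from the software updates touching this table.
    if not alterations:
        return None
    return f"ALTER TABLE {table} " + ", ".join(alterations)

stmt = combine_alterations(
    "orders",
    ["ADD COLUMN price INT", "ADD COLUMN currency CHAR(3)"],
)
assert stmt == "ALTER TABLE orders ADD COLUMN price INT, ADD COLUMN currency CHAR(3)"
```

Batching the clauses into a single `ALTER TABLE` avoids repeated table rewrites when many updates touch the same element.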
20200026693 | DIFFERENTIAL HEALTH CHECKING OF AN INFORMATION MANAGEMENT SYSTEM - Differential health-check systems and accompanying methods provide health-checking and reporting of one or more information management systems in reference to a first time period before and a second time period after a triggering event. A triggering event may be an upgrade of at least part of the information management system, or a restore operation completed in the information management system for example following a disaster, or any number of other events, etc. The health-checking and reporting may comprise a comparison of one or more performance metrics of one or more components and/or operations of the information management system during the first and second time periods. | 2020-01-23 |
20200026694 | PEER TO PEER OWNERSHIP NEGOTIATION - A method of negotiating memory record ownership between network nodes, comprising: storing in a memory of a first network node a subset of a plurality of memory records and one of a plurality of file system segments of a file system mapping the memory records; receiving a request from a second network node to access a memory record of the memory records subset; identifying the memory record by using the file system segment; deciding, by a placement algorithm, whether to relocate the memory record, from the memory records subset to a second subset of the plurality of memory records stored in a memory of the second network node; when a relocation is not decided, providing remote access of the memory record via a network to the second network node; and when a relocation is decided, relocating the memory record via the network for management by the second network node. | 2020-01-23 |
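The relocate-or-serve-remotely decision above can be sketched as follows. The placement policy used here, relocating after a threshold of remote accesses, is an invented stand-in for the unspecified placement algorithm:

```python
# Sketch of ownership negotiation at the node that currently owns a record:
# each remote access either serves the record over the network or, when the
# placement policy decides so, relocates the record to the requesting node.
class OwnerNode:
    def __init__(self, records, relocate_after=3):
        self.records = records   # record id -> payload
        self.remote_hits = {}    # record id -> remote access count
        self.relocate_after = relocate_after

    def access(self, record_id):
        hits = self.remote_hits.get(record_id, 0) + 1
        self.remote_hits[record_id] = hits
        if hits >= self.relocate_after:
            # Relocate: hand the record over for management by the requester.
            return ("relocate", self.records.pop(record_id))
        # Otherwise keep ownership and provide remote access.
        return ("remote", self.records[record_id])

node = OwnerNode({"r1": b"data"})
assert node.access("r1") == ("remote", b"data")
assert node.access("r1") == ("remote", b"data")
assert node.access("r1") == ("relocate", b"data")
assert "r1" not in node.records
```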
20200026695 | Incremental Clustering Of Database Tables - Automatic clustering of a database table is disclosed. A method for automatic clustering of a database table includes receiving an indication that a data modification task has been executed on a table and determining whether the table is sufficiently clustered. The method includes, in response to determining the table is not sufficiently clustered, selecting one or more micro-partitions of the table to be reclustered. The method includes assigning each of the one or more micro-partitions to an execution node to be reclustered. | 2020-01-23 |
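The recluster trigger can be sketched with a toy clustering metric. The overlap-ratio test for "sufficiently clustered" is an assumption, not the patented heuristic; real systems track clustering depth per micro-partition:

```python
# Sketch: after a data modification, measure how well micro-partitions are
# clustered on the clustering key (by counting overlapping key ranges) and
# select the worst offenders for reclustering.
def overlaps(p, q):
    return p[0] <= q[1] and q[0] <= p[1]

def select_for_recluster(partitions, max_overlap_ratio=0.3, batch=2):
    # partitions: list of (min_key, max_key) ranges, one per micro-partition.
    scores = []
    for i, p in enumerate(partitions):
        n = sum(1 for j, q in enumerate(partitions) if i != j and overlaps(p, q))
        scores.append((n, i))
    total = sum(n for n, _ in scores)
    ratio = total / max(1, len(partitions) * (len(partitions) - 1))
    if ratio <= max_overlap_ratio:
        return []  # table is sufficiently clustered; nothing to do
    scores.sort(reverse=True)
    return [i for _, i in scores[:batch]]  # assign these to execution nodes

parts = [(0, 10), (5, 15), (8, 20), (100, 110)]
picked = select_for_recluster(parts)
assert picked            # heavily overlapping partitions get selected
assert 3 not in picked   # the disjoint partition is left alone
```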
20200026696 | CONTENT ATTRIBUTES DEPICTED IN A SOCIAL NETWORK - An example operation may include a method comprising one or more of receiving, by a server, a proposed update to data, wherein the data is one or more of a process and a document; determining keywords based on a parsing of the proposed update; determining a criticalness of the proposed update based on the keywords; determining a user related to the data; and notifying the user when the criticalness exceeds a threshold. | 2020-01-23 |
20200026697 | METHOD AND A DEVICE FOR DETECTING AN ANOMALY - This anomaly detection method serves to determine whether a message (MSGEv) that is to be evaluated, that is constituted by symbols and that is to be received by an application, constitutes an anomaly. It comprises: | 2020-01-23 |
20200026698 | DATABASE RECOVERY USING PERSISTENT ADDRESS SPACES - A processor(s) initiates a database transaction, in a computing environment that includes a database that includes one or more memory devices. The processor(s) forks a first address space that represents a current state of the database, to create a second address space. The processor(s) writes an entry indicating timing of the initiating to a log file and generates a file that is mapped to the one or more memory devices. The file includes differences in state between the current state of the database and a state subsequent to executing and committing the database transaction, and a timestamp indicating timing for committing the database transaction. The processor(s) write the database transaction to the second address space. | 2020-01-23 |
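The fork-and-diff scheme above can be sketched with in-memory state standing in for memory-mapped address spaces; the class names, the dict-copy "fork", and the commit record format are all illustrative assumptions:

```python
import time

# Sketch of fork-based transaction isolation: the transaction runs against a
# forked copy of the database state ("second address space"); commit computes
# the difference from the current state and records it with a commit
# timestamp, so recovery can replay committed diffs from the log.
class Database:
    def __init__(self, state=None):
        self.state = dict(state or {})

    def begin(self):
        return Transaction(self)

class Transaction:
    def __init__(self, db):
        self.db = db
        self.fork = dict(db.state)  # forked copy of the current address space

    def write(self, key, value):
        self.fork[key] = value

    def commit(self):
        diff = {k: v for k, v in self.fork.items() if self.db.state.get(k) != v}
        record = {"diff": diff, "committed_at": time.time()}
        self.db.state.update(diff)  # fold the diff into the current state
        return record  # persisted so recovery can replay the transaction

db = Database({"x": 1})
txn = db.begin()
txn.write("y", 2)
rec = txn.commit()
assert db.state == {"x": 1, "y": 2}
assert rec["diff"] == {"y": 2}
```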
20200026699 | Highly Performant Decentralized Public Ledger with Hybrid Consensus - A method of electing a rotating committee of byzantine fault tolerance (BFT) nodes in a decentralized computer network includes determining that a current committee of BFT nodes has outputted a predetermined number of committed transactions; identifying a plurality of candidate nodes, each respective candidate node of the plurality of candidate nodes having successfully processed, using a proof-of-work (PoW) protocol, a respective transaction of the predetermined number of committed transactions; and selecting, as a new committee of BFT nodes, a subset of the plurality of candidate nodes based on a random function. | 2020-01-23 |
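The committee-rotation step can be sketched as below. Seeding the random function from the committed-transaction identifiers is an illustrative choice; the application only specifies "a random function":

```python
import hashlib
import random

# Sketch of rotating the BFT committee: once the current committee has
# committed a set number of transactions, nodes that solved the PoW for
# those transactions become candidates, and a deterministically seeded
# random function selects the new committee from them.
def rotate_committee(committed_txs, solvers, committee_size):
    # solvers: maps tx id -> node id that processed it via PoW
    candidates = sorted({solvers[tx] for tx in committed_txs})
    seed = hashlib.sha256("".join(committed_txs).encode()).hexdigest()
    rng = random.Random(seed)
    return rng.sample(candidates, min(committee_size, len(candidates)))

txs = ["tx1", "tx2", "tx3"]
solvers = {"tx1": "nodeA", "tx2": "nodeB", "tx3": "nodeC"}
committee = rotate_committee(txs, solvers, committee_size=2)
assert len(committee) == 2
assert set(committee) <= {"nodeA", "nodeB", "nodeC"}
# Deterministic given the same committed transactions, so all honest nodes
# derive the same new committee without extra coordination:
assert committee == rotate_committee(txs, solvers, committee_size=2)
```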
20200026700 | BLOCKCHAIN-BASED DATA STORAGE AND QUERY METHOD AND DEVICE - A blockchain node receives a service request, where the service request comprises one or more data types and respective service data corresponding to the one or more data types that are stored in a blockchain. At least one of a service type or identification information is determined corresponding to the service request. The service request is parsed to obtain each data type of the service request and service data corresponding to each data type. Based on a mapping relationship between a data type and service data, the service data obtained through parsing is stored in a relational database corresponding to the blockchain node. | 2020-01-23 |
20200026701 | DYNAMIC VISUALIZATION OF APPLICATION AND INFRASTRUCTURE COMPONENTS WITH LAYERS - Systems and methods for providing dynamic visualization of application and infrastructure components are disclosed. In one embodiment, in an information processing apparatus comprising at least one computer processor, a system for providing dynamic visualization of application and infrastructure components may include: (1) receiving, at an interface and from a requestor, a request for information about an application or infrastructure; (2) querying one or more systems of record or a data cache containing data from the one or more systems of record for data about the application or infrastructure components within the infrastructure; (3) formatting the data received from the one or more systems of record or the data cache according to a data definition; (4) identifying relational links between the applications and the infrastructure components in the formatted data; and (5) graphically rendering the formatted data and relational links in a plurality of interactive levels. | 2020-01-23 |
20200026702 | BUSINESS OPERATING SYSTEM ENGINE - An engine for resolving a query from a user to provide a dynamic actionable dashboard in a business operating system includes an MLET database, a data interface, and a logic configured to process incoming queries, fetch data in relation to those queries, and render an actionable dashboard having data resulting from the queries. The MLET database comprises a plurality of templates (“MLETs”), each MLET being associated with a unique identifier and including a mechanism for accessing data relating to that identifier. The logic processes queries into constructs having tokens and configurable inputs. If the query includes a unique identifier associated with an MLET in the MLET database, the MLET is used to fetch data responding to the query. If the query includes a unique identifier not associated with an MLET in the MLET database, the logic creates a new MLET using operational intelligence and stores it in the MLET database. | 2020-01-23 |
20200026703 | GENERATING STRUCTURED QUERIES FROM NATURAL LANGUAGE TEXT - Generating structured queries from natural language text may include receiving, using a processor, a natural language text input directed to a database management system and, using the processor, performing natural language processing on the natural language text input using an Unstructured Information Management Architecture. The natural language processing may annotate the natural language text input according to a structure of the database management system. A database operation and query elements may be determined using a processor from the annotated natural language text input. A structured query may be created, using the processor, for the database management system that implements the database operation using the query elements. | 2020-01-23 |
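The annotate-then-build pipeline can be sketched with toy keyword rules standing in for UIMA-style annotation; the schema, keyword lists, and function names are all hypothetical:

```python
import re

# Sketch of generating a structured query from natural language: tokens are
# annotated against the database structure (tables, columns) and against a
# small verb list (the database operation), then assembled into SQL.
SCHEMA = {"employees": ["name", "salary", "dept"]}

def annotate(text):
    tokens = re.findall(r"\w+", text.lower())
    tables = [t for t in tokens if t in SCHEMA]
    columns = [c for c in tokens if any(c in cols for cols in SCHEMA.values())]
    operation = "SELECT" if any(t in ("show", "list", "find") for t in tokens) else None
    return operation, tables, columns

def build_query(text):
    operation, tables, columns = annotate(text)
    if operation != "SELECT" or not tables:
        return None  # could not map the input onto the database structure
    cols = ", ".join(columns) if columns else "*"
    return f"SELECT {cols} FROM {tables[0]}"

assert build_query("show the name and salary of employees") == \
    "SELECT name, salary FROM employees"
```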
20200026704 | QUERY-TIME ANALYTICS ON GRAPH QUERIES SPANNING SUBGRAPHS - Reductions in latencies and improvements in computational efficiency when analyzing data stored in a relational graph by integrating analytical capabilities into graph queries. Instead of a user having to run a graph query and then perform analytics on the resulting subgraph via separate requests, the user is enabled to run analytics at the time the graph query is run via a single request to the database maintaining the relationship graph, which improves the computationally efficiency of analyzing relational graphs and thereby improves the functionality of the computing devices hosting the relational graphs and running the queries and analytics. | 2020-01-23 |
20200026705 | AUTOMATIC OBJECT INFERENCE IN A DATABASE SYSTEM - A binary relational database model is described whereby application-layer object structures are easily inferred from database query templates. The object structures take the form of acyclic hypergraphs, which are induced from primal graphs representing query templates. Database applications may iterate through the collection of returned object structures, accessing the data in each structure. The returned object structures are not based on a fixed object model, thereby permitting rich structures with greater applicability than traditional ORM systems. A relationship between non-primitive entities may be directly expressed without the need for alternative join tables. Development and maintenance costs are thus substantially reduced, and data is more efficiently stored and manipulated for database applications. | 2020-01-23 |
20200026706 | SYSTEM AND METHOD FOR GENERATING A TAGGED COLUMN-ORIENTED DATA STRUCTURE - A system and method for generating tagged column-oriented data structures, including: generating a column-oriented data structure that comprises a plurality of columns, wherein each column comprises a plurality of cells that are associated with a single data type, wherein at least one of the plurality of columns is a tag type column; and inserting at least one tag into a first cell of the tag type column, wherein the first cell is further associated with a first row of cells. | 2020-01-23 |
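A minimal sketch of such a structure, with an assumed API (the class and method names are illustrative, not from the application):

```python
# Sketch of a tagged column-oriented structure: each column holds cells of a
# single data type, and a dedicated tag-type column lets tags be attached to
# individual rows via the cell aligned with that row.
class TaggedColumnStore:
    def __init__(self, columns):
        # columns: maps column name -> data type; a tag-type column is added.
        self.types = dict(columns)
        self.data = {name: [] for name in columns}
        self.data["tags"] = []  # the tag type column

    def append_row(self, row, tags=()):
        for name, dtype in self.types.items():
            value = row[name]
            assert isinstance(value, dtype), f"{name} must be {dtype}"
            self.data[name].append(value)
        self.data["tags"].append(list(tags))  # cell aligned with this row

    def rows_with_tag(self, tag):
        return [i for i, cell in enumerate(self.data["tags"]) if tag in cell]

store = TaggedColumnStore({"id": int, "name": str})
store.append_row({"id": 1, "name": "alpha"}, tags=["reviewed"])
store.append_row({"id": 2, "name": "beta"})
assert store.rows_with_tag("reviewed") == [0]
assert store.data["name"] == ["alpha", "beta"]
```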
20200026707 | METHOD AND SYSTEM FOR MACHINE LEARNING OF OPTIMIZED USER OUTREACH BASED ON SPARSE DATA - A method of optimizing user outreach for a subject, including: determining the N closest other users to the subject; learning an outreach policy for the subject using reinforcement learning based upon outreach data of the N closest other users and the subject; determining an outreach action for the subject based upon the learned outreach policy; performing the outreach action; collecting new outreach data; and determining a new value of N. | 2020-01-23 |
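The sparse-data workaround above can be sketched as pooling the subject's outreach data with that of the N nearest users before learning a policy. The distance metric and the value-averaging "policy" are illustrative simplifications of the reinforcement learning step:

```python
# Sketch of N-nearest pooling for sparse outreach data: neighbours are found
# by a toy distance (age difference), their (action, reward) histories are
# pooled with the subject's, and the action with the highest mean reward in
# the pooled data is chosen.
def n_closest(subject, users, n):
    return sorted(users, key=lambda u: abs(u["age"] - subject["age"]))[:n]

def learn_policy(outreach_data):
    # outreach_data: (action, reward) pairs from the subject and neighbours
    totals = {}
    for action, reward in outreach_data:
        s, c = totals.get(action, (0.0, 0))
        totals[action] = (s + reward, c + 1)
    return {a: s / c for a, (s, c) in totals.items()}

def choose_action(subject, users, n):
    pool = list(subject["history"])
    for u in n_closest(subject, users, n):
        pool.extend(u["history"])
    policy = learn_policy(pool)
    return max(policy, key=policy.get)

subject = {"age": 30, "history": [("email", 0.1)]}
users = [
    {"age": 31, "history": [("sms", 0.9), ("email", 0.2)]},
    {"age": 60, "history": [("call", 1.0)]},
]
assert choose_action(subject, users, n=1) == "sms"
```

After the chosen action is performed, the newly collected outreach data would feed back into the pool, and N itself could be re-estimated, matching the closing steps of the claim.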