41st week of 2020 patent application highlights part 49 |
Patent application number | Title | Published |
20200319899 | ELECTRONIC DEVICE AND METHOD FOR OPERATING AN ELECTRONIC DEVICE - An electronic device and a method for operating the electronic device provide improved initialization of the electronic device. Operational data such as software components or configuration data are stored in a removable storage device during the operation of the device. By removing the removable storage device and inserting it into another device, the other device can be automatically initialized and configured similarly to the former electronic device. | 2020-10-08 |
20200319900 | Method for Application Processing, Storage Medium, and Electronic Device - A method for application processing, a storage medium, and an electronic device are provided. The method includes: obtaining historical operation information of the electronic device; obtaining triggering probability values of a plurality of applications in an application platform installed in the electronic device based on the historical operation information; selecting an application with a triggering probability value greater than a first preset probability value as a target application; downloading resource files of the target application; buffering the resource files into a storage area corresponding to the application platform; and loading the resource files stored in the storage area and corresponding to the target application, in response to detecting a triggering operation on the target application. | 2020-10-08 |
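The selection step described above can be sketched in a few lines of Python. This is a minimal illustration, not the filing's implementation: the function name, the frequency-based probability estimate, and the threshold value are all assumptions.

```python
from collections import Counter

def select_target_apps(launch_history, preset_threshold):
    """Estimate each app's triggering probability from historical
    launches and keep the apps whose probability exceeds the first
    preset probability value (frequency estimate is illustrative)."""
    counts = Counter(launch_history)
    total = sum(counts.values())
    return {app for app, n in counts.items() if n / total > preset_threshold}

history = ["mail", "maps", "mail", "chat", "mail", "maps"]
targets = select_target_apps(history, preset_threshold=0.3)
# mail appears 3/6 times, maps 2/6, chat 1/6 -> mail and maps exceed 0.3
```

The real method would feed richer historical operation information into the estimate; a plain launch frequency is the simplest stand-in.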
20200319901 | SYSTEMS AND METHODS FOR DISPLAYING FULLY-ACCESSIBLE INTERFACES USING A SINGLE CODEBASE - Certain aspects and features of the present disclosure relate to systems and methods for simultaneously providing two versions of an application's interface using a single codebase. More specifically, certain aspects and features of the present disclosure relate to systems and methods for executing a single codebase to depict time-bound information in a calendar format and, at the same time, make the displayed time-bound information consumable by screen reader applications. | 2020-10-08 |
20200319902 | EXTENSION APPLICATION MECHANISMS THROUGH INTRA-PROCESS OPERATION SYSTEMS - The present disclosure relates to computer-implemented methods, software, and systems for providing extension application mechanisms. Memory is allocated for a virtual environment to run in an address space of an application that is to be extended with extension logic in a secure manner. The virtual environment is configured for execution of commands related to an extension functionality of the application. A virtual processor for an execution of a command of the commands is initialized at the virtual environment. The virtual processor is operable to manage one or more guest operating systems (OS). A first guest OS is loaded at the allocated memory and application logic of the extension functionality is copied into the allocated memory. The virtual environment is started to execute the first guest OS and the application logic of the extension functionality in relation to associated data of the application in the allocated memory. | 2020-10-08 |
20200319903 | SYSTEMS AND METHODS FOR ACCESSING REMOTE FILES - The present systems and methods generally relate to the elimination or reduction of network traffic required to support operations on a file of any size stored remotely on a file server or network share. More particularly, the present systems and methods relate to encapsulation of a remote file in such a way that the file appears to the local operating system and any local applications to be residing locally, thus overcoming some of the performance issues associated with multiple users accessing a single network share (e.g., CIFS share) and/or a single user remotely accessing a large file. | 2020-10-08 |
20200319904 | HYPERCONVERGED SYSTEM ARCHITECTURE FEATURING THE CONTAINER-BASED DEPLOYMENT OF VIRTUAL MACHINES - A hyperconverged system is provided which includes a plurality of containers, wherein each container includes a virtual machine and a virtualization solution module such as, for example, a KVM module. | 2020-10-08 |
20200319905 | METADATA SERVICE PROVISIONING IN A CLOUD ENVIRONMENT - In an approach a computer receives a first request from a metadata service to store metadata for a virtual machine (VM). The computer validates the metadata service. The computer stores the metadata for the VM in response to the validation being successful. The computer receives a second request from the VM for the metadata. The computer sends the metadata to the VM. | 2020-10-08 |
20200319906 | VIRTUAL MACHINE ALLOCATION ANALYSIS FOR PHYSICAL NETWORK DEPLOYMENTS - Embodiments of the present disclosure comprise considering the performance of an application under different candidate physical network topology configurations for a set of virtual machines (VMs) for an application. Given different physical network topologies corresponding to the logical topology for the application, each physical network topology may be analyzed to quantify the performance of the application based upon one or more metrics. In one or more embodiments, the metrics may include throughput, latency, and network resource usage, and these metrics may be formed into a performance set. The set of values provide a means by which the different physical network topology deployments may be compared. Based upon the comparison, a deployment of the VMs on the physical network topology may be selected and implemented; or alternatively, when input expected application performance parameters are satisfied by the metrics, the corresponding physical topology may be chosen. | 2020-10-08 |
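The comparison of candidate topologies via a performance set can be sketched as below. The weighting used for ranking and all names and numbers are illustrative assumptions; the application only specifies that the metrics form a set used for comparison or checked against expected parameters.

```python
def choose_deployment(candidates, expected=None):
    """candidates maps topology name -> performance set
    (throughput, latency, network resource usage). If expected
    application performance parameters are given, return a topology
    satisfying them; otherwise rank by an illustrative weighting."""
    if expected is not None:
        et, el, eu = expected
        for name, (t, l, u) in candidates.items():
            if t >= et and l <= el and u <= eu:
                return name
    return max(candidates,
               key=lambda n: candidates[n][0] - candidates[n][1] - candidates[n][2])

candidates = {
    "fat-tree": (100.0, 4.0, 30.0),   # throughput, latency, usage
    "ring":     (60.0, 9.0, 10.0),
}
best = choose_deployment(candidates)                        # ranked choice
fit = choose_deployment(candidates, expected=(50.0, 10.0, 20.0))
```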
20200319907 | CLOUD RESOURCE CREDENTIAL PROVISIONING FOR SERVICES RUNNING IN VIRTUAL MACHINES AND CONTAINERS - Some embodiments may be associated with a cloud computing environment. A cloud resource credential management system may be provisioned as part of a virtual machine deployment, access information associated with an application or a service configuration file and establish a cloud resource credential provisioning system external to an application to be executed in connection with the virtual machine. The cloud resource credential provisioning system may, for example, map a cloud resource policy and a cloud resource credential. The cloud resource credential provisioning system may then intercept a cloud resource call from the application to a cloud resource provider and validate that the cloud call request complies with the cloud resource policy. If the cloud resource call complies with the cloud resource policy, the cloud resource credential provisioning system may extend the cloud resource call with the cloud resource credential and forward the extended cloud resource call to the cloud resource provider. | 2020-10-08 |
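The intercept-validate-extend-forward flow can be illustrated with a small class. This is a sketch under assumed shapes: the policy as a set of allowed resources, the call as a dict, and the names are all hypothetical.

```python
class CredentialProvisioner:
    """Sits outside the application: validates each cloud resource call
    against the cloud resource policy, then extends the complying call
    with the cloud resource credential before forwarding it."""
    def __init__(self, policy, credential):
        self.policy = policy          # assumed: set of allowed resources
        self.credential = credential

    def intercept(self, call, forward):
        if call["resource"] not in self.policy:
            raise PermissionError(f"call to {call['resource']} violates policy")
        extended = dict(call, credential=self.credential)  # extend the call
        return forward(extended)                           # to the provider

provisioner = CredentialProvisioner(policy={"object-store"},
                                    credential="token-abc")
provider_log = []   # stands in for the cloud resource provider
provisioner.intercept({"resource": "object-store", "op": "put"},
                      provider_log.append)
```

The key design point the abstract stresses is that the application itself never holds the credential; only the forwarded, extended call carries it.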
20200319908 | REGION BASED PROCESSING AND STORAGE OF DATA - A method, a system, and a computer program product are provided. A first computing device determines that data to be processed for a request is confined to a geographic region in which the data is stored and identifies a second computing device within the geographic region in which the data is stored, wherein the identified second computing device and the first computing device are connected to a network. The first computing device directs the identified second computing device to process the data within the geographic region to which the data is confined, according to the request, by one or more processing nodes executing on the identified second computing device. | 2020-10-08 |
20200319909 | SYSTEM AND METHOD FOR A DISTRIBUTED KEY-VALUE STORE - An apparatus includes a processor having programmed instructions to determine a container number of container instances to be deployed in a cluster based on compute resources and determine a node number of virtual nodes to be deployed in the cluster based on storage resources. The node number of virtual nodes includes a key-value store. Each of the node number of virtual nodes owns a corresponding key range of the key-value store. The processor has programmed instructions to distribute the node number of virtual nodes equally across the container number of container instances and deploy the container number of container instances. | 2020-10-08 |
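The two deployment decisions above — carving the key-value store into per-node key ranges and spreading the virtual nodes equally across container instances — can be sketched as follows. The round-robin placement and the keyspace size are assumptions for illustration.

```python
def distribute_vnodes(num_containers, num_vnodes, keyspace=2**16):
    """Give each virtual node a contiguous key range of the key-value
    store, then spread the virtual nodes round-robin across the
    container instances so each container holds an equal share."""
    step = keyspace // num_vnodes
    containers = [[] for _ in range(num_containers)]
    for i in range(num_vnodes):
        end = keyspace if i == num_vnodes - 1 else (i + 1) * step
        containers[i % num_containers].append((i * step, end))
    return containers

layout = distribute_vnodes(num_containers=2, num_vnodes=4)
```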
20200319910 | REMOTE DESKTOP SYSTEM AND IMAGE DATA SHARING METHOD - This application discloses a remote desktop system, including a primary virtual machine, a plurality of secondary virtual machines, a primary terminal configured to log in to the primary virtual machine, and a secondary terminal configured to log in to a secondary virtual machine. When a user of the primary virtual machine needs to share image data of the primary virtual machine with a user of the secondary terminal for viewing, the primary virtual machine sends the image data to the primary terminal, and then the primary terminal shares the image data with the secondary terminal. This reduces data transmission pressure on a communications network between a virtual machine center and a terminal center. | 2020-10-08 |
20200319911 | CONFIGURABLE VIRTUAL MACHINES - Systems and methods for configuring a virtual machine provided by a remote computing system based on the availability of one or more remote computing resources and respective corresponding prices of the one or more remote computing resources. | 2020-10-08 |
20200319912 | TRANSITIONING VOLUMES BETWEEN STORAGE VIRTUAL MACHINES - A volume rehost tool migrates a storage volume from a source virtual server within a distributed storage system to a destination storage server within the distributed storage system. The volume rehost tool can prevent client access to data on the volume through the source virtual server until the volume has been migrated to the destination virtual server. The tool identifies a set of storage objects associated with the volume, removes configuration information for the set of storage objects, and removes a volume record associated with the source virtual server for the volume. The tool can then create a new volume record associated with the destination virtual server, apply the configuration information for the set of storage objects to the destination virtual server, and allow client access to the data on the volume through the destination virtual server. | 2020-10-08 |
20200319913 | SYSTEM, APPARATUS AND METHOD FOR ACCESSING MULTIPLE ADDRESS SPACES VIA A VIRTUALIZATION DEVICE - In one embodiment, an apparatus includes an input/output virtualization (IOV) device comprising: at least one function circuit to be shared by a plurality of virtual machines (VMs); and a plurality of assignable device interfaces (ADIs) coupled to the at least one function circuit, wherein each of the plurality of ADIs is to be associated with one of the plurality of VMs and comprises a first process address space identifier (PASID) field to store a first PASID to identify a descriptor queue stored in a host address space and a second PASID field to store a second PASID to identify a data buffer located in a VM address space. Other embodiments are described and claimed. | 2020-10-08 |
20200319914 | MEMORY MANAGEMENT METHOD AND APPARATUS - A disclosed example apparatus includes memory; and processor circuitry to: identify a lock-protected section of instructions in the memory; replace lock/unlock instructions with transactional lock acquire and transactional lock release instructions to form a transactional process; and execute the transactional process in a speculative execution. | 2020-10-08 |
20200319915 | DISAGGREGATED RACK MOUNT STORAGE SIDE TRANSACTION SUPPORT - A method is described. The method includes performing the following with a storage end transaction agent within a storage sled of a rack mounted computing system: receiving a request to perform storage operations with one or more storage devices of the storage sled, the request specifying an all-or-nothing semantic for the storage operations; recognizing that all of the storage operations have successfully completed; after all of the storage operations have successfully completed, reporting to a CPU side transaction agent that sent the request that all of the storage operations have successfully completed. | 2020-10-08 |
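The all-or-nothing semantic can be sketched with an undo log: either every storage operation completes and success is reported, or the completed writes are rolled back. The dict-backed store, the `(key, value)` operation format, and the failure condition are illustrative assumptions.

```python
def execute_all_or_nothing(store, ops):
    """Storage-end agent sketch: apply every (key, value) write; on any
    failure undo the completed writes in reverse order, so the CPU-side
    agent only ever sees all operations completed or none."""
    undo = []
    try:
        for key, value in ops:
            if not isinstance(key, str):
                raise ValueError("malformed storage operation")
            undo.append((key, key in store, store.get(key)))
            store[key] = value
    except ValueError:
        for key, existed, old in reversed(undo):   # roll back
            if existed:
                store[key] = old
            else:
                del store[key]
        return "FAILED"
    return "ALL_COMPLETED"

store = {"a": 1}
ok = execute_all_or_nothing(store, [("a", 2), ("b", 3)])
bad = execute_all_or_nothing(store, [("c", 4), (None, 5)])  # second op fails
```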
20200319916 | SIGNAL PROCESSOR AND SIGNAL PROCESSING METHOD - A signal processor includes a memory storing instructions and a processor that implements the stored instructions to execute a plurality of tasks, the tasks including a first input task configured to obtain a first audio signal of a first channel, a second input task configured to obtain a second audio signal of a second channel, a first signal processing task configured to perform a first signal processing on the input first audio signal, a second signal processing task configured to perform a second signal processing on the input second audio signal, and a control task configured to, when the second input task does not obtain the second audio signal, cause the second signal processing task to perform the second signal processing on the input first audio signal having undergone the first signal processing by the first signal processing task. | 2020-10-08 |
20200319917 | SYSTEMS, METHODS, AND APPARATUSES FOR PROCESSING ROUTINE INTERRUPTION REQUESTS - Methods, apparatus, systems, and computer-readable media are provided for allowing an automated assistant routine to be interrupted during performance of the routine. A routine can correspond to a set of actions to be performed at the direction of the automated assistant. When the routine is initialized and a user subsequently issues a command to interrupt the routine, the automated assistant can modify a status identifier for the routine. That status identifier can be stored at a database and allow other applications and/or devices that are operating to complete the routine to be put on notice that the user has requested the routine be interrupted. The database can be accessible to one or more devices and/or applications, such as third party applications, in order to provide a medium through which the devices and/or applications can check the statuses of various automated assistant routines. | 2020-10-08 |
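The status-identifier mechanism can be sketched as a shared store that cooperating devices poll between actions. The class, the routine name, and the action list are hypothetical; the point illustrated is only that interruption is communicated through a stored status rather than a direct signal.

```python
class RoutineStatusStore:
    """Database-like store of routine status identifiers that devices
    and applications completing a routine can poll."""
    def __init__(self):
        self._status = {}

    def start(self, routine_id):
        self._status[routine_id] = "RUNNING"

    def interrupt(self, routine_id):
        self._status[routine_id] = "INTERRUPTED"

    def should_continue(self, routine_id):
        return self._status.get(routine_id) == "RUNNING"

store = RoutineStatusStore()
store.start("good_morning")
performed = []
for action in ["lights_on", "read_news", "start_coffee"]:
    if not store.should_continue("good_morning"):   # check before each action
        break
    performed.append(action)
    if action == "lights_on":
        store.interrupt("good_morning")   # user asks to stop mid-routine
```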
20200319918 | Transferral Of Process State And/Or Components In Computing Environments - This technology relates to transferring state information between processes or active software programs in a computing environment, where a new instance of a process or software program may receive such state information even after an original or old instance of the process or software program that owned the state information has terminated either naturally or unnaturally. | 2020-10-08 |
20200319919 | SYSTEMS AND METHODS FOR SCHEDULING NEURAL NETWORKS BY VARYING BATCH SIZES - The present disclosure relates to computer-implemented systems and methods for scheduling a neural network for execution. In one implementation, a system for scheduling a neural network for execution may include at least one memory storing instructions and at least one processor configured to execute the instructions to determine a profile for one or more applications co-scheduled with at least one neural network; determine a batch size for the at least one neural network based on the determined profile for the one or more applications; and scheduling the one or more applications and the at least one neural network based on the batch size. | 2020-10-08 |
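One way to picture deriving a batch size from a co-scheduled application's profile is shown below. The headroom-based policy, the profile field, and the maximum batch size are assumptions; the application only says the batch size is determined based on the profile.

```python
def choose_batch_size(profile, max_batch=32):
    """Illustrative policy: give the neural network a batch size
    proportional to the GPU headroom left by the co-scheduled
    applications, as reported in their profile."""
    headroom = max(0.0, 1.0 - profile["gpu_utilization"])
    return max(1, int(max_batch * headroom))

batch = choose_batch_size({"gpu_utilization": 0.75})   # 25% headroom left
```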
20200319920 | METHOD AND SYSTEM FOR MIGRATING XML SCHEMAS IN APPLICATION RELEASES - A method and system for migrating Extensible Markup Language (XML) schemas between releases of a computing application. The method provides first and second versions of an XML document by the computing application, each version having a different schema. The first version is migrated to the second version using a migration step. The method uses a Dependency injection Framework to abstract the characteristics of the at least one migration step. The method also transforms the first schema to the second schema, based on the abstracted characteristics of the at least one migration step, in such a way that the first version of the XML document is migrated into the second version of the XML document. The method migrates the first version into the second version in such a way that the second version can access application data from the first version. | 2020-10-08 |
20200319921 | Asynchronous Kernel - In an embodiment, an operating system for a computer system includes a kernel that assigns code sequences to execute on various processors. The kernel itself may execute on a processor as well. Specifically, in one embodiment, the kernel may execute on a processor with a relatively low instructions per clock (IPC) design. At least a portion of other processors in the system may have higher IPC designs, and processors with higher IPC designs may be used to execute some of the code sequences. A given code sequence executing on a processor may queue multiple messages to other code sequences, which the kernel may asynchronously read and schedule the targeted code sequences for execution in response to the messages. Rather than synchronously preparing a message and making a call to send the message, the executing code sequences may continue executing and queuing messages until the code has completed or is in need of a result from one of the messages. | 2020-10-08 |
20200319922 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR PROCESSING TASK - Embodiments of the present disclosure provide a method, an electronic device and a computer program product for processing a task. The method comprises: obtaining a first group of processing results generated from processing, by a first group of processing resources of a first device, a first group of sub-tasks in the task; performing a first AllReduce operation on the first group of processing results to obtain a first AllReduce result; obtaining a second AllReduce result from a second device, the second AllReduce result being obtained by performing a second AllReduce operation on a second group of processing results generated from processing, by a second group of processing resources of the second device, a second group of sub-tasks in the task; and performing a third AllReduce operation on the first AllReduce result and the second AllReduce result to obtain a processing result of the task. | 2020-10-08 |
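The three-level AllReduce structure reduces to a short sketch when the reduction operator is a sum: reduce each device's group of sub-task results locally, then reduce the per-device partials across devices. Sum as the operator and the flat-list data shapes are assumptions.

```python
def hierarchical_allreduce(device_groups):
    """First and second AllReduce: reduce within each device's group of
    processing results; third AllReduce: reduce the per-device partial
    results across devices (sum used as the reduction operator)."""
    partials = [sum(group) for group in device_groups]
    return sum(partials)

device_a = [1, 2, 3]   # first group of sub-task results
device_b = [4, 5]      # second group of sub-task results
result = hierarchical_allreduce([device_a, device_b])
```

The benefit of the hierarchy is that only one partial per device, not every raw result, crosses the inter-device link.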
20200319923 | HIERARCHAL REGISTER FILE DEVICE BASED ON SPIN TRANSFER TORQUE-RANDOM ACCESS MEMORY - The embodiments provide a register file device which increases energy efficiency using a spin transfer torque-random access memory for a register file used to compute a general purpose graphic processing device, and hierarchically uses a register cache and a buffer together with the spin transfer torque-random access memory, to minimize leakage current, reduce a write operation power, and solve the write delay. | 2020-10-08 |
20200319924 | SCHEDULING TASKS USING WORK FULLNESS COUNTER - A method of activating scheduling instructions within a parallel processing unit includes checking if an ALU targeted by a decoded instruction is full by checking a value of an ALU work fullness counter stored in the instruction controller and associated with the targeted ALU. If the targeted ALU is not full, the decoded instruction is sent to the targeted ALU for execution and the ALU work fullness counter associated with the targeted ALU is updated. If, however, the targeted ALU is full, a scheduler is triggered to de-activate the scheduled task by changing the scheduled task from the active state to a non-active state. When an ALU changes from being full to not being full, the scheduler is triggered to re-activate an oldest scheduled task waiting for the ALU by removing the oldest scheduled task from the non-active state. | 2020-10-08 |
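The counter-driven activate/de-activate cycle can be modeled with per-ALU counters and FIFO wait lists. The class shape, the capacity value, and the task names are illustrative; the state machine mirrors the three cases the abstract describes.

```python
class ALUScheduler:
    """Per-ALU work fullness counters: dispatch while below capacity,
    de-activate tasks targeting a full ALU, and re-activate the oldest
    waiting task when the ALU stops being full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.fullness = {}   # ALU -> work fullness counter
        self.waiting = {}    # ALU -> FIFO of de-activated tasks

    def dispatch(self, task, alu):
        if self.fullness.get(alu, 0) >= self.capacity:
            self.waiting.setdefault(alu, []).append(task)   # de-activate
            return False
        self.fullness[alu] = self.fullness.get(alu, 0) + 1  # update counter
        return True

    def complete(self, alu):
        self.fullness[alu] -= 1
        waiters = self.waiting.get(alu, [])
        return waiters.pop(0) if waiters else None   # re-activate oldest

sched = ALUScheduler(capacity=2)
sent = [sched.dispatch(t, "alu0") for t in ("t1", "t2", "t3")]
reactivated = sched.complete("alu0")   # alu0 drains; oldest waiter returns
```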
20200319925 | TECHNOLOGIES FOR IMPLEMENTING CONSOLIDATED DEVICE INFRASTRUCTURE SYSTEMS - Apparatuses, methods and storage media associated with a consolidate device infrastructure to provide rapid device service are disclosed herein. In embodiments, a system comprises a portal to provide a presentation tier of services; a business logic layer to provide a logic tier of services; and a plurality of data management servers remotely and separately disposed in a plurality of locations to provide a data tier of services and a hardware tier of services. The presentation tier of services, logic tier of services, the data tier of services, and the hardware tier of services may cooperate to selectively provide a subset of a plurality of resources associated with the data management servers for use, in response to a device resource request received through the portal. Other embodiments may be described and/or claimed. | 2020-10-08 |
20200319926 | SYSTEM ON CHIP COMPRISING A PLURALITY OF MASTER RESOURCES - This system on chip comprises a plurality of master resources, a plurality of slave resources, a plurality of arbitration levels, each arbitration level being able to control the access of at least one master resource to at least one slave resource, each master resource being able to send requests to at least one slave resource according to a bandwidth associated with this slave resource and this master resource. | 2020-10-08 |
20200319927 | METHOD AND DEVICE FOR VIRTUAL RESOURCE ALLOCATION, MODELING, AND DATA PREDICTION - Evaluation results of a plurality of users are received from a plurality of data providers. The evaluation results are obtained by the plurality of data providers evaluating the plurality of users based on evaluation models of the plurality of data providers. A plurality of training samples is constructed by using the evaluation results. Each training sample includes a respective subset of the evaluation results corresponding to a same user of the plurality of users. A label for each training sample is generated based on an actual service execution status of the same user. A model is trained based on the plurality of training samples and the plurality of labels, including setting a plurality of variable coefficients, each variable coefficient specifying a contribution level of a corresponding data provider. Virtual resources to each data provider are allocated based on the plurality of variable coefficients. | 2020-10-08 |
20200319928 | HIGH-PERFORMANCE MEMORY ALLOCATOR - A system and method of allocating memory to a thread of a multi-threaded program are disclosed. A method includes determining one or more thread-local blocks of memory that are available for the thread, and generating a count of the available one or more thread-local blocks for a thread-local freelist. If a thread-local block is available, allocating one block of the one or more thread-local blocks to the thread and decrementing the count in the thread-local freelist. When the count is zero, accessing a global freelist of available blocks of memory to determine a set of available blocks represented by the global freelist. Then, the set of available blocks are allocated from the global freelist to the thread-local freelist by copying one or more free block pointers of the global freelist to a thread-local state of the thread. Blocks can also be deallocated. | 2020-10-08 |
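The two-level freelist can be sketched sequentially (the real design is per-thread and lock-sensitive; Python lists stand in for the free-block pointer arrays, and the batch size is an assumption):

```python
class Allocator:
    """Thread-local freelist backed by a global freelist: allocation
    pops from the local list; when the local count reaches zero, a
    batch of free-block pointers is copied from the global freelist
    into the thread-local state."""
    BATCH = 4

    def __init__(self, global_blocks):
        self.global_free = list(global_blocks)
        self.local_free = []   # per-thread state in the real design

    def alloc(self):
        if not self.local_free:   # thread-local count is zero
            batch, self.global_free = (self.global_free[:self.BATCH],
                                       self.global_free[self.BATCH:])
            self.local_free.extend(batch)   # copy pointers to local state
        return self.local_free.pop()

    def free(self, block):
        self.local_free.append(block)   # deallocation returns to local list

a = Allocator(range(8))
first = a.alloc()   # refills local list with blocks 0..3, pops 3
```

The fast path (local pop and count decrement) involves no shared state, which is what makes the scheme high-performance under many threads.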
20200319929 | ASSOCIATIVE REGISTRY - A system and method of registering one or more objects in a container of a multi-threaded computing system. A method includes prefixing, to each object of the one or more objects, an object header having a version counter with an initial version count of zero. The method further includes for each object to be allocated to a thread of the multi-threaded computing system, allocating an object frame associated with each allocated object to the thread while maintaining the object header. The method further includes constructing each allocated object in the object frame after the object header, and initializing the object header of each allocated object by executing a store/store memory barrier and incrementing the version counter by a count of one to mark the associated allocated object as valid. | 2020-10-08 |
20200319930 | Client-Side Memory Management in Component-Driven Console Applications - Embodiments regard client-side memory management in component-driven console applications. An embodiment of one or more storage mediums includes instructions for performing processing of a console application on an apparatus, including downloading records from a server for a set of one or more of multiple workspaces and opening the set of workspaces in response to a request by a user, and switching an active workspace from a first workspace to a second workspace of the multiple workspaces in response to a request from the user; monitoring memory usage for the multiple workspaces and monitoring a state of the console application; and managing the memory allocation for the console application based at least in part on the monitored memory usage and console application state. | 2020-10-08 |
20200319931 | RESOURCE ALLOCATION - An apparatus comprising means for: | 2020-10-08 |
20200319932 | WORKLOAD AUTOMATION AND DATA LINEAGE ANALYSIS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for workload automation and job scheduling information. One of the methods includes obtaining job dependency information, the job dependency information specifying an order of execution of a plurality of jobs. The method also includes obtaining data lineage information that identifies dependency relationships between data stores and transformation, wherein at least one transformation accepts data from a first data store and produces data for a second data store. The method also includes creating links between the job dependency information and the data lineage information. The method also includes determining an impact of a change in a planned execution of an application of the plurality of applications based on the job dependency information, the created links, and the data lineage information. | 2020-10-08 |
20200319933 | SYSTEM AND METHOD OF IDENTIFYING EQUIVALENTS FOR TASK COMPLETION - A system is provided for determining equivalence to execute a task. The system includes an identity module that obtains a unique identity for each of a plurality of resources, and a metadata collection module that collects metadata information relating to the plurality of resources based on the obtained unique identifier for each resource, and that stores the collected metadata information in a metadata database, with the metadata information relating to capabilities of the respective resource for executing the task. Moreover, the system includes an equivalence processor that determines a set of resources of the plurality of resources that are configured to execute the task defined by a requesting client device in an equivalent manner based on the collected metadata information of the at least one set of resources. | 2020-10-08 |
20200319934 | SYSTEM ARCHITECTURE AND METHODS OF EXPENDING COMPUTATIONAL RESOURCES - Various embodiments of the present application relate to a resource management platform that monitors and controls the computational tasks dynamically, and improves or adapts the control during runtime. The resource management platform is able to enhance the resource usage; depending on the width of resource usage fluctuations of the original, unmanaged computational code, the performance enhancement can reach factors exceeding 3×. | 2020-10-08 |
20200319935 | SYSTEM AND METHOD FOR AUTOMATICALLY SCALING A CLUSTER BASED ON METRICS BEING MONITORED - In accordance with an embodiment, described herein is a system and method for use in a distributed computing environment, for automatically scaling a cluster based on metrics being monitored. A cluster that comprises a plurality of nodes or brokers and supports one or more colocated partitions across the nodes, can be associated with an exporter process and alert manager that monitors metrics associated with the cluster. Various metrics can be associated with user-configured alerts that trigger or otherwise indicate the cluster should be scaled. When a particular alert is raised, a callback handler associated with the cluster, for example an operator, can automatically bring up one or more new nodes, that are added to the cluster, and then reassign a selection of existing colocated partitions to the new nodes/brokers, such that computational load can be distributed within the newly-scaled cluster environment. | 2020-10-08 |
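The callback handler's rebalance step — add brokers, then reassign partitions so load spreads over the newly-scaled cluster — can be sketched as below. The round-robin reassignment and all broker/partition names are assumptions; a real operator would move only a selection of partitions to limit data movement.

```python
class ClusterOperator:
    """Callback handler sketch: when an alert fires, fold new brokers
    into the cluster and reassign partitions across all brokers."""
    def __init__(self, assignment):
        self.assignment = dict(assignment)   # partition -> broker

    def on_alert(self, new_brokers):
        brokers = sorted(set(self.assignment.values()) | set(new_brokers))
        for i, partition in enumerate(sorted(self.assignment)):
            self.assignment[partition] = brokers[i % len(brokers)]
        return self.assignment

op = ClusterOperator({"p0": "b0", "p1": "b0", "p2": "b1", "p3": "b1"})
rebalanced = op.on_alert(["b2"])   # alert triggered; b2 joins the cluster
```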
20200319936 | DISTRIBUTED PROCESSING MANAGEMENT APPARATUS, DISTRIBUTED PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A distributed processing management apparatus | 2020-10-08 |
20200319937 | DISTRIBUTED PROCESSING QOS ALGORITHM FOR SYSTEM PERFORMANCE OPTIMIZATION UNDER THERMAL CONSTRAINTS - Methods and apparatus for a distributed processing quality of service algorithm for system performance optimization under thermal constraints are disclosed. An example method includes transmitting, at a first time, a first kernel assignment to a system on chip, the first kernel assignment including an indication of a plurality of kernels assigned to a first sub-system of the system on chip, determining, at the first time, a temperature associated with hardware of the system on chip, when the temperature is above a threshold temperature, generating a second kernel assignment including an indication of a first subset of the plurality of kernels assigned to the first sub-system and an indication of a second subset of the plurality of kernels assigned to a second sub-system of the system on chip, and transmitting, at a second time later than the first time, the second kernel assignment to the system on chip. | 2020-10-08 |
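The temperature-gated reassignment can be sketched directly from the abstract: below the threshold all kernels stay on one sub-system; above it, the kernel list is split into two subsets across sub-systems. The even split and the kernel names are illustrative.

```python
def assign_kernels(kernels, temperature, threshold):
    """Keep the full kernel assignment on sub-system 0 while the SoC is
    cool; above the threshold temperature, split the kernels into two
    subsets across sub-systems 0 and 1."""
    if temperature <= threshold:
        return {0: list(kernels), 1: []}
    half = len(kernels) // 2
    return {0: list(kernels[:half]), 1: list(kernels[half:])}

kernels = ["conv", "pool", "fc", "softmax"]
cool = assign_kernels(kernels, temperature=55.0, threshold=70.0)
hot = assign_kernels(kernels, temperature=82.0, threshold=70.0)
```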
20200319938 | AUTOSCALING OF DATA PROCESSING COMPUTING SYSTEMS BASED ON PREDICTIVE QUEUE LENGTH - To enhance the scaling of data processing systems in a computing environment, a number of data objects indicated in an allocation queue and a first attribute of the allocation queue are determined, where the allocation queue is accessible to a plurality of data processing systems. A number of data objects indicated in the allocation queue at a subsequent time is predicted based on the determined number of data objects and the first attribute. It is determined whether the active subset of the plurality of data processing systems satisfies a criterion for quantity adjustment based, at least in part, on the predicted number of data objects indicated in the allocation queue and a processing time goal. Based on determining that the active subset of data processing systems satisfies the criterion for quantity adjustment, a quantity of the active subset of data processing systems is adjusted. | 2020-10-08 |
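The predictive step and the quantity-adjustment criterion can be sketched in two small functions. Linear extrapolation from an arrival rate, and ceiling division against a per-worker drain rate, are illustrative choices; the filing does not pin down the prediction model.

```python
def predict_queue_length(current, arrival_rate, horizon):
    """Extrapolate the number of data objects in the allocation queue
    at a subsequent time from its current length and an observed
    arrival rate (standing in for the queue's first attribute)."""
    return current + arrival_rate * horizon

def required_workers(predicted, per_worker_rate, time_goal):
    """Smallest active subset of data processing systems able to drain
    the predicted backlog within the processing time goal."""
    capacity = per_worker_rate * time_goal   # objects one worker handles
    return max(1, -(-predicted // capacity))  # ceiling division

predicted = predict_queue_length(current=120, arrival_rate=4, horizon=30)
workers = required_workers(predicted, per_worker_rate=2, time_goal=60)
```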
20200319939 | DISTRIBUTED SYSTEM FOR DISTRIBUTED LOCK MANAGEMENT AND METHOD FOR OPERATING THE SAME - Disclosed herein are a distributed system and a method for operating the distributed system. The method for operating a distributed system including a server and multiple clients includes acquiring, by a first client of the multiple clients, a lock on a shared resource using a first table of the server and a second table of the client, and releasing, by the first client, a lock on the shared resource using the first table and the second table, wherein the first table is a lock (DSLock) table for storing information about a distributed shared resource, and the second table is a data structure (DSLock node) table for a lock request. | 2020-10-08 |
20200319940 | MANAGEMENT OF DYNAMIC SHARING OF CENTRAL PROCESSING UNITS - A resource sharing manager, RSM, is disclosed, operative to provide efficient utilization of central processing units, CPUs, within virtual servers. The RSM dynamically obtains | 2020-10-08 |
20200319941 | Producer-Consumer Communication Using Multi-Work Consumers - A producer-consumer technique includes creating a pool of consumer threads. Producer threads can enqueue work items on a work queue. Consumer threads from the consumer pool are activated to process work items on the work queue. Only one consumer thread at a time is activated from the consumer pool, the remaining consumer threads in the pool waiting for an activation event. When signaled by a producer thread, the activated consumer thread pops all the work items from the work queue for processing. The activated consumer thread then signals another consumer thread in the consumer pool by generating an activation event. When the consumer thread has processed its work items, it places itself in the consumer pool by blocking to wait for an activation event. | 2020-10-08 |
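A runnable sketch of the multi-work-consumer pattern, with a semaphore standing in for the activation event (an assumption; the filing does not name a primitive). Each activated consumer pops the entire queue, and another waiter can then be activated while it processes its batch.

```python
import threading
import time

class ConsumerPool:
    """Pool of consumer threads blocked on an activation event; the
    activated consumer drains ALL queued work items, freeing the next
    waiter to activate, then processes its batch."""
    def __init__(self, size, handler):
        self.items = []
        self.lock = threading.Lock()
        self.activation = threading.Semaphore(0)   # the activation event
        self.handler = handler
        self.stopping = False
        self.threads = [threading.Thread(target=self._consume)
                        for _ in range(size)]
        for t in self.threads:
            t.start()

    def produce(self, item):
        with self.lock:
            self.items.append(item)
        self.activation.release()   # signal one waiting consumer

    def _consume(self):
        while True:
            self.activation.acquire()   # block for an activation event
            if self.stopping:
                return
            with self.lock:
                batch, self.items = self.items, []   # pop all work items
            for item in batch:
                self.handler(item)

    def shutdown(self):
        self.stopping = True
        for _ in self.threads:
            self.activation.release()
        for t in self.threads:
            t.join()

processed, plock = [], threading.Lock()
def handler(item):
    with plock:
        processed.append(item)

pool = ConsumerPool(3, handler)
for i in range(5):
    pool.produce(i)
deadline = time.time() + 5.0
while len(processed) < 5 and time.time() < deadline:   # wait for drain
    time.sleep(0.001)
pool.shutdown()
```

Because every item is appended under the lock before its activation is signaled, no item can be lost: whichever consumer takes that signal drains a queue that already contains the item.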
20200319942 | INFRASTRUCTURE BASE MODEL API - Embodiments of apparatus, systems, and methods are described for creating and managing an abstract, API-based infrastructure base model. The API-based model can abstract infrastructure assets, such as infrastructure components or connections between components, into a metadata model using standardized syntax and interfaces, for defining and building an infrastructure. Using a modeling document, connections and components of an infrastructure can be abstracted into an API-based model having semantics that covers them all. Connections and infrastructure components can be made available for selection, arrangement, and grouping to build complex infrastructure models without requiring complex API coding by the user. Other infrastructure models having different API definitions can be abstracted to standardize the assets for building new APIs. The APIs can be further modified and exported to another or the same implementation project. | 2020-10-08 |
20200319943 | AUTO-SAVING DATA FOR SINGLE PAGE APPLICATION - One or more resources of a first computer device to monitor are determined based on a single page application of a second computer device. A value for each resource of the one or more resources is determined. A save command is issued to the single page application based on the determined value of each resource of the one or more resources. | 2020-10-08 |
20200319944 | USER-SPACE PARALLEL ACCESS CHANNEL FOR TRADITIONAL FILESYSTEM USING CAPI TECHNOLOGY - A user process directly accesses a file in a file system. The user process first opens a file in the file system for access. In the process of opening the file, a file handle for the file is returned to the user process by an operating system kernel. The user process then makes a read request to a special function unit for one or more blocks of the file in the file system using the file handle. In response, the special function unit, which is coupled to the processor, bypasses the operating system kernel and returns the requested data directly to the user process. A write by the user process may be refused by the computer system or allowed on a selective basis based on a flag in a file system inode corresponding to the block. | 2020-10-08 |
20200319945 | EXTENSIBLE COMMAND PATTERN - Systems and methods for implementing a command stack for an application are disclosed and an embodiment includes receiving an input for executing a first command of the application, initiating execution of the first command, executing one or more second commands which are set to execute based on execution of the first command, completing execution of the first command, and including the first command in the command stack such that an association is defined between the first command and the one or more second commands. In one embodiment, defining the association in the command stack between the first command and the one or more second commands may include generating a first nested command stack associated with the first command, including the one or more second commands in the first nested command stack, and including the first command and the first nested command stack in the command stack. | 2020-10-08 |
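The nested-command-stack association described above can be illustrated with a small undo stack: each executed command carries a nested stack holding the second commands it triggered, and undo unwinds the nested stack before the command itself. This is a hypothetical sketch of the pattern; the class and method names are assumptions, not the patent's API.

```python
class SetValue:
    """Sample undoable command: set store[key] = value, remembering
    the previous value for undo."""
    def __init__(self, store, key, value):
        self.store, self.key, self.value = store, key, value
        self.prev = None

    def do(self):
        self.prev = self.store.get(self.key)
        self.store[self.key] = self.value

    def undo(self):
        if self.prev is None:
            del self.store[self.key]
        else:
            self.store[self.key] = self.prev


class CommandStack:
    """Undo stack where each entry may carry a nested stack of the
    second commands triggered by it (illustrative sketch)."""
    def __init__(self):
        self._stack = []

    def execute(self, command, triggered=()):
        command.do()
        nested = CommandStack()
        for second in triggered:          # commands set to run with this one
            second.do()
            nested._stack.append((second, None))
        self._stack.append((command, nested))

    def undo(self):
        command, nested = self._stack.pop()
        if nested is not None:
            while nested._stack:          # undo triggered commands first,
                second, _ = nested._stack.pop()   # in reverse order
                second.undo()
        command.undo()
```

Undoing the first command then transparently undoes every command it triggered, which is the point of storing the association in the stack.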
20200319946 | Processing System and Program Thereof - In a processing system including a data distribution server and multiple processing servers, the data distribution server transfers data to any one of the multiple processing servers, and the processing server includes a server determination information storage unit in which load information on each of the multiple processing servers is stored and an application specific transfer determination information storage unit in which a condition to be transferred peculiar to an application is stored. A processing server in question generates a message upon determining to have received data including information identifying the application, identifies the application upon determining to have received the message, selects any processing server based on the information stored in the server determination information storage unit and the application specific transfer determination information storage unit, and executes the application or transfers the message to the selected processing server according to the selected processing server. | 2020-10-08 |
20200319947 | API AND STREAMING SOLUTION FOR DOCUMENTING DATA LINEAGE - A system for tracing data lineage includes a non-transitory computer readable medium and a processor. The processor is configured to execute an application programming interface (API). The processor executes a first instance of the API to document a first data lineage in a first data transformation process. The processor executes a second instance of the API to document a second data lineage in a second data transformation process. The processor sends the first data lineage and the second data lineage for storage in the non-transitory computer readable medium. | 2020-10-08 |
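The multiple-API-instances-writing-lineage idea above can be sketched with a shared record store: each transformation process documents its own (source, operation, target) step, and lineage is traced by walking records backwards from a target. The store, record shape, and `trace` helper are assumptions for illustration, not the patent's interface.

```python
class LineageAPI:
    """Each API instance documents (source, operation, target) records
    for one data transformation process (hypothetical sketch)."""
    def __init__(self, store):
        self.store = store        # stands in for the shared storage medium

    def record(self, source, operation, target):
        self.store.append({'source': source, 'op': operation, 'target': target})


def trace(store, target):
    """Walk lineage records backwards from `target` to its origin."""
    chain, current = [], target
    while True:
        rec = next((r for r in store if r['target'] == current), None)
        if rec is None:
            return chain
        chain.append(rec)
        current = rec['source']
```

Two API instances writing to the same store lets downstream tooling answer "where did this dataset come from?" without either process knowing about the other.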
20200319948 | DYNAMIC DISTRIBUTION OF MEMORY FOR VIRTUAL MACHINE SYSTEMS - Dynamic distribution of memory, including identifying memory modules; creating a system physical address (SPA) of the memory modules; assigning, for each virtual machine (VM), a respective section of the SPA to the VM; calculating, for each VM, portions of the respective section of the SPA for the VM that is being used by the VM and that is not being used by the VM; identifying a physical failure of a particular memory module; in response to identifying the physical failure: identifying a particular VM assigned to the section of the SPA associated with the particular memory module that has physically failed; accumulating, for each other VM, the unused portions of the respective SPA for the VM; marking, for each other VM, the unused portion of the SPA for the VM as read-only for the VM; and reassigning a portion of the unused portions of the SPA to the particular VM. | 2020-10-08 |
20200319949 | AUTOMOTIVE ELECTRONIC CONTROL UNIT RELIABILITY AND SAFETY DURING POWER STANDBY MODE - Disclosed are devices and methods for improved automotive electronic control unit reliability and safety during power standby mode. In one embodiment, a method is disclosed comprising periodically recording memory statistics of a dynamic random-access memory in a device, while the device is in a power on state; detecting a command to enter a standby state; analyzing the memory statistics to determine whether a health check should be performed; powering down the device when determining that a health check should be performed; and placing the device in standby mode when determining that a health check should not be performed. | 2020-10-08 |
20200319950 | Method and Apparatus for Predictive Failure Handling of Interleaved Dual In-Line Memory Modules - An information handling system includes interleaved dual in-line memory modules (DIMMs) that are partitioned into logical partitions, wherein each logical partition is associated with a namespace. A DIMM controller sets a custom DIMM-level namespace-based threshold to detect a DIMM error and to identify one of the logical partitions of the DIMM error using the namespace associated with the logical partition. The detected DIMM error is repaired if it exceeds an error correcting code (ECC) threshold. | 2020-10-08 |
20200319951 | Tuning Context-Aware Rule Engine for Anomaly Detection - The technology disclosed relates to building ensemble analytic rules for reusable operators and tuning an operations monitoring system. In particular, it relates to analyzing a metric stream by applying an ensemble analytical rule. After analysis of the metric stream by applying the ensemble analytical rule, quantized results are fed back for expert analysis. Then, one or more type I or type II errors are identified in the quantized results, and one or more of the parameters of the operators are automatically adjusted to correct the identified errors. The metric stream is further analyzed by applying the ensemble analytical rule with the automatically adjusted parameters. | 2020-10-08 |
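The feedback loop above (classify quantized results into type I / type II errors, then nudge operator parameters) can be illustrated with a single threshold parameter: false alarms push the anomaly threshold up, misses pull it down. This is a deliberately simplified sketch of the tuning direction only; the real ensemble rules and adjustment policy are not specified by the abstract.

```python
def tune_threshold(threshold, type1_errors, type2_errors, step=0.1):
    """Adjust an anomaly threshold from expert-labelled errors:
    type I (false alarms) raise it, type II (misses) lower it.
    (Illustrative rule; the patent's adjustment logic is unspecified.)"""
    if type1_errors > type2_errors:
        return threshold * (1 + step)
    if type2_errors > type1_errors:
        return threshold * (1 - step)
    return threshold
```

Re-applying the ensemble rule with the adjusted parameter and feeding results back again closes the tuning loop.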
20200319952 | CLOCK FRACTIONAL DIVIDER MODULE, IMAGE AND/OR VIDEO PROCESSING MODULE, AND APPARATUS - A clock fractional divider module which is formed as, comprises or has integrated therein a dual-core lock step unit. The dual-core lock step unit is configured in order to realize a clock fractional division arrangement, mechanism or process accompanied by an error detection, recognition and/or correction arrangement, mechanism or process. | 2020-10-08 |
20200319953 | NON-VOLATILE MEMORY DEVICE, METHOD OF OPERATING THE DEVICE, AND MEMORY SYSTEM INCLUDING THE DEVICE - A non-volatile memory device, a method of operating the non-volatile memory device, and a memory system including the non-volatile memory device are provided. A non-volatile memory device includes a memory cell array including a plurality of memory cells configured to be each programmed to one state of a plurality of states, a page buffer circuit including a plurality of page buffers configured to each store received data as state data indicating a target state of a corresponding one of the plurality of memory cells, the page buffer circuit being configured to perform a state data reordering operation of changing a first state data order into a second state data order during performance of a program operation on selected memory cells of the plurality of memory cells, and a reordering control circuit configured to control the page buffer circuit to perform the state data reordering operation simultaneously with the program operation. | 2020-10-08 |
20200319954 | WEBPAGE LOADING METHOD, WEBPAGE LOADING SYSTEM AND SERVER - A webpage loading method for application to an edge server in a content delivery network, includes: after enabling a front end optimization function, obtaining an original loading list of a to-be-loaded page according to a page loading request from a user terminal; optimizing the original loading list to obtain an optimized loading list; sending the optimized loading list to the user terminal; determining whether there is an error in loading the to-be-loaded page on the user terminal; and if it is determined that there is a loading error, sending the original loading list to the user terminal. | 2020-10-08 |
20200319955 | INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING THE SAME - An information processing apparatus includes a nonvolatile memory device in which a program for activating the apparatus is stored, and which has a function of performing a process of restoring management information indicating a correspondence relationship between a logical address and a physical address for data stored in the memory device, in a case where an abnormality of the management information is detected at a time of activation of the memory device. In a case where activation of the apparatus based on the program stops part way through, the apparatus is reactivated. Different processes for solving a malfunction in which activation of the apparatus stops part way through are executed based on whether or not an abnormality of the management information is detected in the memory device after the reactivation. | 2020-10-08 |
20200319956 | REMEDIAL ACTION BASED ON MAINTAINING PROCESS AWARENESS IN DATA STORAGE MANAGEMENT - An illustrative data storage management system comprises “awareness logic” that executes on computing devices hosting storage management components such as storage manager, data agent, media agent, and/or other storage management applications. The illustrative awareness logic operates within each of these illustrative components, e.g., as a thread within processes of the storage management component, such as storage management core process, file identifier process, log monitoring process, etc. The awareness logic monitors the targeted process over time and triggers remedial action when criteria are met. Certain vital statistics of each process are collected periodically and analyzed by the illustrative awareness logic, such as CPU usage, memory usage, and handle counts. Criteria for corrective action include rising trends based on local minima data points for one or more vital statistics of the process. Other criteria include exceeding a threshold based on a logarithm function of the collected data points. | 2020-10-08 |
20200319957 | MULTICHIP PACKAGE LINK ERROR DETECTION - First data is received on a plurality of data lanes of a physical link and a stream signal corresponding to the first data is received on a stream lane identifying a type of the first data. A first instance of an error detection code of a particular type is identified in the first data. Second data is received on at least a portion of the plurality of data lanes and a stream signal corresponding to the second data is received on the stream lane identifying a type of the second data. A second instance of the error detection code of the particular type is identified in the second data. The stream lane is another one of the lanes of the physical link and, in some instance, the type of the second data is different from the type of the first data. | 2020-10-08 |
20200319958 | DETECT AND TRIAGE DATA INTEGRITY ISSUE FOR VIRTUAL MACHINE - One example method includes receiving an IO request that specifies an operation to be performed concerning a data block, determining if a policy exists for a device that made the IO request, when a policy is determined to exist for the device, comparing the IO request to the policy, recording the IO request, and passing the IO request to a disk driver regardless of whether the IO request is determined to violate the policy or not. | 2020-10-08 |
20200319959 | SUPPORTING RANDOM ACCESS OF COMPRESSED DATA - A processing device comprising compression circuitry to: determine a compression configuration to compress source data; generate a checksum of the source data in an uncompressed state; compress the source data into at least one block based on the compression configuration, wherein the at least one block comprises: a plurality of sub-blocks, wherein each of the plurality of sub-blocks has a predetermined size; a block header corresponding to the plurality of sub-blocks; and decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry to: while not outputting a decompressed data stream of the source data: generate index information corresponding to the plurality of sub-blocks; in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data. | 2020-10-08 |
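The sub-block-plus-index scheme above is easy to demonstrate in software: compress fixed-size sub-blocks independently, record an index of compressed offsets, and keep a checksum of the uncompressed source for verification. A single sub-block can then be decompressed without touching the rest of the stream. This sketch uses `zlib` purely as a stand-in codec; the patent's circuitry and formats are not implied.

```python
import zlib

def compress_blocks(data, sub_size):
    """Compress fixed-size sub-blocks independently; return the packed
    stream, an index of (plain offset, compressed offset, compressed
    size), and a checksum of the uncompressed source."""
    src_checksum = zlib.crc32(data)
    blobs, index, offset = [], [], 0
    for i in range(0, len(data), sub_size):
        blob = zlib.compress(data[i:i + sub_size])
        index.append((i, offset, len(blob)))
        blobs.append(blob)
        offset += len(blob)
    return b''.join(blobs), index, src_checksum

def read_sub_block(compressed, index, n):
    """Random access: decompress only the n-th sub-block."""
    _, off, size = index[n]
    return zlib.decompress(compressed[off:off + size])
```

Because each sub-block is a self-contained compressed stream, reading sub-block `n` costs one small decompression rather than a full-stream decode.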
20200319960 | SEMICONDUCTOR MEMORY DEVICE, AND MEMORY SYSTEM HAVING THE SAME - A semiconductor memory device and a memory system including the same are provided. The semiconductor memory device includes a memory cell array including memory blocks, a local parity memory block, and a register block. The memory blocks respectively store pieces of partial local data in response to a plurality of column selection signals, or a first partial global parity in response to a global parity column selection signal. The local parity memory block stores local parities of local data in response to the plurality of column selection signals, or a second partial global parity in response to the global parity column selection signal. The register block generates a global parity including the first partial global parities and the second partial global parity. Each piece of local data includes the partial local data, and the global parity is a parity of the pieces of local data and the local parities. | 2020-10-08 |
20200319961 | STORAGE DEVICE AND OPERATING METHOD THEREOF - The memory controller is provided to include: an operation controller configured to control memory devices to read first to third source pages and a source parity page in a source stripe and perform program operations on first to third target pages and a target parity page in a target stripe, a program data determiner configured to determine first to third program data to be programmed in the first to third target pages, to determine data read successfully from the first and second source pages as the first and second program data, and to determine recovery data as the third program data when the read operation for the third source page has failed, and a parity calculator configured to generate calculation data by using the first and second program data, and generate the recovery data by using source parity data and the calculation data. | 2020-10-08 |
20200319962 | MEMORY DEVICE FOR SWAPPING DATA AND OPERATING METHOD THEREOF - An operating method of a memory device, which includes a first memory region and a second memory region, includes reading first data from the first memory region and storing the read first data in a data buffer block, performing a first XOR operation on the first data provided from the data buffer block and second data read from the second memory region to generate first result data, writing the first data stored in the data buffer block in the second memory region, performing a second XOR operation on the first data and the first result data to generate the second data, storing the generated second data in the data buffer block, and writing the second data stored in the data buffer block in the first memory region. | 2020-10-08 |
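The swap sequence above (buffer the first region, XOR it with the second, write the buffer into the second region, then XOR again to reconstruct and write back the original second data) can be traced step by step in a small sketch. The byte-level model below is an assumption for illustration; the patent operates on memory regions inside the device.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def swap_regions(mem, r1, r2, size):
    """Swap two equal-size regions of `mem` (a bytearray) following the
    buffer-plus-XOR sequence from the abstract."""
    buf = bytes(mem[r1:r1 + size])               # 1. read first data into the buffer
    result = xor_bytes(buf, mem[r2:r2 + size])   # 2. first XOR second -> result data
    mem[r2:r2 + size] = buf                      # 3. write first data to second region
    second = xor_bytes(buf, result)              # 4. first XOR result -> original second data
    mem[r1:r1 + size] = second                   # 5. write it to the first region
```

Step 4 works because `first XOR (first XOR second) == second`, so the original second-region data is recovered even though that region was overwritten in step 3.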
20200319963 | Data Storage System for Improving Data Throughput and Decode Capabilities - Systems and methods for storing data are described. A system can comprise a controller, one or more physical non-volatile memory devices, a bus comprising a plurality of input/output (I/O) lines. The controller configured to receive data, encode the received data into a codeword, and transfer, in parallel, different portions of the codeword to different physical non-volatile memory devices among the plurality of physical non-volatile memory devices. | 2020-10-08 |
20200319964 | STORAGE DEVICE AND OPERATING METHOD THEREOF - A memory controller includes: a read operation controller for controlling the plurality of memory devices to perform read operation on a plurality of pages included in one stripe; an over-sampling read voltage determiner for determining over-sampling read voltages, based on soft read data of a selected page among at least two pages, when read operations on the at least two pages among the plurality of pages fail; an error bit recovery for recovering error estimation bits included in read data of the selected page, based on an over-sampling read data of the selected page, which is acquired using the over-sampling read voltages; and an error corrector for performing error correction decoding on conversion data obtained by recovering the error estimation bits included in the read data of the selected page. The plurality of pages included in one stripe is included in different memory devices among the plurality of memory devices. | 2020-10-08 |
20200319965 | HARD AND SOFT BIT DATA FROM SINGLE READ - An apparatus includes memory cells programmed to one of a plurality of data states, wherein the memory cells are configured such that the plurality of data states comprise an error-prone data state. Sense circuitry of the apparatus is configured to sense first memory cells programmed to the error-prone data state, determine a bit encoding for the first memory cells, sense other memory cells programmed to other data states, and determine a bit encoding for the other memory cells. A communication circuit of the apparatus is configured to communicate the bit encoding for the other memory cells, the bit encoding for the first memory cells, and an indication that the first memory cells are programmed to the error-prone data state, in response to a single read command from a controller. | 2020-10-08 |
20200319966 | MEMORY SYSTEM FOR CONTROLLING NONVOLATILE MEMORY - According to one embodiment, a memory system copies content of a first logical-to-physical address translation table corresponding to a first region of a nonvolatile memory to a second logical-to-physical address translation table corresponding to a second region of the nonvolatile memory. When receiving a read request specifying a logical address in the second region, the memory system reads a part of the first data from the first region based on the second logical-to-physical address translation table. The memory system detects a block which satisfies a refresh condition from a first group of blocks allocated to the first region, corrects an error of data of the detected block and writes the corrected data back to the detected block. | 2020-10-08 |
20200319967 | SYMBOL-BASED VARIABLE NODE UPDATES FOR BINARY LDPC CODES - Systems and methods for implementing data protection techniques with symbol-based variable node updates for binary low-density parity-check (LDPC) codes are described. A semiconductor memory (e.g., a NAND flash memory) may read a set of data from a set of memory cells, determine a set of data state probabilities for the set of data based on sensed threshold voltages for the set of memory cells, generate a valid codeword for the set of data using an iterative LDPC decoding with symbol-based variable node updates and the set of data state probabilities, and store the valid codeword within the semiconductor memory or transfer the valid codeword from the semiconductor memory. The iterative LDPC decoding may utilize a message passing algorithm in which outgoing messages from a plurality of multi-variable nodes are generated using incoming messages (e.g., log-likelihood ratios or L-values) from a plurality of check nodes. | 2020-10-08 |
20200319968 | MAINTAINING A CONSISTENT LOGICAL DATA SIZE WITH VARIABLE PROTECTION STRIPE SIZE IN AN ARRAY OF INDEPENDENT DISKS SYSTEM - The described technology is generally directed towards maintaining a consistent logical data size with variable protection stripe size in an array of independent disks system. According to an embodiment, a system can comprise a processor that can execute computer executable components stored in a memory, and storage devices. The components can receive a configuration from another node of the redundant array of independent disks system based on a selected number of logical data blocks to configure disks, and configure, based on the selected number, the storage devices to store data in a number of stripes, wherein the number of logical data blocks maps to the storage devices. The data can be stored in the storage devices, wherein parity information for a stripe of the number of stripes is stored for the stored data, and wherein a logical data block of the number of logical data blocks corresponds to the stored data. | 2020-10-08 |
20200319969 | METHOD FOR CHECKING ADDRESS AND CONTROL SIGNAL INTEGRITY IN FUNCTIONAL SAFETY APPLICATIONS, RELATED PRODUCTS - The present disclosure provides a method for checking a to-be-checked signal and related products. The method is applied in a checking device, which includes: a first obtaining module, configured to obtain a to-be-checked signal carrying first control information, wherein the first control information is generated based on original control information; a second obtaining module, configured to obtain original checking information; a determining module, configured to determine the first control information according to the to-be-checked signal; and a checking module, configured to check correctness of the first control information according to the original checking information. The present disclosure can be used to enable reliability and functional safety on devices originally designed without features intended to support those functions. | 2020-10-08 |
20200319970 | SOFT CHIP-KILL RECOVERY FOR MULTIPLE WORDLINES FAILURE - Techniques are described for memory writes and reads according to a chip-kill scheme that allows recovery of multiple failed wordlines. In an example, when reading data from a superblock of the memory, where the decoding of multiple wordlines failed, a computer system schedules the decoding of failed wordlines based on quantity of bit errors and updates soft information based on convergence or divergence of the scheduled decoding. Such a computer system significantly reduces decoding failures associated with data reads from the memory and allows improved data retention in the memory. | 2020-10-08 |
20200319971 | AUTOMATIC DATA PRESERVATION FOR POTENTIALLY COMPROMISED ENCODED DATA SLICES - A method includes detecting, by a security module of a dispersed storage network (DSN), a potentially compromised encoded data slice (EDS) of a set of EDSs. The potentially compromised EDS is stored in a storage unit of a set of storage units of the DSN. The method further includes monitoring other storage units of the set of storage units to detect one or more other potentially compromised EDSs of the set of EDSs. When the one or more other potentially compromised EDSs are detected, the method includes determining a data compromise threat level based on the potentially compromised EDSs and the one or more other potentially compromised EDSs and enabling an automatic data preservation protocol based on the data compromise threat level. The automatic data preservation protocol includes one or more of: one or more data preservation options, one or more data tracking options, and one or more notification options. | 2020-10-08 |
20200319972 | OFFLOADING RAID RECONSTRUCTION TO A SECONDARY CONTROLLER OF A STORAGE SYSTEM - A secondary controller receives, from a central storage controller, a command comprising information associated with a RAID rebuild operation to reconstruct data stored at a storage system. In response to receiving the information associated with the RAID rebuild operation, the secondary controller transmits a request to a set of storage devices of the storage system for other data and parity data associated with the data to be reconstructed and receives the other data and the parity data from the set of storage devices. The secondary controller reconstructs the data based on the other data, the parity data, and the information associated with the RAID rebuild operation. | 2020-10-08 |
20200319973 | LAYERED ERROR CORRECTION ENCODING FOR LARGE SCALE DISTRIBUTED OBJECT STORAGE SYSTEM - A method is described. The method includes fragmenting data of an object for storage into an object storage system into multiple data fragments and performing a first error correction encoding process on the data to generate one or more parity fragments for the object. The method also includes sending the multiple data fragments and the one or more parity fragments over a network to different storage servers of the object storage system. The method also includes performing the following at each of the different storage servers: i) incorporating the received one of the multiple data fragments and one or more parity fragments into an extent comprising multiple fragments of other objects; ii) performing a second error correction encoding process on multiple extents including the extent to generate parity information for the multiple extents; and, iii) storing the multiple extents and the parity information. | 2020-10-08 |
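The first encoding layer above (fragment an object, derive parity, spread fragments across servers) can be sketched with the simplest erasure code, a single XOR parity fragment: any one lost fragment is rebuilt from the parity plus the survivors. Real systems use stronger codes (e.g., Reed-Solomon); this single-parity sketch only illustrates the fragment/parity/recover flow.

```python
def fragment_object(data: bytes, k: int):
    """Split `data` into k equal fragments (zero-padded) plus one XOR
    parity fragment; any single lost fragment is recoverable."""
    size = -(-len(data) // k)                    # ceiling division
    padded = data.ljust(k * size, b'\0')
    frags = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(size)
    for f in frags:
        parity = bytes(p ^ b for p, b in zip(parity, f))
    return frags, parity

def recover_fragment(frags, parity, lost_index):
    """Rebuild one fragment by XOR-ing parity with the survivors."""
    rebuilt = parity
    for i, f in enumerate(frags):
        if i != lost_index:
            rebuilt = bytes(r ^ b for r, b in zip(rebuilt, f))
    return rebuilt
```

In the layered scheme, each storage server would then pack its received fragment into an extent alongside fragments of other objects and apply a second encoding pass across extents.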
20200319974 | CHECKPOINTING - A system comprising: a first subsystem comprising at least one first processor, and a second subsystem comprising one or more second processors. A first program is arranged to run on the at least one first processor, the first program being configured to send data from the first subsystem to the second subsystem. A second program is arranged to run on the one or more second processors, the second program being configured to operate on the data content from the first subsystem. The first program is configured to set a checkpoint at successive points in time. At each checkpoint it records in memory of the first subsystem i) a program state of the second program, comprising a state of one or more registers on each of the second processors at the time of the checkpoint, and ii) a copy of the data content sent to the second subsystem since the respective checkpoint. | 2020-10-08 |
20200319975 | EARLY BOOT EVENT LOGGING SYSTEM - An early boot debug system includes a first memory subsystem that includes boot instructions and a processing system that is coupled to the first memory subsystem. The processing system includes a primary processing subsystem, and a secondary processing subsystem that is coupled to the primary processing subsystem and a second memory subsystem. The secondary processing subsystem copies the boot instructions from the first memory subsystem to the second memory subsystem and executes the boot instructions from the second memory subsystem during a boot operation. The secondary processing subsystem then detects a first event during the execution of the boot instructions and, in response, generates a first event information. The secondary processing subsystem stores the first event information in the second memory subsystem to be retrieved on-demand by an administrator. | 2020-10-08 |
20200319976 | METHOD AND BACKUP SERVER FOR PROCESSING EXPIRED BACKUPS - To manage expired backups in a storage system, a backup server retrieves multiple deletion logs which record invalid data included in one or more expired backups. When a deletion condition is met, the backup server identifies a first large object based on the multiple deletion logs, and sends a data migration request to an object-based storage system to instruct the object-based storage system to copy valid data of the first large object to a second large object. Thereafter, the backup server sends an object deletion request to the object-based storage system to instruct the object-based storage system to delete the first large object, thereby clearing up the expired backups. | 2020-10-08 |
20200319977 | METHOD FOR BACKING UP AND RESTORING DIGITAL DATA STORED ON A SOLID-STATE STORAGE DEVICE AND A HIGHLY SECURE SOLID-STATE STORAGE DEVICE - The object of the invention relates to a method for the backing up and recovery of digital data stored on a solid-state data storage device ( | 2020-10-08 |
20200319978 | CAPTURING AND RESTORING PERSISTENT STATE OF COMPLEX APPLICATIONS - The disclosure herein describes generating a protected entity of a VCI. A state document is generated based on the metadata state of the VCI and an entity data stream is set to a URI associated with the data of the VCI. Components and associated URIs of the VCI are identified. A combined data stream is set to a URI configured to provide access to the state document, the entity data stream, and the URIs of the components of the VCI. A snapshot API for providing a snapshot of the state of the protected entity, a serialization API for providing a serialized version of the protected entity, and a de-serialization API for converting a serialized version of the protected entity into a de-serialized version of the protected entity are defined. The protected entity is configured to enable the data and metadata of the VCI to be efficiently backed up. | 2020-10-08 |
20200319979 | SYSTEM AND METHOD OF RESTORING A CLEAN BACKUP AFTER A MALWARE ATTACK - Disclosed herein are systems and method for restoring a clean backup after a malware attack. In one aspect, a method forms a list of files that are of a plurality of designated file types that can be infected by malicious software. The method performs one or more snapshots of the files according to a predetermined schedule over a predetermined period of time and performs one or more backups. The method determines that a malware attack is being carried out on the computing device and generates a list of dangerous objects that spread the malware attack. The method compares the list of dangerous objects with the one or more snapshots to determine when the malware attack occurred. The method identifies a clean backup that was created most recently before the malware attack as compared to other backups and recovers data for the computing device from the clean backup. | 2020-10-08 |
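The restore logic above reduces to two steps: locate the attack time by finding the earliest snapshot containing any dangerous object, then pick the most recent backup created before that time. The record shapes and numeric timestamps below are assumptions for illustration.

```python
def infer_attack_time(snapshots, dangerous_objects):
    """Earliest snapshot containing any dangerous object marks the
    start of the malware attack window (None if none match)."""
    for snap in sorted(snapshots, key=lambda s: s['time']):
        if dangerous_objects & set(snap['files']):
            return snap['time']
    return None

def find_clean_backup(backups, attack_time):
    """Most recent backup taken strictly before the attack time;
    None if every backup postdates the attack."""
    clean = [b for b in backups if b['created'] < attack_time]
    return max(clean, key=lambda b: b['created']) if clean else None
```

Recovering from the returned backup avoids re-importing any file written after the malware first appeared in a snapshot.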
20200319980 | PERSISTENT MEMORY TRANSACTIONS WITH UNDO LOGGING - Undo logging for persistent memory transactions may permit concurrent transactions to write to the same persistent object. After an undo log record has been written, a single persist barrier may be issued. The tail pointer of the undo log may be updated after the persist barrier, and without another persist barrier, so the tail update may be persisted when the next log record is written and persisted. Undo logging for persistent memory transactions may rely on inferring the tail of an undo log after a failure rather than relying on a guaranteed correct tail pointer based on persisting the tail after every append. Additionally, transaction version numbers and checksum information may be stored to the undo log enabling failure recovery. | 2020-10-08 |
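The tail-inference idea in 20200319985 can be sketched as follows. This toy model assumes an in-memory list standing in for the persistent log region, a `zlib.crc32` checksum per record, and a comment marking where the single persist barrier would go; none of these names or layouts come from the patent itself.

```python
import zlib


class UndoLog:
    """Toy undo log: each record carries a version number and checksum
    so the valid tail can be inferred after a failure, instead of
    persisting the tail pointer on every append."""

    def __init__(self):
        self.records = []  # stands in for a persistent log region
        self.version = 1   # transaction version number

    def append(self, addr, old_value):
        payload = (self.version, addr, old_value)
        checksum = zlib.crc32(repr(payload).encode())
        self.records.append((payload, checksum))
        # A single persist barrier would be issued here; the tail
        # pointer update rides along with the *next* record's persist.

    def recover_tail(self):
        # Scan forward; the log ends at the first record whose version
        # is stale or whose checksum does not verify.
        tail = 0
        for payload, checksum in self.records:
            if payload[0] != self.version:
                break
            if zlib.crc32(repr(payload).encode()) != checksum:
                break
            tail += 1
        return tail
```

A record written but torn by a crash fails its checksum, so the scan stops there and the tail is recovered without a guaranteed-correct tail pointer.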
20200319981 | COMPUTING WITH UNRELIABLE PROCESSOR CORES - A computer system that has two or more processing engines (PE), each capable of performing one or more operations on one or more operands but one or more of the PEs performs the operations unreliably. Initial results of each operation are debiased to create a debiased result used by the system instead of the initial result. The debiased result has an expected value equal to a correct output where the correct output is the initial result the respective operation would have produced if the respective operation performed was reliable. | 2020-10-08 |
20200319982 | NOTIFICATION MECHANISM FOR DISASTER RECOVERY EVENTS - Some embodiments provide a system and method associated with disaster recovery from a primary region to a secondary region of a cloud landscape. A disaster recovery service platform may determine that a disaster recovery event has occurred and transmit an indication of the disaster recovery event. A messaging server, coupled to the disaster recovery service platform, may receive the indication of the disaster recovery event transmitted by the disaster recovery service platform and process the received indication via a message-oriented middleware protocol (e.g., in accordance with a subscription/publication framework). The messaging server may then arrange for at least one client receiver to receive information associated with the disaster recovery event. The disaster recovery event might be associated with, for example, customer onboarding (or offboarding), a customer account failover (or failback), a change in landscape, etc. | 2020-10-08 |
20200319983 | Redundancy Method, Device, and System - A redundancy method includes that a first disaster management function (DMF) device on a first site side receives a first request including identification information of a first virtual machine (VM) and a recovery point objective (RPO), allocates a maximum allowable delay time to each node that input/output (IO) data of the first VM passes through in a redundancy process, and sends a second request to a second DMF device on a second site side. The second request includes a maximum allowable delay time of a second replication gateway function (RGF) device on the second site side, and a maximum allowable delay time of an IO writer function (IOWF) device on the second site side and requests the second site side to perform redundancy on the first VM. Hence, the RPO requirements of the tenants can be satisfied in an entire redundancy process. | 2020-10-08 |
20200319984 | ERROR DETECTION FOR PROCESSING ELEMENTS REDUNDANTLY PROCESSING A SAME PROCESSING WORKLOAD - An apparatus has two or more processing elements to redundantly process a same processing workload; and divergence detection circuitry to detect divergence between the plurality of processing elements. When a correctable error is detected by error detection circuitry of an erroneous processing element, the erroneous processing element signals detection of the correctable error to another processing element, to control the other processing element to delay processing to maintain a predetermined time offset between the erroneous processing element and the other processing element. | 2020-10-08 |
20200319985 | SYNCHRONIZING DATA WRITES - Aspects of the present disclosure relate to synchronizing data writes. An update to a file stored on a virtual tape image is received. A position and length of the file is recorded as an invalid data area. The virtual tape image is then synchronized with a tape. The invalid data area is then released from the virtual tape image. | 2020-10-08 |
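The invalid-data-area bookkeeping described in 20200319985 amounts to record-on-update, release-on-sync. A minimal sketch, with a class name and data layout that are assumptions for illustration rather than anything from the patent:

```python
class VirtualTapeImage:
    """Toy model of invalid-data-area bookkeeping: an update to a file
    records its (position, length) as an invalid area, and the areas
    are released once the image is synchronized with the tape."""

    def __init__(self):
        self.invalid_areas = []

    def update_file(self, position, length):
        # Record the updated region as an invalid data area.
        self.invalid_areas.append((position, length))

    def synchronize(self, tape):
        # Write the recorded regions out to the tape, then release them
        # from the virtual tape image.
        for area in self.invalid_areas:
            tape.append(area)
        released = len(self.invalid_areas)
        self.invalid_areas.clear()
        return released
```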
20200319986 | SYSTEMS AND METHODS FOR SEQUENTIAL RESILVERING - A method of resilvering a plurality of failed devices in storage pools includes detecting a failure of a first storage device in a storage pool, identifying data blocks that were stored on the first storage device that are also stored on other storage devices, and resilvering the first storage device by transferring the data blocks from the other storage devices. While resilvering the first storage device, the method includes detecting a failure of a second storage device in the storage pool, identifying a subset of the data blocks that were stored on the first storage device that were also stored on the second storage device, and reusing a set of sequential I/O commands to resilver at least a portion of the second storage device with the subset of the data blocks. | 2020-10-08 |
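The reuse step in 20200319986 can be illustrated with a small sketch. The data model (a dict mapping block IDs to the set of devices holding a copy) and both function names are illustrative assumptions, not the patent's interfaces:

```python
def blocks_to_resilver(block_locations, failed_device):
    """Identify blocks that were on the failed device and the surviving
    devices each block can be rebuilt from.

    block_locations: dict of block_id -> set of devices holding a copy.
    """
    plan = {}
    for block, devices in block_locations.items():
        if failed_device in devices:
            sources = devices - {failed_device}
            if sources:  # rebuild only if a surviving copy exists
                plan[block] = sources
    return plan


def shared_subset(plan_first, block_locations, second_failed):
    """Blocks already being rebuilt for the first failed device that
    also lived on the second: these can reuse the same sequential I/O
    commands when the second device fails mid-resilver."""
    return {b for b in plan_first if second_failed in block_locations[b]}
```

The shared subset is exactly the data whose sequential reads are already scheduled, so a second failure does not restart the whole pass.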
20200319987 | METHOD FOR INJECTING DELIBERATE ERRORS INTO PCIE DEVICE FOR TEST PURPOSES, APPARATUS APPLYING METHOD, AND COMPUTER READABLE STORAGE MEDIUM FOR CODE OF METHOD - A method for injecting specific errors of both correctable and non-correctable types into a PCIE device for testing purposes during the fabrication stage constructs an error injecting platform based on received target information. The platform includes a control system and at least one testing system. A security boot in the connected testing system is disabled and information of the specified driver is obtained. The obtained information comprises objects to be tested according to selection, each object having a bus address and a PCIE port value. The object under test is controlled to inject a specified error, the injection and result of injection being reported by the processor and analyzed. | 2020-10-08 |
20200319988 | ENHANCED CONFIGURATION MANAGEMENT OF DATA PROCESSING CLUSTERS - Described herein are systems, methods, and software to enhance the management and deployment of data processing clusters in a computing environment. In one example, a management system may monitor data processing efficiency information for a cluster and determine when the efficiency meets efficiency criteria. When the efficiency criteria are met, the management system may identify a new configuration for the cluster and initiate an operation to implement the new configuration for the cluster. | 2020-10-08 |
20200319989 | COLLECTING PERFORMANCE METRICS OF A DEVICE - Some examples relate to collection of performance metrics from a device. In an example, performance metrics for collection from a first device may be selected. The performance metrics may be indexed by assigning an index entry to respective performance metrics on the first device. A fixed sequence of the performance metrics may be maintained on the first device. The fixed sequence of the performance metrics, along with the index entry assigned to the respective performance metrics, may be shared with a second device. First performance data of the respective performance metrics on the first device may be determined. The first performance data of the respective performance metrics may be shared with the second device. The sharing may comprise sending, to the second device, the index entry and the first performance data of the respective performance metrics in an order corresponding to the fixed sequence of the performance metrics on the first device. | 2020-10-08 |
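The indexing scheme in 20200319989 can be sketched briefly: share the fixed metric sequence once, then send compact (index, value) pairs in that order. Function names and the encoding are illustrative assumptions only:

```python
def build_index(metric_names):
    """Assign an index entry to each selected metric. The resulting
    fixed sequence is shared with the collecting device once, so later
    samples need only carry indices and values."""
    return {name: i for i, name in enumerate(metric_names)}


def encode_sample(index, readings):
    """Emit (index entry, value) pairs in the fixed sequence order,
    matching what the second device expects."""
    ordered = sorted(index, key=index.get)
    return [(index[name], readings[name]) for name in ordered]
```

Because both sides hold the same fixed sequence, the metric names never need to be retransmitted with each sample.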
20200319990 | INFORMATION PROCESSING SYSTEM, METHOD, AND STORAGE MEDIUM - An information processing system according to the present invention includes an analysis device and a control device. The analysis device performs first operations. The first operations include: executing analysis, based on an analysis rule, with respect to data input as an object of analysis; outputting an analysis result; and managing the analysis rule. The analysis device stores the analysis rule and analysis state information indicating a state of the analysis to be generated or referred to by the first processor. The control device performs second operations. The second operations include: monitoring a usage status of the first memory storing the analysis state information; acquiring and managing an evaluation result with respect to the analysis result; and controlling the analysis rule via the analysis device, based on the usage status of the first memory storing the analysis state information and the evaluation result. | 2020-10-08 |
20200319991 | Debugging Mechanism - A processor comprising at least one processing module, each processing module comprising: an execution pipeline; memory; an instruction fetch unit operable to switch between an operational mode and a debugging mode, the instruction fetch unit being configured so as, when in the operational mode, to fetch machine code instructions from the memory into the execution pipeline to be executed; and a debug interface for connecting to a debug adapter. The debug interface comprises a debug instruction register enabling the debug adapter to write a machine code instruction to the debug instruction register, and the instruction fetch unit is configured so as, when in the debugging mode, to fetch instructions from the debug instruction register into the pipeline instead of from the memory. | 2020-10-08 |
20200319992 | PREDICTING DEFECTS USING METADATA - Systems, methods, and machine-readable instructions stored on machine-readable media are disclosed for receiving metadata associated with a source code. Prior to testing the source code, the metadata associated with the source code is analyzed to predict a likelihood of success of the testing. The source code is then tested based on the predicted likelihood of success. | 2020-10-08 |
20200319993 | SERVERS AND COMPUTER PROGRAMS FOR DEBUGGING OF NATIVE PROGRAMS AND VIRTUAL MACHINE PROGRAMS ON INTEGRATED DEVELOPMENT ENVIRONMENT - Disclosed is a server for debugging of virtual machine programs and native programs in an integrated development environment according to some exemplary embodiments of the present disclosure. The server may include: an integrated development environment interface providing unit configured to provide an integrated development environment interface; and an integrated debugging unit configured to provide integrated debugging between the native programs and the virtual machine programs, in which the integrated debugging unit may include a debugging instruction receiving module configured to receive instructions for performing debugging of a first program from the integrated development environment interface, in which the first program comprises the native programs and the virtual machine programs which may be mutually called; and a debugger allocation module configured to allocate a debugger module corresponding to an execution context of the first program to the integrated development environment interface if the debugging instruction receiving module receives the instructions for performing debugging of a first program, in which the corresponding debugger module is the native debugger module or the virtual machine debugger module. | 2020-10-08 |
20200319994 | GENERATING REPRESENTATIVE MICROBENCHMARKS - Embodiments for generating representative microbenchmarks in a computing environment are provided. One or more tracing points may be selected in a target application. Executed instructions and used data of the target application may be dynamically traced according to the one or more tracing points according to a tracing plan. Tracing information of the dynamic tracing may be replicated in an actual computing environment and a simulated computing environment. | 2020-10-08 |
20200319995 | Customizable Enterprise Automation Test Framework - Embodiments provide systems and methods for implementing a customizable enterprise automation test framework. A workflow definition, page structure definition, and function definition for an automated test of an enterprise website can be received. A hybrid script parser can parse the workflow definition, page structure definition, and function definition to generate a hybrid script for the automated test. An automation tool parser can parse the hybrid script to generate an output for an automation tool. Based on the output from the automation tool parser, a runtime script can be generated that is executed by the automation tool to generate results for the automated test, where the automation tool implements the steps of the one or more workflows on the plurality of web pages of the enterprise web site to generate the results for the automated test. | 2020-10-08 |
20200319996 | SYSTEM AND METHOD OF HANDLING COMPLEX EXPERIMENTS IN A DISTRIBUTED SYSTEM - A website building system (WBS) that enables web site designers to build and host websites for their end users. The WBS includes at least one processor and an experiment manager running on the at least one processor to manage multiple concurrent experiments at runtime with the experiments to test at least features, components or system updates for the WBS and where the experiment manager at least selects a target population for an experiment, handles conflict resolution between the experiment and at least one other concurrent experiment, and collects experiment data. The WBS also includes an experiment analyzer to analyze the experiment data during runtime and to update the experiment manager accordingly. | 2020-10-08 |
20200319997 | SOFTWARE TEST AUTOMATION SYSTEM AND METHOD - A method for testing an updated version of an existing software application. The method may comprise analyzing a user interface screen of the updated version of the existing software application to identify previously existing controls and updated controls and automatically capturing, via a capture engine, each of the updated controls present on the user interface screen of the updated version, wherein the automatic capturing is initiated by a user selecting a learn screen function. The method may further comprise automatically associating, via a rules base, control descriptions with each of the automatically captured updated controls and one or more testing actions with each of the updated controls, thereby generating a plurality of test steps each comprising one of the updated controls, a particular associated control description, and a particular testing action. The method may then comprise generating an updated test component comprised of the plurality of test steps. | 2020-10-08 |
20200319998 | MEMORY DEVICE AND WEAR LEVELING METHOD FOR THE SAME - A memory device includes: a memory array used for implementing neural networks (NN); and a controller coupled to the memory array. The controller is configured for: in updating and writing unrewritable data into the memory array in a training phase, marching the unrewritable data into a buffer zone of the memory array; and in updating and writing rewritable data into the memory array in the training phase, marching the rewritable data by skipping the buffer zone. | 2020-10-08 |
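The buffer-zone behavior in 20200319998 can be modeled with a small sketch: rewritable writes that land in the reserved zone are redirected past it, while unrewritable data may occupy it. The class name, zone layout, and redirection rule are simplifying assumptions for illustration, not the patent's controller logic:

```python
class WearLevelArray:
    """Toy model of a memory array with a reserved buffer zone:
    unrewritable training data may be placed into the zone, while
    rewritable data skips over it."""

    def __init__(self, size, buffer_start, buffer_len):
        self.cells = [None] * size
        self.buffer = range(buffer_start, buffer_start + buffer_len)

    def write(self, addr, value, rewritable):
        if rewritable and addr in self.buffer:
            # Rewritable data skips the buffer zone: shift the write
            # past the zone, preserving its offset.
            addr = self.buffer.stop + (addr - self.buffer.start)
        self.cells[addr] = value
        return addr
```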