45th week of 2019 patent application highlights, part 45
Patent application number | Title | Published |
20190340057 | METHODS AND SYSTEMS TO COMPOUND ALERTS IN A DISTRIBUTED COMPUTING SYSTEM - Computational methods and systems described herein are directed to compounding alerts generated in a distributed computing system. A user or system administrator may define a set of multistage process rules that can be used by a log management server application to examine log messages generated by event sources of a multistage process for alerts. A log-message database is searched to identify a log-message file used to record log messages generated by the event sources. A single compound alert indicating that the multistage process rules are satisfied is generated when log messages of the log-message file that satisfy the rules of the multistage process rules have been identified. Methods may also execute remedial action to correct the multistage process when log messages of the log-message file fail to satisfy at least one rule of the multistage process rules. | 2019-11-07 |
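The rule-checking idea in the abstract above can be sketched in a few lines. This is an illustrative reading only, not the claimed implementation: rules are modeled as one predicate per stage of the multistage process, and a single compound alert is produced only when every rule is satisfied by some log message; otherwise a remedial action is indicated. All names are assumptions.

```python
def evaluate_multistage_rules(log_messages, rules):
    """Return ("compound_alert", matches) when every rule matches at least
    one log message; otherwise ("remedial_action", unmatched rule indices)."""
    matches = {}
    for i, rule in enumerate(rules):
        for msg in log_messages:
            if rule(msg):
                matches[i] = msg  # first log message satisfying stage i
                break
    unmatched = [i for i in range(len(rules)) if i not in matches]
    if unmatched:
        return ("remedial_action", unmatched)
    return ("compound_alert", matches)
```

For example, rules `[lambda m: "stage1" in m, lambda m: "stage2" in m]` against logs containing both stage markers yield a single compound alert rather than one alert per stage.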
20190340058 | CRASH LOG STORAGE AND RETRIEVAL USING BOOT PARTITIONS IN SOLID STATE SYSTEMS - The present disclosure describes technologies and techniques for use by a data storage controller—such as a controller for use with a NAND or other non-volatile memory (NVM)—to store crash-dump information in a boot partition following a system crash within the data storage controller. Within illustrative examples described herein, the boot partition may be read by a host device without the host first re-installing valid firmware into the data storage controller following the system crash. In the illustrative examples, the data storage controller is configured for use with versions of Peripheral Component Interconnect (PCI) Express—Non-Volatile Memory express (NVMe) that provide support for boot partitions in the NVM. The illustrative examples additionally describe virtual boot partitions in random access memory (RAM) for storing crash-dump information if the NAND has been corrupted, where the crash-dump information is retrieved from the RAM without power-cycling the RAM. | 2019-11-07 |
20190340059 | ISOLATING SERVICE ISSUES IN A MICROSERVICE ARCHITECTURE - A method, computer program product, and a computer system for mitigating a fault in an information service comprised of multiple microservices includes a processor(s) obtaining a notification of a fault in the information service which includes logs tracking execution of the information service in a shared computing environment. The processor(s) generates a dependency data structure describing interdependencies between individual microservices with respect to each other. The processor(s) mitigates the fault by replacing a faulty microservice in the microservices represented in the dependency data structure; the faulty microservice includes program code with an issue resulting in the fault. To replace the faulty microservice, the processor(s) continuously monitors the information service and progressively replaces, in accordance with the interdependencies, each microservice represented in the dependency data structure with an earlier version of the microservice, halting replacements when no notification for the fault is obtained subsequent to a replacement of a given microservice. | 2019-11-07 |
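The progressive-replacement loop described above can be sketched as follows. This is a hedged illustration, not the patented method: services are assumed to arrive already ordered by the dependency data structure, and `fault_present` and `rollback` are hypothetical callbacks standing in for fault monitoring and version replacement.

```python
def progressive_rollback(ordered_services, fault_present, rollback):
    """Roll services back to an earlier version one at a time, in
    dependency order, halting once the fault notification clears.
    Returns the list of services that were rolled back."""
    rolled_back = []
    for service in ordered_services:
        if not fault_present():
            break                   # fault cleared; stop replacing
        rollback(service)           # replace with an earlier version
        rolled_back.append(service)
    return rolled_back
```

The design point is that replacement stops as soon as monitoring reports the fault gone, so healthy downstream services keep their current versions.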
20190340060 | SYSTEMS AND METHODS FOR ADAPTIVE PROACTIVE FAILURE ANALYSIS FOR MEMORIES - In accordance with embodiments of the present disclosure, an information handling system may include a processor, a memory communicatively coupled to the processor and comprising a plurality of non-volatile memories, and a failure analysis module comprising a program of instructions, the failure analysis module configured to, when read and executed by the processor, set a predictive failure threshold for each of the plurality of non-volatile memories based at least on functional parameters of such non-volatile memory, and adapt the predictive failure threshold for each of the plurality of non-volatile memories based at least on health status parameters of such non-volatile memory. | 2019-11-07 |
20190340061 | AUTOMATIC CORRECTING OF COMPUTING CLUSTER EXECUTION FAILURE - A processor may identify, using historical data, an amount of computing resources consumed to remedy the failure with an automatic remedy step. The processor may determine that the amount of consumed computing resources to remedy the failure is less than an amount of computing resources consumed by restarting the process. The processor may perform the automatic remedy step. The processor may identify that the automatic remedy step has failed. The processor may determine a waiting period based on an estimated time to receive a user response to the failure and an estimated load on the computing cluster. The processor may display a generated alert to a user during the waiting period. The processor may identify that no user input has been received during the waiting period. The processor may release computing resources corresponding to the process. | 2019-11-07 |
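Two of the decisions described above are easy to sketch: the remedy-versus-restart cost comparison, and the load-weighted waiting period. Both functions below are illustrative assumptions; the patent does not publish a formula, so the load scaling here is a guess for demonstration only.

```python
def choose_recovery(remedy_cost, restart_cost):
    """Prefer the automatic remedy step only when its historical resource
    cost is below the cost of restarting the process."""
    return "remedy" if remedy_cost < restart_cost else "restart"

def waiting_period(estimated_response_time, estimated_load, load_factor=1.0):
    """Hypothetical waiting period: the estimated time for a user response,
    stretched by the estimated load on the computing cluster."""
    return estimated_response_time * (1.0 + load_factor * estimated_load)
```

After the waiting period elapses with no user input, the process's resources would be released, per the abstract.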
20190340062 | NEIGHBOR ASSISTED CORRECTION ERROR RECOVERY FOR MEMORY SYSTEM AND METHOD THEREOF - Error recovery operations are provided for a memory system. The memory system includes a memory device including a plurality of cells and a controller. The controller performs a read on a select cell among the plurality of cells. The controller adjusts a log-likelihood ratio (LLR) value on the select cell to generate an adjusted LLR value, based on first read data on the select cell and second read data on at least one neighbor cell adjacent to the select cell, when the read on the select cell fails. | 2019-11-07 |
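The neighbor-assisted LLR adjustment can be sketched as below. The adjustment rule and the damping constant are illustrative assumptions, not taken from the patent: the intuition is that when neighboring cells disagree with the select cell's read, inter-cell interference is likely, so confidence (the LLR magnitude) is reduced before re-decoding.

```python
def adjust_llr(llr, select_read, neighbor_reads, damping=0.5):
    """Scale down the select cell's LLR for each neighbor whose read data
    disagrees with the select cell's read data (illustrative rule)."""
    interfering = sum(1 for n in neighbor_reads if n != select_read)
    if interfering == 0:
        return llr                  # neighbors agree; keep confidence
    return llr * (damping ** interfering)
```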
20190340063 | MEMORY SCRUB SYSTEM - A memory scrubbing system includes a persistent memory device coupled to an operating system (OS) and a Basic Input/Output System (BIOS). During a boot process and prior to loading the OS, the BIOS retrieves a known memory location list that identifies known memory locations of uncorrectable errors in the persistent memory device and performs a partial memory scrubbing operation on the known memory locations. The BIOS adds any known memory locations that maintain an uncorrectable error to a memory scrub error list. The BIOS then initiates a full memory scrubbing operation on the persistent memory device, causes the OS to load and enter a runtime environment while the full memory scrubbing operation is being performed, and provides the memory scrub error list to the OS. | 2019-11-07 |
20190340064 | MEMORY-BASED DISTRIBUTED PROCESSOR ARCHITECTURE - Distributed processors and methods for compiling code for execution by distributed processors are disclosed. In one implementation, a distributed processor may include a substrate; a memory array disposed on the substrate; and a processing array disposed on the substrate. The memory array may include a plurality of discrete memory banks, and the processing array may include a plurality of processor subunits, each one of the processor subunits being associated with a corresponding, dedicated one of the plurality of discrete memory banks. The distributed processor may further include a first plurality of buses, each connecting one of the plurality of processor subunits to its corresponding, dedicated memory bank, and a second plurality of buses, each connecting one of the plurality of processor subunits to another of the plurality of processor subunits. | 2019-11-07 |
20190340065 | MEMORY DEVICES HAVING DIFFERENTLY CONFIGURED BLOCKS OF MEMORY CELLS - A memory device has a plurality of individually erasable blocks of memory cells and a controller configured to configure different blocks of the plurality of blocks of memory cells in different configurations, which can include blocks configured to include only groups of user data memory cells for storing user data, blocks configured to include only groups of overhead data memory cells for storing error correction code (ECC) data, and blocks configured to include groups of user data memory cells and groups of overhead data memory cells. | 2019-11-07 |
20190340066 | MEMORY DEVICES HAVING DIFFERENTLY CONFIGURED BLOCKS OF MEMORY CELLS - A memory device has a plurality of individually erasable blocks of memory cells and a controller configured to configure a first block of memory cells of the plurality of blocks of memory cells in a first configuration comprising one or more groups of overhead data memory cells, to configure a second block of memory cells of the plurality of blocks of memory cells in a second configuration comprising a group of user data memory cells and a group of overhead data memory cells, and to configure a third block of memory cells of the plurality of blocks of memory cells in a third configuration comprising only a group of user data memory cells. The group of overhead data memory cells of the second block of memory cells has a different storage capacity than at least one group of overhead data memory cells of the one or more groups of overhead data memory cells of the first block of memory cells. | 2019-11-07 |
20190340067 | SEMICONDUCTOR MEMORY DEVICES AND MEMORY SYSTEMS INCLUDING THE SAME - A semiconductor memory device includes: a memory cell array including a plurality of memory cells; an error correction code (ECC) engine configured to detect and/or correct at least one error bit in read data and configured to generate a decoding status flag indicative of whether the at least one error bit is detected and/or corrected, wherein the read data is read from the memory cell array; a channel interface circuit configured to receive the read data and the decoding status flag from the ECC engine and configured to transmit the read data and the decoding status flag to a memory controller, wherein the channel interface circuit is configured to transmit the decoding status flag to the memory controller through a pin; and a control logic circuit configured to control the ECC engine and the channel interface circuit in response to an address and a command from the memory controller. | 2019-11-07 |
20190340068 | ENCODER AND DECODER FOR MEMORY SYSTEM AND METHOD THEREOF - Encoders and decoders are provided for memory systems. An encoder scrambles data bits corresponding to a logical page, selected from among multiple logical pages, using a plurality of random sequences, to generate a plurality of scrambled sequences; selects, as an encoded sequence, a scrambled sequence among the plurality of scrambled sequences; and provides a memory device with the encoded sequence to store the encoded sequence in multiple level cells. The selected scrambled sequence has the lowest number of logical high values among the plurality of scrambled sequences. | 2019-11-07 |
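The selection rule in the encoder abstract above is concrete enough to sketch: XOR the page data with each candidate random sequence and keep the result with the fewest logical-high values. The XOR scrambling and the function name are illustrative, but the minimum-ones selection follows the abstract directly.

```python
def encode_lowest_ones(data_bits, random_sequences):
    """Scramble the page data with each random sequence (XOR, assumed) and
    select the scrambled sequence with the lowest count of 1 bits."""
    candidates = []
    for seq in random_sequences:
        scrambled = [b ^ s for b, s in zip(data_bits, seq)]
        candidates.append(scrambled)
    return min(candidates, key=lambda c: sum(c))
```

Minimizing the number of high values in the stored sequence is a common way to reduce programming stress on multi-level cells, which appears to be the motivation here.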
20190340069 | MEMORY SYSTEM WITH DEEP LEARNING BASED INTERFERENCE CORRECTION CAPABILITY AND METHOD OF OPERATING SUCH MEMORY SYSTEM - Memory systems, controllers, decoders and methods execute decoding with a multi-level interference correction scheme. A decoder performs first soft decoding to generate log likelihood ratio (LLR) values of a select bit and bits of memory cells neighboring a memory cell of the select bit. A quantizer obtains an estimated LLR value of the select bit based on the LLR values of the select bit and the bits of the memory cells neighboring the memory cell of the select bit, when the first soft decoding fails. The decoder performs second soft decoding using the estimated LLR value when the first soft decoding fails, and performs third soft decoding using information obtained from application of a deep learning model to provide a more accurate estimate of the LLR value of the select bit when the second soft decoding fails. | 2019-11-07 |
20190340070 | ENCODING METHOD AND MEMORY STORAGE APPARATUS USING THE SAME - An encoding method for a memory storage apparatus adopting an ECC algorithm is provided. The memory storage apparatus comprises an ECC encoder. The encoding method includes: receiving a write command comprising a write address and a write data; reading an existing codeword; attaching a flip bit to the write data; encoding the write data and the flip bit to generate parity bits based on the ECC algorithm by the ECC encoder and attaching the write data and the flip bit to the plurality of parity bits to generate a new codeword; flipping the new codeword based on a number of bits among selected bits required to be changed from the existing codeword to the new codeword; and writing one of the new codeword and the flipped new codeword to the write address. In addition, a memory storage apparatus using the encoding method is provided. | 2019-11-07 |
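The flip decision in the encoding method above can be sketched as follows. ECC parity generation is omitted, and the majority-threshold rule is an assumption for illustration: if writing the new codeword would change more than half of the selected bit positions relative to the existing codeword, write the complement instead and record that choice in the flip bit.

```python
def maybe_flip(new_codeword, existing_codeword, selected):
    """Return (codeword_to_write, flip_bit). Flip the whole codeword when
    more than half the selected positions would otherwise change."""
    changed = sum(1 for i in selected
                  if new_codeword[i] != existing_codeword[i])
    if changed > len(selected) // 2:
        return [1 - b for b in new_codeword], 1   # write complement
    return list(new_codeword), 0
```

Reducing the number of changed bits per write is a typical motivation for flip-bit schemes in memories where bit transitions are costly.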
20190340071 | MEMORY SYSTEM WITH HYBRID ITERATIVE DECODING CAPABILITY AND METHOD OF OPERATING SUCH MEMORY SYSTEM - Memory controllers, decoders and methods to perform decoding of user bits and parity bits including those corresponding to low degree variable nodes. For each of the user bits, the decoder performs a variable node update operation and a check node update operation for connected check nodes. After all of the user bits are processed, the decoder performs a parity node update operation for the parity bits using results of the variable node and check node update operations performed on the user bits. | 2019-11-07 |
20190340072 | ELASTIC STORAGE IN A DISPERSED STORAGE NETWORK - A method for execution by a dispersed storage and task (DST) processing unit includes: generating an encoded data slice from a dispersed storage encoding of a data object and determining when the encoded data slice will not be stored in local dispersed storage. When the encoded data slice will not be stored in the local dispersed storage, the encoded data slice is stored via at least one elastic slice in an elastic dispersed storage, an elastic storage pointer is generated indicating a location of the elastic slice in the elastic dispersed storage, and the elastic storage pointer is stored in the local dispersed storage. | 2019-11-07 |
20190340073 | DYNAMIC AUTHORIZATION BATCHING IN A DISPERSED STORAGE NETWORK - A method for execution by a dispersed storage and task (DST) processing unit includes queuing authorization requests, corresponding to received operation requests, in response to determining that first system utilization data indicates a first utilization level that compares unfavorably to a normal utilization threshold. A first batched authorization request that includes the queued authorization requests is generated for transmission to an Identity and Access Management (IAM) system in response to determining that the first request queue compares unfavorably to a first queue limit condition. A second queue limit condition that is different from the first queue limit condition is determined based on second system utilization data. A second batched authorization request that includes a second plurality of authorization requests of a second request queue is generated in response to determining that the second request queue compares unfavorably to the second queue limit condition. | 2019-11-07 |
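The queue-limit batching behavior above can be sketched minimally. The queue-limit condition here is a simple length threshold, which is an assumption; the abstract allows other limit conditions, and the second condition may differ from the first based on utilization data.

```python
def batch_when_ready(queue, queue_limit):
    """Once the request queue reaches its limit condition, emit a single
    batched authorization request and clear the queue; otherwise keep
    queuing. Returns (batched_request_or_None, remaining_queue)."""
    if len(queue) >= queue_limit:
        return {"batched_requests": list(queue)}, []
    return None, queue
```

The batched request would then be sent to the IAM system as one call instead of one call per operation request.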
20190340074 | MODIFYING A CONTAINER INSTANCE NETWORK - A method, computer program product, and system includes a processor(s) progressively recording data modifications to an object (e.g., a virtual resource or a container), in an in-memory resource of the shared computing environment. Based on receiving an indication of a system failure or a system reboot, the processor(s) writes the data modifications to a non-volatile storage resource, where the non-volatile storage resource is readable by an object manager communicatively coupled to the non-volatile storage resource, and where the object manager utilizes the data modifications to recover the object at reboot following the system failure. | 2019-11-07 |
20190340075 | EMULATING HIGH-FREQUENCY APPLICATION-CONSISTENT SNAPSHOTS BY FORMING RESTORE POINT DATA SETS BASED ON REMOTE SITE REPLAY OF I/O COMMANDS - The disclosed systems emulate high-frequency application-consistent snapshots by forming restore point data sets based on remote site replay of I/O commands. A method embodiment commences upon identifying a primary computing site and a secondary computing site, then identifying an application to be restored from the secondary computing site after a disaster. Prior to the disaster, a group of computing entities of the application to be restored from the secondary computing site are identified. Input/output operations that are performed over any of the computing entities at the primary site are streamed to the secondary site where they are stored. An I/O map that associates a time with an indication of a last received I/O command that had been performed over a changing set of computing entities is sent to the secondary site. An agent at the secondary site accesses the I/O map and the streamed-over I/Os to construct recovery data. | 2019-11-07 |
20190340076 | UNIFIED PROTECTION OF CLUSTER SUITE - Techniques to back up data associated with a cluster environment are disclosed. In various embodiments, an indication is received to back up data associated with the cluster. A backup configuration data associated with the cluster is used to back up, in a unified backup operation, one or more save sets associated with virtual resources associated with the cluster and one or more save sets associated with physical nodes associated with the cluster, including by storing each respective save set in a manner that associates the save set with a virtual or physical node comprising the cluster suite. | 2019-11-07 |
20190340077 | OPTIMIZATION TO PERMIT BLOCK BASED INCREMENTAL BACKUP ACROSS SYSTEM REBOOT OR CRASH - Techniques to back up data are disclosed. In various embodiments, a copy of a free block map as of a first time associated with a first backup is stored in persistent data storage. Writes made subsequent to the first backup to blocks not listed as free in the copy of the free block map as of the first time are tracked in a persistently-stored change block tracking log. A free block map as of a second time and the previously-stored copy of the free block map as of the first time are used to determine which blocks listed as free in the free block map as of the first time have been written to since the first time. At least a subset of blocks determined to have been written to since the first time are included in an incremental backup. | 2019-11-07 |
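The free-block-map comparison above reduces to set arithmetic, sketched below. This is an illustrative reading: blocks that were free at the first backup but are no longer free must have been written since, and they are combined with the blocks recorded in the change-tracking log to form the incremental backup's block set.

```python
def incremental_blocks(free_at_t1, free_at_t2, changed_since_t1):
    """Blocks to include in the incremental backup: blocks free at the
    first backup that are no longer free, plus tracked changed blocks."""
    written_free_blocks = free_at_t1 - free_at_t2   # free then, in use now
    return written_free_blocks | changed_since_t1
```

Persisting both the first-time free block map and the change log is what lets this computation survive a reboot or crash between backups.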
20190340078 | DYNAMIC TRIGGERING OF BLOCK-LEVEL BACKUPS BASED ON BLOCK CHANGE THRESHOLDS AND CORRESPONDING FILE IDENTITIES USING INDEXING IN A DATA STORAGE MANAGEMENT SYSTEM - A data storage management approach is disclosed that performs backup operations flexibly, based on a dynamic scheme of monitoring block changes occurring in production data. The illustrative system monitors block changes based on certain block-change thresholds and triggers block-level backups of the changed blocks when a threshold is passed. Block changes may be monitored in reference to particular files based on a reverse lookup mechanism. The illustrative system also collects and stores historical information on block changes, which may be used for reporting and predictive analysis. | 2019-11-07 |
20190340079 | POLICY DRIVEN DATA UPDATES - A method, executed by at least one processor, includes generating a snapshot for a plurality of data files, receiving an update request for a selected file of the plurality of data files, determining if the selected file is subject to a backup policy, updating the selected file without preserving the snapshot of the selected file if the selected file is not subject to the backup policy, and updating the selected file while preserving the snapshot of the selected file if the selected file is subject to the backup policy. A corresponding computer program product and computer system are also disclosed herein. | 2019-11-07 |
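The policy check in the update path above can be sketched as follows. The data structures are illustrative assumptions (files and snapshots as dictionaries, the backup policy as a set of covered file names); the point is simply that the snapshot copy of a file is preserved across an update only when the policy covers that file.

```python
def apply_update(files, snapshots, name, new_data, backup_policy):
    """Update file `name`; drop its snapshot copy unless the backup
    policy covers the file."""
    if name not in backup_policy and name in snapshots:
        del snapshots[name]         # snapshot need not be preserved
    files[name] = new_data
    return files, snapshots
```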
20190340080 | HYBRID MEMORY SYSTEM WITH CONFIGURABLE ERROR THRESHOLDS AND FAILURE ANALYSIS CAPABILITY - A system and method for configuring fault tolerance in nonvolatile memory (NVM) are operative to set a first threshold value, declare one or more portions of NVM invalid based on an error criterion, track the number of declared invalid NVM portions, determine if the tracked number exceeds the first threshold value, and if the tracked number exceeds the first threshold value, perform one or more remediation actions, such as issue a warning or prevent backup of volatile memory data in a hybrid memory system. In the event of backup failure, an extent of the backup can still be assessed by determining the amount of erased NVM that has remained erased after the backup, or by comparing a predicted backup end point with an actual endpoint. | 2019-11-07 |
20190340081 | CLIENT MANAGED DATA BACKUP PROCESS WITHIN AN ENTERPRISE INFORMATION MANAGEMENT SYSTEM - Certain embodiments disclosed herein reduce or eliminate a communication bottleneck at the storage manager by reducing communication with the storage manager while maintaining functionality of an information management system. In some implementations, a client obtains information for enabling a secondary storage job (e.g., a backup or restore) from a storage manager and stores the information (which may be referred to as job metadata) in a local cache. The client may then reuse the job metadata for multiple storage jobs reducing the frequency of communication with the storage manager. When a configuration of the information management system changes, or the availability of resources changes, the storage manager can push updates to the job metadata to the clients. Further, a client can periodically request updated job metadata from the storage manager ensuring that the client does not rely on out-of-date job metadata. | 2019-11-07 |
20190340082 | MULTI-TIERED BACKUP INDEXING - Certain embodiments disclosed herein reduce or eliminate a communication bottleneck at the storage manager by reducing communication with the storage manager while maintaining functionality of an information management system. In some implementations, a client obtains information for enabling a secondary storage job (e.g., a backup or restore) from a storage manager and stores the information (which may be referred to as job metadata) in a local cache. The client may then reuse the job metadata for multiple storage jobs reducing the frequency of communication with the storage manager. When a configuration of the information management system changes, or the availability of resources changes, the storage manager can push updates to the job metadata to the clients. Further, a client can periodically request updated job metadata from the storage manager ensuring that the client does not rely on out-of-date job metadata. | 2019-11-07 |
20190340083 | CONFIGURABLE RECOVERY STATES - In a first area of a persistent memory, data is stored that defines a known good state that is operable to launch the computing device to the known good state in response to a reboot. In response to a write request to the first area of persistent memory, the requested write is directed to a second area of the persistent memory and a record of redirected writes to the second area of persistent memory is updated. A request is received to establish an update to the known good state. The updated known good state is operable to launch the computing device to the updated known good state in response to a reboot. In response to the request, the record is persisted such that in response to a reboot, the record is usable to restore the redirected writes, thereby launching the computing device to the updated known good state. | 2019-11-07 |
20190340084 | BACKUP-BASED MEDIA AGENT CONFIGURATION - Certain embodiments disclosed herein reduce or eliminate a communication bottleneck at the storage manager by reducing communication with the storage manager while maintaining functionality of an information management system. In some implementations, a client obtains information for enabling a secondary storage job (e.g., a backup or restore) from a storage manager and stores the information (which may be referred to as job metadata) in a local cache. The client may then reuse the job metadata for multiple storage jobs reducing the frequency of communication with the storage manager. When a configuration of the information management system changes, or the availability of resources changes, the storage manager can push updates to the job metadata to the clients. Further, a client can periodically request updated job metadata from the storage manager ensuring that the client does not rely on out-of-date job metadata. | 2019-11-07 |
20190340085 | CREATING CUSTOMIZED BOOTABLE IMAGE FOR CLIENT COMPUTING DEVICE FROM BACKUP COPY - According to certain aspects, a method of creating customized bootable images for client computing devices in an information management system can include: creating a backup copy of each of a plurality of client computing devices, including a first client computing device; subsequent to receiving a request to restore the first client computing device to the state at a first time, creating a customized bootable image that is configured to directly restore the first client computing device to the state at the first time, wherein the customized bootable image includes system state specific to the first client computing device at the first time and one or more drivers associated with hardware existing at time of restore on a computing device to be rebooted; and rebooting the computing device to the state of the first client computing device at the first time from the customized bootable image. | 2019-11-07 |
20190340086 | PLUGGABLE RECOVERY IN A DATA PROTECTION SYSTEM - Systems and methods for performing a recovery operation for a host. A user interface is provided that enables user interface interactions that are common to or independent of the host and user interface interactions that are specific to a client backup module selected for the recovery operation. The user interface retrieves a plug-in to enable the user interface interactions that are specific to the client backup module. | 2019-11-07 |
20190340087 | REPAIRING PARTIALLY COMPLETED TRANSACTIONS IN FAST CONSENSUS PROTOCOL - In an approach, a processor detects a transmission control protocol disconnection of a first distributed storage unit from a distributed storage network, wherein the distributed storage network comprises a set of distributed storage units. A processor identifies a transaction, wherein: the transaction is not in a final state, the transaction is a first proposal, from the first distributed storage unit, for the set of distributed storage units to store a dataset with a first revision number within the distributed storage network, and the dataset is broken into one or more data pieces to be written on the set of distributed storage units of the distributed storage network that approve the proposal. A processor identifies a timestamp of the transaction. A processor determines a stage the transaction has reached. A processor places the transaction in a final state based on the determined stage the transaction has reached. | 2019-11-07 |
20190340088 | HEARTBEAT MONITORING OF VIRTUAL MACHINES FOR INITIATING FAILOVER OPERATIONS IN A DATA STORAGE MANAGEMENT SYSTEM, USING PING MONITORING OF TARGET VIRTUAL MACHINES - An illustrative “VM heartbeat monitoring network” of heartbeat monitor nodes monitors target VMs in a data storage management system. Accordingly, target VMs are distributed and re-distributed among illustrative worker monitor nodes according to preferences in an illustrative VM distribution logic. Worker heartbeat monitor nodes use an illustrative ping monitoring logic to transmit special-purpose heartbeat packets to respective target VMs and to track ping responses. If a target VM is ultimately confirmed failed by its worker monitor node, an illustrative master monitor node triggers an enhanced storage manager to initiate failover for the failed VM. The enhanced storage manager communicates with the heartbeat monitor nodes and also manages VM failovers and other storage management operations in the system. Special features for cloud-to-cloud failover scenarios enable a VM in a first region of a public cloud to fail over to a second region. | 2019-11-07 |
20190340089 | METHOD AND APPARATUS TO PROVIDE UNINTERRUPTED OPERATION OF MISSION CRITICAL DISTRIBUTED IN-MEMORY APPLICATIONS - Data is mirrored in persistent memory in nodes in a computer cluster for redundancy. The data can be recovered from the persistent memory in a failed node by another node in the computer cluster through a low power network interface in the failed node. | 2019-11-07 |
20190340090 | CONTROL SYSTEM FOR A MOTOR VEHICLE, MOTOR VEHICLE, METHOD FOR CONTROLLING A MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT, AND COMPUTER-READABLE MEDIUM - A control system for a motor vehicle having a first control unit for controlling a first function of the motor vehicle, a second control unit for controlling a second function of the motor vehicle and a backup control unit. At least the first or the second control unit is connected in a signal-transmitting manner with the backup control unit. In order to ensure the proper execution of functions of a motor vehicle controlled by the control units with the least possible additional overhead, even with a faulty control unit, the backup control unit is configurable in response to the input of an error signal from the first or the second control unit such that the function of the motor vehicle corresponding to the faulty control unit can be controlled via the backup control unit. | 2019-11-07 |
20190340091 | EFFICIENT DATA RESTORATION - A data center communicates with a cloud-based backup system. Client-server roles are established such that a client role is assigned to the data center and a server role is assigned to the cloud-based backup system. On an ongoing basis, backup operations are performed. In the event of disaster or other cause of an outage of the data center, a failover protocol might be invoked such that the cloud-based backup system takes on additional processing operations beyond the aforementioned backup operations. After remediation, the data center issues a data restoration message to the cloud-based backup system. The remediated data center initiates a failback protocol that reverses the client-server roles of the data center and the cloud-based backup system such that the server role is assigned to the data center and the client role is assigned to the cloud-based backup system. After performing system restoration operations, the roles may be reversed again. | 2019-11-07 |
20190340092 | LINK DOWNGRADE DETECTION SYSTEM - A link downgrade detection system includes an interface that is coupled to an endpoint device. The endpoint device is configured to provide an endpoint link that includes a first link capability at a maximum first link capability level and a second link capability at a maximum second link capability level. The endpoint device stores a working first link capability level and a working second link capability level in a first memory device included on the endpoint device. A BIOS coupled to the chassis interface enumerates the endpoint device, determines an actual first link capability level and an actual second link capability level, and retrieves the working link capability levels. The BIOS then determines, based on the working link capability levels and the actual link capability levels that the endpoint link is downgraded, and in response provides a notification that the endpoint link of the endpoint device is downgraded. | 2019-11-07 |
20190340093 | Widget Provisioning of User Experience Analytics and User Interface / Application Management - A method for tracking user interactions with an application includes: storing the application in a memory of a mobile device, the application being associated with an instrumented widget and a library, the widget including an event logger; executing the application and the widget; receiving, through a user interface of the mobile device, an input corresponding to the event logger of the widget; logging, by the library, the input corresponding to the event logger of the widget in the memory of the mobile device; filtering a plurality of events, including the input corresponding to the event logger of the widget, to manage what data is reported to a monitor; and transmitting the input corresponding to the event logger of the widget to a server as monitored data. | 2019-11-07 |
20190340094 | COMPUTING SYSTEM MONITORING - Systems for alerting in computing systems. A method commences by defining a plurality of analysis zones bounded by respective ranges of system metric values, which ranges in turn correspond to a plurality of system behavior classifications. System observations are taken while the computing system is running. A system observation comprising a measured metric value is classified into one or more of the behavior classifications. Based on the classification, one or more alert analysis processes are invoked to analyze the system observation and make a remediation recommendation. An alert or remediation is raised or suppressed based on one or more zone-based analysis outcomes. An alert is raised when anomalous behavior is detected. The system makes ongoing observations to learn how and when to classify a measured metric value into normal or anomalous behaviors. As changes occur in the system configuration, the analysis zones are adjusted to reflect changing bounds of the zones. | 2019-11-07 |
20190340095 | PREDICTING PERFORMANCE OF APPLICATIONS USING MACHINE LEARNING SYSTEMS - A method is used in predicting performance of applications using machine learning systems. A machine learning system is trained on a sample server executing an application. An expected performance of the application is determined using the machine learning system for a server having different characteristics than the sample server by predicting the expected performance of the application on the server without having to actually measure a performance of the application on the server. | 2019-11-07 |
20190340096 | PERFORMANCE EVALUATION AND COMPARISON OF STORAGE SYSTEMS - Described embodiments provide storage system evaluation and comparison processes. An aspect includes sampling data points for a workload running on a system over a sampling period. The data points indicate a performance metric with respect to operational characteristics of the system. An aspect further includes subtracting a system specification value from each of the averaged sampled data points, thereby producing deviation values reflecting a deviation of the sampled data points from the system specification value. An aspect also includes averaging the sampled data points, calculating a standard deviation of the averaged sampled data points, and dividing the deviation values by the standard deviation, thereby producing a modified performance value that accounts for a deviation in the operational characteristics of the system over the sampling period. | 2019-11-07 |
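The metric computation described in 20190340096 can be sketched as follows. This is a hypothetical reading of the abstract: the function name, the sample values, and the exact ordering of the averaging and deviation steps are illustrative assumptions, not taken from the patent.

```python
import statistics

def modified_performance(samples, spec_value):
    """Score sampled performance data against a system specification value,
    normalizing the deviation by variability over the sampling period.
    (Illustrative interpretation of the abstract, not the patented method.)"""
    mean = statistics.fmean(samples)      # average the sampled data points
    deviation = mean - spec_value         # deviation from the specification value
    spread = statistics.stdev(samples)    # variability over the sampling period
    return deviation / spread             # deviation in units of variability

print(modified_performance([98, 102, 101, 99, 105], 100))
```

A score near zero would indicate the workload tracks the specification closely relative to its own variability; a large magnitude would flag a systematic gap.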
20190340097 | DIAGNOSTIC DATA CAPTURE - Statistical sampling of diagnostic data within an apparatus for processing data. | 2019-11-07 |
20190340098 | SYSTEM AND METHOD FOR PROVIDING AUDIO PROCESSING AND INVENTORY TRACKING AND MANAGEMENT IN A CLOUD ENVIRONMENT - One embodiment provides a method for inventory tracking and management in a cloud environment. The method comprises maintaining a plurality of on demand computing resources in the cloud environment. The computing resources include one or more cloud applications. The method further comprises creating a job-specific device by flexibly configuring an end user device connected to the cloud environment to execute a specific job, and tracking and managing usage of the job-specific device utilizing at least one of the computing resources. | 2019-11-07 |
20190340099 | SYSTEM RESOURCE COMPONENT UTILIZATION - A computer implemented method including receiving a set of utilization metrics for a system comprising at least an average number of concurrent requests to the system and a maximum concurrency that the system is capable of supporting, providing a function that incorporates two curve segments, computing a utilization according to a ratio of the average concurrent requests to the function, and managing performance problems indicated by the utilization. A computer implemented method including receiving a set of response time metrics comprising at least an average response time, average concurrent requests, and a minimum interference response time, computing a current response ratio of the minimum interference response time and the average response time, computing a maximum response ratio corresponding to a maximum concurrency, determining the maximum concurrency is inaccurate by comparing the maximum response ratio and the current response ratio, and replacing the maximum concurrency. | 2019-11-07 |
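One hedged interpretation of the two-curve-segment utilization computation in 20190340099 is sketched below. The knee position, the shapes of the two segments, and all names are illustrative assumptions; the patent does not specify them in the abstract.

```python
def capacity_curve(n_avg, n_max, knee=0.5):
    """Hypothetical two-segment capacity function: constant up to a knee,
    then shrinking as concurrency approaches the supported maximum."""
    threshold = knee * n_max
    if n_avg <= threshold:
        return n_max                      # first segment: full headroom
    # second segment: effective capacity falls off past the knee
    return n_max - (n_avg - threshold)

def utilization(n_avg, n_max):
    """Utilization as the ratio of average concurrent requests to the
    two-segment function, per the abstract's description."""
    return n_avg / capacity_curve(n_avg, n_max)

print(utilization(10, 100))   # below the knee
print(utilization(80, 100))   # above the knee: utilization rises faster
```

The point of the piecewise form is that utilization grows linearly while the system has headroom and super-linearly once average concurrency crowds the maximum, which is when performance problems would be flagged.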
20190340100 | APPLICATION CURATION - Methods, systems and computer program products for user-specific curation of applications from heterogeneous application sources. Multiple components are interconnected to perform user-specific curation operations. The user-specific curation operations comprise accessing application metadata corresponding to a plurality of applications from a plurality of application sources. The application sources may be heterogeneous and may be situated at local sites or at remote sites. A set of rules are applied to the application metadata to determine if one or more applications are authorized for use by a particular user or group. Publication attributes that control accessibility by a particular user or particular group of users are associated with the authorized applications. Based on the publication attributes as they apply to a particular user, one or more curated applications are selected from the authorized applications. A user-specific application marketplace is presented in a user interface to show a portion of the user-specific curated applications. | 2019-11-07 |
20190340101 | SYSTEM, COMPUTER PROGRAM PRODUCT AND METHOD FOR ENHANCED PRODUCTION ENVIRONMENT BEHAVIOR MIRRORING E.G. WHILE CONDUCTING PILOT ON PROOF-OF-CONCEPT (POC) PLATFORMS - A method for running proof-of-concepts for software solutions, including receiving, from an enterprise, an indication of network locations of servers in a production environment for a software solution selected from among plural candidate software solutions participating in a proof-of-concept running in a proof-of-concept (aka PoC) environment on a PoC platform; providing at least one recording, uploaded onto the platform, of traffic between the servers in the production environment; providing a mapping of the network locations to, respectively, PoC platform local network addresses of servers within the PoC environment; adapting the recording by replacing each occurrence of an individual one of the network locations, within the recording, with a PoC environment server local network PoC platform address to which the individual one was mapped, thereby to generate at least one adapted file; and replaying the at least one adapted file on the servers within the PoC environment. | 2019-11-07 |
20190340102 | Dataflow Analysis to Reduce the Overhead of On Stack Replacement - An approach is provided in which an information handling system selects an assumption point in a software program corresponding to a compile-time assumption made by a compiler, and selects an assumption violation point in the software program corresponding to a location at which the compile-time assumption can be violated at runtime. The information handling system propagates backwards in the software program from the assumption point and reaches the assumption violation point. The information handling system determines that the assumption point corresponds to a first method and the assumption violation point corresponds to a second method that is different from the first method, and inserts a conditional transition in the software program at the assumption violation point. The information handling system executes a compiled version of the software program that includes the conditional transition. | 2019-11-07 |
20190340103 | EXECUTION CONTROL WITH CROSS-LEVEL TRACE MAPPING - Described technologies aid execution control during replays of traced program behavior. Cross-level mapping correlates source code, an intermediate representation, and native instructions in a trace. The trace includes a record of native code instructions which were executed by a runtime-managed program. The trace does not include any executing instance of the runtime. Breakpoints are set to align trace locations with source code expressions or statements, and to skip over garbage collection and other code unlikely to interest a developer. A live debugging environment is adapted to support trace-based reverse execution. An execution controller in a debugger or other tool may utilize breakpoint ranges, cross-level mappings, backward step-out support, and other items to control a replay execution of the trace. Aspects of familiar compilers or familiar runtimes may be re-purposed for innovative execution control which replays previously generated native code, as opposed to their established purpose of generating native code. | 2019-11-07 |
20190340104 | ERROR FINDER TOOL - According to some embodiments, systems and methods are provided, comprising receiving at least one filter parameter in a filter parameter field and tracing features for tracing execution of an application, wherein the application includes a source code; executing the application while tracing the execution, based on the tracing features, to generate a trace; analyzing the generated trace; and determining a portion of the source code associated with a software bug based on the analysis. Numerous other aspects are provided. | 2019-11-07 |
20190340105 | ADVANCED BINARY INSTRUMENTATION FOR DEBUGGING AND PERFORMANCE ENHANCEMENT - Systems and methods for integrating, into a first software program binary, segments of a second software program are disclosed. The integration causes the execution of segments of the second software program as the first software program binary is executed. In one embodiment, a second software program, such as an embeddable software application, is received and divided into a plurality of segments, each segment corresponding to a portion of the embeddable software application. Instrumentation points corresponding to the segments of the embeddable software application are inserted into a plurality of locations within a software binary to create a modified software binary. The modified software binary thus includes the selected software binary and the embeddable software program. | 2019-11-07 |
20190340106 | DEBUGGING SUPPORT APPARATUS AND DEBUGGING SUPPORT METHOD - A debugging support apparatus supports debugging of a sequence program executed by a control apparatus. The debugging support apparatus includes a recording unit and a graph display processing unit which is a presentation processing unit. The recording unit records step numbers which are order information indicating the execution order of arithmetic processing for components constituting the sequence program, and operation data handled in step-by-step arithmetic processing. The graph display processing unit presents a relationship between the order information and the operation data. | 2019-11-07 |
20190340107 | SIGNAL CONTROL CIRCUIT - According to one embodiment, a signal control circuit includes a high-speed serial bus I/F circuit, a data conversion circuit, a trace circuit, and a memory arbitration circuit. The high-speed serial bus I/F circuit receives serial data from an external device by high-speed serial bus communication, and converts the serial data to parallel data. The data conversion circuit converts one of the parallel data to common data to be stored in an external memory. The trace circuit converts the other parallel data to trace data to be stored in the external memory. The memory arbitration circuit stores the common data in a common memory area of the external memory, stores the trace data in a trace memory area being different from the common memory area of the external memory, and when null is supplied from outside, does not store the trace data in the trace memory area. | 2019-11-07 |
20190340108 | SYSTEM AND METHOD FOR MICROSERVICE VALIDATOR - Example implementations described herein are directed to systems and methods for validating and deploying microservices. In an example implementation, a plurality of similarities are calculated between a user environment and multiple pilot environments from application deployment test results. The test results are based on compatibility of catalogs of applications with each of the pilot environments. A list is presented with one or more of the catalogs of applications that are indicated as compatible and similar to the user environment based on the calculated similarities. The user can select a catalog from the list to be deployed in the user environment. | 2019-11-07 |
20190340109 | POST-UPGRADE DEBUGGING IN A REMOTE NETWORK MANAGEMENT PLATFORM - An example embodiment may involve receiving, from a client device, a request to access a web-based resource of a computational instance. One or more server devices disposed within the instance may be configured to be able to execute a plurality of program code units. A software application may be configured to identify one or more of the program code units that, since a previous software release for the instance or in a subsequent software release for the instance, have been modified or added, and store a corresponding change indication for each identified program code unit. The embodiment may also involve, as part of carrying out the request, executing a subset of the program code units, and may further involve generating and providing for display a representation of the web-based resource including a region specifying each of the subset of program code units for which there is a stored change indication. | 2019-11-07 |
20190340110 | POC PLATFORM WHICH COMPARES STARTUP S/W PRODUCTS INCLUDING EVALUATING THEIR MACHINE LEARNING MODELS - A proof-of-concept (PoC) method comprising: on a networked platform, serving a population of enterprise end-users and a population of ISV end-users, on which PoCs are run, providing a PoC-defining user interface via which at least one enterprise end-user generates a definition of at least one PoC; and using a processor to automatically assess whether an individual machine learning model, embodied in a body of code of an individual software product registered for an individual PoC, is suitable for the individual PoC as defined by the definition. | 2019-11-07 |
20190340111 | METHOD OF TESTING PROTOTYPE LINKED WITH EXISTING APPLICATION - Provided is a method of testing a prototype linked with an application without rebuilding the application. The method is performed by a user terminal in which the application is installed and comprises activating the application built to comprise a prototype controller, loading the prototype and setting the prototype to be displayed on a prototype area, which is at least part of a prototype controller area allocated to the prototype controller, in a screen of the application by using the prototype controller, receiving at least some of the input events generated for the application with top priority by using the prototype controller, and sending the received input events to the application as they are or sending the received input events to the loaded prototype by using the prototype controller. | 2019-11-07 |
20190340112 | TEST DEVICE, TEST METHOD, AND COMPUTER READABLE MEDIUM - In a test device, a communication unit sequentially receives messages partially including a test target signal of a plurality of bits transmitted from an ECU. A judgment unit checks the value of the test target signal included in a message received by the communication unit in a first duration against a first expected value, checks the value of the test target signal included in a message received by the communication unit in a second duration different from the first duration against a second expected value acquired by inverting each bit of the first expected value, and judges a test on the ECU as pass or fail based on both of the check results. | 2019-11-07 |
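The pass/fail judgment in 20190340112, where the second expected value is formed by inverting each bit of the first, could look like the following sketch (the signal width, the function names, and the example values are assumptions for illustration):

```python
def invert_bits(value, width):
    """Invert each bit of a width-bit test target signal value."""
    return value ^ ((1 << width) - 1)

def judge(first_signal, second_signal, first_expected, width):
    """Pass only if the first-duration signal matches the first expected
    value AND the second-duration signal matches its bitwise inversion."""
    second_expected = invert_bits(first_expected, width)
    return first_signal == first_expected and second_signal == second_expected

print(judge(0b1010, 0b0101, 0b1010, 4))   # True: second duration carries the inverted pattern
```

Checking against both a value and its complement is a common way to catch stuck-at faults: a bit line stuck at 0 or 1 would pass one of the two checks but not both.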
20190340113 | TEST MANAGER TO COORDINATE TESTING ACROSS MULTIPLE TEST TOOLS - In some examples, a server may perform various operations, including receiving a set of tests to be performed across multiple software components, receiving one or more inputs, selecting a first test of the set of tests, instructing a first test tool to perform the first test on a first software component using the one or more inputs, and receiving first results from the first test tool performing the first test to the first software component using the one or more inputs. The multiple software components may include at least the first software component written in a first language and tested with the first test tool and a second software component written in a second language and tested with a second test tool. The operations may include selecting a second test, instructing a second test tool to perform the second test to a second software component, and receiving second results. | 2019-11-07 |
20190340114 | METHOD AND APPARATUS FOR AUTOMATIC TESTING OF WEB PAGES - A computer-implemented method, apparatus and computer program product, the method comprising: obtaining attribute weights associated with element attributes in a web page comprising elements, in regard of a specific element to be operated upon, a first margin, and a second margin; based on the attribute weights, determining a probability for each element in the web page to be the specific element; determining a first threshold indicating a difference between probabilities of two elements having the highest probabilities; determining a second threshold indicating a difference between a probability of an element having the highest probability and one; based on the first threshold, second threshold, first margin and second margin, determining whether the element having the highest probability is the specific element; and subject to the specific element being identified, performing an action upon the specific element. | 2019-11-07 |
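The two-threshold decision in 20190340114 might be sketched as follows. The attribute-weight scoring is abstracted into precomputed per-element probabilities, and the margins, names, and example values are illustrative assumptions:

```python
def identify_element(probs, first_margin, second_margin):
    """Decide whether the most probable element is the target element.

    probs: mapping of element id -> probability of being the target.
    The gap between the top two probabilities must exceed first_margin,
    and the top probability must be within second_margin of 1.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    best_id, best_p = ranked[0]
    runner_up_p = ranked[1][1] if len(ranked) > 1 else 0.0
    first_threshold = best_p - runner_up_p    # difference between the two best
    second_threshold = 1.0 - best_p           # distance of the best from one
    if first_threshold >= first_margin and second_threshold <= second_margin:
        return best_id                        # confident: act on this element
    return None                               # ambiguous: do not act

print(identify_element({"btn_submit": 0.92, "btn_cancel": 0.30}, 0.2, 0.1))  # btn_submit
```

Requiring both conditions guards against two failure modes separately: a close runner-up (ambiguous page) and a low absolute probability (no good match at all).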
20190340115 | SYSTEM AND METHOD FOR AUTOMATED THIN CLIENT CONTACT CENTER AGENT DESKTOP TESTING - A system for centralized testing of web-based agent desktops has been devised. The invention uses a test control portal. The test control portal acts as the interface between the client interaction software testing system and the analyst-controlled test device, executes an extensive set of robust test directive commands with underlying routines used to specify test conditions without requiring programming ability on the part of the analyst, and uses a robust set of report item and format choice designators to allow easy selection of a range of report content and styles. | 2019-11-07 |
20190340116 | SHARED BACKUP UNIT AND CONTROL SYSTEM - In a shared backup ECU, a diagnostic section diagnoses an abnormality in a plurality of ECUs which, in order to perform an individual function, execute a program that is different according to the function. A loading section loads, from a storage section storing a plurality of programs in advance, a program which is the same as a program to be executed by an abnormal unit being an ECU whose abnormality has been detected by the diagnostic section. An execution section executes the program loaded by the loading section, thereby performing a function which is the same as a function of the abnormal unit on behalf of the abnormal unit. | 2019-11-07 |
20190340117 | GUARANTEED FORWARD PROGRESS MECHANISM - An apparatus to facilitate guaranteed forward progress for graphics data is disclosed. The apparatus includes a plurality of ports to receive and transmit streams of graphics data, one or more buffers associated with each of the plurality of ports to store the graphics data and switching logic to virtually partition each of the one or more buffers to allocate a dedicated buffer to receive each of a plurality of independent streams of graphics data. | 2019-11-07 |
20190340118 | In-Memory Database Page Allocation - A provisional page to be filled with data is allocated in an in-memory database system in which pages are loaded into memory and which has associated physical disk storage. Thereafter, the provisional page is filled with data. The provisional page is registered after the provisional page has been filled with data such that consistent changes in the database are not required for the provisional page prior to the registering. | 2019-11-07 |
20190340119 | COMPUTER MEMORY MANAGEMENT WITH PERSISTENT BACKUP COPIES - A method, system, and computer readable storage medium for managing computer memory by an intelligent memory manager. The intelligent memory manager performs a method including: initializing a memory allocator within an intelligent memory manager in a computing system; allocating, by the memory allocator, a plurality of main memory objects; backing up, with the intelligent memory manager, at least one main memory object in the plurality of main memory objects in a persistent storage utilizing a backup operation; monitoring, with the intelligent memory manager, input-output bandwidth being consumed for storing information in the persistent storage; and modifying, with the intelligent memory manager, the backup operation based on monitoring the bandwidth being consumed. | 2019-11-07 |
20190340120 | METHOD, APPARATUS FOR DATA MANAGEMENT, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM - A method for data management by a computer coupled to a solid state drive (SSD), the SSD being configured to include blocks and channels, each of the blocks being a first area as a unit of data deletion and being configured to include pages, each of the pages being a second area as a unit of data access in the SSD, each of the channels being a transmission and reception route of data to and from the block, the method includes: executing allocation processing for allocating, to a management target having a determined fixed length, a logical block including the blocks coupled to different channels, executing management processing for, when a size of a division management target is more than a size of the page, allocating the division management target to the pages coupled to the different channels included in the logical block allocated to the management target. | 2019-11-07 |
20190340121 | ACCELERATING GARBAGE COLLECTION OF FLUSHED LOGICAL ERASE BLOCKS IN NON-VOLATILE MEMORY - A controller of a non-volatile memory tracks identifiers of logical erase blocks (LEBs) for which programming has closed. A first subset of the closed LEBs tracks LEBs that are ineligible for selection for garbage collection, and a second subset of the closed LEBs tracks LEBs that are eligible for selection for garbage collection. The controller continuously migrates closed LEBs from the first subset to the second subset over time. In response to closing a particular LEB, the controller places an identifier of the particular LEB into one of the first and second subsets selected based on a first amount of dummy data programmed into the closed LEBs tracked in the first subset. Thereafter, in response to selection of the particular LEB for garbage collection, the controller performs garbage collection on the particular LEB. | 2019-11-07 |
20190340122 | DATA STORAGE LAYOUT - Examples of the present disclosure provide apparatuses and methods for determining a data storage layout. An example apparatus comprising a first address space of a memory array comprising a first number of memory cells coupled to a plurality of sense lines and to a first select line. The first address space is configured to store a logical representation of a first portion of a value. The example apparatus also comprising a second address space of the memory array comprising a second number of memory cells coupled to the plurality of sense lines and to a second select line. The second address space is configured to store a logical representation of a second portion of the value. The example apparatus also comprising sensing circuitry configured to receive the value and perform a logical operation using the value without performing a sense line address access. | 2019-11-07 |
20190340123 | CONTROLLER FOR LOCKING OF SELECTED CACHE REGIONS - Examples provide an application program interface or other means by which an application, software, or hardware negotiates locking (e.g., pinning) or unlocking (e.g., unpinning) of a cache region. A cache region can be part of a level-1 cache, a level-2 cache, a lower-level or last-level cache (LLC), or a translation lookaside buffer (TLB). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB. | 2019-11-07 |
20190340124 | DETECTION CIRCUITRY - An apparatus ( | 2019-11-07 |
20190340125 | APPARATUSES AND METHODS TO PERFORM CONTINUOUS READ OPERATIONS - Apparatuses, systems, and methods to perform continuous read operations are described. A system configured to perform such continuous read operations enables improved access to and processing of data for performance of associated functions. For instance, one apparatus described herein includes a memory device having an array that includes a plurality of pages of memory cells. The memory device includes a page buffer coupled to the array and a continuous read buffer. The continuous read buffer includes a first cache to receive a first segment of data values and a second cache to receive a second segment of the data values from the page buffer. The memory device is configured to perform a continuous read operation on the first and second segments of data from the first cache and the second cache of the continuous read buffer. | 2019-11-07 |
20190340126 | TABLE OF CONTENTS CACHE ENTRY HAVING A POINTER FOR A RANGE OF ADDRESSES - Table of contents (TOC) pointer cache entry having a pointer for a range of addresses. An address of a called routine and a pointer value of a pointer to a reference data structure to be entered into a reference data structure pointer cache are obtained. The reference data structure pointer cache includes a plurality of entries, and an entry of the plurality of entries includes a stored pointer value for an address range. A determination is made, based on the pointer value, whether an existing entry exists in the reference data structure pointer cache for the pointer value. Based on determining the existing entry exists, one of an address_from field of the existing entry or an address_to field of the existing entry is updated using the address of the called routine. The stored pointer value of the existing entry is usable to access the reference data structure for the address range defined by the address_from field and the address_to field. | 2019-11-07 |
20190340127 | TABLE OF CONTENTS CACHE ENTRY HAVING A POINTER FOR A RANGE OF ADDRESSES - Table of contents (TOC) pointer cache entry having a pointer for a range of addresses. An address of a called routine and a pointer value of a pointer to a reference data structure to be entered into a reference data structure pointer cache are obtained. The reference data structure pointer cache includes a plurality of entries, and an entry of the plurality of entries includes a stored pointer value for an address range. A determination is made, based on the pointer value, whether an existing entry exists in the reference data structure pointer cache for the pointer value. Based on determining the existing entry exists, one of an address_from field of the existing entry or an address_to field of the existing entry is updated using the address of the called routine. The stored pointer value of the existing entry is usable to access the reference data structure for the address range defined by the address_from field and the address_to field. | 2019-11-07 |
20190340128 | LOW-OVERHEAD INDEX FOR A FLASH CACHE - Systems and methods for a low-overhead index for a cache. The index is used to access content or segments in the cache by storing at least an identifier and a location. The index is accessed using the identifier. The identifier may be shortened or be a short identifier. Because a collision may occur, the index may also include one or more meta-data values associated with the data segment. Collisions can be resolved by also comparing the metadata of the segment with the metadata stored in the index. If both the short identifier and metadata match those of the segment, the segment is likely in the cache and can be accessed. Segments can also be inserted into the cache. | 2019-11-07 |
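A minimal sketch of the short-identifier index with metadata-based collision resolution described in 20190340128 follows. The choice of hash, the short-identifier width, and the particular metadata fields are assumptions; the abstract only requires that a short identifier plus per-segment metadata together confirm a likely cache hit:

```python
import hashlib

class FlashCacheIndex:
    """Sketch of a low-overhead index: segments are keyed by a truncated
    (short) identifier, and stored metadata resolves hash collisions."""

    def __init__(self, short_bits=16):
        self.mask = (1 << short_bits) - 1
        self.buckets = {}                 # short id -> list of (metadata, location)

    def _short_id(self, data):
        digest = hashlib.sha256(data).digest()
        return int.from_bytes(digest[:8], "big") & self.mask  # truncated identifier

    def _meta(self, data):
        return (len(data), data[-1:])     # cheap per-segment metadata (illustrative)

    def insert(self, data, location):
        self.buckets.setdefault(self._short_id(data), []).append((self._meta(data), location))

    def lookup(self, data):
        for stored_meta, location in self.buckets.get(self._short_id(data), []):
            if stored_meta == self._meta(data):   # short id AND metadata must match
                return location                   # segment is likely in the cache
        return None                               # definite miss

idx = FlashCacheIndex()
idx.insert(b"segment-A", 0)
idx.insert(b"segment-B", 4096)
print(idx.lookup(b"segment-A"))   # 0
print(idx.lookup(b"segment-C"))   # None
```

The overhead saving comes from storing only a few bytes per entry instead of a full fingerprint; the metadata comparison keeps the false-hit rate acceptably low despite the truncated key.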
20190340129 | UNIFIED IN-MEMORY CACHE - A pinned memory space for caching data can be provided in a data node. The data that is cached in the pinned memory space can be prevented from being swapped out. A virtual address can be assigned to the data. The virtual address can be mapped to a memory address of the data in the pinned memory space for accessing the data by an application. A first command can be received from the application for caching the data. The first command can indicate an attribute associated with the caching of the data. Responsive to receiving the first command from the application for caching the data, the data associated with the first command can be cached by storing the attribute in association with the data in the pinned memory space. | 2019-11-07 |
20190340130 | METHODS AND SYSTEMS FOR HANDLING DATA RECEIVED BY A STATE MACHINE ENGINE - A data analysis system to analyze data. The data analysis system includes a data buffer configured to receive data to be analyzed. The data analysis system also includes a state machine lattice. The state machine lattice includes multiple data analysis elements and each data analysis element includes multiple memory cells configured to analyze at least a portion of the data and to output a result of the analysis. The data analysis system includes a buffer interface configured to receive the data from the data buffer and to provide the data to the state machine lattice. | 2019-11-07 |
20190340131 | TAPE DRIVE WITH INTELLIGENT SELECTION OF WRAP / TRACK FOR TEMPORARILY STORING FLUSHED DATA - A tape drive that can select one or more wraps from any available wraps on a tape medium for writing temporary data upon detecting a flush condition. The one or more wraps selected for writing temporary data can be selected from wraps otherwise reserved for normal writing operations. Selection of the one or more wraps for temporary writing may be based on multiple considerations, including proximity to the wrap of current data writing operations and tape medium degradation. The one or more wraps selected for writing temporary data may be selected with or without regard of their assigned read/write direction. Assigning wraps based on proximity and/or degradation can lead to certain operational advantages including reducing tape write head movement in the transverse direction and spreading tape medium wear more evenly across the surface of the tape medium. | 2019-11-07 |
20190340132 | FLUSHING PAGES FROM SOLID-STATE STORAGE DEVICE - Embodiments of the present disclosure relate to a method and device for flushing pages from a solid-state storage device. Specifically, the present disclosure discloses a method of flushing pages from a solid-state storage device comprising: determining a first number based on a period length of one flushing cycle and a period length required for building one flushing transaction, the first number indicating a maximum number of flushing transactions that can be built in the flushing cycle; and flushing pages from the solid-state storage device with an upper limit of the first number in the flushing cycle. The present disclosure also discloses a device for flushing pages from a solid-state storage device and a computer program product for implementing steps of a method of flushing pages from a solid-state storage device. | 2019-11-07 |
20190340133 | VIRTUALIZING NVDIMM WPQ FLUSHING WITH MINIMAL OVERHEAD - Techniques for virtualizing NVDIMM WPQ flushing with minimal overhead are provided. In one set of embodiments, a hypervisor of a computer system can allocate a virtual flush hint address (FHA) for a virtual machine (VM), where the virtual flush hint address is associated with one or more physical FHAs corresponding to one or more physical memory controllers of the computer system. The hypervisor can further determine whether one or more physical NVDIMMs of the computer system support WPQ flushing. If so, the hypervisor can write protect a guest physical address (GPA) to host physical address (HPA) mapping for the virtual FHA in the page tables of the computer system, thereby enabling the hypervisor to trap VM writes to the virtual FHA and propagate those writes to the physical FHAs of the system. | 2019-11-07 |
20190340134 | CONFIGURABLE MEMORY SYSTEM AND METHOD OF CONFIGURING AND USING SUCH MEMORY SYSTEM - Memory systems that include NAND flash memory and dynamic random access memory (DRAM) are configured to allow a considerably higher ratio of NAND to DRAM without a significant increase in write amplification. The NAND includes a logical-to-physical (L2P) table. The DRAM includes a buffer divided into regions, an update table of recently written data and linked lists, one for each region of the buffer linking all items in the update table in that region, the DRAM maintaining a set of linked lists, each identifying all regions with the same number of updates in the update table. | 2019-11-07 |
20190340135 | CACHE EVICTION SCHEME FOR ACCEPTABLE SUBSTITUTES IN ONLINE MEDIA - Among other things, this document describes systems, devices, and methods for improving cache performance when caching multiple versions of an object. In some embodiments, a network cache can execute a cache eviction algorithm that considers the versatility of object versions when making eviction decisions. The techniques described herein can be applied to a wide variety of media objects, such as an original image and a set of derivative images in various formats, sizes, or compression levels. A versatile version is versatile because it can be substituted for one or more other versions requested by a client. Hence, the techniques described herein may prefer, under certain conditions, to evict from a network cache less versatile versions prior to evicting more versatile versions. | 2019-11-07 |
20190340136 | STORAGE EFFICIENCY OF ENCRYPTED HOST SYSTEM DATA - A storage controller, coupled to a storage array comprising one or more storage devices, performs at least one data reduction operation on decrypted data, encrypts the reduced data using a second encryption key to generate second encrypted data, and stores the second encrypted data on the storage array. | 2019-11-07 |
20190340137 | WIRELESS DOCKING - A method of controlling a wireless docking station, which has one or more peripheral devices connected thereto, which are controllable from a mobile device when the mobile device is docked with the wireless docking station. The method involves receiving (S | 2019-11-07 |
20190340138 | SEPARATING COMPLETION AND DATA RESPONSES FOR HIGHER READ THROUGHPUT AND LOWER LINK UTILIZATION IN A DATA PROCESSING NETWORK - In a data processing network comprising Request, Home and Slave Nodes coupled via a coherent interconnect, a Home Node performs a read transaction in response to a read request from a Request Node. In a first embodiment, the transaction is terminated in the Home Node upon receipt of a read receipt from a Slave Node, acknowledging a read request from the Home Node. In a second embodiment, the Home Node sends a message to the Request Node indicating that a read transaction has been ordered in the Home Node and further indicating that data for the read transaction is provided in a separate data response. The transaction is terminated in the Home Node upon receipt of an acknowledge from the Request Node of this message. In this manner, the transaction is terminated in the Home Node without waiting for acknowledgement from the Request Node of completion of the transaction. | 2019-11-07 |
20190340139 | COMPONENT LOCATION IDENTIFICATION AND CONTROLS - An information handling system may include a processor; a plurality of connectors configured to receive a corresponding plurality of information handling resources, wherein each connector is located within a particular physical region of the information handling system and is communicatively coupled to the processor; and a management controller communicatively coupled to the processor and configured to provide out-of-band management of the information handling system. The management controller may be further configured to store a data structure that includes a mapping between respective ones of the physical regions and the connectors located within those physical regions; receive a command relating to a particular physical region; and based on the mapping in the data structure, transmit the command, via the connectors that are located in the particular physical region, to the information handling resources received by those connectors. | 2019-11-07 |
20190340140 | CERTIFIABLE DETERMINISTIC SYSTEM SOFTWARE FRAMEWORK FOR HARD REAL-TIME SAFETY-CRITICAL APPLICATIONS IN AVIONICS SYSTEMS FEATURING MULTI-CORE PROCESSORS - An avionics system comprising a central processing unit to implement one or more hard real-time safety-critical applications, the central processing unit comprises a multi-core processor with a plurality of cores, an avionics system software executable by the multi-core processor, a memory, and a common bus through which the multi-core processor can access the memory; the avionics system is characterized in that the avionics system software is designed to cause, when executed, the cores in the multi-core processor to access the memory through the common bus by sharing bus bandwidth according to assigned bus bandwidth shares. | 2019-11-07 |
20190340141 | DDR5 RCD INTERFACE PROTOCOL AND OPERATION - An apparatus including a host interface and a registered clock driver interface. The host interface may be configured to receive an enable command from a host. The registered clock driver interface may be configured to perform power management for a dual in-line memory module, generate data for the dual in-line memory module, communicate the data, receive a clock signal and communicate an interrupt signal. The registered clock driver interface may be disabled at power on. The registered clock driver interface may be enabled in response to the enable command. The apparatus may be implemented as a component on the dual in-line memory module. | 2019-11-07 |
20190340142 | DDR5 PMIC INTERFACE PROTOCOL AND OPERATION - An apparatus including a host interface and a power management interface. The host interface may be configured to receive control words from a host. The power management interface may be configured to (i) enable the host to read/write data from/to a power management circuit of a dual in-line memory module, (ii) communicate the data, (iii) generate a clock signal and (iv) communicate an interrupt signal. The power management interface is disabled at power on. The apparatus is configured to (i) decode the control words, (ii) enable the power management interface when the control words provide an enable command and (iii) perform a response to the interrupt signal. The clock signal may operate independently from a host clock. | 2019-11-07 |
20190340143 | Protocol including a command-specified timing reference signal - Apparatus and methods for operation of a memory controller, memory device and system are described. During operation, the memory controller transmits a read command which specifies that a memory device output data accessed from a memory core. This read command contains information which specifies whether the memory device is to commence outputting of a timing reference signal prior to commencing outputting of the data. The memory controller receives the timing reference signal if the information specified that the memory device output the timing reference signal. The memory controller subsequently samples the data output from the memory device based on information provided by the timing reference signal output from the memory device. | 2019-11-07 |
20190340144 | DETECTION CONTROL DEVICE - A detection control device including a USB connection port, a first detection circuit, a second detection circuit, a control circuit, a first switching circuit and a second switching circuit is provided. When a first pin group of the USB connection port is coupled to an external device, the first detection circuit generates a first detection signal according to a first time constant. When a second pin group of the USB connection port is coupled to the external device, the second detection circuit generates a second detection signal according to a second time constant. The control circuit generates a first control signal and a second control signal according to the first and second detection signals. Each of the first and second switching circuits communicates with the external device via the first or second pin groups according to either the first control signal or the second control signal. | 2019-11-07 |
20190340145 | PCIE TRAFFIC TRACKING HARDWARE IN A UNIFIED VIRTUAL MEMORY SYSTEM - Techniques are disclosed for tracking memory page accesses in a unified virtual memory system. An access tracking unit detects a memory page access generated by a first processor for accessing a memory page in a memory system of a second processor. The access tracking unit determines whether a cache memory includes an entry for the memory page. If so, then the access tracking unit increments an associated access counter. Otherwise, the access tracking unit attempts to find an unused entry in the cache memory that is available for allocation. If so, then the access tracking unit associates the second entry with the memory page, and sets an access counter associated with the second entry to an initial value. Otherwise, the access tracking unit selects a valid entry in the cache memory; clears an associated valid bit; associates the entry with the memory page; and initializes an associated access counter. | 2019-11-07 |
20190340146 | POWER MANAGEMENT OF RE-DRIVER DEVICES - An apparatus, such as a re-driver, can include a receiver port coupled to a first link partner across a first link; a transmitter port coupled to a second link partner across a second link; and a power management (PM) controller implemented in hardware. The PM controller can detect a PM control signal, determine a PM state for the apparatus based on the PM control signal, and cause the apparatus to enter the PM state. The apparatus can transmit electrical signals to the second link partner based on the PM state. The PM control signal can include a clock request, an electrical idle, a common mode voltage, or other electrical signal indicative of a PM link state change of a link partner coupled to the re-driver. | 2019-11-07 |
20190340147 | HIGH-PERFORMANCE STREAMING OF ORDERED WRITE STASHES TO ENABLE OPTIMIZED DATA SHARING BETWEEN I/O MASTERS AND CPUS - A data processing network and method of operation thereof are provided for efficient transfer of ordered data from a Request Node to a target node. The Request Node sends write requests to a Home Node and the Home Node responds to a first write request when resources have been allocated at the Home Node. The Request Node then sends the data to be written. The Home Node also responds with a completion message when a coherency action has been performed at the Home Node. The Request Node acknowledges receipt of the completion message with a completion acknowledgement message that is not sent until completion messages have been received for all write requests older than the first write request for the ordered data, thereby maintaining data order. Following receipt of the completion acknowledgement for the first write request, the Home Node sends the data to be written to the target node. | 2019-11-07 |
20190340148 | DYNAMIC PRESENTATION OF INTERCONNECT PROTOCOL CAPABILITY STRUCTURES - A device connected by a link to a host system can include a first port to receive a capability configuration message across a link and a message request receiving logic comprising hardware circuitry to identify a capability of the device identified in the capability configuration message, determine that the capability is to be presented or hidden from operation based on a capability hide enable bit in the capability configuration message, and configure a capability linked list to present or hide the capability based on the determination. The device can also include a message response generator logic comprising hardware circuitry to generate a response message indicating that the capability is to be presented or hidden from operation. The device can include a second port to transmit the response message across the link. | 2019-11-07 |
20190340149 | INTERFACE ARRANGEMENT ON A SYSTEM BOARD AND COMPUTER SYSTEM - An interface arrangement on a system board includes at least two data lines for a differential signal transmission, at least one first mounting location for at least one first connector and at least one second mounting location for at least one second connector, and a third mounting location for an integrated circuit, wherein at the at least one first mounting location the data lines are divided into first and second paths, at the at least one second mounting location, the second and first paths are joined, the third mounting location for the integrated circuit is arranged in the first path, and the at least one first and second connectors can be mounted at the at least one first and second mounting locations in a first or a second position, respectively, so that signals in the data lines run via the first path or via the second path. | 2019-11-07 |
20190340150 | SYSTEM FOR SHARING CONTENT BETWEEN ELECTRONIC DEVICES, AND CONTENT SHARING METHOD FOR ELECTRONIC DEVICE - According to an embodiment, the electronic device comprises a communication module, a memory configured to store contents and device information on a first external electronic device, and a processor configured to identify a communication mode between the electronic device and the first external electronic device, the communication mode being determined based on at least one of information on some contents selected from the contents or the device information, transmit at least part of the selected contents to a second external electronic device using the communication module based on the communication mode being a first communication mode, and transmit the at least part of the selected contents to the first external electronic device using the communication module based on the communication mode being a second communication mode. | 2019-11-07 |
20190340151 | SCALABLE COMMUNICATION SYSTEM - A centralized communication system (CCS) is disclosed that provides a modular, extendible, and scalable communication system that can exchange information between any information systems or networked devices. Information from a single source device or system can be selectively broadcast to one or more predetermined destination devices and systems rather than broadcast to every device on the network. Information may be filtered and processed at one or more selectable points in the communication flow between systems. In certain embodiments, an incoming message is received from the source device in the native message format using the native protocol of the source device and converted to an internal messaging format for internal handling within the CCS, then converted to the native message format of a receiving system and sent to the receiving system using its native protocol. In certain embodiments, a graphical representation of the topology of the CCS may be provided. | 2019-11-07 |
20190340152 | RECONFIGURABLE REDUCED INSTRUCTION SET COMPUTER PROCESSOR ARCHITECTURE WITH FRACTURED CORES - Systems and methods for reconfiguring a reduced instruction set computer processor architecture are disclosed. Exemplary implementations may: provide a primary processing core consisting of a RISC processor; provide a node wrapper associated with each of a plurality of secondary cores, the node wrapper comprising cache memory associated with each secondary core and a load/unload matrix associated with each secondary core; operate the architecture in a manner in which, for at least one core, data is read from and written to the cache memory in a control-centric mode; and selectively partition the secondary cores to operate in a streaming mode wherein data streams out of the corresponding secondary core into the main memory and other ones of the plurality of secondary cores. | 2019-11-07 |
20190340153 | MEMORY-BASED DISTRIBUTED PROCESSOR ARCHITECTURE - Distributed processors and methods for compiling code for execution by distributed processors are disclosed. In one implementation, a distributed processor may include a substrate; a memory array disposed on the substrate; and a processing array disposed on the substrate. The memory array may include a plurality of discrete memory banks, and the processing array may include a plurality of processor subunits, each one of the processor subunits being associated with a corresponding, dedicated one of the plurality of discrete memory banks. The distributed processor may further include a first plurality of buses, each connecting one of the plurality of processor subunits to its corresponding, dedicated memory bank, and a second plurality of buses, each connecting one of the plurality of processor subunits to another of the plurality of processor subunits. | 2019-11-07 |
20190340154 | Multi-Threaded, Self-Scheduling Processor - Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute a received instruction; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In another embodiment, the core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, to reserve a predetermined amount of memory space in a thread control memory to store return arguments, and to generate one or more work descriptor data packets to another processor or hybrid threading fabric circuit for execution of a corresponding plurality of execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed. | 2019-11-07 |
20190340155 | Event Messaging in a System Having a Self-Scheduling Processor and a Hybrid Threading Fabric - Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute a received instruction; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In another embodiment, the core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, to reserve a predetermined amount of memory space in a thread control memory to store return arguments, and to generate one or more work descriptor data packets to another processor or hybrid threading fabric circuit for execution of a corresponding plurality of execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed. | 2019-11-07 |
20190340156 | BATCH JOB PROCESSING USING A DATABASE SYSTEM - Disclosed are examples of systems, apparatus, methods and computer program products for batch job processing using a database system. In some implementations, a data object relationship structure of a first record can be identified. Based on a type of data dependency of the data object relationship structure, a first record and a second record can be determined to be associated. A first batch number can be assigned to the first record and the second record. A first batch job can be defined. It can be determined that a third record is not associated with the first record. A second batch number can be assigned to the third record and a second batch job can be defined. | 2019-11-07 |
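The batch-numbering scheme in the final entry above (20190340156) can be illustrated with a minimal sketch: records that are associated through a data dependency receive the same batch number, while an unrelated record opens a new batch. The `Record` class and its `depends_on` field are illustrative assumptions for this sketch, not details drawn from the published abstract.

```python
# Sketch only: the Record type and the notion of "depends_on" names are
# assumed for illustration; the abstract does not specify a data model.
from dataclasses import dataclass, field
from typing import Optional, Set, List

@dataclass
class Record:
    name: str
    depends_on: Set[str] = field(default_factory=set)  # names of parent records
    batch: Optional[int] = None

def assign_batches(records: List[Record]) -> List[Record]:
    """Assign batch numbers so that associated records share a batch.

    A record joins the batch of any already-numbered record it is
    associated with (it depends on that record, or that record depends
    on it); otherwise it starts a new batch.
    """
    next_batch = 1
    for rec in records:
        related = [r for r in records
                   if r.batch is not None
                   and (r.name in rec.depends_on or rec.name in r.depends_on)]
        if related:
            rec.batch = related[0].batch
        else:
            rec.batch = next_batch
            next_batch += 1
    return records
```

Run against the scenario in the abstract, a first and second record that are associated would share batch 1, and an unassociated third record would be assigned batch 2, with each batch then defining a separate batch job.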