50th week of 2019 patent application highlights part 39 |
Patent application number | Title | Published |
20190377605 | FUZZY MANAGEMENT OF HIGH-VOLUME CONCURRENT PROCESSES - Embodiments relate to dynamically configuring a swarm of processing bots to autonomously execute tasks corresponding to a request. A communications fabric enables broadcasts of processing and status data from individual bots to other bots, each of which can locally determine whether and/or how the communications are to affect its processing. | 2019-12-12 |
20190377606 | SMART ACCELERATOR ALLOCATION AND RECLAMATION FOR DEEP LEARNING JOBS IN A COMPUTING CLUSTER - Embodiments for accelerator allocation and reclamation for deep learning jobs in a computing cluster. Metrics are recorded of each accelerator of a set of accelerators allocated to a deep learning job including computing a gain of computational power by an additional allocation of new accelerators and computing a cost of transferring data among the new accelerators and the set of allocated accelerators. Ones of the new accelerators are allocated to the deep learning job or ones of the set of allocated accelerators assigned to perform the deep learning job are reclaimed upon determining an optimal accelerator topology by comparing the gain of computation power and the cost of transferring data. | 2019-12-12 |
20190377607 | PROCESSOR FOR IMPROVED PROCESS SWITCHING AND METHODS THEREOF - A processor includes processor memory arrays including one or more volatile memory arrays and one or more Non-Volatile Memory (NVM) arrays. Volatile memory locations in the one or more volatile memory arrays are paired with respective NVM locations in the one or more NVM arrays to form processor memory pairs. Process data is stored for different processes executed by at least one core of the processor in respective processor memory pairs. Processes are executed using the at least one core to directly access the process data stored in the respective processor memory pairs. | 2019-12-12 |
20190377608 | SHARING EXPANSION DEVICE, CONTROLLING METHOD AND COMPUTER USING THE SAME - A sharing expansion device, a controlling method and a computer using the same are provided. The computer has at least one first user account and a second user account. The first user account has been logged in to the computer. The computer is connected to a first input device and a first monitor. The first input device provides at least one first command. The sharing expansion device includes at least two first ports, a second port, a hub unit and a graphic processor. The first ports connect the computer and a second input device. The second input device provides at least one second command. The computer executes the first command and the second command by way of time division multiplexing. The computer provides a first frame and a second frame to the first monitor and the second monitor according to the first user account and the second user account respectively. | 2019-12-12 |
20190377609 | DETERMINING AN ALLOCATION OF STAGE AND DESTAGE TASKS BY USING A MACHINE LEARNING MODULE - Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to the storage unit is provided to a machine learning module. The machine learning module receives a computed number of stage tasks and a computed number of destage tasks. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks. | 2019-12-12 |
20190377610 | DETERMINING AN ALLOCATION OF STAGE AND DESTAGE TASKS BY TRAINING A MACHINE LEARNING MODULE - Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to the storage unit is provided to a machine learning module. The machine learning module receives a computed number of stage tasks and a computed number of destage tasks. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks. | 2019-12-12 |
20190377611 | RULE GENERATION AND TASKING RESOURCES AND ATTRIBUTES TO OBJECTS SYSTEM AND METHOD - Apparatus and associated methods relate to constructing a resource and attribute tasking solution to complete a user's objective with resource and attribute characteristics defining the tasked objects, in response to receiving a task definition, and satisfying a task definition constraint. An object includes a set of resource and attribute dimensions, each of which has a set of possible values. A tasking solution is the union of all valid solution vectors in the dimension space. The solution identifies all valid values to present to a user or other agent to make further decisions on further constraining resource and attribute characteristics for a final materializable tasking. In an illustrative example, the objective may be mapping resource supply to task demand. The task demand may be, for example, delivering a database-as-a-service platform configured based on rules generated to satisfy task definition capacity constraints. In some examples, the resource supply may include computing and communication resources configurable to provide access to the platform via a network cloud. Exemplary platforms delivered at different times may be configured with different resources and attributes, based on constructing a tasking solution determined from the available resource supply as a function of time. Various examples may advantageously provide optimized on-demand tasking solutions, configuring a user's available resource supply to complete the user's objective while satisfying task definition constraints. | 2019-12-12 |
20190377612 | VCPU Thread Scheduling Method and Apparatus - A virtual central processing unit (VCPU) thread scheduling method and apparatus includes obtaining a performance indicator required by a VCPU thread in a to-be-created VM, where the performance indicator indicates a specification feature required by the VM; creating the VCPU thread according to the performance indicator required by the VCPU thread; determining, from physical CPU information, a target physical CPU group that satisfies the performance indicator of the VCPU thread, where the physical CPU information includes at least one physical CPU group and each physical CPU group includes at least one physical CPU with a same performance indicator; and running the VCPU thread on at least one physical CPU in the target physical CPU group. | 2019-12-12 |
20190377613 | SYNCHRONIZATION OF HARDWARE UNITS IN DATA PROCESSING SYSTEMS - A data processing system includes one or more producer processing units operable to produce data outputs, and one or more consumer processing units operable to use a data output produced by a producer processing unit, and a synchronization unit that is operable to communicate with the one or more producer processing units and the one or more consumer processing units, so as to synchronize the production and use of data outputs by the producer and consumer processing units. | 2019-12-12 |
20190377614 | VERIFICATION OF ATOMIC MEMORY OPERATIONS - A computer-implemented method, computerized apparatus and computer program product for verification of atomic memory operations are disclosed. The method comprising: independently generating for each of a plurality of threads at least one instruction for performing an atomic memory operation of a predetermined type on an allocated shared memory location accessed by the plurality of threads; and, determining an evaluation function over arguments comprising values operated on or obtained in performing the atomic memory operation of the predetermined type on the allocated shared memory location by each of the plurality of threads; wherein the evaluation function is determined based on the atomic memory operation of the predetermined type such that a result thereof is not affected by an order in which each of the plurality of threads performs the atomic memory operation of the predetermined type on the allocated shared memory location. | 2019-12-12 |
20190377615 | INSTRUCTING THE USE OF APPLICATION PROGRAMMING INTERFACE COMMANDS IN A RUNTIME ENVIRONMENT - A method, computer system, and a computer program product for instructing the use of application programming interface (API) commands in a runtime environment is provided. The present invention may include receiving, by a computer processor, a source code with a high level language API command. The present invention may include accessing, by a computer processor, metadata for the source code and determining whether the metadata includes an instruction to be applied to the high level language API command, and applying, by a computer processor, the instruction to the high level language API command. The present invention may include processing, by a computer processor, the high level language API command to a low level code using a command translator, wherein the processing occurs after the applying the instruction. | 2019-12-12 |
20190377616 | PLAYER SOFTWARE ARCHITECTURE - Embodiments disclosed herein are related to a method that can include discovering, by a platform abstraction layer (PAL), configuration data related to hardware associated with a media display, generating and supporting, by the platform abstraction layer, a common set of platform application programming interfaces (APIs) for displaying content on the media display according to the discovered hardware, receiving, by a platform shim, check-in instructions from a content management system, and causing, by a media player engine, the media display to display the content in accordance with the check-in instructions by using the common set of platform APIs generated by the PAL. | 2019-12-12 |
20190377617 | DOMAIN AND EVENT TYPE-SPECIFIC CONSENSUS PROCESS FOR A DISTRIBUTED LEDGER - Domain and/or event type-specific consensus processes for distributed ledger are provided. A consensus request is received by a core consensus engine. The consensus request corresponds to an event, the event (i) corresponds to a domain and (ii) has a type, and the consensus request comprises information corresponding to the event. Information corresponding to the event and the type are provided to the processing manager corresponding to the domain. The processing manager identifies a set of processing objects based on the type. The processing manager calls at least one processing object of the set via a corresponding interface and provides information corresponding to the event to the called processing object. The processing object is executed to generate a corresponding object result. The processing manager generates an aggregate result based on the object results. The core consensus engine determines a consensus response based at least in part on the aggregate result. | 2019-12-12 |
20190377618 | SYSTEMS AND METHODS FOR TRIGGERING USER INTERFACE TOAST NOTIFICATIONS FOR HYBRID APPLICATIONS - In accordance with embodiments of the present disclosure, an information handling system may include a processor and a non-transitory computer-readable medium embodying a program of instructions, the program of instructions configured to, when read and executed by the processor: receive a toast notification trigger; and responsive to the toast notification trigger, launch an instance of an application associated with the toast notification trigger and communicate arguments related to a toast notification to the application via an application protocol of the application in order to bridge communication between legacy stack components and containerized stack solutions of an operating system, such that the application issues to the operating system a request to display a toast notification responsive to the toast notification trigger and completes a toast action responsive to user interaction with the toast notification. | 2019-12-12 |
20190377619 | AUTOMATICALLY GENERATING CONVERSATIONAL SERVICES FROM A COMPUTING APPLICATION - The automatic generation of one or more task-oriented conversational bots is disclosed. Illustratively, systems and methods are provided that allow for tracing the interactions of one or more computing applications inclusive of the interaction with one or more programmatic elements of the one or more computing applications, interaction with the graphical user interface(s) of the one or more computing applications, and/or the operation of the one or more computing environments on which the one or more computing applications are executing to collect various state data. The state data can be illustratively graphed to show the overall execution paths of one or more functions/operations of the one or more computing applications for use in generating one or more instructions representative of a desired task-oriented conversational bot that can be operatively executed through one or more application program interfaces of the one or more computing applications. | 2019-12-12 |
20190377620 | EXTENSIBILITY FOR THIRD PARTY APPLICATION PROGRAMMING INTERFACES - Techniques are disclosed for extending an API using remote, synchronous, user-defined extensions in a microservices environment. A request can be received to perform at least one action on at least one object type, the at least one action defined by an application programming interface (API). At least one extension associated with the at least one action and at least one object type can be determined. An object of the at least one object type and the at least one action can be performed on the object to generate an intermediate object. The intermediate object can be sent to the at least one extension for processing, the at least one extension hosted by a remote service. A response from the at least one extension can be received and the intermediate object can be updated based on the response. | 2019-12-12 |
20190377621 | METHOD OF RUNNING NETWORK APPLICATION BASED ON POS PAYMENT TERMINAL, TERMINAL AND NON-VOLATILE READABLE STORAGE MEDIUM - A method of running network application based on POS terminal, including: receiving an operation on a network application; calling a first interface of a JS layer according to the operation; parsing the first interface and acquiring an object corresponding to the first interface; transmitting a corresponding signal through the object and executing a slot function associated with the signal; and calling a second interface of a plug-in layer through the slot function, and calling a hardware module corresponding to the second interface to perform the operation. | 2019-12-12 |
20190377622 | Processing System For Performing Predictive Error Resolution And Dynamic System Configuration Control - Aspects of the disclosure relate to error resolution processing systems with improved error prediction features and enhanced resolution techniques. A computing platform may receive error log files identifying error codes corresponding to error occurrences on one or more different virtual machine host platforms. The computing platform may aggregate the error codes corresponding to the error occurrences to generate an error lattice. Using the error lattice, the computing platform may predict an error outcome. Based on the predicted error outcome, the computing platform may determine a system configuration update to be applied to the one or more virtual machine host platforms. The computing platform may direct a dynamic resource management computing platform to distribute relevant portions of the system configuration update to each of the one or more virtual machine host platforms. This may cause the one or more virtual machine host platforms to implement the system configuration update. | 2019-12-12 |
20190377623 | Processing System For Performing Predictive Error Resolution and Dynamic System Configuration Control - Aspects of the disclosure relate to dynamic system configuration control systems with improved resource allocation techniques. A computing platform may receive commands directing the computing platform to distribute relevant portions of a system configuration update. The computing platform may identify one or more virtual machine host platforms to which the system configuration update is applicable, and may direct applicable virtual machine host platforms to perform system updates based on the system configuration update. The computing platform may generate an error map identifying correlations between error codes and a respective operator for each error code. The computing platform may determine, based on the error map, an operator associated with resolution of various error codes. The computing platform may direct user devices associated with the determined operators to cause display of an operator interface, and may direct a client management computing platform to cause display of an error correction hub. | 2019-12-12 |
20190377624 | DATA VALIDATION - In an example, data, such as a journal entry in a ledger, to be validated and associated supporting documents may be extracted. Further, an entity, indicative of a feature of the data, may be extracted. Based on the extracted entity, one or more probable values for a field of the data may be determined. A probability score may be associated with each of the probable values of the field. At least one of the probable values of the field may be compared with an actual value of the field of the data. Based on the comparison, a notification indicative of a potential error in the data may be generated. The data and historical data associated with the data may be processed, based on at least one of predefined rules and a machine learning technique, to detect an anomaly in the data, the anomaly being related to a contextual information associated with the data. | 2019-12-12 |
20190377625 | COMPUTING NODE FAILURE AND HEALTH PREDICTION FOR CLOUD-BASED DATA CENTER - A system may include a node historical state data store having historical node state data, including a metric that represents a health status or an attribute of a node during a period of time prior to a node failure. A node failure prediction algorithm creation platform may generate a machine learning trained node failure prediction algorithm. An active node data store may contain information about computing nodes in a cloud computing environment, including, for each node, a metric that represents a health status or an attribute of that node over time. A virtual machine assignment platform may then execute the node failure prediction algorithm to calculate a node failure probability score for each computing node based on the information in the active node data store. As a result, a virtual machine may be assigned to a selected computing node based at least in part on node failure probability scores. | 2019-12-12 |
20190377626 | METHOD FOR ANALYZING A CAUSE OF AT LEAST ONE DEVIATION - Provided is a method and corresponding unit for analyzing a cause of at least one deviation, having the steps of: receiving a state data record which has at least one deviation; determining at least one preceding state data record; determining at least one alternative preceding state data record based on the at least one preceding state data record; determining at least one simulated data record by simulating the at least one alternative preceding state data record; comparing the at least one simulated data record with the state data record to be analyzed; determining a similarity value between the at least one simulated data record and the state data record; outputting the at least one simulated data record of the at least one alternative preceding state data record as the cause of the at least one deviation or at least one error message on the basis of the similarity value. | 2019-12-12 |
20190377627 | ROOT CAUSE ANALYSIS - A method and system for performing a root cause analysis. A central processing unit (CPU) tracks a focal point of a user's eye gaze. The CPU correlates the focal point of the user's eye gaze to a viewing position of a display device displaying a file that includes event data being viewed by the user. The CPU identifies, as a function of the viewing position, events of interest in the event data and an amount of time that the event data is viewed by the user. The CPU outputs, as a function of a linear regression model, an interest score pertaining to one or more events of interest that were previously identified as a function of the user's eye gaze. The interest score is a probability of each identified event of interest being a root cause of a defect. | 2019-12-12 |
20190377628 | DYNAMICALLY CONTROLLING RUNTIME SYSTEM LOGGING BASED ON END-USER REVIEWS - Runtime system statistics logging is dynamically controlled at code and application levels, based on user reviews. Logging of specific code components in specific application instances, identified based on user reviews, is automatically turned on, based on the user reviews indicating defects. Logging for other components or application instances, however, remains off or is automatically turned off. | 2019-12-12 |
20190377629 | DYNAMICALLY CONTROLLING RUNTIME SYSTEM LOGGING BASED ON END-USER REVIEWS - Runtime system statistics logging is dynamically controlled at code and application levels, based on user reviews. Logging of specific code components in specific application instances, identified based on user reviews, is automatically turned on, based on the user reviews indicating defects. Logging for other components or application instances, however, remains off or is automatically turned off. | 2019-12-12 |
20190377630 | VALIDATION OF A SYMBOL RESPONSE MEMORY - Configuration content of electronic devices used for data analysis may be altered due to bit failure or corruption, for example. Accordingly, in one embodiment, a device includes a plurality of blocks, each block of the plurality of blocks includes a plurality of rows, each row of the plurality of rows includes a plurality of configurable elements, each configurable element of the plurality of configurable elements includes a data analysis element including a memory component programmed with configuration data. The data analysis element is configured to analyze at least a portion of a data stream based on the configuration data and to output a result of the analysis. The device also includes an error detection engine (EDE) configured to perform integrity validation of the configuration data. | 2019-12-12 |
20190377631 | VARIABLE RESISTANCE RANDOM-ACCESS MEMORY AND METHOD FOR WRITE OPERATION HAVING ERROR BIT RECOVERING FUNCTION THEREOF - Provided is a variable resistance random-access memory for suppressing degradation of performance by recovering a memory cell that fails. A variable resistance random-access memory of the disclosure includes a memory array, a row selection circuit, a column selection circuit, a controller, an error checking and correcting (ECC) circuit, an error bit flag register, and an error bit address register. The memory array includes a plurality of memory cells. The column selection circuit includes a sense amplifier and a write driver/read bias circuit. The error bit flag register stores bits for indicating presence/absence of an error bit in a write operation. The error bit address register stores an address of the error bit. The controller recovers the error bit when a predetermined event occurs. | 2019-12-12 |
20190377632 | METHOD OF EQUALIZING BIT ERROR RATES OF MEMORY DEVICE - Provided is a bit error rate equalizing method of a memory device. The memory device selectively performs an error correction code (ECC) interleaving operation according to resistance distribution characteristics of memory cells, when writing a codeword including information data and a parity bit of the information data to a memory cell array. In the ECC interleaving operation according to one example, an ECC sector including information data is divided into a first ECC sub-sector and a second ECC sub-sector, the first ECC sub-sector is written to memory cells of a first memory area having a high bit error rate (BER), and the second ECC sub-sector is written to memory cells of a second memory area having a low BER. | 2019-12-12 |
20190377633 | PROVIDING ADDITIONAL PARITY FOR NON-STANDARD SIZED PARITY DATA SETS - Apparatus and method for storing data in a non-volatile memory (NVM), such as a flash memory in a solid-state drive (SSD). In some embodiments, a distributed storage space of the NVM is defined to extend across a plural number of regions of the NVM. A non-standard parity data set is provided having a plural number of data elements greater than or equal to the plural number of regions in the storage space. The data set is written by storing a first portion of the data elements and a first parity value to the plural number of regions and a remaining portion of the data elements and a second parity value to a subset of the plural number of regions. The regions can comprise semiconductor dies in a flash memory, and the distributed storage space can be a garbage collection unit formed using one erasure block from each flash die. | 2019-12-12 |
20190377634 | MEMORY CONTROLLER AND MEMORY SYSTEM INCLUDING THE SAME - There are provided a memory controller and a memory system including the same. The memory controller includes: a processor for generating a command and an address in response to a request from a host, and generating a bin label and a Log Likelihood Ratio (LLR), based on data received from memory devices; a buffer memory for temporarily storing the data, the bin label, and the LLR; and an error correction circuit for performing error correction decoding on the data, using the LLR. | 2019-12-12 |
20190377635 | DECODER FOR MEMORY SYSTEM AND METHOD THEREOF - A decoder is provided for memory systems. The decoder receives data from a memory device including a plurality of pages, each storing data, and decodes the data based on a type of a page in which the data is stored, among the plurality of pages, and life cycle information indicating a current state of the memory device in its life cycle. | 2019-12-12 |
20190377636 | MEMORY SYSTEM - In general, according to an embodiment, a memory system includes a memory device including a memory cell; and a controller. The controller is configured to: receive first data from the memory cell in a first data reading; receive second data from the memory cell in a second data reading that is different from the first data reading; convert a first value that is based on the first data and the second data, to a second value in accordance with a first relationship; and convert the first value to a third value in accordance with a second relationship that is different from the first relationship. | 2019-12-12 |
20190377637 | SYSTEM, DEVICE AND METHOD FOR STORAGE DEVICE ASSISTED LOW-BANDWIDTH DATA REPAIR - According to one general aspect, an apparatus may include a regeneration-code-aware (RCA) storage device configured to calculate at least one type of data regeneration code for data error correction. The RCA storage device may include a memory configured to store data in chunks which, in turn, comprise data blocks. The RCA storage device may include a processor configured to compute, when requested by an external host device, a data regeneration code based upon a selected number of data blocks. The RCA storage device may include an external interface configured to transmit the data regeneration code to the external host device. | 2019-12-12 |
20190377638 | STORAGE SYSTEM SPANNING MULTIPLE FAILURE DOMAINS - A plurality of failure domains are communicatively coupled to each other via a network, and each of the plurality of failure domains is coupled to one or more storage devices. A failure resilient stripe is distributed across the plurality of storage devices, such that two or more blocks of the failure resilient stripe are located in each failure domain. | 2019-12-12 |
20190377639 | PROTECTING IN-MEMORY CONFIGURATION STATE REGISTERS - Protecting in-memory configuration state registers. A request to access an in-memory configuration state register, such as a read or write request, is obtained. The in-memory configuration state register is mapped to memory. Error correction code of the memory is used to protect the access to the in-memory configuration state register. | 2019-12-12 |
20190377640 | RAID SYSTEMS AND METHODS FOR IMPROVED DATA RECOVERY PERFORMANCE - A RAID system, RAID controller, method, and computer program product for reducing the number of reads of XOR data in a multi-storage-enclosure RAID array includes a RAID array controller that implements a selected distributed RAID scheme. The RAID array controller determines a set of drives and logical block addresses corresponding to a parity group and divides the set of drives into subsets of drives that are located within each individual storage enclosure of the multiple storage enclosures. The controller issues a single EnclosureXOR Read to each storage enclosure corresponding to the subsets of drives to read enclosure-level intermediate XOR data calculated by each storage enclosure for each subset of drives, and, in response to receiving the enclosure-level intermediate XOR data results from all storage drives in the parity group, calculates an array-level XOR result by XORing the enclosure-level intermediate XOR data results from the storage enclosures. | 2019-12-12 |
20190377641 | MEMORY SYSTEM HAVING STORAGE DEVICE AND MEMORY CONTROLLER AND OPERATING METHOD THEREOF - A memory system includes: a storage device including a plurality of pages for storing data; and a memory controller configured to determine, when sudden power-off occurs, whether there is a high probability of a program disturb of unselected pages sharing a word line coupled to a selected page among the pages in rebooting, and output a command to perform an over-write operation for programming data in the selected page or skip the over-write operation, based on a result of the determination. | 2019-12-12 |
20190377642 | DECOUPLED BACKUP SOLUTION FOR DISTRIBUTED DATABASES ACROSS A FAILOVER CLUSTER - A decoupled backup solution for distributed databases across a failover cluster. Specifically, a method and system disclosed herein improve upon a limitation of existing backup mechanisms involving distributed databases across a failover cluster. The limitation entails restraining backup agents, responsible for executing database backup processes across the failover cluster, from immediately initiating these aforementioned processes upon receipt of instructions. Rather, due to this limitation, these backup agents must wait until all backup agents, across the failover cluster, receive their respective instructions before being permitted to initiate the creation of backup copies of their relative distributed database. Subsequently, the limitation imposes an initiation delay on the backup processes, which the disclosed method and system omit, thereby granting any particular backup agent the capability to immediately (i.e., without delay) initiate those backup processes. | 2019-12-12 |
20190377643 | AUTOMATED BACKUP AND RESTORE OF A DISK GROUP - Restoring a clustered database, having a plurality of nodes each having a database, from a failed storage device by receiving a request to restore a backup image of a failed shared storage device associated with the clustered database to a time; performing a preflight check including at least one checklist process; terminating the restore when any checklist process fails; and, when each checklist process succeeds, completing the restore by creating at least one flashcopy associated with the backup image, mapping to each of the plurality of nodes an associated portion of the at least one flashcopy, mounting the at least one flashcopy to the node as a diskgroup, and switching the clustered database to run from the diskgroup. | 2019-12-12 |
20190377644 | BOOST ASSIST METADATA TABLES FOR PERSISTENT MEMORY DEVICE UPDATES DURING A HARDWARE FAULT - A method and data processing device for enabling a write operation to track meta-data changes during a hardware fault in an information handling system (IHS). The method includes generating an indexing map to track memory space attributes of a persistent memory device. The method includes generating a subsequent indexing map that is a duplicate of a first indexing map. The method includes communicatively linking each of the indexing maps. The method includes distributing a subsequent indexing map to one or more memory devices. In response to detection of an update to associated meta-data during a hardware fault, the method includes identifying an indexing map that is stored on a writeable memory device. In response to detection of the hardware fault, the method includes writing memory space attributes to the writeable indexing map. The method includes synchronizing a master indexing map to each other indexing map to coordinate changes to the memory space attributes. | 2019-12-12 |
20190377645 | Linear View-Change BFT with Optimistic Responsiveness - Techniques for implementing linear view-change with optimistic responsiveness in a BFT protocol running on a distributed system comprising n replicas are provided. According to one set of embodiments, the replicas can execute, during a view v of the BFT protocol, a first voting round comprising communicating instances of a first type of COMMIT certificate among the replicas. Further, when 2f+1 instances of the first type of COMMIT certificate associated with view v have been received by the replicas, the replicas can execute a second voting round comprising communicating instances of a second type of COMMIT certificate among the replicas. If 2f+1 instances of the second type of COMMIT certificate associated with view v are not received by the replicas within a predetermined timeout period, a view change can be initiated from view v to a view v+1. | 2019-12-12 |
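The two voting rounds and the timeout-driven view change in 20190377645 can be sketched as below; certificates are modeled as bare (view, round, sender) records and the quorum threshold is the 2f+1 from the abstract, but everything else (no signatures, in-memory vote sets) is a simplification:

```python
from collections import defaultdict

class Replica:
    """Minimal model of the two-round COMMIT voting with a view-change
    fallback. Real BFT replicas exchange signed certificates over a
    network; this sketch only tracks quorum counting."""
    def __init__(self, replica_id, n, f):
        self.replica_id, self.n, self.f = replica_id, n, f
        self.view = 0
        self.votes = defaultdict(set)   # (view, round) -> senders seen

    def on_commit_cert(self, view, round_, sender):
        """Record a COMMIT certificate; return the next action, if any."""
        self.votes[(view, round_)].add(sender)
        if len(self.votes[(view, round_)]) >= 2 * self.f + 1:
            if round_ == 1:
                return "start_round_2"   # 2f+1 first-type certificates seen
            return "decide"              # 2f+1 second-type certificates seen
        return None

    def on_timeout(self):
        """Round 2 did not complete in time: move from view v to v+1."""
        self.view += 1
        return f"view_change_to_{self.view}"
```

With n = 4 and f = 1, three certificates for the same (view, round) trigger the transition to the second round or the decision.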
20190377646 | Managing A Pool Of Virtual Functions - Managing a pool of virtual functions including generating a virtual function pool comprising a plurality of virtual functions for at least one single root input/output virtualization (SR-IOV) adapter; creating a control path from a client virtual network interface controller (VNIC) driver in a first client partition to a target network using an active virtual function; receiving a failure alert indicating that the control path from the client VNIC driver in the first client partition to the target network using the active virtual function has failed; selecting, from the virtual function pool, a backup virtual function for the first client partition based on the failure alert; and recreating the control path from the client VNIC driver in the first client partition to the target network using the backup virtual function. | 2019-12-12 |
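The pool-based failover in 20190377646 amounts to: allocate an active virtual function, and on a failure alert swap in a backup from the pool and recreate the control path. A minimal sketch, with VFs modeled as string ids (a real implementation would configure SR-IOV adapters and VNIC drivers):

```python
class VirtualFunctionPool:
    """Pool of virtual functions with failover to a backup VF."""
    def __init__(self, vf_ids):
        self.free = list(vf_ids)   # generated virtual function pool
        self.active = {}           # client partition -> VF in use

    def create_control_path(self, partition):
        """Bind an active VF to the client partition's control path."""
        vf = self.free.pop(0)
        self.active[partition] = vf
        return vf

    def on_failure_alert(self, partition):
        """Select a backup VF from the pool and recreate the path."""
        failed = self.active[partition]
        backup = self.free.pop(0)
        self.active[partition] = backup
        return failed, backup
```

The failed VF is simply dropped here; a fuller model might return it to the pool after the adapter recovers.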
20190377647 | Method and Apparatus for Ensuring Data Integrity in a Storage Cluster With the Use of NVDIMM - An information handling system includes a persistent storage and a memory controller. The persistent storage includes a volatile memory and a non-volatile memory. The memory controller stores data and metadata for a data file within the volatile memory, and the data file is synchronized within other information handling systems of a storage cluster. The memory controller updates the metadata in response to a change in the data of the data file, stores the data and the metadata for the data file within the non-volatile memory prior to a power loss of the information handling system, and synchronizes the data and the metadata of the data file with current data and current metadata for the data file found in the other information handling systems in response to the information handling system being back online. The data is synchronized with the current metadata based on a transform for the data file being received from the other information handling systems. | 2019-12-12 |
20190377648 | LINEAR VIEW-CHANGE BFT - Techniques for implementing linear view-change in a Byzantine Fault Tolerant (BFT) protocol running on a distributed system comprising n replicas are provided. According to one set of embodiments, at a time of performing a view-change from a current view number v to a new view number v+1, a replica in the n replicas corresponding to a new proposer for new view number v+1 can generate a PREPARE message comprising a single COMMIT certificate, where the single COMMIT certificate is the highest COMMIT certificate the new proposer is aware of. The new proposer can then transmit the PREPARE message with the single COMMIT certificate to all other replicas in the n replicas. | 2019-12-12 |
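The linearity in 20190377648 comes from the new proposer sending a single PREPARE that carries only the highest COMMIT certificate it knows, rather than every certificate it has collected. A sketch, with certificates modeled as (view, value) tuples (an assumption; real certificates are signed quorum artifacts):

```python
def build_prepare(new_view, known_commit_certs):
    """Select the highest-view COMMIT certificate known to the new
    proposer and wrap it in the single PREPARE message for view v+1."""
    highest = max(known_commit_certs, key=lambda cert: cert[0])
    return {"type": "PREPARE", "view": new_view, "commit_cert": highest}
```

The proposer would then broadcast this one message to all n replicas, giving O(n) view-change communication instead of relaying all collected certificates.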
20190377649 | COPYING DATA FROM MIRRORED STORAGE TO AUXILIARY STORAGE ARRAYS CO-LOCATED WITH PRIMARY STORAGE ARRAYS - Methods that copy data from mirrored storage to auxiliary storage arrays co-located with primary storage arrays are provided. One method includes requesting a subset of the data from a backup system mirroring the set of data at a remote location in response to detecting an error in a storage device of an array of primary storage devices storing a set of data. The method further includes receiving the subset of the data from the backup system and storing the subset of the data in an array of auxiliary storage devices co-located with the array of primary storage devices in which the subset of the data can correspond to data stored on the storage device. Systems and computer program products for performing the above method are also provided. | 2019-12-12 |
20190377650 | SYSTEMS AND METHODS TO PREVENT SYSTEM CRASHES DUE TO LINK FAILURE IN MEMORY MIRRORING MODE - Systems and methods for preventing system crashes due to memory link failure in memory mirroring mode in an information handling system (IHS). The IHS may include a first memory device, a second memory device, and an integrated memory controller (IMC). The IMC may issue write transactions to both the first and second memory devices and issue read transactions to the first memory device when the IMC is in memory mirroring mode. The IMC may transmit a system management interrupt (SMI) with an IMC error to a basic input/output system (BIOS) when a persistent uncorrected IMC error is detected within the first memory device. The BIOS may perform a memory mirror failover process that may cause the IMC to issue the write transactions and the read transactions to the second memory device when the IMC error is a fatal memory link error. | 2019-12-12 |
20190377651 | SWITCHING OVER FROM USING A FIRST PRIMARY STORAGE TO USING A SECOND PRIMARY STORAGE WHEN THE FIRST PRIMARY STORAGE IS IN A MIRROR RELATIONSHIP - A computer program product, system, and method for switching over from using a first primary storage to using a second primary storage when the first primary storage is in a mirror relationship. Migration operations are initiated to migrate data in the first primary storage to a second primary storage while the data in the first primary storage indicated in first change recording information is mirrored to a secondary storage and switch from using the first primary storage to the second primary storage. Resynchronization operations are initiated to indicate changes to data in the second primary storage in a second change recording information, copy writes from the second primary storage indicated in the first and the second change recording information to the secondary storage, and mirror writes to the second primary storage to the secondary storage in response to copying the writes. | 2019-12-12 |
20190377652 | APPLICATION HEALTH MONITORING BASED ON HISTORICAL APPLICATION HEALTH DATA AND APPLICATION LOGS - Techniques for monitoring health of an application based on historical application health data and application logs are disclosed. In one embodiment, the historical application health data and the historical application logs associated with a period may be obtained. The application may include multiple services running therein. Priority of services may be determined based on the historical application health data associated with a portion of the period. Priority of exceptions associated with each of the services may be determined based on the historical application health data and the historical application logs associated with the portion of the period. Further, an application regression model may be trained by correlating the priority of the services, the associated priority of the exceptions, and the corresponding historical application health data. The health of the application may be monitored by analyzing real-time application logs using the tested application regression model. | 2019-12-12 |
20190377653 | SYSTEMS AND METHODS FOR MODELING COMPUTER RESOURCE METRICS - This disclosure relates generally to system modeling, and more particularly to systems and methods for modeling computer resource metrics. In one embodiment, a processor-implemented computer resource metric modeling method is disclosed. The method may include detecting one or more statistical trends in aggregated interaction data for one or more interaction types, and mapping each interaction type to one or more devices facilitating the interactions. The method may further include generating one or more linear regression models of a relationship between device utilization and interaction volume, and calculating one or more diagnostic statistics for the one or more linear regression models. A subset of the linear regression models may be filtered out based on the one or more diagnostic statistics. One or more forecasts may be generated using the remaining linear regression models, using which a report may be generated and provided. | 2019-12-12 |
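The fit, filter-by-diagnostic, and forecast steps of 20190377653 can be sketched with ordinary least squares and an R² cutoff; the abstract does not name a specific diagnostic statistic, so the R² threshold here is an illustrative assumption:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
    return a, b, r2

def filter_and_forecast(models, r2_threshold, future_volume):
    """Filter out models whose diagnostic (R² here) misses the cutoff,
    then forecast device utilization at a future interaction volume."""
    return {dev: a + b * future_volume
            for dev, (a, b, r2) in models.items() if r2 >= r2_threshold}
```

Each key of `models` stands for one device whose utilization was regressed against interaction volume.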
20190377654 | SYSTEMS AND METHODS FOR GENERATING A SNAPSHOT VIEW OF NETWORK INFRASTRUCTURE - A computer may receive a request to generate a snapshot view of the enterprise network infrastructure. The computer may implement a multithread process to contemporaneously query a plurality of blade servers and server enclosures within the entire network infrastructure. The computer may contemporaneously receive a plurality of information files from the queried network resources (e.g., the blade servers and server enclosures). An information file for a network resource may contain information about the network resource such as the operating status, currency (also referred to as assembly date), hardware serial number, firmware version, and/or other information of the network resource. Integrating the information in the received files, the computer may generate a snapshot view file. The snapshot view file may be in hypertext markup language (HTML) format. The computer may transmit a selectable link to the snapshot view file to multiple user devices to be displayed in the respective web browsers. | 2019-12-12 |
20190377655 | TWO-STAGE DISTRIBUTED ESTIMATION SYSTEM - Metadata received from each worker computing device describes EDF estimates for samples of marginal variables stored on each respective worker computing device. Combinations of the EDF estimates are enumerated and assigned to each worker computing device based on the metadata. A request to compute outcome expectation measure values for an outcome expectation measure is initiated to each worker computing device based on the assigned combinations. The outcome expectation measure values computed by each worker computing device are received from each respective worker computing device. The received outcome expectation measure values are accumulated for the outcome expectation measure. A mean value and a standard deviation value are computed for the outcome expectation measure from the accumulated, received outcome expectation measure values. The computed mean and standard deviation values for the outcome expectation measure are output to represent an expected outcome based on the marginal variables. | 2019-12-12 |
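The coordinator side of 20190377655 — enumerate combinations of per-worker estimates, assign them to workers, then accumulate the returned measure values into a mean and standard deviation — can be sketched as follows; the round-robin assignment policy and the list-based data shapes are illustrative assumptions:

```python
import itertools
import statistics

def assign_combinations(worker_vars):
    """Enumerate combinations of the workers' marginal-variable estimates
    and assign each combination to a worker (round-robin here)."""
    workers = sorted(worker_vars)
    combos = list(itertools.product(*(worker_vars[w] for w in workers)))
    return {w: combos[i::len(workers)] for i, w in enumerate(workers)}

def accumulate(measure_values):
    """Accumulate the outcome expectation measure values returned by all
    workers; output mean and (population) standard deviation."""
    vals = [v for worker_vals in measure_values for v in worker_vals]
    return statistics.mean(vals), statistics.pstdev(vals)
```

In the real system the second stage runs remotely on each worker; here the two stages are just two function calls.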
20190377656 | Integrated Management System for Container-Based Cloud Servers - Disclosed is a method for monitoring and controlling a container-based cloud server. A computer program stored in a computer-readable storage medium includes encoded commands that, when executed by one or more processors of a computer system, cause the one or more processors to perform operations for monitoring respective containers operating in a container-based cloud server, the operations including: monitoring static resource information from a host OS; monitoring container information of each of a plurality of containers from the host OS; determining whether a predetermined event occurs; when an event occurs based on the determination, driving the event processing module corresponding to that event among a plurality of event processing modules; and performing a predetermined operation by using the driven event processing module. | 2019-12-12 |
20190377657 | SYSTEM AND METHOD OF CAPTURING SYSTEM CONFIGURATION DATA TO RESOLVE AN APPLICATION MALFUNCTION - Systems and methods for tracking mobile device software errors are disclosed. A mobile device, in response to receiving an indication of shaking, may capture error diagnostic information including, for instance, a screen shot, user information, and/or a session log. The mobile device may generate an error report including the error diagnostic information, and may submit the error report to a server after a user authorizes the submission. The mobile device may further subscribe the user to error report tracking, which may include periodically receiving and displaying progress status updates for a software error indicated by the error report. The progress status update may indicate that, for instance, the software error has previously been reported by a second user, that a solution for the software error is pending, or that a solution for the software error has been found. | 2019-12-12 |
20190377658 | METHOD AND SYSTEM FOR VERIFYING PROPERTIES OF SOURCE CODE - This disclosure relates generally to a method and system for verifying properties of source code. Verifying a sufficient subset of properties by identifying implication relations between their verification outcomes is time consuming because of the increased size of source code with a large number of properties. The proposed disclosure processes the received source code for verifying properties by analyzing the source code to merge the plurality of properties into a plurality of groups based on a grouping technique. Then, a slice is created for each group among the plurality of groups. Further, each slice created for each group is verified; verification of the one or more properties within each group is performed simultaneously. The system groups the properties, thereby providing an efficient and scalable system for verifying properties that reduces cost with increased efficiency and improved performance. | 2019-12-12 |
20190377659 | NOTIFICATION CONTROL METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a memory and a processor coupled to the memory. The processor is configured to acquire identification information of a process executed on a virtual machine and information indicating a behavior of the process at a time when the process is executed, and sequentially store the information in the memory. The processor is configured to refer to the information stored in the memory when a deployment of software in the virtual machine is detected, identify first identification information of a first process such that a change of the behavior at the time when the first process is executed before and after the deployment exceeds a predetermined first criterion. The processor is configured to notify the first identification information to a monitoring process that monitors an operation of the software. | 2019-12-12 |
20190377660 | METHODS AND SYSTEMS FOR BLOCKCHAIN TESTING - Methods and systems for testing blockchain technology by measuring blockchain performance, calculating blockchain performance metrics, and presenting blockchain test results to a user. Considering both network size and workload level, the system automatically identifies potential flaws in a blockchain technology solution and evaluates operational performance criteria, including scalability, scalability robustness, workload, workload robustness, security, and privacy. | 2019-12-12 |
20190377661 | INFLUENCE EXTRACTION DEVICE, COMPUTER READABLE MEDIUM AND INFLUENCE EXTRACTION METHOD - The influence extraction device includes an output result acquisition unit, a change extraction unit and a search unit. The output result acquisition unit acquires a first output result of a subsystem before a change in a program, and a second output result of the subsystem after the change in the program. The change extraction unit compares the first output result and the second output result, and extracts an interface item whose item value is changed from a plurality of interface items that the subsystem includes. The search unit searches subsystem information for another subsystem including an interface item related to the interface item extracted. | 2019-12-12 |
20190377662 | IDENTIFYING A SOURCE FILE FOR USE IN DEBUGGING COMPILED CODE - Method and system are provided for identifying a source file for use in debugging compiled code. The method includes referencing a compiled file for debugging and searching for potential source files of the compiled file from configured repositories. The method obtains the potential source files from the configured repositories and iterates over the obtained potential source files to compile and compare each potential source file to the compiled file. One or more matching source files are identified for use in debugging the compiled file. | 2019-12-12 |
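The iterate-compile-compare loop of 20190377662 could look like this for Python bytecode; comparing `co_code`/`co_consts`/`co_names` is a simplification of what a real debugger toolchain would do (it would invoke the project's actual compiler and a fuller equivalence check):

```python
def find_matching_sources(target_code, candidate_sources):
    """Compile each candidate source and compare the resulting code
    object against the compiled file being debugged. Candidates that
    fail to compile cannot match and are skipped."""
    matches = []
    for name, src in candidate_sources.items():
        try:
            code = compile(src, "<candidate>", "exec")
        except SyntaxError:
            continue
        # compare bytecode, constants, and referenced names
        if (code.co_code, code.co_consts, code.co_names) == (
                target_code.co_code, target_code.co_consts,
                target_code.co_names):
            matches.append(name)
    return matches
```

Note that compiler optimizations (e.g. constant folding) mean distinct source texts can legitimately yield identical bytecode, which is why the method returns all matching files rather than exactly one.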
20190377663 | PERFORMANCE TESTING PLATFORM THAT ENABLES REUSE OF AUTOMATION SCRIPTS AND PERFORMANCE TESTING SCALABILITY - A testing platform receives a code for testing, where the code is to be tested using a browser. The testing platform determines a number of a plurality of browsers that are to be used to test the code and generates a number of a plurality of virtual machines to host the plurality of browsers, where the number of the plurality of virtual machines is based on the number of the plurality of browsers. The testing platform assigns an automation script to each virtual machine of the virtual machines to test the code, and monitors execution of the automation script by each virtual machine of the plurality of virtual machines. The testing platform performs an action associated with the execution of the automation script by each virtual machine of the plurality of virtual machines. | 2019-12-12 |
20190377664 | PROBATIONARY SOFTWARE TESTS - A method, computer program product, and system is described. A continuous integration environment is identified. A first software test associated with the continuous integration environment is identified. A probationary status for the first software test is determined, the probationary status indicating, at least in part, a potential lack of reliability for the first software test. | 2019-12-12 |
20190377665 | EVALUATING AND PRESENTING SOFTWARE TESTING PROJECT STATUS INDICATORS - Systems and methods for evaluating and presenting software testing project status indicators. An example method may comprise: determining, by a computer system, a plurality of project status indicators comprising one or more average test execution rates, a required test execution rate, a test execution schedule variance, an actual test completion ratio, and/or a test completion schedule variance; and causing one or more project status indicators to be displayed in a visual relation to each other, to a timeline, and/or to another project's status indicators. | 2019-12-12 |
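The indicators listed in 20190377667 are simple rate and variance computations. A sketch under assumed formulas (the abstract names the indicators but not how they are derived, so the exact arithmetic below is illustrative):

```python
from datetime import date

def status_indicators(executed, total, start, today, deadline):
    """Average execution rate so far, rate required to finish on time,
    actual completion ratio, and a schedule variance comparing tests
    done against schedule consumed."""
    days_elapsed = max((today - start).days, 1)
    days_left = max((deadline - today).days, 1)
    avg_rate = executed / days_elapsed              # tests per day so far
    required_rate = (total - executed) / days_left  # tests/day to finish
    completion_ratio = executed / total
    planned_ratio = days_elapsed / (deadline - start).days
    schedule_variance = completion_ratio - planned_ratio
    return {"avg_rate": avg_rate, "required_rate": required_rate,
            "completion_ratio": completion_ratio,
            "schedule_variance": schedule_variance}
```

A negative schedule variance means the project has completed a smaller share of tests than the share of schedule already consumed.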
20190377666 | OPTIMIZED TESTING SYSTEM - Described herein is a software testing system that optimizes test case scheduling to efficiently and speedily analyze a block of code. The system enhances the performance of software testing by implementing a test controller using test statistics to optimize testing performance. The test controller may use the test statistics to determine relevant test cases to execute, and to provide better and/or faster feedback to users. | 2019-12-12 |
20190377667 | TEST CASE SELECTION APPARATUS AND COMPUTER READABLE MEDIUM - A non-equivalence set extraction unit ( | 2019-12-12 |
20190377668 | REGRESSION TESTING OF CLOUD-BASED SERVICES - A method for regression testing may include detecting a client request sent from a client to a cloud-based service. One or more actions triggered at the cloud-based service by the client request may be detected. The one or more actions may include a change to a database coupled with the cloud-based service. A test case may be generated for regression testing the cloud-based service. The test case may include the client request and an expected result of the client request. The expected result of the client request may include the one or more actions triggered at the cloud-based service by the client request. The cloud-based service may be regression tested by at least executing the test case. Related systems and articles of manufacture, including computer program products, are also provided. | 2019-12-12 |
20190377669 | FRAMEWORK FOR VISUAL AUDIT EMULATION FOR APPLICATION - A system and method to generate an audit trail based on operation of a target application. The system includes a computing device operable to execute the target application. The target application generates user interface audit data in response to user inputs. The audit data generated by the target application is stored. An audit visualization framework reads the audit database and creates a video playback file of user actions that occur as the user interacts with the audited target application. | 2019-12-12 |
20190377670 | TESTER AND METHOD FOR TESTING A DEVICE UNDER TEST USING RELEVANCE SCORES - A tester for testing a device under test is shown, having a test unit configured for performing a test of the device under test using multiple test cases, each test case having variable values of a set of predetermined variables, the test unit configured to derive an output value for each test case indicating whether the device under test validly operates at a current test case or whether the device under test provides an error at the current test case; and an evaluation unit configured for evaluating the multiple test cases based on a plurality of subsets of the predetermined input variables with respect to the output value, the evaluation unit configured for providing a number of plots of the evaluation of the multiple test cases where each plot indicates the impact of one subset of the plurality of subsets of the predetermined input variables on the output value in dependence on respective relevance scores or associated with the respective relevance scores. | 2019-12-12 |
20190377671 | MEMORY CONTROLLER WITH MEMORY RESOURCE MEMORY MANAGEMENT - In an example implementation according to aspects of the present disclosure, a memory controller is disclosed. The memory controller is communicatively coupleable to a memory resource having a plurality of memory resource regions, which may be associated with a plurality of computing resources. The memory controller may include a memory resource interface to communicatively couple the memory controller to the memory resource and a computing resource interface to communicatively couple the memory controller to the plurality of computing resources. The memory controller may further include a memory resource memory management unit to manage the memory resource. | 2019-12-12 |
20190377672 | METHOD AND SYSTEM FOR IMPROVED PERFORMANCE OF A VIDEO GAME ENGINE - Methods and apparatuses to improve the performance of a video game engine using an Entity Component System (ECS) are described herein. In accordance with an embodiment, the ECS creates and uses entities to represent game objects, which are constructed entirely using value data types. The ECS constructs the entities within a memory in a densely packed linear way, and constantly monitors (e.g., during game play) objects within a game and adjusts the entity distribution within the memory so that a maximum density of memory usage is maintained in real time as the game is being played. | 2019-12-12 |
20190377673 | UPDATING CACHE USING TWO BLOOM FILTERS - Updating cache devices includes a processor to detect a first set of hash functions and a first bit array corresponding to elements of a cache. In some examples, the processor detects a first instruction to add a new element to the cache and modify the first bit array based on the new element. Additionally, the processor processes a first invalidation operation and generates a second bit array and a second set of hash functions, while processing additional instructions. The processor deletes the first bit array and the first set of hash functions in response to detecting that the second bit array and the second set of hash functions have each been generated. Some examples process a second invalidation operation with the second set of hash functions and the second bit array. | 2019-12-12 |
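A toy version of the two-bloom-filter rotation in 20190377673: additions go to the active filter, and an invalidation operation builds a second bit array with a second (re-seeded) set of hash functions before the first is deleted. The filter size, the SHA-256-based hash family, and the re-seeding scheme are all illustrative choices, not details from the application:

```python
import hashlib

class Bloom:
    """One bit array plus a seeded family of hash functions."""
    def __init__(self, size=256, hashes=3, seed=0):
        self.size, self.hashes, self.seed = size, hashes, seed
        self.bits = [0] * size

    def _positions(self, item):
        # derive `hashes` bit positions from seed, index, and item
        for i in range(self.hashes):
            d = hashlib.sha256(f"{self.seed}:{i}:{item}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

def invalidate(old, live_items):
    """Generate the second bit array and second set of hash functions
    from the still-valid elements, then drop the first filter by
    returning only the new one."""
    new = Bloom(old.size, old.hashes, seed=old.seed + 1)
    for item in live_items:
        new.add(item)
    return new
```

Because Bloom filters cannot delete individual elements, rebuilding into a fresh filter is the standard way stale entries are shed, which is the behavior the abstract describes.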
20190377674 | DATA STORAGE DEVICE WITH WEAR RANGE OPTIMIZATION - A data storage device can be arranged with a semiconductor memory having a plurality of erasure blocks accessed by a controller to store data. An access count for each respective erasure block can be generated to allow a wear range for the semiconductor memory to be computed based on the respective access counts with the controller. A performance impact of the wear range is evaluated with the controller in order to intelligently alter a deterministic window of a first erasure block of the plurality of erasure blocks in response to the performance impact. | 2019-12-12 |
20190377675 | APPARATUSES AND METHODS FOR CONCURRENTLY ACCESSING DIFFERENT MEMORY PLANES OF A MEMORY - Apparatuses and methods for concurrently accessing different memory planes are disclosed herein. An example apparatus may include a controller associated with a queue configured to maintain respective information associated with each of a plurality of memory command and address pairs. The controller is configured to select a group of memory command and address pairs from the plurality of memory command and address pairs based on the information maintained by the queue. The example apparatus further includes a memory configured to receive the group of memory command and address pairs. The memory is configured to concurrently perform memory access operations associated with the group of memory command and address pairs. | 2019-12-12 |
20190377676 | EXPEDITED CACHE DESTAGE FOR POWER INTERRUPTION IN A VIRTUAL STORAGE APPLIANCE - A computing device includes an interface configured to interface and communicate with a communication system, a memory that stores operational instructions, and processing circuitry operably coupled to the interface and to the memory that is configured to execute the operational instructions to perform various operations. The computing device determines to de-stage information stored in a cache memory to a nonvolatile memory device. The computing device determines whether the de-stage is based on a power interruption and when the de-stage is not based on a power interruption the computing device updates access counters associated with the information and the target location for the information in the nonvolatile memory, updates a data access tracking module and initiates a data relocation function to transfer the information to the nonvolatile memory device. When the de-stage is based on a power interruption the computing device initiates relocation of the information from the cache memory to the nonvolatile memory without updating the access counters. | 2019-12-12 |
20190377677 | ARITHMETIC PROCESSING APPARATUS AND CONTROL METHOD FOR ARITHMETIC PROCESSING APPARATUS - An apparatus includes an instruction issuer that issues an instruction; and a cache including a cache data memory and a cache tag including cache entries, and a cache controller configured to perform cache-hit judgement, in response to a memory-access instruction issued from the instruction issuer, based on an address of the memory-access instruction and configured to issue a memory-access request to a memory in a case where the cache-hit judgement is a cache miss, wherein the cache controller registers, when issuing the memory-access request, data obtained by the memory-access request in the cache data memory, and registers provisional registration information of a provisional registration state indicating that cache registration is performed by execution of a speculative memory-access instruction in the cache tag, and judges as a speculative entry cache miss and issues the memory-access request. | 2019-12-12 |
20190377678 | DIADIC MEMORY OPERATIONS AND EXPANDED MEMORY FRONTEND OPERATIONS - A method of performing diadic operations in a processor is provided that includes receiving a first request packet initiating a read operation from a first memory address in the first request packet, and executing a first operation in the first request packet once the read request is completed. Also, the method includes generating a second request packet at a second memory address by combining the results of the first operation with the unused information in the first request packet. Furthermore, the method includes sending the second request packet to the Memory-Side Processor (MSP). When the MSP receives the second request, the MSP checks whether a write operation is requested and, if so, writes data to the second memory address; if a read operation is requested, the MSP reads data from the second memory address. | 2019-12-12 |
20190377679 | EXTENDED LINE WIDTH MEMORY-SIDE CACHE SYSTEMS AND METHODS - The present disclosure provides techniques for implementing an apparatus, which includes processing circuitry that performs an operation based on a target data block, a processor-side cache that implements a first cache line, a memory-side cache that implements a second cache line having a line width greater than that of the first cache line, and a memory array. The apparatus includes one or more memory controllers that, when the target data block results in a cache miss, determine a row address that identifies a memory cell row as storing the target data block, instruct the memory array to successively output multiple data blocks from the memory cell row to enable the memory-side cache to store each of the multiple data blocks in the second cache line, and instruct the memory-side cache to output the target data block to a coherency bus to enable the processing circuitry to perform the operation based on the target data block. | 2019-12-12 |
20190377680 | SET TABLE OF CONTENTS (TOC) REGISTER INSTRUCTION - A Set Table of Contents (TOC) Register instruction. An instruction to provide a pointer to a reference data structure, such as a TOC, is obtained by a processor and executed. The executing includes determining a value for the pointer to the reference data structure, and storing the value in a location (e.g., a register) specified by the instruction. | 2019-12-12 |
20190377681 | METHODS AND APPARATUS FOR WORKLOAD BASED DYNAMIC CACHE CONTROL IN SSD - Aspects of the present disclosure provide various apparatus, devices, systems and methods for dynamically configuring a cache partition in a solid state drive (SSD). The SSD may include non-volatile memory (NVM) that can be configured to store a different number of bits per cell. The NVM is partitioned into a cache partition and a storage partition, and the respective sizes of the partitions are dynamically changed based on a locality of data (LOD) of the access pattern of the NVM. | 2019-12-12 |
20190377682 | METHOD FOR CONTROLLING NEAR CACHES IN DISTRIBUTED CACHE ENVIRONMENT, AND DISTRIBUTED CACHE SERVER USING THE SAME - A method for controlling near caches in a distributed cache environment including distributed cache servers is provided. The method includes steps of: a specific distributed cache server among the distributed cache servers, if a request signal for original cache data is obtained from a client node, transmitting replicated cache data for the original cache data to the client node, to support the client node to store and refer to the replicated cache data in its corresponding near cache storage part, and managing a reference map with a correspondence between the client node referring to the replicated cache data and the original cache data; and if the original cache data is changed, checking the number of the client nodes referring to the replicated cache data by referring to the reference map, and invalidating the replicated cache data according to the number of the checked client nodes. | 2019-12-12 |
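The reference-map bookkeeping in this abstract amounts to tracking which clients hold a replica of each key and invalidating those replicas on a write. The sketch below is hypothetical — the class and method names are assumptions, and real near-cache invalidation would involve network messages rather than a returned set.

```python
# Sketch: the server records, per cache key, the client nodes referring to
# a replica, and reports which replicas to invalidate when the original
# cache data changes.
class DistributedCacheServer:
    def __init__(self):
        self.data = {}            # key -> original cache data
        self.reference_map = {}   # key -> set of client node ids

    def get(self, key, client_id):
        # hand out a replicated copy and note the referring client
        self.reference_map.setdefault(key, set()).add(client_id)
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        # every recorded replica of this key is now stale
        return self.reference_map.pop(key, set())
```

A write returns the set of clients whose near caches must drop their replicas, mirroring the "invalidating ... according to the number of the checked client nodes" step.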
20190377683 | CACHE PRE-FETCHING USING CYCLIC BUFFER - A computer system comprises memory to store computer-executable instructions. The computer system may, as a result of execution of the instructions by one or more processors, cause the system to load a first subset of a set of data elements into a first cache, load a second subset of the set of data elements into a second cache, and as a result of elements of the first subset being processed, issue commands to place elements of the second subset into the first cache to enable the second subset to be processed from the first cache. | 2019-12-12 |
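The overlap of processing one subset while staging the next is a classic double-buffering pattern. A minimal sketch, assuming a "near" cache modeled as a Python list and doubling as the stand-in for real work:

```python
# Sketch: process the current subset from the near cache while the next
# subset is being staged into it, cycling until the data set is exhausted.
def process_with_prefetch(data, chunk):
    near = data[:chunk]                    # first subset, preloaded
    out, i = [], chunk
    while near:
        nxt = data[i:i + chunk]            # prefetch the next subset
        out.extend(x * 2 for x in near)    # "process" the current subset
        near, i = nxt, i + chunk           # rotate buffers
    return out
```

In hardware the prefetch commands would overlap with the processing loop; here the rotation of `near` and `nxt` just shows the cyclic structure.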
20190377684 | STORAGE CONTROL SYSTEM AND STORAGE CONTROL METHOD - A storage control system reads a data set from a storage apparatus if necessary in response to an I/O request. A data set contains data and an address. The storage control system performs an all-type address check, which determines whether one of a first address and a second address corresponding to a read-target data set in address translation information, which is information indicating a mapping relationship between one or more first addresses and one or more second addresses, matches the address contained in the data set. The one or more first addresses are each an address which belongs to the first address type. The one or more second addresses are each an address which belongs to the second address type. The storage control system performs processing according to the I/O request when the result of the all-type address check is true. | 2019-12-12 |
20190377685 | MMIO ADDRESSING USING A TRANSLATION TABLE - A method for processing an instruction by a processor operationally connected to one or more buses comprises determining the instruction is to access an address of an address space. The address space maps a memory and comprises a range of MMIO addresses. The method determines the address being accessed is within the range of MMIO addresses and translates, based on determining that the address being accessed is within the range of MMIO addresses, the address being accessed using a translation table to a bus identifier identifying one of the buses and a bus address of a bus address space. The bus address space is assigned to the identified bus. The bus address resulting from the translation is assigned to a device accessible via the identified bus. Based on the instruction a request directed to the device is sent via the identified bus to the bus address resulting from the translation. | 2019-12-12 |
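The translation step in this abstract — range check, then table lookup yielding a bus identifier and a bus address — can be sketched directly. All constants and the table layout below are assumptions for illustration; the patent does not specify them.

```python
# Sketch: map an address inside an assumed MMIO range, via a per-page
# translation table, to a (bus identifier, bus address) pair; addresses
# outside the range fall through as ordinary memory accesses.
MMIO_BASE, MMIO_LIMIT = 0xF000_0000, 0xF010_0000   # assumed MMIO range
PAGE = 0x1000

# assumed table: MMIO page index -> (bus identifier, bus base address)
TRANSLATION_TABLE = {0x0: (1, 0x1000), 0x1: (2, 0x8000)}

def translate(addr):
    if not (MMIO_BASE <= addr < MMIO_LIMIT):
        return None                                 # not an MMIO access
    offset = addr - MMIO_BASE
    bus_id, bus_base = TRANSLATION_TABLE[offset // PAGE]
    return bus_id, bus_base + offset % PAGE
```

A request to the device would then be sent on bus `bus_id` at the returned bus address.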
20190377686 | ARITHMETIC PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD OF ARITHMETIC PROCESSOR - A request generation circuit generates an information request including a request address. A translation buffer associates a virtual address of a page with a physical address (PA) of the page and stores the associated addresses. A page-table buffer associates data in a page table in a level other than the last level with PA of the data and stores the associated data and address. A controller circuit obtains, from the request address, PA of data in a page table to be accessed when the request address is not stored in the translation buffer. The controller circuit searches in the page-table buffer for the data when the page table to be accessed is in a level other than the last level. Meanwhile, the controller circuit obtains the data from a memory when the page table to be accessed is in the last level and registers the data in the translation buffer. | 2019-12-12 |
20190377687 | MMIO ADDRESSING USING A TRANSLATION LOOKASIDE BUFFER - A method for processing an instruction by a processor operationally connected to one or more buses comprises determining the instruction is to access an address of an address space that maps a memory and comprises a range of MMIO addresses. The method determines the address being accessed is within the range of MMIO addresses and generates, based on the determination, a first translation of the address being accessed to a bus identifier identifying one of the buses and a bus address of a bus address space. The bus address resulting from the translation is assigned to a device accessible via the identified bus. The method generates an entry in a translation lookaside buffer. A request directed to the device is sent via the identified bus to the bus address resulting from the translation. | 2019-12-12 |
20190377688 | DYNAMICALLY ADAPTING MECHANISM FOR TRANSLATION LOOKASIDE BUFFER SHOOTDOWNS - An operating system (OS) of a processing system having a plurality of processor cores determines a cost associated with different mechanisms for performing a translation lookaside buffer (TLB) shootdown in response to, for example, a virtual address being remapped to a new physical address, and selects a TLB shootdown mechanism to purge outdated or invalid address translations from the TLB based on the determined cost. In some embodiments, the OS selects an inter-processor interrupt (IPI) as the TLB shootdown mechanism if the cost associated with sending an IPI is less than a threshold cost. In some embodiments, the OS compares the cost of using an IPI as the TLB shootdown mechanism versus the cost of sending a hardware broadcast to all processor cores of the processing system as the shootdown mechanism and selects the shootdown mechanism having the lower cost. | 2019-12-12 |
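The cost comparison in this abstract lends itself to a tiny model: IPI cost scales with the number of cores that must be interrupted, while a hardware broadcast has a flat cost. The cost constants below are invented purely for illustration.

```python
# Sketch: pick the cheaper TLB shootdown mechanism by comparing a
# per-target IPI cost against a flat hardware-broadcast cost.
IPI_COST_PER_CORE = 3    # assumed cost of interrupting one core
BROADCAST_COST = 10      # assumed flat cost of a hardware broadcast

def choose_shootdown(cores_with_stale_entries):
    ipi_cost = IPI_COST_PER_CORE * len(cores_with_stale_entries)
    return "ipi" if ipi_cost < BROADCAST_COST else "broadcast"
```

With these numbers, remappings touching few cores use targeted IPIs and wide remappings fall back to the broadcast, matching the adaptive selection the abstract describes.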
20190377689 | ARITHMETIC PROCESSING DEVICE, INFORMATION PROCESSING APPARATUS, AND METHOD FOR CONTROLLING ARITHMETIC PROCESSING DEVICE - A TLB receives an access request with respect to a first address and access authorization assigned to the request from an arithmetic operation control unit, translates the first address to a second address, determines the suitability of the access authorization, and outputs the access request with respect to the first address when the access authorization is not suitable. An MMU receives the access request with respect to the first address output from the TLB, translates the first address to the second address, determines the suitability of the access authorization, and outputs a notification of access prohibition to the arithmetic operation control unit when the access authorization is not suitable. | 2019-12-12 |
20190377690 | Method and Apparatus for Vector Permutation - A method is provided that includes performing, by a processor in response to a vector permutation instruction, permutation of values stored in lanes of a vector to generate a permuted vector, wherein the permutation is responsive to a control storage location storing permute control input for each lane of the permuted vector, wherein the permute control input corresponding to each lane of the permuted vector indicates a value to be stored in the lane of the permuted vector, wherein the permute control input for at least one lane of the permuted vector indicates a value of a selected lane of the vector is to be stored in the at least one lane, and storing the permuted vector in a storage location indicated by an operand of the vector permutation instruction. | 2019-12-12 |
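The lane-selection semantics above reduce to a gather: each lane of the control input names the source lane whose value lands in that lane of the result. A minimal model, ignoring the instruction encoding and storage-location operands:

```python
# Sketch: permute a vector's lanes under a control vector; control[i] is
# the index of the source lane stored into lane i of the permuted vector.
def vector_permute(vec, control):
    return [vec[src] for src in control]
```

Note that the same source lane may feed several destination lanes, as the abstract's "a value of a selected lane ... is to be stored in the at least one lane" allows.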
20190377691 | METHOD AND ELECTRONIC DEVICE FOR DATA PROCESSING BETWEEN MULTIPLE PROCESSORS - An electronic device may comprise: a first memory for storing first data at a designated rate; a first processor connected to the first memory and configured to divide the first data into multiple second data, each having a size smaller than the size of the first data; a second memory for storing at least some of the multiple second data at a rate faster than the designated rate; a second processor connected to the second memory and configured to process the at least some of the multiple second data; and a DMA control module, connected to the second processor, for transmitting/receiving data between the first memory and the second memory, wherein the DMA control module is configured to: at least on the basis of a processing command for the multiple second data which is transmitted from the first processor to the second processor, receive, from the first memory, the at least some of the multiple small-sized second data divided from the first data; transmit the at least some of the multiple second data to the second processor; and transmit, to the first memory, third data processed by the second processor by using the at least some of the multiple second data. | 2019-12-12 |
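The split-process-return flow in this abstract can be modeled as a chunked pipeline: the first data is divided into smaller second-data chunks, each chunk is handed to the second processor, and the processed third data is written back. The function below is a loose, software-only sketch; chunk size and the processing callback are placeholders.

```python
# Sketch: divide first data into second-data chunks small enough for the
# fast second memory, process each chunk, and collect the third data that
# the DMA module would return to the first memory.
def dma_pipeline(first_data, chunk_size, process):
    third = []
    for i in range(0, len(first_data), chunk_size):
        second = first_data[i:i + chunk_size]   # one second-data chunk
        third.extend(process(second))           # second processor's work
    return third                                # written back to first memory
```

A real DMA controller would overlap the transfers with processing; the loop here only shows the data movement order.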
20190377692 | INDEXING OF MEMORY PAGES TO PROVIDE SECURE MEMORY ACCESS - An input data may be received. Memory pages may be identified where each of the memory pages includes one or more cache lines. A first index table that includes cache lines may be generated from the memory pages based on the input data. Subsequently, an output data may be provided based on a particular cache line from the cache lines of the first index table. | 2019-12-12 |
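The abstract's flow — gather cache lines from the identified pages into an index table, derive an index from the input, emit the selected line — can be sketched loosely. The index derivation below (a byte sum) is entirely an assumption; the patent does not disclose one.

```python
# Sketch: build a first index table from every cache line of the memory
# pages, derive an index from the input data, and output the cache line
# that the index selects.
def indexed_lookup(pages, input_data):
    index_table = [line for page in pages for line in page]
    idx = sum(input_data) % len(index_table)   # assumed index derivation
    return index_table[idx]
```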
20190377693 | METHOD TO GENERATE PATTERN DATA OVER GARBAGE DATA WHEN ENCRYPTION PARAMETERS ARE CHANGED - A memory device is provided that includes a memory location configured to store information representing data written using a first encryption/decryption method, a read channel configured to read and decrypt information using a second encryption/decryption method and an apparatus configured to prevent the read channel from reading the memory location using the second encryption/decryption method. | 2019-12-12 |
20190377694 | FINE GRAINED MEMORY AND HEAP MANAGEMENT FOR SHARABLE ENTITIES ACROSS COORDINATING PARTICIPANTS IN DATABASE ENVIRONMENT - Many computer applications comprise multiple threads of execution. Some client application requests are fulfilled by multiple cooperating processes. Techniques are disclosed for creating and managing memory namespaces that may be shared among a group of cooperating processes in which the memory namespaces are not accessible to processes outside of the group. The processes sharing the memory each have a handle that references the namespace. A process having the handle may invite another process to share the memory by providing the handle. A process sharing the private memory may change the private memory or the processes sharing the private memory according to a set of access rights assigned to the process. The private shared memory may be further protected from non-sharing processes by tagging memory segments allocated to the shared memory with a protection key and/or an encryption key used to encrypt/decrypt data stored in the memory segments. | 2019-12-12 |
20190377695 | TERMINAL MANAGEMENT DEVICE AND TERMINAL DEVICE - An optimal network is constructed in a case in which a plurality of IoT standards or IoT platforms coexist. According to the present disclosure, provided is a terminal management device including a receiving unit that receives, from a terminal that collects information from a sensor, access timing information related to an accessible timing to the terminal and a transmitting unit that transmits the access timing information to a server that searches for the information. With this configuration, it is possible to construct an optimal network in a case in which a plurality of IoT standards or IoT platforms coexist. | 2019-12-12 |
20190377696 | USING STORAGE CONTROLLERS TO RESPOND TO MULTIPATH INPUT/OUTPUT REQUESTS - A computer program product, according to one embodiment, includes a computer readable storage medium having program instructions embodied therewith. The computer readable storage medium is not a transitory signal per se. Moreover, the program instructions are readable and/or executable by a controller to cause the controller to perform a method which includes: receiving a same input/output request along more than one communication paths, and evaluating a workload associated with each of the communication paths. A communication path having a lowest workload associated therewith is selected. Moreover, information corresponding to the input/output request as well as a status are sent along the selected communication path. The status sent indicates that the selected communication path was chosen to satisfy the input/output request. A special status indicating that none of the remaining communication paths were chosen to satisfy the input/output request is also sent along each of the remaining communication paths. | 2019-12-12 |
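The selection logic in this abstract — pick the lowest-workload path, return the chosen status on it and a "not chosen" status on the rest — can be sketched as a single function. Names and the status strings are assumptions for the example.

```python
# Sketch: given per-path workloads for one duplicated I/O request, select
# the least-loaded path for the data and report a status on every path.
def respond_multipath(path_workloads):
    chosen = min(path_workloads, key=path_workloads.get)
    return {path: ("chosen" if path == chosen else "not_chosen")
            for path in path_workloads}
```

Sending an explicit "not chosen" status on the other paths lets their initiators retire the duplicate requests instead of timing out.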
20190377697 | MICROCOMPUTER - A microcomputer including first and second CPUs is provided. The first and second CPUs may execute identical control programs in parallel. The microcomputer may control a write access by the first or second CPU. The microcomputer may compare an output of the first CPU with an output of the second CPU. Data is written to a write target unit. The microcomputer outputs a write response signal to the first and second CPUs when a data write destination of the first and second CPUs is the write target unit. The microcomputer outputs an abnormality determination signal when data output from the first CPU mismatches with data output from the second CPU. The microcomputer writes the data to the write target unit when the data write destination of the first and second CPUs is the write target unit and the abnormality determination signal is not input. | 2019-12-12 |
20190377698 | In-Connector Data Storage Device - A data storage device includes a case and a connector housed within the case. The connector includes a first connection interface having a plurality of connection fingers and a second connection interface having a plurality of springs. The case is positionable within a data storage device port such that the data storage device is completely disposed within the data storage device port when used. | 2019-12-12 |
20190377699 | ANALOG INDUSTRIAL CONTROL SYSTEMS ANTI-DETECTION ARCHITECTURE AND METHOD - An electronic signal transmission control system and method thereof comprising a plurality of hosts connected to a transmission control system. The plurality of hosts respectively connect with a plurality of command data modules and provide execution command data to corresponding address conversion modules. The command data is converted into an electronic signal and is provided to a processing module for analysis and integration. The integrated electronic signal is transmitted to an application interface module. A data processing module receives an execution command issued by an external device. The execution command is analyzed and then transmitted to a control engine module for integration. The external device transmits the data signal via a system hub module. | 2019-12-12 |
20190377700 | SYSTEM AND METHOD FOR SELECTIVE COMMUNICATION THROUGH A DUAL-IN-LINE MODULE (DIMM) SOCKET VIA A MULTIPLEXER - Systems and methods for selective communication through a DIMM socket via a multiplexer. A system comprises a computer interface board that includes at least two DIMM sockets, a communication bus circuitry and a control circuitry coupled to the at least two DIMM sockets. The communication bus circuitry includes a first portion of a first bus configured to receive a first set of data, and a second portion of the first bus configured to receive a second set of the data. The control circuitry includes a first multiplexer coupled to a first DIMM socket and the first portion of the first bus, the first multiplexer configured to enable the control circuitry to selectively communicate through the first DIMM socket, via the first portion of the first bus, using one of a number of communication protocols. | 2019-12-12 |
20190377701 | VECTOR DECODING IN TIME-CONSTRAINED DOUBLE DATA RATE INTERFACE - Systems, methods, and apparatus for improving throughput of a serial bus are described. A method performed at a device coupled to a serial bus includes detecting a transition in signaling state of a first wire of the serial bus while a first pair of consecutive bits is being received from the first wire of the serial bus, determining that no transition in signaling state of the first wire occurred while a second pair of consecutive bits is being received from the first wire, defining bit values for the first pair of consecutive bits based on direction of the transition in signaling state detected while the first pair of consecutive bits is being received, and sampling the signaling state of the first wire while the second pair of consecutive bits is being received to obtain a bit value used to represent both bits in the second pair of consecutive bits. | 2019-12-12 |
20190377702 | I3C SINGLE DATA RATE WRITE FLOW CONTROL - Systems, methods, and apparatus for communication over a serial bus in accordance with an I3C protocol are described that enable a slave device to request that a bus master device terminate a write transaction with the slave device. The serial bus may be operated according to an I3C single data rate protocol. In various aspects of the disclosure, a method performed at a master device coupled to a serial bus includes initiating a write transaction between the master device and a slave device, where the write transaction includes a plurality of data frames, and at least one data frame is configured with a transition bit in place of a parity bit. The method may include terminating the write transaction when the slave device drives a data line of the serial bus while receiving the transition bit. | 2019-12-12 |
20190377703 | METHODS AND APPARATUS FOR REDUCED-LATENCY DATA TRANSMISSION WITH AN INTER-PROCESSOR COMMUNICATION LINK BETWEEN INDEPENDENTLY OPERABLE PROCESSORS - Methods and apparatus for data transmissions over an inter-processor communication (IPC) link between two (or more) independently operable processors. In one embodiment, the IPC link is configured to enable an independently operable processor to transact data to another independently operable processor, while obviating transactions (such as via direct memory access) by encapsulating a payload within a data structure. For example, a host processor may insert the payload into a transfer descriptor (TD), and transmit the TD to a peripheral processor. The host processor may also include a head index and/or a tail index within a doorbell message sent to the peripheral processor, obviating another access of memory. The peripheral processor may perform similar types of transactions via a completion descriptor (CD) sent to the host processor. In some variants, the peripheral may be a Bluetooth-enabled device optimized for low-latency, low-power, and/or low-throughput transactions. | 2019-12-12 |
20190377704 | PROGRAMMED INPUT/OUTPUT MODE - A data processing system and method are provided. A host computing device comprises at least one processor. A network interface device is arranged to couple the host computing device to a network. The network interface device comprises a buffer for receiving data for transmission from the host computing device. The processor is configured to execute instructions to transfer the data for transmission to the buffer. The data processing system further comprises an indicator store configured to store an indication that at least some of the data for transmission has been transferred to the buffer wherein the indication is associated with a descriptor pointing to the buffer. | 2019-12-12 |