20th week of 2016 patent application highlights part 47 |
Patent application number | Title | Published |
20160139972 | REDUNDANT ENCODING - Analyzing data is disclosed. Error events are tracked. The error events are classified based on a number of errors included in each event. A desired level of error event to be able to be corrected in order to maintain an acceptable rate of uncorrected errors is determined. A redundancy level is selected so that new error events corresponding to the desired level of error event or a lower level of error event are corrected. | 2016-05-19 |
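The redundancy-selection idea in 20160139972 can be sketched as follows — a hypothetical Python illustration, assuming each error event is represented as a list of error positions and that a correction capability of t corrects any event containing at most t errors (the function and parameter names are invented, not from the application):

```python
from collections import Counter

def select_redundancy_level(error_events, target_uncorrected_rate):
    """Classify error events by the number of errors each contains, then pick
    the smallest correction capability t whose rate of uncorrected events
    stays at or below the target."""
    counts = Counter(len(event) for event in error_events)
    total = sum(counts.values())
    for t in range(max(counts) + 1):
        # events with more than t errors remain uncorrected at capability t
        uncorrected = sum(n for errors, n in counts.items() if errors > t)
        if uncorrected / total <= target_uncorrected_rate:
            return t
    return max(counts)
```

With three single-error events, one two-error event, and one three-error event, a 25% target yields a capability of 2: correcting only single-error events leaves 40% uncorrected, while correcting up to two errors leaves 20%.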
20160139973 | METHOD AND SEQUENCER FOR DETECTING A MALFUNCTION OCCURRING IN MAJOR IT INFRASTRUCTURES - A method for monitoring the operation of an IT infrastructure including a plurality of calculation nodes, includes selecting calculation nodes for performing a calculation, performing the calculation via the selected calculation nodes, attributing, via the sequencer, a score to each one of the calculation nodes having participated in the calculation performed, with each score reflecting a difference between a measured operating parameter of the calculation node for which the score is attributed and a reference operating parameter of the calculation node for which the score is attributed, verifying the operation of the calculation nodes having participated in the calculation performed, the verification being carried out using scores attributed to the calculation nodes having participated in the calculation. | 2016-05-19 |
20160139974 | SELF-HEALING CHARGING DEVICE - Self-healing charging devices and techniques for identifying and/or troubleshooting causes of performance degradation in user devices are described. A charging device described herein can receive first data from a user device connected to the charging device and analyze the first data to determine diagnostic data associated with the user device. Based at least in part on determining the diagnostic data, the charging device described herein can provide an indication via at least one of the charging device or the user device, the indication signifying available actions that can be taken to improve performance of the user device. The charging devices described herein can enable users to easily identify issues causing and/or leading to performance degradation on their user devices and remedy and/or prevent problems that cause the performance degradation while corresponding user devices are charging via the charging devices. | 2016-05-19 |
20160139975 | RECORDING THE CORE DATA OF A COMPUTER PROCESS WHICH PROVIDES TRACE DATA - A method, apparatus and computer program are provided for recording the core data of a computer process, the computer process comprising trace points with core data provided for each such trace point. A first set of core data comprising an image of a memory for the computer process is stored in response to a first set of trace data being produced for the computer process for a first trace point. A second set of core data is stored in response to a second set of trace data being produced for the computer process for a second trace point, where the second set of core data comprises a record of any change in current memory contents for the computer process with respect to the first set of core data. | 2016-05-19 |
20160139976 | MEMORY DEVICE WITH SECURE TEST MODE - A method in a memory device that operates in a testing mode, includes receiving a vector to be written to the memory device. The vector is written to the memory device only if the vector belongs to a predefined set of test vectors. If the vector does not belong to the set of test vectors, the vector is converted to one of the test vectors, and the converted vector is written to the memory device. | 2016-05-19 |
20160139977 | System and method for abnormality detection - A system and method for use in data analysis are provided. The system comprises a data processing utility configured to receive and process input data, comprising: a plurality of neural network modules capable of operating in a training mode and in a data processing mode in accordance with the training; a network training utility configured for operating the neural network modules in the training mode, utilizing a selected set of training data pieces for sequentially training the neural network modules in a cascade order to reduce an error value with respect to the selected set of training data pieces for each successive neural network module in the cascade; and an abnormality detection utility configured for sequentially operating said neural network modules to process input data, and classifying said input data as abnormal upon identifying that all the neural network modules provide error values above corresponding abnormality detection thresholds. | 2016-05-19 |
20160139978 | FIRMWARE DUMP COLLECTION FROM PRIMARY SYSTEM DUMP DEVICE ADAPTER - A method of firmware dump collection from a primary dump adapter is provided. The method includes identifying a primary system dump device and a secondary system dump device. An operating system (OS) dump coordinator writes non-disruptive state data to the primary system dump device, and writes disruptive state data to the secondary system dump device. Non-disruptive state data is requested from a hardware device adapter that is connected to the non-primary system dump device. Disruptive state data is requested from the hardware device adapter that is connected to the primary system dump device. The non-disruptive state data is written to the primary system dump device. Disruptive state data is written to the secondary system dump device. | 2016-05-19 |
20160139979 | DEMYSTIFYING OBFUSCATED INFORMATION TRANSFER FOR PERFORMING AUTOMATED SYSTEM ADMINISTRATION - Techniques for automating the administration of computer systems. In one set of embodiments, information can be received specifying one or more commands and a list of target computer systems. Upon receiving this information, the one or more commands can be automatically executed in parallel against the target computer systems. In certain embodiments, executing the one or more commands in parallel can include forking a child process for each target computer system, and executing the one or more commands against that target computer system in the context of the child process. Output and error information that is collected by each child process as a result of executing the one or more commands can be aggregated and made available to a system administrator upon completion. Further, error information that is generated as a result of the automated administration process itself can be stored and made available to the system administrator for review. | 2016-05-19 |
20160139980 | ERASURE-CODING EXTENTS IN AN APPEND-ONLY STORAGE SYSTEM - A data storage system stores sets of data blocks in extents located on storage devices. During operation, the system performs an erasure-coding operation by obtaining a set of source extents, wherein each source extent is stored on a different machine in the data storage system. The system also selects a set of destination machines for storing destination extents, wherein each destination extent is stored on a different destination machine. Next, the system performs the erasure-coding operation by retrieving data from the set of source extents, performing the erasure-coding operation on the retrieved data to produce erasure-coded data, and then writing the erasure-coded data to the set of destination extents on the set of destination machines. Finally, after the erasure-coding operation is complete, the system commits results of the erasure-coding operation to enable the set of destination extents to be accessed in place of the set of source extents. | 2016-05-19 |
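The erasure-coding step in 20160139980 can be illustrated with the simplest possible code, a single XOR parity extent — a stand-in for whatever erasure code the system actually uses, with hypothetical function names:

```python
from functools import reduce

def erasure_code(source_extents):
    """Single-parity (RAID-5-style) stand-in for the erasure-coding
    operation: the destination extents are the equal-length source extents
    plus one XOR parity extent, each destined for a different machine."""
    length = len(source_extents[0])
    assert all(len(e) == length for e in source_extents)
    parity = bytes(reduce(lambda a, b: a ^ b, col)
                   for col in zip(*source_extents))
    return list(source_extents) + [parity]

def recover(extents, missing_index):
    """Rebuild one lost extent by XOR-ing all the surviving extents."""
    survivors = [e for i, e in enumerate(extents) if i != missing_index]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
```

After the destination extents are committed, the loss of any single extent is recoverable from the rest; a production system would use a code tolerating more failures, but the read-encode-write-commit shape is the same.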
20160139981 | TECHNIQUES FOR INFORMATION PROTECTION IN A SOLID-STATE DEVICE BASED STORAGE POOL - A technique for protecting stored information from read disturbance includes receiving a first write request to a solid-state device (SSD) in a storage pool that employs an erasure code. The first write request has an associated identifier and associated data. In response to receiving the first write request, the first write request is assigned to two or more SSD blocks of the SSD based on the identifier. Pages of the associated data are then written to the assigned SSD blocks, such that each SSD block holds data associated with only a single identifier. | 2016-05-19 |
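The identifier-to-block assignment in 20160139981 could look like the following hash-based sketch; the hashing scheme, block counts, and names are all assumptions for illustration, since the application does not specify the mapping:

```python
import hashlib

def assign_blocks(identifier, num_blocks_per_id=2, total_blocks=1024):
    """Deterministically map a write request's identifier to a fixed set of
    SSD blocks, so that each block only ever holds data for one identifier."""
    digest = hashlib.sha256(identifier.encode()).digest()
    start = int.from_bytes(digest[:8], "big") % total_blocks
    return [(start + i) % total_blocks for i in range(num_blocks_per_id)]
```

Because the mapping is a pure function of the identifier, repeated writes for the same identifier always land in the same blocks, which is what confines read-disturb effects to a single identifier's data.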
20160139982 | GREEN NAND SSD APPLICATION AND DRIVER - A GNSD Driver is coupled to host DRAM and has a memory manager, a data grouper engine, a data ungrouper engine, a power manager, and a flush/resume manager. The GNSD Driver is coupled to a GNSD application, and the host DRAM to a Non-Volatile Memory Device. The GNSD Driver further includes a compression/decompression engine, a de-duplication engine, an encryption/decryption engine, or a high-level error correction code engine. The encryption/decryption engine encrypts according to DES or AES. A method of operating a GNSD Driver and a GNSD application coupled to DRAM of a host includes coupling: Configuration and Register O/S Settings to the host and the GNSD Application; a data grouper and data ungrouper to the host DRAM and to an Upper and a Lower Filter; a power manager and a memory manager to the host; a flush/resume manager to the DRAM; and the DRAM to an SEED SSD. | 2016-05-19 |
20160139983 | DEVICE AND METHOD FOR DETECTING CONTROLLER SIGNAL ERRORS IN FLASH MEMORY - In accordance with the disclosure, there is provided a memory device configured to implement an error detection protocol. The memory device includes a memory array and a first input for receiving a control signal corresponding to a command cycle. The memory device also includes a second input for receiving an access control signal during a command cycle and for receiving an error detection signal during the command cycle, wherein the error detection signal includes information corresponding to the access control signal. The memory device further includes control logic configured to verify the correctness of the access control signal by a comparison with the error detection signal and perform an operation on the memory array during the command cycle when the correctness of the access control signal is verified. | 2016-05-19 |
20160139984 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device may include a memory device suitable for storing data and reading stored data as read data, and a bit distribution check unit suitable for performing a first error detection operation on the read data, based on a bit distribution of the read data. | 2016-05-19 |
20160139985 | RECOVERY IMPROVEMENT FOR QUIESCED SYSTEMS - Methods and apparatuses for performing a quiesce operation during a processor recovery action are provided. A first processor performs a processor recovery action, retrieves a quiesce status of a computer system from a cache shared with a second processor, and determines its own quiesce status based, at least in part, on the retrieved quiesce status of the computer system. | 2016-05-19 |
20160139986 | DATA STORAGE DEVICE AND ERROR CORRECTION METHOD THEREOF - A data reading method is applied to a data storage device that includes a flash memory capable of operating in an SLC mode and a multi-level cell mode. The data reading method includes reading a page corresponding to a first word line of the flash memory in the SLC mode according to a read command of a host to obtain a first data segment, writing predetermined data into a most-significant-bit page corresponding to the first word line in the multi-level cell mode when the first data segment has an error, and reading the page corresponding to the first word line in the SLC mode again to obtain a second data segment. | 2016-05-19 |
20160139987 | GENERATING SOFT READ VALUES USING MULTIPLE READS AND/OR BINS - A starting read threshold is received. A first offset and a second offset is determined. A first read is performed at the starting read threshold offset by the first offset to obtain a first hard read value and a second read is performed at the starting read threshold offset by the second offset to obtain a second hard read value. A soft read value is generated based at least in part on the first hard read value and the second hard read value. | 2016-05-19 |
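The two-read scheme in 20160139987 maps naturally onto a 2-bit soft value; this sketch assumes the first offset sits below the second and encodes agreement between the hard reads as high-confidence bins and disagreement as low-confidence bins (the specific encoding is illustrative, not taken from the application):

```python
def soft_read_value(hard1, hard2):
    """Combine two 1-bit hard reads, taken at (threshold + offset1) and
    (threshold + offset2), into a 2-bit soft value: agreement yields a
    strong bin, disagreement a weak bin between the two read points."""
    if hard1 == 1 and hard2 == 1:
        return 3  # strong 1: cell reads as 1 at both thresholds
    if hard1 == 0 and hard2 == 0:
        return 0  # strong 0: cell reads as 0 at both thresholds
    return 2 if hard2 == 1 else 1  # weak bins: the two reads disagree
```

The soft values can then feed a soft-decision decoder (e.g., LDPC), which is the usual motivation for generating bins from multiple hard reads.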
20160139988 | MEMORY UNIT - Operating a memory unit during a memory access operation. The memory unit includes a configuration of N data chips. A line of data stored in the memory unit is divided, with a controller, into a first portion and a second portion. The first portion of the line of data is encoded, with an outer code encoder, to generate an outer code output. The second portion of the line of data and the outer code output from the outer code encoder are encoded, with an inner code encoder, to generate an inner code output. A first layer of protection for the line of data is generated based on the inner code output and is stored to the memory unit, where the first layer of protection includes local error detection (LED) information combined with the line of data. A second layer of protection for the line of data is generated based on the first layer of protection and is stored to the memory unit. A decoding operation to retrieve the line of data is performed at the controller. | 2016-05-19 |
20160139989 | GLOBAL ERROR CORRECTION - A method that includes evaluating, with a controller, local error detection (LED) information in response to a first memory access operation is disclosed. The LED information is evaluated per cache line segment of data associated with a rank of a memory. The method further includes determining an error in at least one of the cache line segments based on an error detection code and determining whether global error correction (GEC) data for a first cache line associated with the at least one cache line segment is stored in a GEC cache in the controller. The method also includes correcting the first cache line associated with the at least one cache line segment based on the GEC data retrieved from the GEC cache in the controller without accessing GEC data from a memory. | 2016-05-19 |
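A two-layer toy version of the LED/GEC split in 20160139989 — CRC32 per cache-line segment for local detection, XOR parity across segments for global correction. The real codes are unspecified in the abstract; this only illustrates the detect-locally, correct-globally shape:

```python
import zlib
from functools import reduce

def protect(segments):
    """LED layer: a CRC32 per segment. GEC layer: XOR parity across the
    equal-length segments of one cache line."""
    led = [zlib.crc32(s) for s in segments]
    gec = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*segments))
    return led, gec

def read_line(segments, led, gec):
    """Check each segment's LED code on access; if exactly one segment is
    corrupt, rebuild it from the GEC parity plus the good segments."""
    bad = [i for i, s in enumerate(segments) if zlib.crc32(s) != led[i]]
    if not bad:
        return segments
    assert len(bad) == 1, "this sketch corrects a single bad segment only"
    good = [s for i, s in enumerate(segments) if i != bad[0]]
    fixed = bytes(reduce(lambda a, b: a ^ b, col)
                  for col in zip(*(good + [gec])))
    repaired = list(segments)
    repaired[bad[0]] = fixed
    return repaired
```

Caching the GEC data at the controller, as the abstract describes, avoids the extra memory access on the correction path.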
20160139990 | STORAGE SYSTEM AND STORAGE APPARATUS - A storage apparatus includes a processor. The processor is configured to sequence a plurality of data pieces. The plurality of data pieces are respectively stored in a plurality of memory devices. The processor is configured to set compensation ranges to be respectively compensated by a first predetermined number of parities. The compensation ranges are respective portions of consecutive data pieces among the sequenced data pieces. The compensation ranges include a variably set number of data pieces for the respective parities. Each of the plurality of data pieces is included in a second predetermined number of compensation ranges. | 2016-05-19 |
20160139991 | PARITY-LAYOUT GENERATING METHOD, PARITY-LAYOUT GENERATING APPARATUS, AND STORAGE SYSTEM - A parity-layout generating method, includes: creating a first local parity layout with a first calculation range for calculating local parity; creating a second local parity layout with a second calculation range for calculating local parity, a length of the second calculation range being different from a length of the first calculation range; and creating, by a computer, a third local parity layout with the first calculation range and the second calculation range by combining the first local parity layout and the second local parity layout. | 2016-05-19 |
20160139992 | SEGMENT DEDUPLICATION SYSTEM WITH ENCRYPTION AND COMPRESSION OF SEGMENTS - A system for storing encrypted compressed data comprises a processor and a memory. The processor is configured to determine whether an encrypted compressed segment has been previously stored. The encrypted compressed segment was determined by breaking a data stream, a data block, or a data file into one or more segments and compressing and then encrypting each of the one or more segments. The processor is further configured to store the encrypted compressed segment in the event that the encrypted compressed segment has not been previously stored. The memory is coupled to the processor and configured to provide the processor with instructions. | 2016-05-19 |
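The segment-then-compress-then-encrypt pipeline in 20160139992 can be sketched as below. The segment size, the fingerprinting, and especially the cipher are stand-ins: `_toy_encrypt` is a deterministic keystream-XOR placeholder (deterministic encryption is what lets identical segments deduplicate), not a real cipher:

```python
import hashlib
import zlib

SEGMENT_SIZE = 4096  # hypothetical fixed-size segmenting

def _toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Placeholder deterministic cipher: XOR with a SHA-256-derived
    keystream. A real system would use a proper convergent scheme."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream[-32:]).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def store_stream(data: bytes, key: bytes, store: dict) -> int:
    """Break the data into segments, compress then encrypt each, and store
    only segments not previously stored. Returns segments written."""
    written = 0
    for off in range(0, len(data), SEGMENT_SIZE):
        seg = zlib.compress(data[off:off + SEGMENT_SIZE])
        enc = _toy_encrypt(seg, key)
        fp = hashlib.sha256(enc).hexdigest()
        if fp not in store:  # duplicate segment: skip the write
            store[fp] = enc
            written += 1
    return written
```

Note the ordering the claim specifies — compress first, then encrypt — matters: ciphertext is incompressible, so encrypting first would defeat the compression step.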
20160139993 | STORAGE DEVICE FAILURE RECOVERY SYSTEM - A storage device failure recovery system includes a storage IHS and a user IHS coupled together over a network. The user IHS includes a storage system having a storage device, and a storage repair function that periodically provides a storage device image over the network to the storage IHS using data from the storage device. The storage repair function detects a failure of the storage device and streams an operating system on the user IHS using the storage device image stored on the storage IHS. While streaming the operating system on the user IHS using the storage device image stored on the storage IHS, the storage repair function analyzes the failure of the storage device, determines a storage system failure recovery procedure, and performs the storage system failure recovery procedure to restore the storage system while a user remains productive on the user IHS via the streamed operating system. | 2016-05-19 |
20160139994 | FAULT MANAGEMENT SYSTEM, FAULT MANAGEMENT SERVER, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM IN WHICH FAULT MANAGEMENT PROGRAM IS STORED - In a fault management system, each of one or more image forming apparatuses includes an agent unit that transmits, upon occurrence of a fault, a notice that the fault has occurred and service call information indicative of details of the fault to a fault management server. The fault management server includes a manager unit that transmits a reboot command to the agent unit of the image forming apparatus in which the fault has occurred, upon reception of the notice and the service call information from the agent unit of the image forming apparatus in which the fault has occurred. The agent unit of the image forming apparatus in which the fault has occurred reboots the image forming apparatus in which the fault has occurred, according to the reboot command. | 2016-05-19 |
20160139995 | INFORMATION PROCESSING APPARATUS, MEMORY DUMP METHOD, AND STORAGE MEDIUM - An information processing apparatus includes a processor that executes an operating system, a nonvolatile main memory device to which the processor is directly accessible and that has a controller, and an external storage device to which the processor is not directly accessible. When the processor detects an error of the operating system, the processor resets devices other than the nonvolatile main memory device and restarts the operating system, and the controller writes data of the nonvolatile main memory device to the external storage device. | 2016-05-19 |
20160139996 | METHODS FOR PROVIDING UNIFIED STORAGE FOR BACKUP AND DISASTER RECOVERY AND DEVICES THEREOF - A method, non-transitory computer readable medium, and device for providing unified storage for backup and disaster recovery include capturing a recent snapshot of one or more file systems associated with a client computing device. The captured recent snapshot is stored in a secondary storage device. One or more changes to one or more files in the one or more file systems are determined by comparing the stored recent snapshot against an initial snapshot. A response to a disaster recovery request or a backup request based on the determined one or more changes to the one or more file systems is provided. | 2016-05-19 |
20160139997 | DATASETS PROFILING TOOLS, METHODS, AND SYSTEMS - A dataset profiling tool configured to identify unique and non-unique column combinations in a dataset which comprises a plurality of tuples, the tool including: an inserts handler module configured to: receive one or more new tuples for insertion into the dataset, receive one or more minimal uniques and one or more maximal non-uniques for the dataset, identify and group, for each minimal unique, any tuples of the dataset and any of the one or more new tuples which contain duplicate values in the column combinations of the minimal unique, to form grouped tuples which are grouped according to the minimal unique to which the tuples relate, validate the grouped tuples to identify supersets of the minimal uniques for which duplicate values were identified, to generate a new set of one or more minimal uniques and one or more maximal non-uniques, and output the new set of one or more updated minimal uniques and one or more maximal non-uniques. | 2016-05-19 |
20160139998 | System, Method and Computer Program Product for Contact Information Backup and Recovery - A method for contact information backup and recovery comprising receiving, by a contact information backup and recovery system, subscriber information, storing the subscriber information in a contacts data store, receiving, by the contact information backup and recovery system, contacts information associated with the subscriber information, storing the contacts information in the contacts data store, wherein the contacts information is stored in relation to the subscriber information, receiving, by the contact information backup and recovery system, a contacts recovery request, and performing, by the contact information backup and recovery system, a contacts recovery operation, comprising acquiring target contact information from the stored contacts information in the contacts data store, performing a telephonic communication using the target contact information, and sending the stored contacts information to a remote storage device. | 2016-05-19 |
20160139999 | UNIFIED COMMUNICATIONS MODULE (UCM) - A fault tolerant control system delivers an embedded functional safety core and a distributed control engine with an onboard communication link in an industrial process control environment. The fault tolerant control system includes a process control workstation connected to a first network and a fault tolerant safety controller connected to a second network, wherein a process controller module, a safety controller module and a field device system integration module are co-located on a power interface board. | 2016-05-19 |
20160140000 | SVC CLUSTER CONFIGURATION NODE FAILOVER - An SVC cluster manages a plurality of storage devices and includes a plurality of SVCs interconnected via a network, each SVC acting as a separate node. A new configuration node is activated in response to configuration node failures. The new configuration node retrieves client subscription information about events occurring in storage devices managed by the SVC cluster from the storage devices. In response to events occurring in the storage device managed by the SVC cluster, the new configuration node obtains storage device event information from a storage device event monitoring unit. The new configuration node sends storage device events to clients who have subscribed to this information according to subscription information obtained. The storage device is not installed in the original configuration node. | 2016-05-19 |
20160140001 | ADAPTIVE DATACENTER TOPOLOGY FOR DISTRIBUTED FRAMEWORKS JOB CONTROL THROUGH NETWORK AWARENESS - Systems, methods, and computer program products to perform an operation comprising receiving a priority of a distributed computing job, an intermediate traffic type of the distributed computing job, and a set of candidate compute nodes available to process the distributed computing job, the candidate compute nodes each available to process at least one input split of the distributed computing job, and selecting a mapper node from the candidate compute nodes, for one of the input splits, wherein the mapper node is selected based on the priority and the intermediate traffic type of the distributed computing job, wherein the mapper compute node is further selected upon determining that the mapper node is not affected by an error, and a resource utilization score for the mapper node does not exceed a utilization threshold. | 2016-05-19 |
20160140002 | RECOVERY IMPROVEMENT FOR QUIESCED SYSTEMS - Methods and apparatuses for performing a quiesce operation during a processor recovery action are provided. A first processor performs a processor recovery action, retrieves a quiesce status of a computer system from a cache shared with a second processor, and determines its own quiesce status based, at least in part, on the retrieved quiesce status of the computer system. | 2016-05-19 |
20160140003 | NON-DISRUPTIVE CONTROLLER REPLACEMENT IN A CROSS-CLUSTER REDUNDANCY CONFIGURATION - During a storage redundancy giveback from a first node to a second node following a storage redundancy takeover from the second node by the first node, the second node is initialized in part by receiving a node identification indicator from the second node. The node identification indicator is included in a node advertisement message sent by the second node during a giveback wait phase of the storage redundancy giveback. The node identification indicator includes an intra-cluster node connectivity identifier that is used by the first node to determine whether the second node is an intra-cluster takeover partner. In response to determining that the second node is an intra-cluster takeover partner, the first node completes the giveback of storage resources to the second node. | 2016-05-19 |
20160140004 | APPARATUS, SYSTEM, AND METHOD FOR CONDITIONAL AND ATOMIC STORAGE OPERATIONS - An apparatus, system, and method are disclosed for implementing conditional storage operations. Storage clients access and allocate portions of an address space of a non-volatile storage device. A conditional storage request is provided, which causes data to be stored to the non-volatile storage device on the condition that the address space of the device can satisfy the entire request. If only a portion of the request can be satisfied, the conditional storage request may be deferred or fail. An atomic storage request is provided, which may comprise one or more storage operations. The atomic storage request succeeds if all of the one or more storage operations are complete successfully. If one or more of the storage operations fails, the atomic storage request is invalidated, which may comprise deallocating logical identifiers of the request and/or invalidating data on the non-volatile storage device pertaining to the request. | 2016-05-19 |
20160140005 | PULSED-LATCH BASED RAZOR WITH 1-CYCLE ERROR RECOVERY SCHEME - Systems and methods for error recovery include determining an error in at least one stage of a plurality of stages during a first cycle on a hardware circuit, each of the plurality of stages having a main latch and a shadow latch. A first signal is transmitted to an output stage of the at least one stage to stall the main latch and the shadow latch of the output stage during a second cycle. A second signal is transmitted to an input stage of the at least one stage to stall the main latch of the input stage during the second cycle and to stall the main latch and the shadow latch of the input stage during a third cycle. Data is restored from the shadow latch to the main latch for the at least one stage and the input stage to recover from the error. | 2016-05-19 |
20160140006 | TESTBENCH BUILDER, SYSTEM, DEVICE AND METHOD HAVING AGENT LOOPBACK FUNCTIONALITY - A testbench for testing a device under test (DUT), wherein the testbench has a verification environment including a reference model, a scoreboard and a customized agent for each interface that the DUT needs to receive input from and/or transmit output on. The testbench system is able to be generated by a testbench builder that automatically creates a scoreboard, a reference model, a dispatcher and generic agents including generic drivers, loopback ports, sequencers and/or generic monitors for each interface and then automatically customize the generic agents based on their corresponding interface such that the agents meet the requirements of the interface for the DUT. | 2016-05-19 |
20160140007 | STORAGE DEVICE AND OPERATING METHOD OF THE SAME - An operating method is disclosed for a storage device configured to receive a command from an external device through a command pad, transmit a response to the external device through the command pad, and exchange data with the external device through a plurality of data pads. The operating method includes receiving a debug command through the command pad by the storage device and outputting internal information through the command pad in response to the debug command as the response by the storage device. | 2016-05-19 |
20160140008 | FLASH COPY FOR DISASTER RECOVERY (DR) TESTING - In one embodiment, a computer program product for disaster recovery (DR) testing includes a computer readable storage device having program code embodied therewith. The program code is readable and/or executable by a hardware processor to define a DR family including one or more DR clusters accessible to a DR host and one or more production clusters accessible to a production host, create a backup copy of data stored to the one or more production clusters, store the backup copy to the one or more DR clusters, establish a time-zero in the DR family, create a snapshot of each backup copy stored to the one or more DR clusters, share a point-in-time data consistency at the time-zero among all clusters within the DR family and perform DR testing. The DR host is configured to replicate data from the one or more production clusters to the one or more DR clusters. | 2016-05-19 |
20160140009 | SPECIFYING METHOD AND SPECIFYING APPARATUS - A specifying method executed by a computer, the specifying method includes: acquiring, every specific time interval, a measurement value of a specific property from each of a plurality of devices which have the specific property; calculating a variation between the measurement value for each of the plurality of devices and an estimated value based on a plurality of past measurement values which are acquired from the plurality of devices prior to the measurement value; and specifying at least one device, which expresses a different behavior from other devices, from among the plurality of devices based on a set of variations including the variation regarding the plurality of devices. | 2016-05-19 |
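One way to read the variation-and-specify steps of 20160140009, sketched with a mean-of-past-measurements estimator and a z-score comparison over the set of variations; both choices are assumptions, as the application fixes neither the estimator nor the comparison:

```python
def specify_outliers(history, latest, threshold=2.0):
    """history: device -> list of past measurement values;
    latest: device -> most recent measurement. The estimated value is the
    mean of past measurements; a device is specified as behaving
    differently when its variation deviates from the group's variations
    by more than `threshold` standard deviations."""
    variations = {
        dev: abs(latest[dev] - sum(past) / len(past))
        for dev, past in history.items()
    }
    vals = list(variations.values())
    mean = sum(vals) / len(vals)
    std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
    return [dev for dev, v in variations.items() if (v - mean) / std > threshold]
```

The key structural point from the abstract is that a device is judged against the whole set of variations, not against a fixed per-device limit.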
20160140010 | Self-Healing Charging Device - Self-healing charging devices and techniques for identifying and/or troubleshooting causes of performance degradation in user devices are described. The self-healing charging devices described herein can leverage performance logs associated with user devices to identify problems on the user devices while the user devices are charging. Additionally or alternatively, the self-healing charging devices can leverage predictive models learned from a collection of data derived from a plurality of users associated with a network to identify usage and/or performance patterns for predicting issues that can arise based on usage patterns of the user of the user device. In some examples, the self-healing charging devices can be communicatively coupled to at least one network for offloading some of the processing. The self-healing charging devices can enable users to easily identify issues causing and/or leading to performance degradation on their user devices and remedy and/or prevent problems that cause the performance degradation. | 2016-05-19 |
20160140011 | VISUAL INDICATOR FOR PORTABLE DEVICE - A portable device may perform a method that includes detecting that the portable device is coupled to a host device via a host interface of the portable device. The method includes generating a visual indication at a visual indicator of the portable device. The visual indication is indicative of a data transfer capacity of the host interface. | 2016-05-19 |
20160140012 | Methods And Systems For Status Determination - Methods and systems for status determination are disclosed. Operational status of a node can be considered based on operational rates of a plurality of nodes in a system. An example method can comprise determining a first operational rate of a first node and determining a second operational rate of a second node. A difference between the first operational rate and the second operational rate can be analyzed. For example, the difference can be compared to a threshold to determine an operational status of the first node. If the difference is above the threshold, the operational status can be given a first value, but if the difference is below the threshold, the operational status can be given a second value. The operational status can be sent to a load balancer. | 2016-05-19 |
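The threshold comparison in this abstract reduces to a few lines. A sketch under the assumption that the two status values are labels such as "degraded" and "healthy" (the application does not name them):

```python
def operational_status(first_rate, second_rate, threshold):
    """Compare the difference between two nodes' operational rates to a
    threshold: above it yields one status value, below it another."""
    difference = abs(first_rate - second_rate)
    return "degraded" if difference > threshold else "healthy"
```

The resulting status would then be reported to a load balancer, as the abstract describes.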
20160140013 | Machine-Implemented Method for the Adaptive, Intelligent Routing of Computations in Heterogeneous Computing Environments - A machine-implemented method for the intelligent, adaptive routing of computations in heterogeneous GPU computing environments is provided herein. The method is implemented by a machine as a series of machine-executable steps that cause the machine to route mathematical and statistical computations in engineering, scientific, financial, and general-purpose applications to the processor, or a plurality of processors, that is best able to process the computations. | 2016-05-19 |
20160140014 | APPARATUS AND METHOD FOR DISTRIBUTED INSTRUCTION TRACE IN A PROCESSOR SYSTEM - One disclosed embodiment provides an integrated circuit that has a plurality of processors and a plurality of processor trace collection logic units. Each processor trace collection logic unit corresponds with, and is operatively coupled to, one of the processors. A separate filtering logic unit is operatively coupled to the plurality of processor trace collection logic units. In some embodiments of the integrated circuit, each processor trace collection logic unit is operative to continuously collect processor trace information from a corresponding operatively coupled processor. Each filtering logic unit is operative to monitor the continuous processor trace information for occurrence of a predetermined condition, and to store some of the processor trace information to memory in response to occurrence of that condition. | 2016-05-19 |
20160140015 | DISTRIBUTED ANALYSIS AND ATTRIBUTION OF SOURCE CODE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing analysis tasks and attribution tasks. One of the methods includes receiving data representing a plurality of snapshots of a code base, wherein each snapshot comprises source code files, wherein one or more snapshots have a parent snapshot in the code base according to a revision graph of snapshots in the code base. An attribution set is generated from the plurality of snapshots, the attribution set having a target set of attributable snapshots to be attributed and a support set of all parent snapshots of all snapshots in the target set. An attribution task is distributed for the attribution set to a particular worker node of a plurality of worker nodes. | 2016-05-19 |
20160140016 | EVENT SEQUENCE CONSTRUCTION OF EVENT-DRIVEN SOFTWARE BY COMBINATIONAL COMPUTATIONS - According to an aspect of an embodiment, a method may include determining event sequences of an event-driven software application. The method may further include determining, for each event sequence, a distance with respect to each of one or more target conditions of the event-driven software application. The event sequence distance may indicate a degree to which execution of its corresponding event sequence satisfies a corresponding target condition. The method may also include prioritizing execution of the plurality of event sequences based on the event sequence distances. Further, the method may include exploring, according to the prioritization of execution, an event space that includes one or more of the event sequences and a dependent event that corresponds to the one or more target conditions. | 2016-05-19 |
20160140017 | USING LINKED DATA TO DETERMINE PACKAGE QUALITY - Arrangements described herein relate to determining a quality of a software package. Via linked data, the software package can be linked to at least one test plan and a requirement collection. The software package can be executed in accordance with the test plan using at least one test case. At least one test result of the execution of the software package can be generated. A score can be assigned to the test result, and a score can be assigned to the test case based at least on the test result. Based at least on the scores assigned to the test result and the test case, a package quality score can be assigned to the software package. | 2016-05-19 |
20160140018 | Purity Analysis Using White List/Black List Analysis - Memoizable functions may be identified by analyzing a function's side effects. The side effects may be evaluated using a white list, black list, or other definition. The side effects may also be classified into conditions which may or may not permit memoization. Side effects that may have de minimis or trivial effects may be ignored in some cases where the accuracy of a function may not be significantly affected when the function may be memoized. | 2016-05-19 |
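A minimal sketch of white-list-gated memoization: a function is memoized only when every callee it reaches is on a list of known side-effect-free functions. The white list contents and the way callee names are collected are assumptions for illustration:

```python
import functools

# Hypothetical white list of callees known to be side-effect free.
PURE_WHITELIST = {"len", "sum", "math.sqrt"}

def memoize_if_pure(func, called_names):
    """Memoize func only if every function it calls is on the white
    list; otherwise return it unchanged, since side effects are possible."""
    if all(name in PURE_WHITELIST for name in called_names):
        return functools.lru_cache(maxsize=None)(func)
    return func
```

A real purity analysis would extract `called_names` from the function body (e.g. via bytecode or AST inspection) rather than take them as an argument.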
20160140019 | EVENT SUMMARY MODE FOR TRACING SYSTEMS - Reducing resource requirements in an instrumented process tracing system, a process having a top instrumented process and a nested hierarchy of instrumented sub-processes. A computer receives a plurality of instrumented process data from the top process and the sub-processes, each datum including a process identifier, a process type, a top process identifier, and a process completion elapsed time. Based on the computer determining that the process identifier and the top process identifier in the datum received are equivalent: if the process completion elapsed time in the datum received is determined to be less than a threshold value, the computer writes a summary of the plurality of instrumented process data to a data store, and if the process completion elapsed time in the datum received is determined to not be less than the threshold value, the computer writes the plurality of instrumented process data to the data store. | 2016-05-19 |
20160140020 | REQUEST MONITORING - A method of monitoring requests to a code set is provided, which includes: receiving a request to the code set; creating a trace for the request, the trace defining the path of the request through the code set; accessing a plurality of stored trace patterns, each stored trace pattern defining an acceptable path of a request through the code set; comparing the created trace to the stored trace patterns; and storing the created trace if it does not match one of the stored trace patterns. | 2016-05-19 |
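The monitoring logic above can be sketched directly: a trace is stored only when it matches none of the acceptable path patterns. Traces are modeled here as tuples of code-set steps, which is an assumption, not the application's representation:

```python
def monitor_request(trace, known_patterns, store):
    """Store a request's trace only when it matches no stored trace
    pattern, i.e. it followed an unseen path through the code set."""
    if trace not in known_patterns:
        store.append(trace)  # unrecognized path: keep for inspection
```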
20160140021 | METHODS AND SYSTEMS FOR ISOLATING SOFTWARE COMPONENTS - A software testing system operative to test a software application comprising a plurality of software components, at least some of which are highly coupled hence unable to support a dependency injection, each software component operative to perform a function, the system comprising apparatus for at least partially isolating, from within the software application, at least one highly coupled software component which performs a given function, and apparatus for testing at least the at least partially isolated highly coupled software component. | 2016-05-19 |
20160140022 | DYNAMIC PROVISIONING OF A VIRTUAL TEST ENVIRONMENT - At least one test of a first computing system is launched utilizing a second computing system that includes a first set of computing devices. Progress of the test is monitored during a first period of time. Performance of the second computing system is also monitored during the first period. An additional second set of computing devices is automatically provisioned for inclusion in the second computing system based at least in part on the monitoring of the test progress and monitoring of the performance of the computing system during the first time period. The test can utilize the first and second sets of computing devices during a second period of time subsequent to the first period. | 2016-05-19 |
20160140023 | MODELING AND TESTING INTERACTIONS BETWEEN COMPONENTS OF A SOFTWARE SYSTEM - Various systems and methods are disclosed. For example, a method can involve extracting information from a response. The response is generated in response to a request generated by a test module during execution of a test case. The extracted information describes characteristics of transactions within the test case. The method can then involve generating a display, based upon the extracted information. The display includes information identifying each of the components that participated in at least one of the transactions within the test case. Such a method can be performed by a computing device implementing the test module. | 2016-05-19 |

20160140024 | METHOD AND DEVICE FOR TESTING SEMICONDUCTOR MANUFACTURING EQUIPMENT AUTOMATION PROGRAM - A method for testing an equipment automation program may be implemented using a hardware device and may include the following steps: receiving user input through a user interface of the device; automatically identifying a test scenario based on the user input; automatically and sequentially fetching a plurality of messages according to the test scenario; and automatically and sequentially sending the messages to the equipment automation program. | 2016-05-19 |
20160140025 | METHOD AND APPARATUS FOR PRODUCING A BENCHMARK APPLICATION FOR PERFORMANCE TESTING - A method of producing a benchmark application for testing input/output—I/O—settings of a computer application, the method comprising: compiling trace data relating to operations to be executed by the computer application; grouping the trace data into one or more phases, based on different stages in the execution of the computer application to which the operations relate; identifying patterns in the trace data and comparing the patterns; producing simplified trace data in which trace data having similar patterns are combined; and outputting a benchmark application which includes the simplified trace data and information indicating where the trace data have been combined. | 2016-05-19 |
20160140026 | Systems and Methods for Selection of Test Cases for Payment Terminals - The present disclosure proposes a computer implemented method for selecting test cases to be executed on a terminal by creating a configuration code and applying this code to a set of test case selection tuples. The present disclosure also proposes a method for automatically creating a set of test case selection tuples, taking a source code as an input. The created set of test case selection tuples can be used in the above-mentioned method for selecting test cases. Finally, the present disclosure proposes a method for operating a program for selecting test cases having a user interface and a selection logic. The program may apply the above-mentioned method for selecting test cases by creating a configuration code and applying this code to a set of test case selection tuples. | 2016-05-19 |
20160140027 | REGRESSION TESTING WITH EXTERNAL BREAKPOINTS - Regression testing of software applications is described. Breakpoints are inserted in a programming code of an object to perform testing of all software applications that use the object. A processor in a computing device can receive data representing a programming code of a functionality of a software application rectifying a problem associated with the functionality of the software application. The processor can determine another software application executing the functionality. The processor can insert a breakpoint in the programming code of the functionality of the software application and the another software application. The breakpoint can be inserted at a location in the programming code of the software application where the problem was rectified. The processor can execute the programming code of the functionality including the inserted breakpoint. The processor can determine, based on the executing, whether the problem has been rectified in the software application and the another software application. | 2016-05-19 |
20160140028 | TESTING SYSTEMS AND METHODS - A computer implemented method, system and computing device for identifying a test option associated with an application for a user is described. The method comprises selecting a predefined test indicated by a test identifier associated with the requested application, the test having more than one test option associated therewith, generating a hash of the test identifier and a user identifier associated with the user, processing the hash to generate an index, comparing said index with a distribution of numbers divided into multiple ranges, each range being associated with a test option, and selecting a test option associated with the range into which the index falls. The applications may be computer gaming applications. | 2016-05-19 |
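The hash-and-bucket selection described above is a standard deterministic A/B-assignment pattern and can be sketched compactly. The use of SHA-256 and the percentage-style weights are assumptions; the application only requires a hash of the test and user identifiers mapped into weighted ranges:

```python
import hashlib

def select_test_option(test_id, user_id, option_weights):
    """Deterministically map a (test, user) pair to a test option by
    hashing their identifiers and bucketing the resulting index into
    ranges sized by each option's weight."""
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    index = int(digest, 16) % sum(option_weights.values())
    lower = 0
    for option, weight in option_weights.items():
        if lower <= index < lower + weight:
            return option
        lower += weight
```

Because the hash depends only on the identifiers, the same user always lands in the same option for a given test, which is the point of this scheme for gaming applications.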
20160140029 | EFFICIENT METHOD DATA RECORDING - According to one general aspect, a method may include monitoring the execution of at least a portion of a software application. The method may also include collecting subroutine call information regarding a plurality of subroutine calls included by the portion of the software application, wherein one or more of the subroutine calls is selected for detailed data recording. The method may further include pruning, as the software application is being executed, a subroutine call tree to include only the subroutine calls selected for detailed data recording and one or more parent subroutine calls of each subroutine call selected for detailed data recording. | 2016-05-19 |
20160140030 | AUTOMATIC CORRECTION OF APPLICATION BASED ON RUNTIME BEHAVIOR - A method and associated system for automatically correcting an application based on runtime behavior of the application. An incident indicates a performance of the application in which a problem object produces an outcome that had not been expected by a user. An incident flow for the problem object is automatically analyzed. Actual run of the application renders a forward data flow and at least one backward data flow is simulated from an expected outcome of the problem object. The forward data flow and the at least one backward data flow are compared to create a candidate fault list for the problem object. A technical specification that corrects the incident by use of the candidate fault list and a specification of the application is generated. | 2016-05-19 |
20160140031 | METHODS AND SYSTEMS FOR AUTOMATED TAGGING BASED ON SOFTWARE EXECUTION TRACES - Systems and methods for analysis of execution patterns for applications executing on remote devices. In some implementations of the system, a knowledge base stores successful traces from a plurality of instances of an application and one or more computing processors in the system receive, via a network interface, call-stack information from an instance of the application executing on a remote device, call-stack information including periodic captures of an execution status for the instance of the application, and determine whether there is a similarity between the call-stack information received from the instance of the application and the stored plurality of successful traces. Responsive to determining a similarity, the computing processors add the remote device to a population of devices likely to execute the object and facilitate further actions specific to the device population. | 2016-05-19 |
20160140032 | CORRELATING TEST RESULTS VARIATIONS WITH BUSINESS REQUIREMENTS - A method, system, and computer program product for relating test data to business requirements are provided in the illustrative embodiments. A test operation of a code is configured in a test data processing environment, with a section in the code corresponding to a portion of a business requirements document. A set of baseline results is received. The test operation is executed, identifying the section of the code and associating the section of the code with a test result produced from the test operation. A determination is made whether the test result matches a first baseline result from the set of baseline results within a tolerance. When the test result does not match the first baseline result from the set of baseline results within the tolerance, the portion of the business requirements document is annotated. | 2016-05-19 |
20160140033 | Test Bundling and Batching Optimizations - Test bundling and batching by a test execution framework may be customized in accordance with test suite requirements for testing platform implementations on network-connected, resource-limited devices. Tests, test data and test results may be communicated in bundles or batches. Multiple tests may be bundled into a test application bundle and communicated over a single data connection. Test data for the tests in a bundle may be packaged into a single batch and transferred using a single data connection. Similarly, results from executing the tests in a test application bundle may be batched and transferred together over a single connection. Additionally, a custom user interface may be utilized to allow for customizing the test bundling policy for individual test suites. Providing the ability for a user to customize the test bundling policy may significantly reduce the number of data connections required during test suite execution. | 2016-05-19 |
20160140034 | DEVICES AND METHODS FOR LINKED LIST ARRAY HARDWARE IMPLEMENTATION - A device includes at least one memory including a plurality of storage nodes arranged into a plurality of rows. Each of the rows has a known row width. The device includes a controller configured to determine size information regarding a size of at least a first sequence of data elements, and determine location information regarding a location of unused storage nodes in the at least one memory. The controller is configured to write the first sequence of data elements to the at least one memory based on the determined size information and the determined location information such that the first row contains no more than one pointer element for the first sequence of data elements. The pointer element links two sequential data elements. | 2016-05-19 |
20160140035 | MEMORY MANAGEMENT TECHNIQUES - Memory management techniques that permit an executing process to store content in memory and later retrieve that content from the memory, but that also permit a memory manager to discard that content to address memory pressure. A process executing on a computing device may notify a memory manager of the computing device that first memory space allocated to the process contains first content that is available for discard. If the memory manager detects the computing device is experiencing memory pressure, the memory manager may address the memory pressure by selecting memory space available for discard and discarding the content of the memory space. Before a process reuses content made available for discard, the process may notify the memory manager of the intent to reuse and, in response, receive empty memory and an indication that the content was discarded or receive an indication that the content is still available for use. | 2016-05-19 |
20160140036 | SYNCHRONIZATION AND BARRIER FREE CONCURRENT GARBAGE COLLECTION SYSTEM - A system, method and program product for implementing a garbage collection (GC) process that manages dynamically allocated memory in a multithreaded runtime environment. A method is disclosed that includes defining a threshold value, wherein the threshold value defines a number of GC cycles an object must be observed as unreferenced before being reclaimed; traversing objects in an object graph; and reclaiming a traversed object from the dynamically allocated memory if the traversed object has been observed as unreferenced for more than the threshold value. | 2016-05-19 |
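The threshold-based reclamation above can be sketched as one GC cycle over a table of observed objects. Representing objects as a dict of id-to-state entries is an illustrative assumption; the application traverses an object graph:

```python
def gc_cycle(objects, threshold):
    """One GC cycle: advance an unreferenced-cycle counter per object
    and reclaim objects observed as unreferenced for more than
    `threshold` consecutive cycles. `objects` maps an object id to a
    state dict with 'referenced' and 'cycles_unreferenced' fields."""
    reclaimed = []
    for obj_id, state in list(objects.items()):
        if state["referenced"]:
            state["cycles_unreferenced"] = 0  # observed live: reset
        else:
            state["cycles_unreferenced"] += 1
            if state["cycles_unreferenced"] > threshold:
                reclaimed.append(obj_id)
                del objects[obj_id]
    return reclaimed
```

Requiring several consecutive unreferenced observations is what lets the collector run concurrently without barriers: a single stale observation cannot cause a premature reclaim.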
20160140037 | Systems, Methods, and Computer Readable Media for Digital Radio Broadcast Receiver Memory and Power Reduction - A method of block deinterleaving data received at a digital radio broadcast receiver is described. The method includes providing a block of memory having n×k addresses, wherein the block comprises a single table, receiving a digital radio broadcast signal at the receiver, and demodulating the digital radio broadcast signal into a plurality of interleaved data units. For at least one series of n×k data units a pointer step size is determined, and for each data unit in the series, an address in the block is calculated based on the pointer step size, and an output data unit is read from the block at the address, such that said output data units represent block deinterleaved data units. An input data unit from the plurality of interleaved data units is then written to the block at the address. Associated systems and computer readable storage media are presented. | 2016-05-19 |
20160140038 | Generating a Second Code from a First Code - A second physical-address-dependent code is generated from a first physical-address-dependent code using differential data, where the generating comprises converting a first physical address in a region of the first physical-address-dependent code to a second, different physical address for inclusion in a corresponding region of the second physical-address-dependent code. | 2016-05-19 |
20160140039 | PROVIDING MULTIPLE MEMORY MODES FOR A PROCESSOR INCLUDING INTERNAL MEMORY - In one embodiment, a processor comprises: at least one core formed on a die to execute instructions; a first memory controller to interface with an in-package memory; a second memory controller to interface with a platform memory to couple to the processor; and the in-package memory located within a package of the processor, where the in-package memory is to be identified as a more distant memory with respect to the at least one core than the platform memory. Other embodiments are described and claimed. | 2016-05-19 |
20160140040 | FILTERING TRANSLATION LOOKASIDE BUFFER INVALIDATIONS - A filter includes filter entries, each corresponding to a mapping between a virtual memory address and a physical memory address and including a presence indicator indicative which processing elements have the mapping present in their respective translation lookaside buffers (TLBs). A TLB invalidation (TLBI) instruction is received for a first mapping. If a first filter entry corresponding to the first mapping exists in the filter, the plurality of processing elements are partitioned into a first partition of zero or more processing elements that have the first mapping present in their TLBs and a second partition of zero or more processing elements that do not have the first mapping present in their TLBs based on the presence indicator of the first filter entry. The TLBI instruction is sent to the processing elements included in the first partition, and not those in the second partition. | 2016-05-19 |
20160140041 | ELECTRONIC SYSTEM WITH PARTITIONING MECHANISM AND METHOD OF OPERATION THEREOF - An electronic system includes: an interface block of a storage device configured to process system information from a system device; a memory block of the storage device, coupled to the interface block, partitioned by the interface block configured to process the system information for partitioning the memory block; and a storage block of a storage device, coupled to the memory block, configured to access a data block of the storage block provided to the system device. | 2016-05-19 |
20160140042 | INSTRUCTION CACHE TRANSLATION MANAGEMENT - Managing an instruction cache of a processing element, the instruction cache including a plurality of instruction cache entries, each entry including a mapping of a virtual memory address to one or more processor instructions, includes: issuing, at the processing element, a translation lookaside buffer invalidation instruction for invalidating a translation lookaside buffer entry in a translation lookaside buffer, the translation lookaside buffer entry including a mapping from a range of virtual memory addresses to a range of physical memory addresses; causing invalidation of one or more of the instruction cache entries of the plurality of instruction cache entries in response to the translation lookaside buffer invalidation instruction. | 2016-05-19 |
20160140043 | INSTRUCTION ORDERING FOR IN-PROGRESS OPERATIONS - Execution of the memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in progress operation depends on either or both of: (1) which of one or more modules initiated the particular in progress operation, or (2) whether or not the particular in progress operation provides results to the first cache or second cache. | 2016-05-19 |
20160140044 | SYSTEMS AND METHODS FOR NON-BLOCKING IMPLEMENTATION OF CACHE FLUSH INSTRUCTIONS - Systems and methods for non-blocking implementation of cache flush instructions are disclosed. As a part of a method, data is accessed that is received in a write-back data holding buffer from a cache flushing operation, the data is flagged with a processor identifier and a serialization flag, and responsive to the flagging, the cache is notified that the cache flush is completed. Subsequent to the notifying, access is provided to data then present in the write-back data holding buffer to determine if data then present in the write-back data holding buffer is flagged. | 2016-05-19 |
20160140045 | PACKET CLASSIFICATION - Methods, systems, and computer readable media for packet classification are disclosed. According to one method, the method includes receiving a packet containing header information for packet classification. The method also includes determining, using the header information, a first memory address identifier. The method further includes determining, using the first memory address identifier, memory pointer information indicating a second memory address identifier. The method also includes obtaining, using the memory pointer information indicating the second memory address identifier, packet related information from a memory. The method further includes performing, using the packet related information, a packet classification action. | 2016-05-19 |
20160140046 | SYSTEM AND METHOD FOR PERFORMING HARDWARE PREFETCH TABLEWALKS HAVING LOWEST TABLEWALK PRIORITY - A hardware prefetch tablewalk system for a microprocessor including a tablewalk engine that is configured to perform hardware prefetch tablewalk operations without blocking software-based tablewalk operations. Tablewalk requests include a priority value, in which the tablewalk engine is configured to compare priorities of requests such that a higher priority request may terminate a current tablewalk operation. Hardware prefetch tablewalk requests have the lowest possible priority so that they do not bump higher priority tablewalk operations and are bumped by higher priority tablewalk requests. The priority values may be in the form of age values indicative of relative ages of operations being performed. The microprocessor may include a hardware prefetch engine that performs boundless hardware prefetch pattern detection that is not limited by page boundaries to provide the hardware prefetch tablewalk requests. | 2016-05-19 |
20160140047 | TRANSLATION LOOKASIDE BUFFER MANAGEMENT - Each of multiple translation lookaside buffers (TLBs) is associated with a corresponding processing element. A first TLB invalidation (TLBI) instruction is issued at a first processing element, and sent to a second processing element. An element-specific synchronization instruction is issued at the first processing element. A synchronization command is broadcast, and received at the second processing element. The element-specific synchronization instruction prevents issuance of additional TLBI instructions at the first processing element until an acknowledgement in response to the synchronization command is received at the first processing element. After completion of any TLBI instructions issued at the second processing element before the synchronization command was received, the acknowledgement is sent from the second processing element to the first processing element, indicating that any TLBI instructions issued at the second processing element before the synchronization command was received at the second processing element are complete. | 2016-05-19 |
20160140048 | CACHING TLB TRANSLATIONS USING A UNIFIED PAGE TABLE WALKER CACHE - A core executes memory instructions. A memory management unit (MMU) coupled to the core includes a first cache that stores a plurality of final mappings of a hierarchical page table, a page table walker that traverses levels of the page table to provide intermediate results associated with respective levels for determining the final mappings, and a second cache that stores a limited number of intermediate results provided by the page table walker. The MMU compares a portion of the first virtual address to portions of entries in the second cache, in response to a request from the core to invalidate a first virtual address, based on a match criterion that depends on the level associated with each intermediate result stored in an entry in the second cache, and removes any entries in the second cache that satisfy the match criterion. | 2016-05-19 |
20160140049 | WIRELESS MEMORY INTERFACE - Systems and methods for vendor-agnostic access to non-volatile memory of a wireless memory tag include: detecting, via a wireless memory host, a wireless memory tag; providing a vendor-agnostic command to the wireless memory tag to affect a change in a register-based interface of the wireless memory tag, wherein the change results in reading data from non-volatile memory of the wireless memory tag, writing data to the non-volatile memory of the wireless memory tag, or both. | 2016-05-19 |
20160140050 | METHOD AND SYSTEM FOR COMPRESSING DATA FOR A TRANSLATION LOOK ASIDE BUFFER (TLB) - An embodiment of the present disclosure includes a method for compressing data for a translation look aside buffer (TLB). The method includes: receiving an identifier at a content addressable memory (CAM), the identifier having a first bit length; compressing the identifier based on a location within the CAM the identifier is stored, the compressed identifier having a second bit length, the second bit length being smaller than the first bit length; and mapping at least the compressed identifier to a physical address in a buffer. | 2016-05-19 |
20160140051 | TRANSLATION LOOKASIDE BUFFER INVALIDATION SUPPRESSION - Managing a plurality of translation lookaside buffers (TLBs) includes: issuing, at a first processing element, a first instruction for invalidating one or more TLB entries associated with a first context in a first TLB associated with the first processing element. The issuing includes: determining whether or not a state of an indicator indicates that all TLB entries associated with the first context in a second TLB associated with a second processing element are invalidated; if not: sending a corresponding instruction to the second processing element, causing invalidation of all TLB entries associated with the first context in the second TLB, and changing a state of the indicator; and if so: suppressing sending of any corresponding instructions for causing invalidation of any TLB entries associated with the first context in the second TLB to the second processing element. | 2016-05-19 |
20160140052 | System and Method for Efficient Cache Utility Curve Construction and Cache Allocation - Interaction is evaluated between a computer system cache and at least one entity that submits a stream of references corresponding to location identifiers of data storage locations. The reference stream is spatially sampled by comparing a hash value of each reference with a threshold value and selecting only those references whose hash value meets a selection criterion. Cache utility values are then compiled for those references. In some embodiments, the compiled cache values may then be corrected for accuracy as a function of statistics of those location identifiers over the entire stream of references and of the sampled references whose hash values satisfied the selection criterion. Alternatively, a plurality of caching configurations is selected and the selected references are applied as inputs to a plurality of caching simulations, each corresponding to a different caching configuration. A resulting set of cache utility values is then computed for each caching simulation. | 2016-05-19 |
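The hash-threshold sampling step can be sketched directly (hash function and sampling rate are arbitrary choices here): a reference is kept only when a hash of its location identifier falls below a threshold, so a fixed fraction of *locations* is sampled while every access to a sampled location is still observed.

```python
import hashlib

def sampled(location_id, rate=0.5, space=2**32):
    # Keep a reference only if the hash of its location id clears the
    # threshold; the decision is deterministic per location, so all
    # accesses to a sampled location are observed.
    h = int(hashlib.md5(str(location_id).encode()).hexdigest(), 16) % space
    return h < rate * space

stream = [7, 3, 7, 9, 3, 7, 1, 3]             # location ids in access order
kept = [ref for ref in stream if sampled(ref)]
```

Cache utility values (e.g., miss-ratio curves) would then be compiled over `kept` and scaled back up by the sampling rate.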
20160140053 | RE-MRU OF METADATA TRACKS TO REDUCE LOCK CONTENTION - For reducing lock contention on a Modified Least Recently Used (MLRU) list for metadata tracks, upon conclusion of an access of a metadata track, if the metadata track is located in a predefined lower percentile of the MLRU list and has been accessed, including the current access, a predetermined number of times, the metadata track is removed from its current position in the MLRU list and moved to the Most Recently Used (MRU) end of the MLRU list. | 2016-05-19 |
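The conditional re-MRU idea can be sketched as follows (percentile and access-count thresholds are assumptions): most accesses skip the list move entirely, and only tracks that are both hot and near the LRU end pay the cost of taking the list lock.

```python
class MLRUList:
    def __init__(self, lower_pct=0.5, min_accesses=2):
        self.order = []                       # index 0 = LRU end, -1 = MRU end
        self.hits = {}                        # track -> access count
        self.lower_pct = lower_pct
        self.min_accesses = min_accesses
        self.moves = 0                        # lock-protected re-MRU moves

    def access(self, track):
        if track not in self.hits:
            self.order.append(track)          # new tracks enter at the MRU end
            self.hits[track] = 1
            return
        self.hits[track] += 1
        pos = self.order.index(track)
        in_lower = pos < len(self.order) * self.lower_pct
        if in_lower and self.hits[track] >= self.min_accesses:
            self.order.remove(track)          # only now is the list lock needed
            self.order.append(track)          # re-MRU: move to the MRU end
            self.moves += 1

mlru = MLRUList()
for t in ["a", "b", "c", "d"]:
    mlru.access(t)
mlru.access("a")                              # hot track in lower half: moved
```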
20160140054 | METHOD AND SYSTEM FOR DETERMINING FIFO CACHE SIZE - Described herein are methods, systems and machine-readable media for simulating a FIFO cache using a Bloom filter ring, which includes a plurality of Bloom filters arranged in a circular log. New elements are registered in the Bloom filter at the head of the circular log. When the Bloom filter at the head of the circular log is filled to its capacity, membership information associated with old elements in the Bloom filter at the tail of the circular log is evicted (simulating FIFO cache behavior), and the head and tail of the log are advanced. The Bloom filter ring is used to determine cache statistics (e.g., cache hit, cache miss) of FIFO caches of various sizes. In response to simulation output specifying cache statistics for FIFO caches of various sizes, a FIFO cache is optimally sized. | 2016-05-19 |
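A toy Bloom filter ring conveys the mechanism (filter count, bit width, and capacity are made-up parameters, and the head/tail bookkeeping is simplified): membership hits stand in for cache hits, and retiring the oldest filter stands in for FIFO eviction.

```python
import hashlib

class BloomFilter:
    def __init__(self, bits=64, hashes=2):
        self.bits, self.hashes = bits, hashes
        self.array, self.count = 0, 0         # bitset as an int; items added

    def _positions(self, item):
        for i in range(self.hashes):
            d = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(d, 16) % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array |= 1 << p
        self.count += 1

    def __contains__(self, item):
        return all(self.array >> p & 1 for p in self._positions(item))

class BloomRing:
    def __init__(self, filters=4, capacity=8):
        self.ring = [BloomFilter() for _ in range(filters)]
        self.head, self.capacity = 0, capacity

    def lookup_and_insert(self, item):
        hit = any(item in f for f in self.ring)   # simulated cache hit?
        if not hit:
            if self.ring[self.head].count == self.capacity:
                self.head = (self.head + 1) % len(self.ring)
                self.ring[self.head] = BloomFilter()  # drop old tail: eviction
            self.ring[self.head].add(item)
        return hit

ring = BloomRing()
first = ring.lookup_and_insert("track-42")    # miss: registered at the head
second = ring.lookup_and_insert("track-42")   # hit: already a member
```

Running a reference trace through rings sized for different simulated cache capacities yields the hit/miss statistics used to size the real FIFO cache.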
20160140055 | Least Privileged Operating System - A method and system encrypt data in a least privileged operating system. The method includes determining a first encryption scheme to be used with software code to be mapped to a virtual memory. The method includes mapping a first portion of the virtual memory with the software code for access by a processor using the first encryption scheme. The method includes receiving a call for an entry point of the operating system. The method includes determining a second encryption scheme to be used with the entry point when mapped to the virtual memory. The method includes mapping a second portion of the virtual memory for executing entry point code associated with the entry point for access by the processor using the second encryption scheme. The processor executing the software code is permitted to access only data from the first and second portions of the virtual memory. | 2016-05-19 |
20160140056 | METHODS TO IMPROVE SECURE FLASH PROGRAMMING - Methods are provided for securely loading software objects into an electronic control unit. The methods include receiving a first software object comprising a second level public key certificate, a first encryption signature and a first set of software. Once the first software object is received, the second level public key certificate is validated with the embedded root public key, the first encryption signature with the second level public key certificate, and the first set of software with the first encryption signature. When the first set of software is valid, the second level public key certificate and the first set of software are stored to non-volatile memory. Once stored, a consecutive software object is received comprising only a consecutive encryption signature and a consecutive set of software from the programming source. The consecutive encryption signature is validated with the stored second level public key certificate, and the consecutive set of software is validated with the consecutive encryption signature. | 2016-05-19 |
20160140057 | SEMICONDUCTOR DEVICE AND ENCRYPTION KEY WRITING METHOD - A semiconductor device includes a central processing unit (CPU), a first memory which stores a plurality of split keys, a second memory which stores an encryption code as at least one of an encrypted instruction and encrypted data, the plurality of split keys including an encryption key for decrypting the encryption code, and a decrypter which reads the encryption code from the second memory, decrypts the encryption code with the use of the encryption key, and supplies the decrypted encryption code to the CPU. The second memory stores an encryption key reading program which is executed by the CPU to restore the encryption key and to supply the encryption key to the decrypter, by reading and reconfiguring the split keys stored in the first memory in a distributed manner. | 2016-05-19 |
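The application does not specify how the encryption key is split across the first memory; XOR secret sharing is one common scheme and is used here purely as a stand-in for the "read and reconfigure the split keys" step performed by the key-reading program.

```python
import functools
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n):
    """Split key into n shares; the XOR of all shares restores it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(functools.reduce(xor_bytes, shares, key))
    return shares

def restore_key(shares):
    # The "reading and reconfiguring" step: combine the scattered shares.
    return functools.reduce(xor_bytes, shares)

key = bytes.fromhex("00112233445566778899aabbccddeeff")
parts = split_key(key, 4)                     # stored scattered in memory one
recovered = restore_key(parts)                # restored, then fed to decrypter
```

No individual share reveals anything about the key, which is the point of storing the key only in split form.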
20160140058 | INTRINSIC BARRIER DEVICE WITH SOFTWARE CONFIGURABLE IO TYPE - An intrinsic barrier device, method and computer program product for isolating a communication channel of an input/output (IO) module from a field device. The intrinsic barrier device includes a front end having a programming input adapted to receive an analog input (AI), analog output (AO), digital input (DI) or digital output (DO) IO type configuration signal. The intrinsic barrier device also includes a processor to process the IO type configuration signal and an associated memory device storing an intrinsic barrier IO type configuration (IBTC) program. The processor is programmed to implement the IBTC program. The processor, responsive to the IO type configuration signal, configures the intrinsic barrier device to operate as the AI, AO, DI or DO type for supporting communications through the intrinsic barrier device over the communication channel between the IO module and the field device in the configured IO type. | 2016-05-19 |
20160140059 | MULTIPLE MEMORY MANAGEMENT UNITS - In an embodiment, interfacing a pipeline with two or more interfaces includes providing a single pipeline in a hardware processor. The single pipeline presents at least two visible units. The single pipeline includes replicated architecturally visible structures, shared logic resources, and shared architecturally hidden structures. The method further includes receiving a request from one of a plurality of interfaces at one of the visible units. The method also includes tagging the request with an identifier based on the one of the at least two visible units that received the request. The method further includes processing the request in the single pipeline by propagating it through the replicated architecturally visible structures that correspond to the identifier. | 2016-05-19 |
20160140060 | MANAGING BUFFERED COMMUNICATION BETWEEN SOCKETS - A motherboard includes multiple sockets, each socket configured to accept an integrated circuit. A first integrated circuit in a first socket includes one or more cores and at least one buffer. A second integrated circuit in a second socket includes one or more cores and at least one buffer. Communication circuitry transfers messages to buffers of integrated circuits coupled to different sockets. A first core on the first integrated circuit is configured to send messages corresponding to multiple types of instructions to a second core on the second integrated circuit through the communication circuitry. The buffer of the second integrated circuit is large enough to store a maximum number of instructions of a second type that are allowed to be outstanding from cores on the first integrated circuit at the same time, and still have enough storage space for one or more instructions of a first type. | 2016-05-19 |
20160140061 | MANAGING BUFFERED COMMUNICATION BETWEEN CORES - Communicating among multiple sets of multiple cores includes: buffering messages in a first buffer associated with a first set of multiple cores; buffering messages in a second buffer associated with a second set of multiple cores; and transferring messages over communication circuitry from cores not in the first set to the first buffer, and transferring messages from cores not in the second set to the second buffer. A first core of the first set sends messages corresponding to multiple types of instructions to a second core of the second set through the communication circuitry. The second buffer is large enough to store a maximum number of instructions of a second type that are allowed to be outstanding from cores in the first set at the same time, and still have enough storage space for one or more instructions of a first type. | 2016-05-19 |
20160140062 | MESSAGE FILTERING IN A DATA PROCESSING SYSTEM - Each processor of a plurality of processors is configured to execute an interrupt message instruction. A message filtering unit includes storage circuitry configured to store captured identifier information from each processor. In response to a processor of the plurality of processors executing an interrupt message instruction, the processor is configured to provide a message type and a message payload to the message filtering unit. The message filtering unit is configured to use the captured identifier information to determine a recipient processor indicated by the message payload and, in response thereto, provides an interrupt request indicated by the message type to the recipient processor. | 2016-05-19 |
20160140063 | MESSAGE FILTERING IN A DATA PROCESSING SYSTEM - A data processing system includes a plurality of processors, each processor configured to execute instructions, including a message send instruction, and a message filtering unit. The message filtering unit is configured to receive messages from one or more of the plurality of processors in response to execution of message send instructions, each message indicating a message type and a message payload. The message filtering unit is configured to determine, for each received message, a recipient processor indicated by the message payload. The message filtering unit is further configured to, in response to receiving, within a predetermined interval of time, at least two messages having a same recipient processor and indicating a same message type, deliver a single interrupt request indicated by the same message type to the same recipient processor, wherein the single interrupt request is representative of the at least two messages. | 2016-05-19 |
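The coalescing behavior can be modeled in a few lines (the window length and message names are assumptions): within the window, a repeat message to the same recipient with the same type folds into the earlier interrupt instead of raising a new one.

```python
class MessageFilter:
    def __init__(self, window=1.0):
        self.window = window                  # predetermined coalescing interval
        self.pending = {}                     # (recipient, type) -> first time
        self.delivered = []                   # interrupt requests actually sent

    def send(self, now, recipient, msg_type):
        key = (recipient, msg_type)
        first = self.pending.get(key)
        if first is not None and now - first <= self.window:
            return                            # coalesced into earlier interrupt
        self.pending[key] = now
        self.delivered.append(key)            # single representative interrupt

f = MessageFilter(window=1.0)
f.send(0.0, recipient=2, msg_type="doorbell")
f.send(0.4, recipient=2, msg_type="doorbell")  # within window: coalesced
```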
20160140064 | DISTRIBUTED INTERRUPT SCHEME IN A MULTI-PROCESSOR SYSTEM - Methods and systems are disclosed for routing and distributing interrupts in a multi-processor computer to various processing elements within the computer. A system for distributing the interrupts may include a plurality of logic devices configured in a hierarchical tree structure that distributes incoming interrupts to interrupt redistributors (redistribution devices). The system also includes plural processing elements, where each processing element has an associated bus address. A shared serial bus couples the redistribution devices and processing elements. Each of the redistribution devices is configured to transfer the incoming interrupts to at least one of the processing elements over the common bus, based on the bus address. | 2016-05-19 |
20160140065 | Register Access Control Among Multiple Devices - A circuit manages and controls access requests to a register, such as a control and status register (CSR), among a number of devices. In particular, the circuit selectively forwards or suspends off-chip access requests and forwards on-chip access requests independent of the status of off-chip requests. The circuit receives access requests at a plurality of buses, one or more of which can be dedicated to exclusively on-chip requests and/or exclusively off-chip requests. Based on the completion status of previous off-chip access requests, further off-chip access requests are selectively forwarded or suspended, while on-chip access requests are forwarded independently of off-chip request status. | 2016-05-19 |
20160140066 | DISTRIBUTED TIMER SUBSYSTEM - A silicon device configured to distribute a global timer value over a single serial bus to a plurality of processing elements that are disposed on the silicon device and that are coupled to the serial bus. Each of the processing elements comprises a slave timer. Upon receipt of the global timer value, the processing elements synchronize their respective slave timers with the global timer value. After the timers are synchronized, the global timer sends periodic increment signals to each of the processing elements. Upon receipt of the increment signals, the processing elements update their respective slave timers. | 2016-05-19 |
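The two-phase timer scheme reduces to a short sketch (no bus model; values are illustrative): slaves first load the broadcast global value, then apply periodic increment pulses so every slave timer advances in lockstep without further serial traffic.

```python
class SlaveTimer:
    def __init__(self):
        self.value = None                     # unsynchronized at reset

    def synchronize(self, global_value):
        self.value = global_value             # one-time serial-bus broadcast

    def on_increment(self):
        self.value += 1                       # periodic pulse from global timer

slaves = [SlaveTimer() for _ in range(4)]
for s in slaves:
    s.synchronize(1000)                       # global timer value distributed
for _ in range(3):                            # three increment pulses later
    for s in slaves:
        s.on_increment()
```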
20160140067 | SLAVE SIDE BUS ARBITRATION - A method includes, in response to a master port requesting bus access for a bus transfer with a slave port, selecting the master port to allow a master device that is coupled to the master port to perform a bus transfer with a slave device that is coupled to the slave port. The bus transfer is associated with at least one bus cycle. The method includes, in response to an end of the bus transfer, maintaining selection of the master port for at least one additional bus cycle. | 2016-05-19 |
20160140068 | ELECTRONIC DEVICE ASSEMBLY - An electronic device connecting system includes a master electronic device and a plurality of slave electronic devices. The master electronic device includes a connecting module and an MCU. Each slave electronic device includes a coupling module. The connecting module includes a plurality of port assemblies. Each port assembly is configured to couple a function device and includes a plurality of connecting ports and a switch assembly. The coupling module includes a plurality of coupling ports. When the master electronic device is coupled to one of the slave electronic devices, the MCU controls the switch assemblies to switch on the corresponding connecting ports and coupling ports, enabling the function devices to be synchronously connected to the one of the slave electronic devices and the master electronic device. | 2016-05-19 |
20160140069 | PCI EXPRESS TUNNELING OVER A MULTI-PROTOCOL I/O INTERCONNECT - Described are embodiments of methods, apparatuses, and systems for PCIe tunneling across a multi-protocol I/O interconnect of a computer apparatus. A method for PCIe tunneling across the multi-protocol I/O interconnect may include establishing a first communication path between ports of a switching fabric of a multi-protocol I/O interconnect of a computer apparatus in response to a peripheral component interconnect express (PCIe) device being connected to the computer apparatus, and establishing a second communication path between the switching fabric and a PCIe controller. The method may further include routing, by the multi-protocol I/O interconnect, PCIe protocol packets of the PCIe device from the PCIe device to the PCIe controller over the first and second communication paths. Other embodiments may be described and claimed. | 2016-05-19 |
20160140070 | NETWORK TRAFFIC PROCESSING - As disclosed herein a method, executed by a computer, for providing improved multi-protocol traffic processing includes receiving a data packet, determining if a big processor is activated, deactivating a little processor and activating the big processor if the big processor is not activated and an overflow queue is full, and deactivating the big processor and activating the little processor if the big processor is activated and a current throughput for the big processor is below a first threshold or a sustained throughput for the big processor remains below a second threshold. The big and little processors may be co-located on a single integrated circuit. An overflow queue, managed with a token bucket algorithm, may be used to enable the little processor to handle short bursts of data packet traffic. A computer program product and an apparatus corresponding to the described method are also disclosed herein. | 2016-05-19 |
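The switching policy can be sketched as a small state machine (queue limit and thresholds are assumed values, and the token-bucket queue is simplified to a plain counter): the overflow queue absorbs short bursts on the little processor, and only a full queue wakes the big one.

```python
class TrafficGovernor:
    def __init__(self, queue_limit=8, t1=100.0, t2=150.0):
        self.big_active = False
        self.queue = []                       # simplified overflow queue
        self.queue_limit = queue_limit
        self.t1, self.t2 = t1, t2             # current / sustained thresholds

    def on_packet(self, current_tput, sustained_tput):
        if not self.big_active:
            if len(self.queue) >= self.queue_limit:
                self.big_active = True        # overflow full: wake big core
                self.queue.clear()
            else:
                self.queue.append(1)          # little core absorbs the burst
        elif current_tput < self.t1 or sustained_tput < self.t2:
            self.big_active = False           # demand fell: back to little core

gov = TrafficGovernor(queue_limit=2)
for _ in range(3):
    gov.on_packet(current_tput=0.0, sustained_tput=0.0)   # queue fills
woke = gov.big_active
gov.on_packet(current_tput=50.0, sustained_tput=200.0)    # low current tput
```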
20160140071 | Arbitrated Access To Resources Among Multiple Devices - An arbiter circuit manages and enforces arbitration and quality of service (QOS) among multiple devices accessing a resource, such as a memory. The arbiter circuit receives requests from a number of devices to use resources of a bridge connecting to a memory, and maintains a count of bridge resources available on a per-device and per-bus basis. The arbiter circuit operates to select a next one of the requests to grant a bridge resource based on the device originating the request, a count of the per-device resources available, and a count of the resources available to the bus connecting the device to the bridge. | 2016-05-19 |