14th week of 2016 patent application highlights part 31 |
Patent application number | Title | Published |
20160098283 | Platform Configuration Management Using a Basic Input/Output System (BIOS) - Methods and systems for platform configuration management may use a platform configuration register (PCR) stored on a trusted platform module (TPM) included with an information handling system. A basic input/output system (BIOS) may include instructions to generate a first PCR value based on BIOS settings while a user is operating the BIOS. When the first PCR value indicates a change from a previous PCR value stored in the PCR, an alert may be displayed to the user and sent to a network administrator. The BIOS may display an indication of a mapping of BIOS settings to the first PCR value. | 2016-04-07 |
20160098284 | DYNAMIC DEVICE DRIVERS - A method includes receiving a driver model for a device. The driver model includes a list of variables associated with the device and one or more characteristics of the variables. The method includes determining whether the driver model is format-compliant and validating syntax of the driver model based at least partially on a driver template that is accessible to a third party. In response to the driver model being format-compliant and the syntax being valid, the method includes generating a verified file that is representative of the driver model. The verified file is formatted to dynamically load into a device application module during operation and to dynamically support the device. The method includes communicating the verified file to a user apparatus and adding an integrity check value thereto. In response to the driver model being format-noncompliant or the syntax being invalid, the method includes communicating an error message. | 2016-04-07 |
20160098285 | USING VIRTUAL MACHINE CONTAINERS IN A VIRTUALIZED COMPUTING PLATFORM - A virtualized computing system supports the execution of a plurality of virtual machines, where each virtual machine supports the execution of applications therein. Each application executes within a container that isolates the application executing therein from other processes executing on the computing system. A hierarchy of virtual machine templates is created by instantiating a parent virtual machine template, the parent virtual machine template having a guest operating system and a container. An application to be run in a container is determined, and, in response, the parent virtual machine template is forked to create a child virtual machine template, where the child virtual machine template includes a replica of the container, and where the guest operating system of the parent virtual machine template overlaps in memory with a guest operating system of the child virtual machine template. The application is then installed in the replica of the container. | 2016-04-07 |
20160098286 | CREATING TEMPLATES OF OFFLINE RESOURCES - Implementations of the present invention allow software resources to be duplicated efficiently and effectively while offline. In one implementation, a preparation program receives an identification of a software resource, such as a virtual machine installed on a different volume, an offline operating system, or an application program. The preparation program also receives an indication of customized indicia that are to be removed from the software resource. These indicia can include personalized information as well as the level of software updates, security settings, user settings or the like. Upon execution, the preparation program redirects the function calls of the preparation program to the software resource at the different volume (or even the same volume) while the software resource is not running. The preparation program can thus create a template of the software resource in a safe manner without necessarily affecting the volume at which the preparation program runs. | 2016-04-07 |
20160098287 | Method and System for Intelligent Analytics on Virtual Deployment on a Virtual Data Centre - The invention relates to a method and system for data centre infrastructure management and, more particularly, to analyzing and deploying interrelated objects in a virtual data centre at the virtual deployment level. The present system monitors and identifies different elements of a source virtual deployment, such as configuration data and settings, which are scattered at different levels. Further, the system performs analysis based on various parameters, such as virtual deployment performance data, historical data, future requirements and policy-based data, in order to identify the best-suited target virtual data centre. After identifying the best-suited target virtual data centre, the system triggers a redeployment request. Finally, the system performs the redeployment of the source virtual deployment to the identified target virtual data centre. | 2016-04-07 |
20160098288 | BUILDING VIRTUAL APPLIANCES - An example method to build a virtual appliance for deployment in a virtualized computing environment may include obtaining a base virtual appliance that is application-independent. The base virtual appliance includes a virtual machine, a virtual disk associated with the virtual machine and a guest operating system (OS) installed on the virtual disk. The method may further comprise obtaining an application package associated with an application; and building the virtual appliance by assembling the base virtual appliance with the application package. During the assembly, the application package is installed on the virtual disk of the base virtual appliance such that the virtual machine supports both the guest OS and the application. | 2016-04-07 |
20160098289 | SYSTEM AND METHOD FOR HANDLING AN INTERRUPT - An interrupt controller, a system and a method for handling an interrupt under a virtualization environment are provided. The system for handling an interrupt, includes: an interrupt controller, a virtual machine, and a hypervisor which controls activation of the virtual machine, the interrupt controller may receive a physical interrupt from the outside and transmit the physical interrupt to the hypervisor or the virtual machine based on a characteristic of the physical interrupt, the hypervisor may convert the physical interrupt into a virtual interrupt to transmit the virtual interrupt to the virtual machine, and the virtual machine may handle the physical interrupt or the virtual interrupt using a first interrupt handler which is included in the virtual machine. | 2016-04-07 |
20160098290 | INFORMATION SHARING PROGRAM, INFORMATION SHARING SYSTEM AND INFORMATION SHARING METHOD - A non-transitory computer-readable storage medium storing therein an information sharing program for causing a computer to execute a process includes storing, in a storage, conversion information including first processing request information for issuing a processing request to a first processing processor that operates on a first physical machine, first operating environment information relating to an operating environment of the first physical machine and corresponding to the first processing request information, and second operating environment information relating to an operating environment of a second physical machine and corresponding to second processing request information for issuing a processing request to a second processing processor that operates on the second physical machine, and when a virtual machine that operates on the first physical machine transfers to the second physical machine, causing the second physical machine to hold the conversion information. | 2016-04-07 |
20160098291 | VIRTUAL MACHINE CAPACITY PLANNING - Virtual machine capacity planning techniques are disclosed. In various embodiments, a set of time series data is constructed based at least in part on virtual machine related metric values observed with respect to a virtual machine during a training period. The constructed time series data is used to build a forecast model for the virtual machine. The forecast model is used to forecast future values for one or more of the virtual machine related metrics. The forecasted future values are used to determine whether an alert condition is predicted to be met. | 2016-04-07 |
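The train-forecast-alert loop in 20160098291 can be sketched in Python. The linear-trend model, metric, threshold, and horizon below are illustrative assumptions; the application does not specify which forecast model is built.

```python
def linear_forecast(series, steps_ahead):
    """Fit a least-squares linear trend to the training series and
    extrapolate it forward (a stand-in for the forecast model)."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series)) / denom
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + k) for k in range(steps_ahead)]

def capacity_alert(series, threshold, horizon):
    """Alert if any forecasted value crosses the threshold within the horizon."""
    return any(v >= threshold for v in linear_forecast(series, horizon))

cpu_pct = [40, 42, 44, 46, 48, 50]   # observed CPU % during the training period
will_alert = capacity_alert(cpu_pct, threshold=80, horizon=20)   # True: trend hits 80%
```

With the same data and a shorter horizon of 5 steps, the forecast stays below 80% and no alert is predicted.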
20160098292 | JOB SCHEDULING USING EXPECTED SERVER PERFORMANCE INFORMATION - A job scheduler that schedules ready tasks amongst a cluster of servers. Each job might be managed by one scheduler. In that case, there are multiple job schedulers which conduct scheduling for different jobs concurrently. To identify a suitable server for a given task, the job scheduler uses expected server performance information received from multiple servers. For instance, the server performance information might include expected performance parameters for tasks of particular categories if assigned to the server. The job management component then identifies a particular task category for a given task, determines which of the servers can perform the task by a suitable estimated completion time, and then assigns based on the estimated completion time. The job management component also uses cluster-level information in order to determine which server to assign a task to. | 2016-04-07 |
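The server-selection step in 20160098292 reduces to picking the server with the best estimated completion time for the task's category. The sketch below assumes a simple map of per-server, per-category estimates; names and the absence of cluster-level tie-breaking are simplifications.

```python
def assign_task(task_category, servers):
    """Pick the server with the earliest estimated completion time for the
    task's category. `servers` maps server name -> {category: est_seconds}.
    Servers with no estimate for the category are treated as unsuitable."""
    candidates = {
        name: est[task_category]
        for name, est in servers.items()
        if task_category in est
    }
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

servers = {
    "s1": {"cpu_bound": 12.0, "io_bound": 3.0},
    "s2": {"cpu_bound": 5.0},
    "s3": {"io_bound": 8.0},
}
best = assign_task("cpu_bound", servers)   # "s2": lowest estimate for this category
```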
20160098293 | SYSTEM, METHOD, AND SOFTWARE FOR CONTROLLED INTERRUPTION OF BATCH JOB PROCESSING - This disclosure provides various embodiments of software, systems, and techniques for controlled interruption of batch job processing. In one instance, a tangible computer readable medium stores instructions for managing batch jobs, where the instructions are operable when executed by a processor to identify an interruption event associated with a batch job queue. The instructions trigger an interruption of an executing batch job within the job queue such that the executed portion of the job is marked by a restart point embedded within the executable code. The instructions then restart the interrupted batch job at the restart point. | 2016-04-07 |
20160098294 | EXECUTION OF A METHOD AT A CLUSTER OF NODES - Systems and methods are disclosed for executing a clustered method at a cluster of nodes. An example method includes identifying an annotated class included in an application that is deployed on the cluster of nodes. An annotation of the class indicates that a clustered method associated with the annotated class is executed at each node in the cluster. The method also includes creating an instance of the annotated class and coordinating execution of the clustered method with one or more other nodes in the cluster. The method further includes executing, based on the coordinating, the clustered method using the respective node's instance of the annotated class. | 2016-04-07 |
20160098295 | INCREASED CACHE PERFORMANCE WITH MULTI-LEVEL QUEUES OF COMPLETE TRACKS - Exemplary method, system, and computer program product embodiments for increased cache performance using multi-level queues by a processor device. The method includes distributing to each one of a plurality of central processing units (CPUs) workload operations for creating complete tracks from partial tracks, creating sub-queues of the complete tracks for distributing to each one of the CPUs, and creating demote scan tasks based on workload of the CPUs. Additional system and computer program product embodiments are disclosed and provide related advantages. | 2016-04-07 |
20160098296 | TASK POOLING AND WORK AFFINITY IN DATA PROCESSING - Mechanisms for improving computing system performance by a processor device. System resources are organized into a plurality of groups. Each of the plurality of groups is assigned one of a plurality of predetermined task pools. Each of the predetermined task pools has a plurality of tasks. Each of the plurality of groups corresponds to at least one physical boundary of the system resources such that a speed of an execution of those of the plurality of tasks for a particular one of the plurality of predetermined task pools is optimized by a placement of an association with the at least one physical boundary and the plurality of groups. | 2016-04-07 |
20160098297 | System and Method for Determining Capacity in Computer Environments Using Demand Profiles - A system and method are provided for determining aggregate available capacity for an infrastructure group with existing workloads in computer environment. The method comprises determining one or more workload placements of one or more workload demand entities on one or more capacity entities in the infrastructure group; computing an available capacity and a stranded capacity for each resource for each capacity entity in the infrastructure group, according to the workload placements; and using the available capacity and the stranded capacity for each resource for each capacity entity to determine an aggregate available capacity and a stranded capacity by resource for the infrastructure group. | 2016-04-07 |
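The available/stranded split in 20160098297 can be illustrated with a toy demand profile: a capacity entity's available capacity is what whole workload units can actually consume, and whatever remains of each resource once some other resource is exhausted is stranded. The single-unit-demand model below is an assumed simplification of the application's demand profiles.

```python
def host_capacity(host, unit_demand):
    """Per host: how many more workload units fit (by the scarcest resource),
    what those units would consume (available), and what is left over but
    unusable (stranded). Both dicts map resource -> amount."""
    fits = min(host[r] // unit_demand[r] for r in unit_demand)
    available = {r: fits * unit_demand[r] for r in unit_demand}
    stranded = {r: host[r] - available[r] for r in unit_demand}
    return fits, available, stranded

def aggregate(hosts, unit_demand):
    """Sum available and stranded capacity by resource across the group."""
    total_fits = 0
    avail = {r: 0 for r in unit_demand}
    strand = {r: 0 for r in unit_demand}
    for h in hosts:
        fits, a, s = host_capacity(h, unit_demand)
        total_fits += fits
        for r in unit_demand:
            avail[r] += a[r]
            strand[r] += s[r]
    return total_fits, avail, strand

unit = {"cpu": 2, "mem_gb": 8}
hosts = [{"cpu": 10, "mem_gb": 24}, {"cpu": 4, "mem_gb": 64}]
fits, avail, strand = aggregate(hosts, unit)
# fits -> 5; the first host strands 4 CPUs (memory-bound), the second 48 GB (CPU-bound)
```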
20160098298 | METHODS AND APPARATUS FOR INTEGRATED WORK MANAGEMENT - Described herein are techniques for integrated work management. An integrated work management server processes one or more datum of one or more source systems. The datum relates to at least one work item representing at least one assignment to be processed by a resource. An integrator is coupled to the integrated work management server. The integrator uses the one or more datum to create, store and/or update a combined work queue for the resource. The combined work queue comprises any of at least one work item and at least one assignment. One or more prioritization rules specify one or more criteria. The integrator prioritizes the combined work queue by evaluating the criteria in accord with the one or more datum. | 2016-04-07 |
20160098299 | GLOBAL LOCK CONTENTION PREDICTOR - A method for lock acquisition includes adding a current contention state of a lock to a contention history. The lock includes a memory location for storing information used for excluding accessing a resource by one or more threads while another thread accesses the resource. The method includes combining the contention history with a lock address for the lock to form a predictor table index, and using the predictor table index to determine a lock prediction for the lock. The prediction includes a determination of an amount of contention. | 2016-04-07 |
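The history-plus-address indexing in 20160098299 resembles a branch-predictor-style table lookup. The sketch below assumes an XOR combination, a shift-register history, and 2-bit saturating counters; none of those specifics come from the application.

```python
class LockContentionPredictor:
    """Toy predictor: combines the global contention history with the lock
    address to index a table of saturating counters that predict whether
    the next acquisition of a lock will be contended."""

    def __init__(self, table_bits=10, history_len=8):
        self.table_size = 1 << table_bits
        self.history_len = history_len
        self.history = 0                      # recent outcomes, one bit each
        self.table = [1] * self.table_size    # 2-bit counters, start "weakly uncontended"

    def _index(self, lock_addr):
        # Combine history with the lock address (XOR is an assumed scheme).
        return (self.history ^ lock_addr) % self.table_size

    def predict(self, lock_addr):
        # Predict "contended" when the counter is in its upper half.
        return self.table[self._index(lock_addr)] >= 2

    def update(self, lock_addr, was_contended):
        i = self._index(lock_addr)
        if was_contended:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
        # Shift this outcome into the contention history.
        mask = (1 << self.history_len) - 1
        self.history = ((self.history << 1) | int(was_contended)) & mask

p = LockContentionPredictor()
for _ in range(16):              # repeatedly observe contention on one lock
    p.update(0xBEEF, True)
contended = p.predict(0xBEEF)    # True once the history and counters warm up
```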
20160098300 | MULTI-CORE PROCESSOR SYSTEMS AND METHODS FOR ASSIGNING TASKS IN A MULTI-CORE PROCESSOR SYSTEM - A multi-core processor system and a method for assigning tasks are provided. The multi-core processor system includes a plurality of processor cores, configured to perform a plurality of tasks, and each of the tasks is in a respective one of a plurality of scheduling classes. The multi-core processor system further includes a task scheduler, configured to obtain first task assignment information about tasks in a first scheduling class assigned to the processor cores, obtain second task assignment information about tasks in one or more other scheduling classes assigned to the processor cores, and refer to the first task assignment information and the second task assignment information to assign a runnable task in the first scheduling class to one of the processor cores. | 2016-04-07 |
20160098301 | SYSTEM AND METHOD FOR TRANSFORMING LEGACY DESKTOP ENVIRONMENTS TO A VIRTUALIZED DESKTOP MODEL - A system and method for transforming a legacy device into a virtualized environment includes analyzing profiling data for at least one application to determine usage frequency and resource requirements of the at least one application. Captured user events are benchmarked to simulate a user workload for the at least one application to determine how resource utilization and execution times scale from a legacy environment to a virtualized environment. The legacy device is transformed into the virtualized environment in accordance with a provisioning plan. | 2016-04-07 |
20160098302 | RESILIENT POST-COPY LIVE MIGRATION USING EVICTION TO SHARED STORAGE IN A GLOBAL MEMORY ARCHITECTURE - A method includes, in a computing system that includes at least first and second compute nodes, running on the first compute node a workload that uses memory pages. The memory pages used by the workload are classified into at least active pages and inactive pages, and the inactive memory pages are evicted to shared storage that is accessible at least to the first and second compute nodes. In response to migration of the workload from the first compute node to the second compute node, the active pages are transferred from the first compute node to the second compute node for use by the migrated workload, and the migrated workload is provided with access to the inactive pages on the shared storage. | 2016-04-07 |
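The page classification step in 20160098302 can be sketched with a recency/frequency heuristic. The access-count policy and threshold below are assumptions; the application only says pages are classified into at least active and inactive sets.

```python
def partition_pages(pages, access_counts, hot_threshold):
    """Classify a workload's pages as active or inactive by recent access
    count. On migration, active pages would be copied node-to-node, while
    inactive pages are evicted to shared storage both nodes can reach."""
    active, inactive = [], []
    for page in pages:
        if access_counts.get(page, 0) >= hot_threshold:
            active.append(page)
        else:
            inactive.append(page)
    return active, inactive

pages = ["p0", "p1", "p2", "p3"]
counts = {"p0": 12, "p1": 0, "p2": 5, "p3": 1}
active, inactive = partition_pages(pages, counts, hot_threshold=5)
# active -> ["p0", "p2"]; inactive -> ["p1", "p3"]
```

The point of the split is that only the (smaller) active set is on the migration critical path; the inactive set never moves, the migrated workload simply reaches it on shared storage.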
20160098303 | GLOBAL LOCK CONTENTION PREDICTOR - An apparatus for lock acquisition is disclosed. A method and a computer program product also perform the functions of the apparatus. The apparatus includes a lock history module that adds a current contention state of a lock to a contention history. The lock includes a memory location for storing information used for excluding access to a resource by one or more threads while another thread accesses the resource. The apparatus, in some embodiments, includes a combination module that combines the contention history with a lock address for the lock to form a predictor table index, and a prediction module that uses the predictor table index to determine a lock prediction for the lock. The prediction includes a determination of an amount of contention. | 2016-04-07 |
20160098304 | SYSTEMS AND METHODS FOR CLASSIFYING AND ANALYZING RUNTIME EVENTS - A system may be able to classify events that occur during the runtime of applications (e.g., exceptions). The system may receive an indication of the event and may classify the event based on a comparison with elements of a classification data structure. The classification data structure may be a hierarchical data structure, and child elements may inherit characteristics from parent elements. Based on the classification, the system may perform one or more actions, which may be specified by the elements of the data structure. For example, the system may provide notifications to administrators and/or users, may attempt to recover from the event, and/or the like. Each event may be associated with a unique identifier so the user can more easily identify the event to support personnel. The system may include analysis tools to assist administrators in tracking events and identifying which events are most important. | 2016-04-07 |
20160098305 | METHOD AND APPARATUS FOR RESOURCE BALANCING IN AN AUTOMATION AND ALARM ARCHITECTURE - A method and system architecture for automation and alarm systems is provided. According to exemplary embodiments, relatively simple processing tasks are performed at the sensor level, with more complex processing being shifted to the gateway entity or a networked processing device. The gateway entity dynamically allocates processing resources for sensors. If a sensor detects that an event is occurring, or predicts that an event is about to occur, the sensor submits a resources allocation request and a power balancer running on the gateway entity processes the request. In response to the resources allocation request, the gateway entity allocates some processing resources to the requesting sensor and the data is processed in real-time or near-real-time by the gateway entity. | 2016-04-07 |
20160098306 | HARDWARE QUEUE AUTOMATION FOR HARDWARE ENGINES - In general, techniques are described for performing hardware-based queue automation for hardware engines. An apparatus comprising a hardware engine and a hardware event queue manager may be configured to perform the techniques. The hardware event queue manager may be configured to receive, from a processing unit separate from the hardware event queue manager, an event to be processed by the hardware engine, and perform queue management with respect to an event queue to schedule processing of the event by the hardware engine. | 2016-04-07 |
20160098307 | INTEGRATION APPLICATION BUILDING TOOL - Systems, methods, and other embodiments associated with an integration application building tool are described. In one embodiment, a method includes providing data files including an adapter data file, a flow data file, and an environment data file. The adapter data file stores adapter data corresponding to a plurality of adapters for respective enterprise applications. An adapter for a given enterprise application enables the given enterprise application to exchange messages with a messaging system. The flow data file describes a plurality of flows of messages, through the messaging system, between enterprise applications. The environment data file is configured to be populated with location data. The method includes receiving an instance of location data and populating the environment data file. An adapter application comprising computer code is generated that, when executed, allows the enterprise application to exchange messages with the messaging system. The adapter application is deployed on integration bus hardware. | 2016-04-07 |
20160098308 | End-to-End Application Tracking Framework - Novel tools and techniques for tracing application execution and performance. Some of the tools provide a framework for monitoring the execution and/or performance of applications in an execution chain. In some cases, the framework can accomplish this monitoring with a few simple calls to an application programming interface on an application server. In other cases, the framework can provide for the passing of traceability data in protocol-specific headers of existing inter-application (and/or intra-application) communication protocols. | 2016-04-07 |
20160098309 | BACKUP-INSTRUCTING BROADCAST TO NETWORK DEVICES RESPONSIVE TO DETECTION OF FAILURE RISK - Embodiments relate to systems and methods for detecting failure-risk events at devices and facilitating local and/or remote data back-up and/or device operations. In some instances, a device characterizes a stimulus sensed at the device or an operation of a component of the device. A determination is made that a failure-risk condition is satisfied based on the characterization. In response to determining that the failure-risk condition is satisfied, the device initiates a data backing up of data in a non-volatile reserved memory or facilitates transmission of an alert communication from the device to another device. | 2016-04-07 |
20160098310 | DEVICE DRIVER ERROR ISOLATION ON DEVICES WIRED VIA FSI CHAINED INTERFACE - Fault isolation for a computer system having multiple FRUs in an FSI chain uses logic embedded in a device driver to determine first failure data and a logical error identifier. The logical error identifier represents a hardware logical area of the fault. The fault is then mapped to a segment of the system based on a self-describing system model which includes FRU boundary relationships for the devices. Operation of the device driver is carried out by a flexible service processor. The device driver uses the first failure data to identify a link at a failure point corresponding to the fault and determine a failure type at the link, then maps the link and the failure type to the logical error identifier. After identifying the segment, the device driver can generate a list of callouts of the field replaceable units associated with the segment which require replacement. | 2016-04-07 |
20160098311 | DEVICE DRIVER ERROR ISOLATION ON DEVICES WIRED VIA FSI CHAINED INTERFACE - Fault isolation for a computer system having multiple FRUs in an FSI chain uses logic embedded in a device driver to determine first failure data and a logical error identifier. The logical error identifier represents a hardware logical area of the fault. The fault is then mapped to a segment of the system based on a self-describing system model which includes FRU boundary relationships for the devices. Operation of the device driver is carried out by a flexible service processor. The device driver uses the first failure data to identify a link at a failure point corresponding to the fault and determine a failure type at the link, then maps the link and the failure type to the logical error identifier. After identifying the segment, the device driver can generate a list of callouts of the field replaceable units associated with the segment which require replacement. | 2016-04-07 |
20160098312 | LOG MANAGEMENT APPARATUS, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN LOG MANAGEMENT PROGRAM, AND LOG MANAGEMENT METHOD - A non-transitory computer-readable recording medium having stored therein a log management program that causes a computer to execute a process includes obtaining a log item group included in each log and having a periodicity, for each of a plurality of logs outputted from a plurality of monitoring targets, detecting a first log item group from a first log, the first log item group being different from the log item group included in the first log, specifying a second log item group outputted in a same period as that of the first log item group, from a second log related to the first log, extracting the first log item group from the first log, and outputting the first log item group, and extracting the specified second log item group from the second log, and outputting the second log item group. | 2016-04-07 |
20160098313 | WATCHDOG METHOD AND DEVICE - Each task assigned to a core can be considered an “active” task. Sequential strobe signals of a watchdog signal can be spaced apart in time by a certain duration. The duration between strobe signals is longer than the expected duration of an active task. By knowing that all tasks being monitored are expected to execute within an expected amount of time, the duration between the strobe signals can be set to be longer than that expected amount of time. If a task has not transitioned to inactive by a next strobe, a watchdog error has occurred. | 2016-04-07 |
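The strobe logic of 20160098313 can be modelled in a few lines: since the strobe period is chosen to be longer than the longest expected task runtime, any task still marked active when a strobe arrives has overrun. The task names and the bookkeeping class below are illustrative, not from the application.

```python
class StrobeWatchdog:
    """Toy model of the strobe watchdog: every task that becomes active is
    expected to transition back to inactive before the next strobe, because
    the strobe period exceeds the longest expected task duration."""

    def __init__(self):
        self.active = set()  # tasks that have started and not yet finished

    def task_started(self, task_id):
        self.active.add(task_id)

    def task_finished(self, task_id):
        self.active.discard(task_id)

    def strobe(self):
        # Any task still active at strobe time is overdue: a watchdog error.
        return sorted(self.active)

wd = StrobeWatchdog()
wd.task_started("sensor_read")
wd.task_finished("sensor_read")
ok = wd.strobe()         # []: every task completed before the strobe
wd.task_started("dsp_filter")
errors = wd.strobe()     # ["dsp_filter"]: still active, so a watchdog error
```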
20160098314 | METHODS AND APPARATUS FOR CUSTOMIZING AND USING A REUSABLE DATABASE FRAMEWORK FOR FAULT PROCESSING APPLICATIONS - A method for operation of a reusable fault processing database in conjunction with a complex system is provided. The method stores a set of logical rules into one or more logic entities of the reusable fault processing database, the set comprising one or more executable instructions applicable to fault detection and fault isolation in the complex system; stores at least one defined variable for each of the received set of logical rules, the at least one defined variable being stored into one or more variable entities of the reusable fault processing database; and stores a configuration of at least one external interface of the reusable fault processing database, the configuration being stored in one or more input/output (I/O) entities of the reusable fault processing database, the external interface comprising a defined set of input to the reusable fault processing database and a defined set of output from the reusable fault processing database. | 2016-04-07 |
20160098315 | DEVICE FOR MANAGING THE STORAGE OF DATA - A device manages the storage of data in at least one storage device of a first type and in a storage device of a second type, the at least one storage device of the first type being physically distinct from the storage device of the second type. The device partitions data to be stored into blocks of data, determines redundancies generated by an error detection code for each block of data, stores blocks of data in the at least one storage device of the first type, the storage device(s) of the first type being compliant with an avionic quality assurance level of a given quality level, and stores redundancies in the storage device of the second type, the storage device of the second type being compliant with an avionic quality assurance level that is higher than the avionic quality assurance level of the storage device(s) of first type. | 2016-04-07 |
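The partition-and-redundancy scheme of 20160098315 can be sketched with CRC-32 as the error-detection code; the actual code, block size, and data are not specified by the application and are assumptions here. Blocks would go to the first-type device, redundancies to the higher-assurance second-type device.

```python
import zlib

def store(data, block_size):
    """Partition data into fixed-size blocks and compute a per-block
    redundancy with an error-detection code (CRC-32, as an example)."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    redundancies = [zlib.crc32(b) for b in blocks]
    return blocks, redundancies

def verify(blocks, redundancies):
    """Return the indices of blocks whose recomputed CRC no longer matches
    the stored redundancy, i.e. detected corruption."""
    return [i for i, (b, r) in enumerate(zip(blocks, redundancies))
            if zlib.crc32(b) != r]

blocks, reds = store(b"avionics flight data record", block_size=8)
clean = verify(blocks, reds)   # []: nothing corrupted yet
blocks[1] = b"corrupt!"
bad = verify(blocks, reds)     # [1]: the tampered block is detected
```

Keeping the redundancies on a device with a higher avionic quality-assurance level means corruption on the cheaper device remains detectable even if that device misbehaves.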
20160098316 | ERROR PROCESSING METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROLLING CIRCUIT UNIT - An error processing method for a rewritable non-volatile memory module, a memory storage device and a memory controlling circuit unit are provided. The rewritable non-volatile memory module includes a plurality of memory cells. The error processing method includes: sending a first read command sequence for reading a plurality of bits from the memory cells; performing a first decoding on the bits; determining whether each error belongs to a first type error or a second type error if the bits have at least one error; recording related information of a first error in the at least one error if the first error belongs to the first type error; and not recording the related information of the first error if the first error belongs to the second type error. Accordingly, errors with particular type may be processed suitably. | 2016-04-07 |
20160098317 | WRITE MAPPING TO MITIGATE HARD ERRORS VIA SOFT-DECISION DECODING - An apparatus having mapping and interface circuits. The mapping circuit (i) generates a coded item by mapping write unit bits using a modulation or recursion of past-seen bits, and (ii) calculates a particular state to program into a nonvolatile memory cell. The interface circuit programs the cell at the particular state. Two normal cell states are treated as at least four refined states. The particular state is one of the refined states. A mapping to the refined states mitigates programming write misplacement that shifts an analog voltage of the cell from the particular state to an erroneous state. The erroneous state corresponds to a readily observable illegal or atypical write sequence, and results in a modified soft decision from that calculated based on the normal states only. A voltage swing between the particular state and the erroneous state is less than between the normal states. | 2016-04-07 |
20160098318 | DYNAMIC PER-DECODER CONTROL OF LOG LIKELIHOOD RATIO AND DECODING PARAMETERS - An apparatus includes one or more error-correction decoders, a buffer, and at least one processor. The buffer may be configured to store data to be decoded by the one or more error-correction decoders. The at least one processor is generally enabled to send messages to the one or more error-correction decoders. The messages may contain datapath control information corresponding to data in the buffer to be decoded by the one or more error-correction decoders. The one or more error-correction decoders are generally enabled to decode the data read from the buffer according to the corresponding datapath control information. | 2016-04-07 |
20160098319 | SYSTEM AND METHOD FOR PRE-ENCODING OF DATA FOR DIRECT WRITE TO MULTI-LEVEL CELL MEMORY - A method and system for reducing data transfers between memory controller and multi-level cell (MLC) non-volatile memory during programming passes of a word line (WL) in the non-volatile memory. The system includes a controller and non-volatile memory having multiple WLs, each WL having a plurality of MLC memory cells. The controller stores received data in volatile memory until a target WL amount of data is received. The controller pre-encodes the received data into direct WL programming data for each programming pass necessary to program a target MLC WL. All direct WL programming data for all programming passes are stored in the volatile memory before programming. Different portions of direct WL programming data are transmitted from the controller to the non-volatile memory each pass. The received data may be deleted from the volatile memory before transmitting at least a portion of the direct WL programming data to the non-volatile memory. | 2016-04-07 |
20160098320 | EFFICIENTLY STORING DATA IN A DISPERSED STORAGE NETWORK - A method includes determining that one or more data blocks of a permanently stored data blocks are to be deleted. In response, the method further includes obtaining a group of partial redundancy data for the permanently stored data blocks. The method further includes identifying a temporarily stored plurality of data blocks for which partial redundancy data does not yet exist. The method further includes creating a new plurality of data blocks from data blocks of the permanently stored plurality of data blocks that are to remain permanently stored and data blocks from the temporarily stored plurality of data blocks that are to be permanently stored. The method further includes permanently storing the new plurality of data blocks. The method further includes generating a new group of partial redundancy data. The method further includes sending the new group of partial redundancy data and the group of partial redundancy data. | 2016-04-07 |
20160098321 | Efficient Memory Architecture for Low Density Parity Check Decoding - A low density parity check (LDPC) decoder integrated on a single semiconductor substrate may comprise one or more arrays of first-type memory cells and one or more arrays of second-type memory cells. The LDPC decoder may be configured to store intrinsic messages in the array of first-type cells and to store extrinsic messages in the array of second-type cells. The first-type cells may be a first one of: static random access memory (SRAM) cells, refreshed dynamic random access memory (DRAM) cells, non-refreshed DRAM cells configured as a FIFO, and non-refreshed DRAM cells not configured as a FIFO. The second-type cells may be a second one of: static random access memory (SRAM) cells, refreshed dynamic random access memory (DRAM) cells, non-refreshed DRAM cells configured as a FIFO, and non-refreshed DRAM cells not configured as a FIFO. | 2016-04-07 |
20160098322 | Background Initialization for Protection Information Enabled Storage Volumes - Technology is disclosed for performing background initialization on protection information enabled storage volumes or drives. In some embodiments, a storage controller generates multiple I/O requests for stripe segments of each drive (e.g., disk) of multiple drives of a RAID-based system (e.g., RAID-based disk array). The I/O requests are then sorted for each of the drives according to a pre-determined arrangement and initiated in parallel to the disks while enforcing the pre-determined arrangement. Sorting and issuing the I/O requests in the manner described herein can, for example, reduce drive head movement resulting in faster storage subsystem initialization. | 2016-04-07 |
20160098323 | INTELLIGENT PROTECTION OF OFF-LINE MAIL DATA - A system according to certain aspects improves the process of creating secondary copies of data (e.g., creating backup copies). The system can compute the score of the data (e.g., a computer file storing information) to be backed up, and determine whether the score satisfies one or more threshold criteria before backing up the data. In one example, a change in score indicates a change in the content of the data. The threshold criteria may be that the score be different from the score of the most recently backed up copy of the data. | 2016-04-07 |
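The score-gated backup above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the SHA-256 digest standing in for the patent's "score", and the function names, are assumptions.

```python
import hashlib
from typing import Optional

def compute_score(data: bytes) -> str:
    """Score the content; a SHA-256 digest stands in for the patent's score."""
    return hashlib.sha256(data).hexdigest()

def should_back_up(data: bytes, last_backup_score: Optional[str]) -> bool:
    """Back up only when the score differs from the most recently backed-up copy's score."""
    return compute_score(data) != last_backup_score
```

A changed file yields a changed score and therefore qualifies for backup; an unchanged file is skipped.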
20160098324 | DYNAMIC PROTECTION OF STORAGE RESOURCES FOR DISASTER RECOVERY - A recovery manager discovers replication properties of datastores stored in a storage array, and assigns custom tags to the datastores indicating the discovered replication properties. A user may create storage profiles with rules using any combination of these custom tags that describe replication properties. The recovery manager protects a storage profile using a policy-based protection mechanism. Whenever a new replicated datastore is provisioned, the datastore is dynamically tagged with the replication properties of its underlying storage, and will belong to one or more storage profiles. The recovery manager monitors storage profiles for new datastores and protects the newly provisioned datastore dynamically, including any or all of the VMs stored in the datastore. | 2016-04-07 |
20160098325 | UNIFYING APPLICATION LOG MESSAGES USING RUNTIME INSTRUMENTATION - Unifying application log messages using runtime instrumentation includes capturing raw data associated as a log message from an application using an application monitoring module, determining if the raw data is to be filtered based on a filtering configuration, and constructing a log message based on the raw data. A system for unifying application log messages using runtime instrumentation includes a capture engine to capture raw data associated as a log message from an application using an application monitoring module, a determination engine to determine if the raw data is to be filtered based on a filtering configuration, a construction engine to construct a log message based on the raw data, and a log framework monitor engine to monitor an application program interface that invokes a writing action of the log message using at least one log framework to capture the log message in real time. | 2016-04-07 |
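The capture/filter/construct pipeline described above can be reduced to a toy Python function. The field names and the set-based filtering configuration are illustrative assumptions, not the patent's actual schema.

```python
from typing import Optional

def unify_log(raw: dict, filtered_sources: set) -> Optional[str]:
    """Apply the filtering configuration, then construct a unified log line."""
    if raw.get("source") in filtered_sources:
        return None  # raw data matched the filter: drop it
    return "[{level}] {source}: {message}".format(**raw)
```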
20160098326 | METHOD AND APPARATUS FOR ENABLING TEMPORAL ALIGNMENT OF DEBUG INFORMATION - A signal processing device includes at least one timestamp generation component arranged to generate at least one local timestamp value, and to provide the at least one local timestamp value to at least one data link layer module for timestamping of data packets. The signal processing device further includes at least one debug module arranged to receive the at least one local timestamp value and to timestamp debug information based at least partly on the at least one local timestamp value. | 2016-04-07 |
20160098327 | BYPASSING FAILED HUB DEVICES IN HUB-AND-SPOKE TELECOMMUNICATION NETWORKS - In an embodiment, a method comprises using a first hub device: establishing one or more secure connections with one or more spoke devices logically arranged as spokes with respect to a data processing system; generating and sending via a high-speed link a hub probe to a second hub device; in response to determining that the second hub device is nonresponsive, transmitting, to the one or more spoke devices, a first communication indicating that the second hub device is nonresponsive; using a spoke device, receiving the first communication indicating that the second hub device is nonresponsive; determining whether the spoke device has established a secure connection with the second hub device; in response to determining that the spoke device has established the secure connection with the second hub device, selecting a third hub device, establishing a secure connection with the third hub device, and communicating with the third hub device. | 2016-04-07 |
20160098328 | Method and Device for Distributing Holdup Energy to Memory Arrays - The various embodiments described herein include methods and/or devices used to protect data in a storage device. In one aspect, a method includes performing a power fail operation on a first section of the storage device, the first section of the storage device comprising one or more memory group modules. The power fail operation includes supplying power, via one or more energy storage devices, to the one or more memory group modules, where each memory group module includes a respective memory group module controller. The power fail operation also includes supplying power, via an additional energy storage device, to a storage device controller, the storage device controller corresponding to the first section of the storage device. The additional energy storage device is distinct from the one or more energy storage devices, and each is distinct from a power source used during normal operation of the storage device. | 2016-04-07 |
20160098329 | INFORMATION PROCESSING TECHNIQUE FOR UNINTERRUPTIBLE POWER SUPPLY - This information processing apparatus includes a display processing unit to perform, on a display device, display for requesting a user to collectively input, for each of plural apparatuses whose activation or stop is to be controlled, a first time for causing a power-supply start to be delayed since a power feeding start or a second time for causing a power-supply stop to be delayed since a power failure, by an uninterruptible power supply that supplies power to the apparatus among one or more uninterruptible power supplies or by a power feeding management group in the uninterruptible power supply; and a setting unit to set, for each of the plural apparatuses, the inputted first time or second time for the uninterruptible power supply that supplies power to the apparatus, via a communication network. | 2016-04-07 |
20160098330 | TECHNIQUES FOR ERROR HANDLING IN PARALLEL SPLITTING OF STORAGE COMMANDS - Various embodiments are generally directed to techniques for handling errors affecting the at least partially parallel performance of data access commands between nodes of a storage cluster system. An apparatus may include a processor component of a first node, an access component to perform a command received from a client device via a network to alter client device data stored in a first storage device coupled to the first node, a replication component to transmit a replica of the command to a second node via the network to enable performance of the replica by the second node at least partially in parallel, an error component to retry transmission of the replica based on a failure indicated by the second node and a status component to select a status indication to transmit to the client device based on the indication of failure and results of retrial of transmission of the replica. | 2016-04-07 |
20160098331 | METHODS FOR FACILITATING HIGH AVAILABILITY IN VIRTUALIZED CLOUD ENVIRONMENTS AND DEVICES THEREOF - A method, non-transitory computer readable medium and host computing device that stores, by a first virtual storage controller, a plurality of received transactions in a transaction log in an in-memory storage device. The first virtual storage controller is monitored and a determination is made when a failure of the first virtual storage controller has occurred based on the monitoring. When the failure of the first virtual storage controller is determined to have occurred, at least one storage volume previously assigned to the first virtual storage controller is remapped to be assigned to a second virtual storage controller. Additionally, the second virtual storage controller retrieves at least one of the transactions from the transaction log in the in-memory storage device and replays at least one of the transactions. | 2016-04-07 |
20160098332 | DYNAMIC MULTI-PURPOSE EXTERNAL ACCESS POINTS CONNECTED TO CORE INTERFACES WITHIN A SYSTEM ON CHIP (SOC) - An integrated circuit device comprises multiple cores each comprising one or more separate input and output interfaces, the multiple cores integrated within the integrated circuit device to function as a single computer system. Internal inter-chip connection links are disposed on the integrated circuit device for connecting one or more cores with at least one other core via the one or more separate input and output interfaces. One or more bidirectional access ports are communicatively connected in each path of the inter-chip connection links to enable a separate external access point to each of the one or more separate input and output interfaces of the cores, wherein each of the one or more bidirectional access ports is dynamically selectable as each of an external input interface of the integrated circuit device and an external output interface of the integrated circuit device. | 2016-04-07 |
20160098333 | DETECTION OF FAULT INJECTION ATTACKS - An apparatus for detecting fault injection includes functional circuitry and fault detection circuitry. The functional circuitry is configured to receive one or more functional input signals and to process the functional input signals so as to produce one or more functional output signals. The functional circuitry meets a stability condition that specifies that stability of a designated set of one or more of the functional input signals during a first time interval guarantees stability of a designated set of one or more of the functional output signals during a second time interval that is derived from the first time interval. The fault detection circuitry is configured to monitor the designated functional input and output signals, to evaluate the stability condition based on the monitored functional input and output signals, and to detect a fault injection attempt in response to detecting a deviation from the stability condition. | 2016-04-07 |
20160098334 | BENCHMARKING MOBILE DEVICES - According to aspects of the invention there are provided methods and apparatus for monitoring, analysing and/or optimising the performance of a mobile device. The mobile device includes a memory with computer readable instructions stored thereon associated with a diagnostic application, which when executed on a processor, has a first level of permissions for accessing the mobile device, and associated with a performance monitoring component, which when executed on the processor, has a second level of permissions for accessing the mobile device. The diagnostic application and performance monitoring component communicate to retrieve performance-related data associated with execution of an application on the mobile device, where the performance-related data is accessible using the second level of permissions. The diagnostic application receives and stores performance related data from the performance monitoring component for analysing and/or optimising the performance of the mobile device executing the application. | 2016-04-07 |
20160098335 | METHOD FOR AUTOMATIC DECOMMISSIONING OF NETWORK PARTICIPANTS - Disclosed is a memory device in which the state of the memory may be set by a mechanical action, with or without mains power present. The memory state may be detected by a microcontroller. The state for the memory device may be reset by a microcontroller. The microcontroller may be external to an apparatus containing the memory device, adjacent to or within the apparatus. | 2016-04-07 |
20160098336 | METHODS AND SYSTEMS FOR DYNAMIC RETIMER PROGRAMMING - Systems and methods for dynamically programming retimers for computer communications are disclosed. For example, in one aspect, a machine implemented method is disclosed that includes: detecting a cable related event at a port coupled to another device via a cable; determining a cable type and a cable length from a storage location of the port; determining if the cable type and the cable length are different from a previously stored cable type and cable length connecting the port to the other device; and when either the cable type or the cable length are different than the previously stored cable type and cable length, programming a retimer device based on the cable type and length. | 2016-04-07 |
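The compare-then-program flow above amounts to a small state check per port. A hedged Python sketch, in which the dictionary standing in for the port's storage location and the key names are assumptions:

```python
def maybe_program_retimer(port_state: dict, cable_type: str, cable_len_m: int) -> bool:
    """Reprogram the retimer only when the detected cable differs from what was
    last stored for this port; returns True if programming occurred."""
    if (cable_type, cable_len_m) == (port_state.get("type"), port_state.get("len")):
        return False  # same cable as before: skip reprogramming
    port_state["type"], port_state["len"] = cable_type, cable_len_m
    # ...here the retimer's equalization settings would be written...
    return True
```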
20160098337 | SAMPLING OF DEVICE STATES FOR MOBILE SOFTWARE APPLICATIONS - A method for monitoring software application performance and one or more device states affecting a software application on a periodic basis on a mobile device. The method includes one or more computer processors identifying a software application on a mobile device. The method further includes the one or more computer processors identifying a plurality of sampling plans and one or more respective triggers within the plurality of sampling plans that are respectively associated with the software application and are stored on the mobile device. The method further includes the one or more computer processors determining a first value associated with the one or more respective triggers. The method further includes the one or more computer processors selecting a first sampling plan from the plurality of sampling plans for the software application based, at least in part, on the value associated with the one or more respective triggers. | 2016-04-07 |
20160098338 | METHODS FOR MANAGING PERFORMANCE STATES IN AN INFORMATION HANDLING SYSTEM - An information handling system (IHS) is disclosed wherein the system includes a processor associated with at least one performance state (P-state), and a memory in communication with the processor. The memory is operable to store a virtualization software and a basic input/output system (BIOS). The BIOS is configured to report a parameter of the P-state to the virtualization software. In addition, the BIOS is configured to transition the processor into a desired P-state. A method for managing performance states in an information handling system (IHS) is further disclosed wherein the method includes providing a basic input/output system (BIOS) in communication with a processor, the processor associated with an at least one performance state (P-state) and reporting a parameter of the at least one P-state to a virtualization software via the BIOS. The method further includes transitioning the processor to a desired P-state via the BIOS. | 2016-04-07 |
20160098339 | SMART POWER SCHEDULING FOR USER-DIRECTED BATTERY DURATION - The systems and method described herein provide smart power scheduling for an electronic device. A user can specify a desired battery duration for the electronic device. Smart power scheduling can modify operation of the electronic device, when appropriate, so that the desired battery duration is met. The user may, for example, know that she will be unable to charge her smartphone for 10 hours and specify a desired battery duration of 10 hours. If during the 10 hours, the user plays a game on the smartphone that drains the battery too fast to meet the 10-hour desired battery duration, the smartphone can modify its operations, for example, by lowering the graphics resolution, to achieve the user-specified desired battery duration. When smart power scheduling modifies operations, it may limit the impact on the user experience, for example, background tasks may be modified before foreground tasks are modified. | 2016-04-07 |
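The core decision in the smart power scheduler above is whether the current drain rate will exhaust the battery before the user's target. A minimal sketch, with the linear drain model and the function name as illustrative assumptions:

```python
def needs_throttling(battery_pct: float, drain_pct_per_hour: float,
                     hours_remaining_target: float) -> bool:
    """True when, at the current drain rate, the battery would die before the
    user-specified duration, so the device should modify its operations."""
    if drain_pct_per_hour <= 0:
        return False  # charging or idle: no intervention needed
    return battery_pct / drain_pct_per_hour < hours_remaining_target
```

When this returns True, the device could first degrade background tasks (or, as in the example, graphics resolution) before touching foreground work.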
20160098340 | METHOD AND SYSTEM FOR COMPARING DIFFERENT VERSIONS OF A CLOUD BASED APPLICATION IN A PRODUCTION ENVIRONMENT USING SEGREGATED BACKEND SYSTEMS - An application is implemented in the production environment in which the application will be used. Two or more backend systems are used to implement different versions of the application using the production environment in which the application will actually be used and accessed. Actual user data is received. A first portion of the actual user data is routed and processed in the production environment using a first version of the application and a first backend system of the two or more backend systems. A second portion of the actual user data is also routed and processed in the production environment but using a second version of the application and a second backend system of the two or more backend systems. The results data is then analyzed to evaluate the various versions of the application in the production environment. | 2016-04-07 |
20160098341 | WEB APPLICATION PERFORMANCE TESTING - A system for performance testing a web application initializes to be instrumented a subset of methods of the web application to be tested in response to a request, and then tests the application based on the subset of methods. The system generates an instrumented call tree and corresponding stack traces for each request in response to the testing, and determines one or more methods that take longer than a predetermined time period to execute using the instrumented call trees and the stack traces. The system then determines additional methods to be tested and adds the determined additional methods to the subset of methods and repeats the testing. | 2016-04-07 |
20160098342 | SYSTEMS AND PROCESSES FOR COMPUTER LOG ANALYSIS - Existing program code, which is executable on one or more computers forming part of a distributed computer system, is analyzed. The analysis identifies log output instructions present in the program code. Log output instructions are those statements or other code that generate log messages related to service requests processed by the program code. A log model is generated using the analysis. The log model is representative of causal relationships among service requests defined by the program code. The log model can then be applied to logs containing log messages generated by execution of the program code, during its normal operation, to group log messages for improved analysis, including visualization, of the performance and behaviour of the distributed computer system. | 2016-04-07 |
20160098343 | SYSTEM AND METHOD FOR SMART FRAMEWORK FOR NETWORK BACKUP SOFTWARE DEBUGGING - A system for network software debugging comprises a processor, an input interface, and an output interface. The processor is configured to determine a set of available components of a selected component type, and determine a set of backup processes running on the component. The input interface is configured to receive a selection of a backup process of the set of backup processes. The output interface is configured to provide an indication of a change of verbosity level. | 2016-04-07 |
20160098344 | HARDWARE AUTOMATION FOR MEMORY MANAGEMENT - A storage module may include a controller that has a hardware path that includes a plurality of hardware modules configured to perform a plurality of processes associated with execution of a host request. The storage module may also include a firmware module having a processor that executes firmware to perform at least some of the plurality of processes performed by the hardware modules. The firmware module performs the processes when the hardware modules are not able to successfully perform them. | 2016-04-07 |
20160098345 | MEMORY MANAGEMENT APPARATUS AND METHOD - A memory management apparatus and method are provided herein. The memory management apparatus includes a memory management list generation unit, a memory allocation unit, and a memory release unit. The memory management list generation unit generates a memory management list adapted to have all memory blocks divided into a plurality of memory blocks and to indicate whether each of the memory blocks has been allocated. The memory allocation unit allocates a memory region that belongs to the memory management list and that corresponds to an amount of memory requested for allocation in response to a memory allocation request. The memory release unit releases a memory region that belongs to the memory management list and that corresponds to a memory region to be released in response to a memory release request. | 2016-04-07 |
20160098346 | ASSISTED GARBAGE COLLECTION IN A VIRTUAL MACHINE - A method relates to receiving, by a processing device executing a virtual machine, bytecode comprising an object to be loaded into a memory space, the object being tagged with a garbage collection descriptor, wherein the garbage collection descriptor is generated according to an annotation to the object in a source code, determining, in view of the garbage collection descriptor, a region of the memory space to store the object, wherein the memory space comprises a first region to store a first set of objects that have survived less than a pre-determined number of rounds of garbage collection and a second region to store a second set of objects that have survived at least the pre-determined number of rounds of garbage collection in the first region, and storing the object in the region of the memory space. | 2016-04-07 |
20160098347 | TEMPORAL CLONES TO IDENTIFY VALID ITEMS FROM A SET OF ITEMS - Techniques are provided for using bitmaps to indicate which items, in a set of items, are invalid. The bitmaps include an “active” bitmap and one or more “temporal clones”. The active bitmap indicates which items in the set are currently valid. The temporal clones are outdated versions of the active bitmap that indicate which items in the set were invalid at previous points in time. Temporal clones may not be very different from each other. Therefore, temporal clones may be efficiently compressed. For example, a bitmap may be selected as a “base bitmap”, and one or more other bitmaps are encoded using delta encoding. Run length encoding may then be applied to further compress the bitmap information. These bitmaps may then be used to determine which items are valid relative to past-version requests. | 2016-04-07 |
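The delta-plus-run-length compression of temporal clones can be illustrated with plain bit lists. This is a toy sketch (XOR as the delta encoding, and list-of-ints bitmaps, are assumptions), but it shows why near-identical clones compress well: the delta is mostly zero runs.

```python
def xor_delta(base, clone):
    """Delta-encode a temporal clone against the base bitmap (bitwise XOR)."""
    return [a ^ b for a, b in zip(base, clone)]

def rle(bits):
    """Run-length encode a (mostly sparse) delta as (bit, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]
```

Because XOR is its own inverse, applying the delta to the base bitmap recovers the clone exactly.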
20160098348 | DYNAMIC MEMORY ESTIMATIONS FOR MEMORY BOUNDED APPLICATIONS - Techniques are disclosed for improving application responsiveness, and particularly applications used to present rich media content, by precaching nearby but not-yet-displayed content, so that content can be immediately ready to display. A precache window can be used to determine what undisplayed content is precached, in accordance with an embodiment. The size of the precache window, and hence the amount of content that can be precached for later display, is dynamic in nature and is determined based on a number of variables, such as the distance of the content from being visible and the estimated memory consumption of the content. In addition, the dynamic precache window can be recalculated in real-time in response to events and/or as the user interacts with the content in a way that causes a significant enough change to warrant a new memory limit estimate be performed. Out-of-memory errors may be handled by reducing the precache window. | 2016-04-07 |
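A toy version of the dynamic precache window: size it from available memory and per-item cost, cap it, and halve it on an out-of-memory error. The specific formula and cap are illustrative assumptions.

```python
def precache_window(free_bytes: int, est_bytes_per_item: int, max_items: int = 64) -> int:
    """How many undisplayed items to precache, bounded by estimated memory use."""
    if est_bytes_per_item <= 0:
        return 0
    return min(max_items, free_bytes // est_bytes_per_item)

def shrink_on_oom(window: int) -> int:
    """Handle an out-of-memory error by halving the precache window."""
    return max(1, window // 2)
```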
20160098349 | APPARATUS AND METHOD FOR CONSOLIDATING MEMORY ACCESS PREDICTION INFORMATION TO PREFETCH CACHE MEMORY DATA - An apparatus is connected to a main memory, includes a cache memory holding data and a memory storing prediction information in plural areas thereof. The prediction information is referenced to determine whether to execute prefetch, which holds data from the main memory to the cache memory, in a case where a plurality of unrolled instructions produced by unrolling a target instruction included in a loop sentence are executed individually, and corresponds to individual memory accesses executed at certain address intervals in accordance with the respective unrolled instructions. The apparatus executes memory access to the main memory, and executes the prefetch. When the plurality of unrolled instructions are executed individually, the apparatus consolidates a plurality of pieces of prediction information respectively stored in the plural areas of the memory into one based on the number-of-unrolling information, and stores the consolidated prediction information into any one of the plural areas. | 2016-04-07 |
20160098350 | SIZING A CACHE WHILE TAKING INTO ACCOUNT A TOTAL BYTES WRITTEN REQUIREMENT - A total bytes written (TBW) requirement associated with solid state storage is obtained. A size of a cache associated with the solid state storage is determined based at least in part on the TBW requirement. The size of the cache is adjusted to be the determined size. | 2016-04-07 |
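One way such a TBW-driven sizing could work: pick the smallest candidate cache whose modeled write amplification keeps total flash writes within the TBW budget. The write-amplification model below (amplification falling as the cache grows) is entirely an illustrative assumption.

```python
def required_cache_size(tbw_bytes: int, daily_host_writes: int, lifetime_days: int,
                        candidate_sizes=(8, 16, 32, 64)) -> int:
    """Smallest cache size (GiB) keeping modeled flash writes within the TBW budget."""
    for size_gib in candidate_sizes:
        wa = 1.0 + 8.0 / size_gib  # assumed model: bigger cache -> lower amplification
        total_flash_writes = daily_host_writes * lifetime_days * wa
        if total_flash_writes <= tbw_bytes:
            return size_gib
    return candidate_sizes[-1]  # budget unreachable: fall back to the largest cache
```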
20160098351 | DEVICE AND METHOD FOR INTEGRATED DATA MANAGEMENT FOR NONVOLATILE BUFFER CACHE AND NONVOLATILE STORAGE - An integrated nonvolatile memory control subsystem and method are disclosed. The integrated nonvolatile memory control subsystem includes a nonvolatile buffer cache, a nonvolatile journal area, nonvolatile storage, and an integrated memory control unit. The integrated memory control unit performs a read operation, a write operation, a commit operation and a checkpoint operation on the cache blocks of the nonvolatile buffer cache and the journal blocks of the nonvolatile journal area. The integrated memory control unit sets each of data blocks of the nonvolatile storage as one among valid state, erasable state and invalid state, depending on being cached or not, being journaled or not, and being a clean cache or not, so as to maintain an authentic original up-to-date consistent data within any one of the nonvolatile buffer cache, the nonvolatile journal area and the nonvolatile storage. | 2016-04-07 |
20160098352 | MEDIA CACHE CLEANING - Implementations disclosed herein provide a method comprising detecting a power supply status, determining a media cache cleaning scheme based on the detected power supply status, and performing the determined cleaning scheme until a predetermined threshold is reached. | 2016-04-07 |
20160098353 | METHODS AND SYSTEMS FOR MEMORY DE-DUPLICATION - Provided are methods and systems for de-duplicating cache lines in physical memory by detecting cache line data patterns and building a link-list between multiple physical addresses and their common data value. In this manner, the methods and systems are applied to achieve de-duplication of an on-chip cache. A cache line filter includes one table that defines the most commonly duplicated content patterns and a second table that saves pattern numbers from the first table and the physical address for the duplicated cache line. Since a cache line duplicate can be detected during a write operation, each write can involve table lookup and comparison. If there is a hit in the table, only the address is saved instead of the entire data string. | 2016-04-07 |
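The two-table cache-line filter above can be modeled with a couple of dictionaries: table one holds the common patterns, table two links a physical address to a pattern number instead of storing the line itself. The class and method names are illustrative assumptions.

```python
class CacheLineFilter:
    def __init__(self, patterns):
        self.patterns = list(patterns)                        # table 1: common duplicate patterns
        self.index = {p: i for i, p in enumerate(self.patterns)}
        self.links = {}                                       # table 2: address -> pattern number

    def write(self, addr: int, data: bytes) -> bool:
        """On a pattern-table hit, save only the address->pattern link instead of
        the full data string; returns True when the line was de-duplicated."""
        idx = self.index.get(data)
        if idx is not None:
            self.links[addr] = idx
            return True
        self.links.pop(addr, None)  # line no longer matches a common pattern
        return False

    def read(self, addr: int):
        """Return the de-duplicated line's data, or None if the address is unlinked."""
        idx = self.links.get(addr)
        return None if idx is None else self.patterns[idx]
```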
20160098354 | SYSTEM INCLUDING HIERARCHICAL MEMORY MODULES HAVING DIFFERENT TYPES OF INTEGRATED CIRCUIT MEMORY DEVICES - Volatile memory devices corresponding to a first memory hierarchy may be on a first memory module that is coupled to a memory controller by a first signal path. A nonvolatile memory device corresponding to a second memory hierarchy may be on a second memory module that is coupled to the first memory module by a second signal path. Memory transactions for the nonvolatile memory device may be transferred from the memory controller to the first memory hierarchy using the first signal path, and data associated with an accumulation of the memory transactions may be written from the first memory hierarchy to the second memory hierarchy using the second signal path and a first and second control signal. The first control signal may be generated in view of a detection of wear and the second control signal may be generated in view of a detection of a defect. | 2016-04-07 |
20160098355 | OPTIMISTIC DATA READ - A storage module may include a controller that is configured to perform a read operation to read data stored in at least one memory, where the data is associated with logical address information. In order to perform the read operation, the controller may be configured to retrieve a preliminary physical address associated with the logical address information, and initiate a data retrieval process for a first version of the data stored at the preliminary physical address prior to confirming a final physical address associated with the logical address information. | 2016-04-07 |
20160098356 | HARDWARE-ASSISTED MEMORY COMPRESSION MANAGEMENT USING PAGE FILTER AND SYSTEM MMU - Provided are methods and systems for managing memory using a hardware-based page filter designed to distinguish between active and inactive pages (“hot” and “cold” pages, respectively) so that inactive pages can be compressed prior to the occurrence of a page fault. The methods and systems are designed to achieve, among other things, lower cost, longer battery life, and faster user response. Whereas existing approaches for memory management are based on pixel or frame buffer compression, the methods and systems provided focus on the CPU's program (e.g., generic data structure). Focusing on hardware-accelerated memory compression to offload the CPU translates to higher power efficiency (e.g., ASIC is approximately 100× lower power than CPU) and higher performance (e.g., ASIC is approximately 10× faster than CPU), and also allows for hardware-assisted memory management to offload the OS/kernel, which significantly increases response time. | 2016-04-07 |
20160098357 | METHOD AND APPARATUS FOR DETERMINING PHYSICAL ADDRESS - A method and an apparatus for determining a physical address are disclosed. According to the present disclosure, a page size is obtained according to the higher-order N bits of a linear address, where N is greater than 0 and less than a quantity of bits of the linear address; an index number of a translation lookaside buffer TLB is obtained according to the page size; a mask is obtained according to the page size and a supported minimum page size; a label of the TLB is obtained according to the mask; the higher-order MAC1 bits of a physical address corresponding to the linear address are obtained by searching the TLB according to the index number and the label; and the physical address is obtained according to the mask, the supported minimum page, and the higher-order MAC1 bits of the physical address. | 2016-04-07 |
20160098358 | PCI DEVICE, INTERFACE SYSTEM INCLUDING THE SAME, AND COMPUTING SYSTEM INCLUDING THE SAME - A peripheral component interconnect (PCI) device includes a first memory which includes a plurality of page buffers, a base address register which includes a plurality of base addresses, and a first address translation unit which translates each of the plurality of base addresses to a corresponding one of a plurality of virtual addresses. A map table includes a plurality of map table entries each accessed in correspondence to each of the plurality of virtual addresses, and maps each of the plurality of virtual addresses onto a physical address of physical addresses of the plurality of page buffers. The first address translation unit translates each of the plurality of virtual addresses to a corresponding one of the physical addresses using the map table. | 2016-04-07 |
20160098359 | System and Method for Secured Host-slave Communication - Slave device circuitry, including processing circuitry which is configured to determine a new session identification value; determine a seed value using a secure hash algorithm on a previously determined seed value; determine a slave number using the secure hash algorithm on the new session identification value, the determined seed value, and a serial number of the slave device associated with the slave device circuitry; receive a host number from the host imaging apparatus and calculate a session key using a hash-based algorithm computation on the host number, the slave number, the new session identification value, and a stored encryption key. The session key has a first portion for performing encryption and decryption operations on data to be transmitted and data received by the slave device, respectively, and a second portion for generating a new address value of the slave device for communicating with the host. | 2016-04-07 |
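The derivation chain in the abstract can be sketched with an ordinary hash function. SHA-256 and the 16/16-byte split of the session key are assumptions — the abstract leaves the hash algorithm and portion sizes unspecified.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """SHA-256 stands in for the abstract's unspecified secure hash."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

def derive_session(prev_seed, serial, session_id, host_number, stored_key):
    """Sketch of the slave-side derivation: chain the seed, derive the
    slave number, then combine host and slave numbers into a session key
    whose two halves serve encryption and address generation."""
    seed = h(prev_seed)                          # new seed from previous seed
    slave_number = h(session_id, seed, serial)   # slave number
    session_key = h(host_number, slave_number, session_id, stored_key)
    enc_portion, addr_portion = session_key[:16], session_key[16:]
    return seed, slave_number, enc_portion, addr_portion
```

Because both sides hold the stored key and exchange the host/slave numbers, each can compute the same session key without transmitting it.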
20160098360 | Information Handling System Secret Protection Across Multiple Memory Devices - Information handling system secret protection is enhanced by encrypting secrets into a common file and breaking up the encrypted file into plural portions stored at plural memory devices, such as across plural DIMMs disposed in the information handling system. In one embodiment, a decryption key to decrypt the encrypted file is broken into plural portions stored at the plural memory devices. Upon detection of a predetermined security factor, such as an indication of removal of a memory device, the encrypted file is removed from the plural portions. | 2016-04-07 |
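The break-up-and-reassemble step can be illustrated with a simple round-robin striping of an already-encrypted blob across simulated memory devices. The striping scheme is an assumption for illustration — the abstract does not specify how the portions are formed, only that no single device holds the whole file.

```python
def split(blob: bytes, n: int):
    """Stripe a blob round-robin across n simulated memory devices,
    so no single portion contains the whole encrypted file."""
    return [blob[i::n] for i in range(n)]

def join(portions):
    """Reassemble the original blob from all n portions."""
    n = len(portions)
    out = bytearray(sum(len(p) for p in portions))
    for i, p in enumerate(portions):
        out[i::n] = p
    return bytes(out)
```

In the abstract's scheme the decryption key can be striped the same way, and erasing any one device's portion on a tamper signal renders the remaining portions useless.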
20160098361 | Optimization of Data Locks for Improved Write Lock Performance and CPU Cache usage in Multi-Core Architectures - Data access optimization features the innovative use of a writer-present flag when acquiring read-locks and write-locks. Setting a writer-present flag indicates that a writer desires to modify a particular data. This serves as an indicator to readers and writers waiting to acquire read-locks or write-locks not to acquire a lock, but rather to continue waiting (i.e., spinning) until the writer-present flag is cleared. As opposed to conventional techniques in which readers and writers are not locked out until the writer acquires the write-lock, the writer-present flag locks out other readers and writers once a writer begins waiting for a write-lock (that is, sets a writer-present flag). This feature allows a write-lock method to acquire a write-lock without having to contend with waiting readers and writers trying to obtain read-locks and write-locks, such as when using conventional spinlock implementations. | 2016-04-07 |
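The writer-present protocol can be modeled as a small state machine. This is a single-threaded sketch of the state transitions only — a real implementation would use atomic operations and spin loops, and the `try_*` method names are hypothetical.

```python
class WriterPresentLock:
    """Sketch of the writer-present flag protocol: new readers and
    writers back off as soon as a writer announces itself, not only
    once the writer actually holds the write-lock."""

    def __init__(self):
        self.readers = 0
        self.writer = False
        self.writer_present = False  # set when a writer starts waiting

    def try_read_lock(self):
        # Readers are locked out while a writer is present OR active.
        if self.writer_present or self.writer:
            return False
        self.readers += 1
        return True

    def announce_writer(self):
        """A writer begins waiting: lock out new readers/writers now."""
        self.writer_present = True

    def try_write_lock(self):
        if self.writer or self.readers > 0:
            return False  # wait only for in-flight readers to drain
        self.writer = True
        self.writer_present = False
        return True

    def read_unlock(self):
        self.readers -= 1

    def write_unlock(self):
        self.writer = False
```

The key difference from a conventional reader-writer spinlock is visible in `try_read_lock`: it checks `writer_present`, so the waiting writer only has to outlast readers that were already in flight.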
20160098362 | Methods and Systems for Filtering Communication Between Peripheral Devices and Mobile Computing Devices - The embodiments are directed to methods and systems for sending and receiving signals between one or more peripheral devices connected to a dongle system and an operating system. The methods and systems can detect when a dongle system has been connected to a mobile computing device. The methods and systems can receive an input to use the dongle system with a local operating system or a remote operating system. The methods and systems can also establish a communication channel between the local operating system and the remote operating system, and exchange signals between the dongle system and the remote operating system using one or more virtual filters. | 2016-04-07 |
20160098363 | INITIALIZING I/O DEVICES - A data processing system is provided which includes a processor nest communicatively coupled to an input/output bus by a bus controller, and a service interface controller communicatively coupled to the processor nest. The system includes storage for storing commands for the bus controller and associated command data and resulting status data, the storage being communicatively coupled to the processor nest and the bus controller. The service interface controller is configured, in response to received service commands, to read and write the storage, to execute the command specified in the storage, to retrieve the result of the command, and to store the result in the storage. | 2016-04-07 |
20160098364 | RECONFIGURABLE HARDWARE STRUCTURES FOR FUNCTIONAL PIPELINING OF ON-CHIP SPECIAL PURPOSE FUNCTIONS - A method and apparatus for reconfiguring hardware structures to pipeline the execution of multiple special purpose hardware implemented functions, without saving intermediate results to memory, is provided. Pipelining functions in a program is typically performed by a first function saving its results (the "intermediate results") to memory, and a second function subsequently accessing the memory to use the intermediate results as input. Saving and accessing intermediate results stored in memory incurs a heavy performance penalty, requires more power, consumes more memory bandwidth, and increases the memory footprint. Due to the ability to redirect the input and output of the hardware structures, intermediate results are passed directly from one special purpose hardware implemented function to another without storing the intermediate results in memory. Consequently, a program that utilizes the method or apparatus reduces power consumption, consumes less memory bandwidth, and reduces the program's memory footprint. | 2016-04-07 |
20160098365 | EMULATED ENDPOINT CONFIGURATION - Techniques for emulating a configuration space by a peripheral device may include receiving a configuration access request, determining that the configuration access request is for a configuration space other than a native configuration space of the peripheral device, and retrieving an emulated configuration from an emulated configuration space. The configuration access request can then be serviced by using the emulated configuration. | 2016-04-07 |
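The dispatch between native and emulated configuration spaces can be sketched simply. The space-identifier convention and dictionary-backed register maps here are assumptions for illustration; the abstract does not specify how spaces are identified.

```python
NATIVE_SPACE = 0  # assumed identifier of the device's native config space

def handle_config_read(space_id, offset, native, emulated):
    """Service a configuration access request: reads targeting the
    native space hit the real registers; any other space is satisfied
    from the emulated configuration store."""
    if space_id == NATIVE_SPACE:
        return native.get(offset, 0)
    return emulated.get((space_id, offset), 0)
```

The same dispatch applies to writes: the peripheral updates either its real registers or the emulated store, so the requester cannot tell which configuration space is physically present.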
20160098366 | METHOD AND APPARATUS FOR ENCODING REGISTERS IN A MEMORY MODULE - Provided are a method and apparatus for encoding registers in a memory module. A mode register command is sent to the memory module over a bus during initialization of the memory module, before the bus to the memory module is trained for bus operations, to program one of a plurality of mode registers in the memory module, wherein the mode register command indicates one of the mode registers and includes data for the indicated mode register. | 2016-04-07 |
20160098367 | LOGICAL-TO-PHYSICAL BLOCK MAPPING INSIDE THE DISK CONTROLLER: ACCESSING DATA OBJECTS WITHOUT OPERATING SYSTEM INTERVENTION - Data access in a storage device managed by a storage controller is carried out by receiving in the storage controller offsets in objects directly from a plurality of requesting entities of a computer system. The computer controls a mapping mechanism operated by the storage controller, wherein the mapping mechanism relates the offsets in the objects into physical addresses of the data on the storage device, and wherein the data is accessed at the physical addresses. | 2016-04-07 |
20160098368 | EXTENSIBLE HOST CONTROLLER AND OPERATION METHOD THEREOF - An extensible host controller applied to a host includes a universal serial bus (USB) module, a control unit, and a peripheral component interconnect express (PCIE) bus. The USB module includes a USB unit and a predetermined unit. The PCIE bus is coupled to the control unit, wherein the PCIE bus supports a USB mode and a predetermined mode. When a first host with a first extensible host controller is connected to the USB module, the control unit makes the host utilize the USB mode and the USB unit, or the predetermined mode and the predetermined unit to communicate with the first host according to a determination way. | 2016-04-07 |
20160098369 | SMART HARNESS - A smart harness may comprise a connector configured to selectively plug into and be removable from an Electronic Control Unit (“ECU”) of a vehicle, a first On-Board Diagnostics device (“first OBD device”), and a second On-Board Diagnostics device (“second OBD device”). The smart harness may further comprise at least one transceiver configured to receive and send diagnostic information between the ECU and the first OBD device and the second OBD device. The smart harness may further comprise a processor and a memory having a program communicatively connected to the processor. The processor may be configured to receive a request from at least one of the first and second OBD devices, prioritize the request based on a predefined priority associated with at least one of the first and second OBD devices, record at least a portion of the request as part of a distribution record, send the request to the ECU, receive a response from the ECU, compare the response with the distribution record, and send the response to at least one of the first and second OBD devices based on the distribution record. | 2016-04-07 |
20160098370 | DATA FLOW CIRCUITS FOR INTERVENTION IN DATA COMMUNICATION - A system may include data flow module circuits configured between electronic devices or circuits that may affect and/or intercept the flow of data being communicated between electronic devices. The data flow module circuits may communicate with an external controller that may want to intervene in the data communication. The data flow module circuits may be configured in a pass mode or in an intervention mode. In the pass mode, a data flow module circuit may pass on data it receives without intervention by the external controller. In the intervention mode, the data flow module circuit may receive instructions from the external controller as to the data that the external controller wants the data flow module to output. | 2016-04-07 |
20160098371 | SERIAL PERIPHERAL INTERFACE DAISY CHAIN COMMUNICATION WITH AN IN-FRAME RESPONSE - In one example, a master device connected in a serial-peripheral interface (SPI) daisy chain configuration with a plurality of servant devices, wherein the master device is configured to output a master data output to a first servant data input of a first servant device of a plurality of servant devices, wherein the plurality of servant devices are connected in a serial-peripheral interface (SPI) daisy chain configuration with the master device. The master device further configured to receive a master data input from a last servant device of the plurality of servant devices, wherein the master data input comprises an in-frame response of the plurality of servant devices, and wherein the in-frame response is received by the master device in a single SPI communication frame. | 2016-04-07 |
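One way to picture the in-frame response is a frame with one response slot per servant: the command and all responses travel in the same shift sequence. This frame layout (command byte followed by placeholder slots) is an assumed simplification — the patent's actual framing is not specified in the abstract.

```python
def spi_frame(command, servants):
    """One SPI frame through a daisy chain: the master clocks out the
    command followed by one placeholder byte per servant; each servant
    latches the command and overwrites its own placeholder with its
    response, so the master reads every response in the same frame."""
    frame = [command] + [0x00] * len(servants)
    for i, servant in enumerate(servants):
        frame[1 + i] = servant(command)  # in-frame response slot
    return frame
```

Without the in-frame response, the master would need a second full frame to shift the servants' replies back out of the chain.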
20160098372 | METHOD TO USE PCIe DEVICE RESOURCES BY USING UNMODIFIED PCIe DEVICE DRIVERS ON CPUs IN A PCIe FABRIC WITH COMMODITY PCI SWITCHES - A method for accessing a device in a primary peripheral component interconnect express (PCIe) domain from a secondary PCIe domain includes determining which one or more virtual functions of the device in the primary PCIe domain are to be made available to the secondary PCIe domain. A virtual function driver is installed in the primary PCIe domain associated with the one or more virtual functions. Information corresponding to the one or more virtual functions is provided to the secondary PCIe domain. A virtual function driver associated with the one or more virtual functions is installed in the secondary PCIe domain from the information. The virtual function driver in the secondary PCIe domain has same properties as the virtual function driver in the primary PCIe domain. The device in the primary PCIe domain is accessed from the virtual function driver in the secondary PCIe domain. | 2016-04-07 |
20160098373 | CONTROL OF TX/RX MODE IN SERIAL HALF-DUPLEX TRANSCEIVER SEPARATELY FROM COMMUNICATING HOST - Signaling to control transmit/receive mode transitions of a serial half-duplex transceiver coupled externally to an integrated circuit is provided by the integrated circuit separately from a host processor of the integrated circuit with which the transceiver communicates. This can avoid slow transceiver turn-around times that may be associated with host processor control of the mode transitions. | 2016-04-07 |
20160098374 | METHOD FOR A DETERMINISTIC SELECTION OF A SENSOR FROM A PLURALITY OF SENSORS - A method for a deterministic selection of a sensor from a plurality of sensors, having a control unit and multiple sensors connected to the control unit by means of a three-wire bus, wherein the sensors are connected to the three-wire bus through at least two lines in parallel to one another, and a protocol frame in conformity with the SENT specification is used between the control unit and the sensors for a data exchange, and a particular sensor is selected within the protocol frame by the control unit through the predefined duration of a selection signal, wherein the duration of the selection signal is determined by the interval between a first falling signal edge and a second falling signal edge. | 2016-04-07 |
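The selection mechanism — measuring the interval between two falling edges and matching it against per-sensor durations — can be sketched directly. The per-sensor duration windows are an illustrative assumption; the abstract only says the duration is predefined per sensor.

```python
def select_sensor(first_fall_us, second_fall_us, windows):
    """Pick the sensor whose selection window (microseconds) contains
    the interval between the first and second falling signal edges.
    `windows` maps sensor id -> inclusive (lo, hi) duration range."""
    duration = second_fall_us - first_fall_us
    for sensor, (lo, hi) in windows.items():
        if lo <= duration <= hi:
            return sensor
    return None  # no sensor matches: selection signal is ignored
```

Because the duration is measured between falling edges of the same signal, the selection is deterministic even though all sensors share the same three-wire bus.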
20160098375 | INITIATING MULTIPLE DATA TRANSACTIONS ON A SYSTEM BUS - Initiating data transactions on a system bus is disclosed. In some implementations, a controller receives first information from a first peripheral requesting a first data transaction. The first information is received over a first communication link between the controller and the first peripheral. The controller receives second information from a second peripheral requesting a second data transaction. The second information received over a second communication link between the controller and the second peripheral. The controller determines first and second ranks for the first and second data transactions, respectively, based on the first and second information, and initiates based on the first and second ranks, the first and second data transactions on a system bus. | 2016-04-07 |
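The rank-then-initiate step can be sketched with an ordinary sort. The ranking policy used here — higher priority first, earlier arrival breaking ties — is one plausible choice; the abstract deliberately leaves the ranking criteria open.

```python
def rank_and_initiate(requests):
    """Rank pending transaction requests and return the order in which
    the controller would initiate them on the system bus. Each request
    is a (peripheral, priority, arrival_time) tuple."""
    ranked = sorted(requests, key=lambda r: (-r[1], r[2]))
    return [r[0] for r in ranked]
```

In the abstract's setting the "information" received over each communication link would carry whatever fields the ranking policy needs (priority, size, deadline, and so on).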
20160098376 | SIDE CHANNEL COMMUNICATION HARDWARE DRIVER - A system and method for communicating data between a first software and a second software located on first and second devices, respectively, has a hardware driver and memory associated with each device. Each communication of data from the first software to the second software allocates memory to manage data to be communicated from the first software to the second software, provides memory allocation information to the hardware driver associated with the first software, and transmits the data from the first hardware driver to the second hardware driver for delivery to the second software via the memory associated with the second software. | 2016-04-07 |
20160098377 | MATRIX GENERATION TECHNIQUE AND PLANT CONTROL TECHNIQUE - In this disclosure, equations to be solved in the model predictive control are transformed, by using an off-line algebraic simplification method, into a matrix operational expression in which a product of a coefficient matrix and a vector regarding solution inputs within a control horizon is equal to a function vector regarding target values of output states and the output states. The size of the coefficient matrix is reduced compared with the conventional matrix. Then, the matrix operational expression is solved in an online plant control apparatus, with present output states and present target values of the output states of a plant to be controlled, by the direct method, to output the solution to the plant. | 2016-04-07 |
20160098378 | Method and System for Performing Robust Regular Gridded Data Resampling - During data resampling, bad samples are ignored or replaced with some combination of the good sample values in the neighborhood being processed. The sample replacement can be performed using a number of approaches, including serial and parallel implementations, such as branch-based implementations, matrix-based implementations, and function table-based implementations, and can use a number of modes, such as nearest neighbor, bilinear and cubic convolution. | 2016-04-07 |
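The bad-sample handling can be sketched for the bilinear mode: each of the four neighbors contributes its usual bilinear weight only if it is valid, and the result is renormalized over the good weights. The function name and the validity-mask representation are illustrative assumptions.

```python
def bilinear_robust(grid, valid, x, y):
    """Bilinear resample at fractional position (x, y) that ignores
    samples flagged bad in `valid`, renormalizing over good weights."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    acc = wsum = 0.0
    for dy, wy in ((0, 1 - fy), (1, fy)):
        for dx, wx in ((0, 1 - fx), (1, fx)):
            if valid[y0 + dy][x0 + dx]:
                acc += wx * wy * grid[y0 + dy][x0 + dx]
                wsum += wx * wy
    return acc / wsum if wsum else None  # None: no good neighbors at all
```

The same renormalization idea extends to the nearest-neighbor and cubic-convolution modes the abstract mentions — only the weight kernel changes.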
20160098379 | Preserving Conceptual Distance Within Unstructured Documents - A method, system and computer-usable medium are disclosed for preserving conceptual distance within unstructured documents by characterizing conceptual relationships. Natural language processing is applied to content in a plurality of documents to identify topics and subjects. Analytic analysis is then applied to the identified topics and subjects to identify concepts. The content in each of the plurality of documents is partitioned into a first structured hierarchy, preserving at least one structure in each document inherent in the each document. Access is then provided to the content through a first index based upon utilizing the first structured hierarchy and through a second index utilizing a second structured hierarchy. The conceptual relationship criteria are based upon a directed graph with weights based upon a similarity and a distance based upon concepts. | 2016-04-07 |
20160098380 | MAPPING OF CONTENT ON A WEB SITE TO PRODUCT FUNCTIONALITY - A method for generating a content map. The method may include, by a content processor, receiving a selection of a first URL for a first content segment. The method may include receiving a selection of a second URL for a second content segment. The method may include grouping the first and second content segment into a first dimension group. The method may include receiving a selection of a third URL for a third content segment. The method may include receiving a selection of a fourth URL for a fourth content segment. The method may include grouping the third and fourth content segment into a second dimension group. The method may include relating the first and second URL in the first dimension group and the third and fourth URL in the second dimension group to generate a content map. The method may include causing the content map to be displayed. | 2016-04-07 |
20160098381 | DYNAMICALLY PROVIDING A FEED OF STORIES ABOUT A USER OF A SOCIAL NETWORKING SYSTEM - To display a news feed in a social network environment, a social networking system generates news items regarding activities associated with a user of a social network environment. The social networking system may also attach an informational link associated with at least one of the activities to at least one of the news items, limit access to the news items to a predetermined set of viewers, and assign an order to the news items. The news items may be displayed in the assigned order to at least one viewing user of the predetermined set of viewers, and the number of news items displayed may be dynamically limited. | 2016-04-07 |
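The feed-assembly steps — restrict to permitted viewers, order, and cap the count — can be sketched as follows. Newest-first ordering is just one possible policy; the abstract leaves the ordering criterion open, and the field names here are hypothetical.

```python
def build_feed(items, viewer, allowed_viewers, limit=10):
    """Assemble a news feed: limit access to the predetermined viewer
    set, assign an order to the items (newest first here), and
    dynamically cap how many are displayed."""
    if viewer not in allowed_viewers:
        return []
    ordered = sorted(items, key=lambda it: it["time"], reverse=True)
    return ordered[:limit]
```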
20160098382 | CONVERSION TOOL FOR XPS AND OPENXPS DOCUMENTS - A conversion tool enables XPS documents to be automatically converted into the Open XPS format and for Open XPS-formatted documents to be automatically converted into the XPS format. The conversion tool may convert content types, package-level relationships, part-level attributes, and image parts into a format supported by either document format. | 2016-04-07 |