46th week of 2016 patent application highlights part 43 |
Patent application number | Title | Published |
20160335105 | PERFORMING SERVER MIGRATION AND DEPENDENT SERVER DISCOVERY IN PARALLEL - Performing server virtual machine image migration and dependent server virtual machine image discovery in parallel is provided. Migration of a server virtual machine image that performs a workload is started to a client device via a network and, in parallel, the identity of a set of dependent server virtual machine images corresponding to the server virtual machine image being migrated to the client device is continuously discovered. In response to discovering the identity of the set of dependent server virtual machine images, a server migration pattern of the discovered set of dependent server virtual machine images is generated for the workload. A level of risk corresponding to migrating each dependent server virtual machine image of the discovered set of dependent server virtual machine images to the client device is calculated based on the server migration pattern of the discovered set of dependent server virtual machine images for the workload. | 2016-11-17 |
20160335106 | TECHNIQUES TO MANAGE DATA MIGRATION - Exemplary embodiments provide techniques for managing VM migrations that use relatively simple and uncomplicated commands or APIs that can be executed through scripts or applications. Configuration and preparation for the conversion may be addressed by one set of command-lets or APIs, while the conversion itself is handled by a separate set of command-lets or APIs, which allows the conversion command-lets to be simple and to require little input. Moreover, the architecture-specific commands can be largely abstracted away, so that the configuration and conversion processes can be carried out through straightforward general commands, which automatically cause an interface (e.g., at the conversion server) to call upon any necessary architecture-specific functionality. Still further, the information that must be entered by a user may be kept to a minimum, because the initial configuration information may be used by the system to automatically discover additional information that is needed to perform the conversion. | 2016-11-17 |
20160335107 | LOGICAL PROCESSING FOR CONTAINERS - Some embodiments provide a method for a first managed forwarding element (MFE). The method receives a data message that includes a logical context tag that identifies a logical port of a particular logical forwarding element. Based on the logical context tag, the method adds a local tag to the data message. The local tag is associated with the particular logical forwarding element, which is one of several logical forwarding elements to which one or more containers operating on a container virtual machine (VM) belong. The container VM connects to the first MFE. The method delivers the data message to the container VM without any logical context. A second MFE operating on the container VM uses the local tag to forward the data message to a correct container of several containers operating on the container VM. | 2016-11-17 |
20160335108 | TECHNIQUES FOR MIGRATION PATHS - Exemplary embodiments described herein relate to a destination path for use with multiple different types of VMs, and techniques for using the destination path to convert, copy, or move data objects stored in one type of VM to another type of VM. The destination path represents a standardized (canonical) way to refer to VM objects from a proprietary VM. A destination location may be specified using the canonical destination path, and the location may be converted into a hypervisor-specific destination location. A source data object may be copied or moved to the destination location using a hypervisor-agnostic path. | 2016-11-17 |
20160335109 | TECHNIQUES FOR DATA MIGRATION - The present application provides exemplary methods, mediums, and systems for converting a virtual machine from management by one type of hypervisor to management by a second, different type of hypervisor. The exemplary method involves: (1) discovering information about the source VM; (2) making a backup copy of the source VM data; (3) storing the information in the source VM; (4) copying the source VM data using cloning; (5) starting the destination VM with the cloned data by attaching the copied disks to the destination VM; (6) restoring the source VM to its original state; and (7) starting the destination VM and applying the saved system configuration to a destination guest OS. In some embodiments, the first type of hypervisor (the source hypervisor) may be a Hyper-V hypervisor, and the second type of hypervisor (the destination hypervisor) may be a VMware hypervisor. | 2016-11-17 |
20160335110 | SELECTIVE VIRTUALIZATION FOR SECURITY THREAT DETECTION - Selective virtualization of resources is provided, where the resources may be intercepted and serviced or the resources may be intercepted and redirected. Virtualization logic monitors for a first plurality of requests that are initiated during processing of an object within the virtual machine. Each of the first plurality of requests, such as system calls for example, is associated with an activity to be performed in connection with one or more resources. The virtualization logic selectively virtualizes resources associated with a second plurality of requests that are initiated during the processing of the object within the virtual machine, where the second plurality of requests is lesser in number than the first plurality of requests. | 2016-11-17 |
20160335111 | VIRTUAL NETWORK FUNCTION MANAGEMENT WITH DEACTIVATED VIRTUAL MACHINES - A method of managing virtual network functions for a network, the method including providing a virtual network function (VNF) including a number of virtual network function components (VNFCs) of a number of different types, each VNFC comprising a virtual machine (VM) executing application software. The method further includes creating for up to all VNFC types a number of deactivated VMs having application software, monitoring at least one performance level of the VNF, and scaling-out the VNF by activating a number of deactivated VMs of a number of VNFC types when the at least one performance level reaches a scale-out threshold. | 2016-11-17 |
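The scale-out step in 20160335111 reduces to a small decision rule over pools of pre-created VMs. Below is a minimal sketch under assumed names (VnfcPool, scale_out) and a single illustrative performance metric; a real VNF manager would track per-VNFC metrics and activation counts.

```python
# Minimal sketch of the scale-out decision in 20160335111. Pool layout, field
# names, and the per-type activation count are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class VnfcPool:
    vnfc_type: str
    active: list = field(default_factory=list)
    deactivated: list = field(default_factory=list)  # pre-created VMs, software loaded

def scale_out(pools, performance_level, scale_out_threshold, vms_per_type=1):
    """Activate deactivated VMs of each VNFC type once the monitored
    performance level reaches the scale-out threshold."""
    if performance_level < scale_out_threshold:
        return
    for pool in pools:
        for _ in range(min(vms_per_type, len(pool.deactivated))):
            vm = pool.deactivated.pop()
            pool.active.append(vm)  # activation only; no VM creation on the hot path
```

Activation is cheap in this scheme precisely because the deactivated VMs already exist with their application software installed, so scale-out avoids a boot-from-scratch on the critical path.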
20160335112 | METHOD AND APPARATUS FOR GENERATING UNIQUE IDENTIFIER FOR DISTRIBUTED COMPUTING ENVIRONMENT - Methods for generating a unique identifier of a distributed computing system are provided. One of the methods comprises: receiving, by the first virtual machine, a first index range allocated by the identifier allocation server; receiving, by the second virtual machine, a second index range allocated by the identifier allocation server, the second index range being different from the first index range; generating, by the first virtual machine, a first unique identifier using an index in the first index range without intervention of the identifier allocation server; and generating, by the second virtual machine, a second unique identifier using an index in the second index range without intervention of the identifier allocation server, wherein the first unique identifier and the second unique identifier are identifiers satisfying uniqueness for the whole distributed computing system. | 2016-11-17 |
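The index-range scheme of 20160335112 is easy to make concrete. The following sketch uses hypothetical IdAllocationServer and VirtualMachine classes and an assumed fixed range size; uniqueness follows purely from the disjointness of the allocated ranges, so no round-trip to the server is needed per identifier.

```python
# Minimal sketch of disjoint index-range allocation; names are illustrative.
import itertools

class IdAllocationServer:
    def __init__(self, range_size=1000):
        self._next = itertools.count(0)
        self._size = range_size

    def allocate_range(self):
        start = next(self._next) * self._size
        return range(start, start + self._size)   # disjoint per caller

class VirtualMachine:
    def __init__(self, server):
        self._indices = iter(server.allocate_range())

    def unique_id(self):
        # No server intervention: uniqueness follows from range disjointness.
        return f"id-{next(self._indices)}"

server = IdAllocationServer()
vm1, vm2 = VirtualMachine(server), VirtualMachine(server)
assert vm1.unique_id() != vm2.unique_id()
```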
20160335113 | AUTOMATED VIRTUAL DESKTOP PROVISIONING - Methods, systems, and techniques for automated provisioning of virtual desktops are provided. Example embodiments provide an Automated Virtual Desktop Provisioning System (“AVDPS”), which enables users to perform self-service provisioning of virtual desktops with little knowledge other than a proper license. The AVDPS is able to accomplish this through the use of pre-configured Blueprints and Templates. The Blueprints fully specify how a particular resource, for example an application, a service, or virtual infrastructure such as memory, CPUs, or disk space, is to be installed in a user's virtual desktop(s). The Templates provide master images for a virtual infrastructure image instance (e.g., a virtual machine instance). In an example AVDPS, a single virtual infrastructure image instance supports multiple users at one time, avoiding the need to supply each user with its own virtual machine image and corresponding resources just in order to have a virtual desktop to access resources, for example applications. | 2016-11-17 |
20160335114 | APPARATUS, SYSTEMS AND METHODS FOR CROSS-CLOUD APPLICATION DEPLOYMENT - Embodiments disclosed facilitate obtaining a cloud agnostic representation of a first Virtual Machine Image (VMI) on a first cloud; and obtaining a second VMI for a second cloud different from the first cloud, wherein the second VMI is obtained based, at least in part, on the cloud agnostic representation of the first VMI. | 2016-11-17 |
20160335115 | SYSTEM AND METHOD FOR MULTI-LEVEL REAL-TIME SCHEDULING ANALYSES - A system and method of multi-level scheduling analysis for a general processing module of a real-time operating system. The method includes identifying any processes within respective partitions of the general processing module and, for each identified process, determining whether the process is local-time centric or global-time centric. The method converts each global-time centric process to a local-time centric process, applies a single-level scheduling analysis technique to the processes of respective partitions, and transforms local-time based response times to global-time based response times. The method performs scheduling and response time analyses on one or more of the identified processes of respective partitions. The method can be performed on a synchronous and/or asynchronous system, and on a hierarchical scheduling system that includes a top level scheduler having a static-cyclic schedule and/or a general static schedule. A system and non-transitory computer-readable medium are also disclosed. | 2016-11-17 |
20160335116 | TASK GENERATION METHOD, TASK GENERATION APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM - A task generation method includes: receiving worker information from equipment of a worker over a network, the worker information including attribute information regarding a personal attribute of the worker; calculating degrees of association between each of pieces of analysis information resulting from analysis of pieces of data stored in a storage device connected to a computer and the worker information; extracting a piece of data to be subjected to task processing the worker is requested to perform from the pieces of data as specific data, based on the degrees of association; and generating a request task that is a task for making, to the equipment of the worker, a request for performing task processing for giving label information to the extracted specific data by using the equipment of the worker. | 2016-11-17 |
20160335117 | Hardware Transactional Memory-Assisted Flat Combining - An HTM-assisted Combining Framework (HCF) may enable multiple (combiner and non-combiner) threads to access a shared data structure concurrently using hardware transactional memory (HTM). As long as a combiner executes in a hardware transaction and ensures that the lock associated with the data structure is available, it may execute concurrently with other threads operating on the data structure. HCF may include attempting to apply operations to a concurrent data structure utilizing HTM and if the HTM attempt fails, utilizing flat combining within HTM transactions. Publication lists may be used to announce operations to be applied to a concurrent data structure. A combiner thread may select a subset of the operations in the publication list and attempt to apply the selected operations using HTM. If the thread fails in these HTM attempts, it may acquire a lock associated with the data structure and apply the selected operations without HTM. | 2016-11-17 |
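Python exposes no hardware transactional memory, so the sketch below models only the flat-combining fallback path of the HCF description in 20160335117: threads announce operations on a publication list, and whichever thread acquires the data-structure lock becomes the combiner and applies a batch on everyone's behalf. All names are illustrative, and CPython's GIL stands in for the atomicity that HTM or finer-grained synchronization would provide.

```python
# Flat-combining fallback sketch: a shared counter whose operations are
# announced on a publication list and applied in batches by a combiner.
import threading

class FlatCombiningCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()           # the data-structure lock
        self._publication_list = []             # announced, not-yet-applied ops

    def add(self, delta):
        record = {"op": delta, "done": threading.Event()}
        self._publication_list.append(record)   # announce (atomic under the GIL)
        while not record["done"].is_set():
            if self._lock.acquire(blocking=False):    # we become the combiner
                try:
                    batch, self._publication_list = self._publication_list, []
                    for r in batch:             # apply the selected operations
                        self._value += r["op"]
                        r["done"].set()
                finally:
                    self._lock.release()
            else:
                record["done"].wait(timeout=0.001)    # another combiner is at work
```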
20160335118 | MAPPING TENANT GROUPS TO IDENTITY MANAGEMENT CLASSES - Groups of a plurality of tenants are mapped to identity management classes corresponding to respective roles that grant respective permissions. The identity management classes are associated with hierarchical delegation information that specify delegation rights among the identity management classes, the delegation rights specifying rights of members of the respective identity management classes to perform delegation with respect to further members of the identity management classes. In response to a request by a first member of a first of the identity management classes to perform delegation with respect to a second member of one of the identity management classes, it is determined, based on the hierarchical delegation information, whether the first member is allowed to perform the delegation with respect to the second member. | 2016-11-17 |
20160335119 | BATCH-BASED NEURAL NETWORK SYSTEM - A multi-processor system for batched pattern recognition may utilize a plurality of different types of neural network processors and may perform batched sets of pattern recognition jobs on a two-dimensional array of inner product units (IPUs) by iteratively applying layers of image data to the IPUs in one dimension, while streaming neural weights from an external memory to the IPUs in the other dimension. The system may also include a load scheduler, which may schedule batched jobs from multiple job dispatchers, via initiators, to one or more batched neural network processors for executing the neural network computations. | 2016-11-17 |
20160335120 | ACCELERATING ALGORITHMS & APPLICATIONS ON FPGAs - A method for accelerating algorithms and applications on field-programmable gate arrays (FPGAs). The method includes: obtaining, from a host application, by a run-time configurable kernel, implemented on an FPGA, a first set of kernel input data; obtaining, from the host application, by the run-time configurable kernel, a first set of kernel operation parameters; parameterizing the run-time configurable kernel at run-time, using the first set of kernel operation parameters; and performing, by the parameterized run-time configurable kernel, a first kernel operation on the first set of kernel input data to obtain a first set of kernel output data. | 2016-11-17 |
20160335121 | SUPPORT OF NON-TRIVIAL SCHEDULING POLICIES ALONG WITH TOPOLOGICAL PROPERTIES - A system includes a scheduling unit for scheduling jobs to resources, and a library unit having a machine map of the system and a global status map of interconnections of resources. The library unit determines a free map of resources to execute the job to be scheduled, the free map indicating the interconnection of resources to which the job in a current scheduling cycle can be scheduled. A monitoring unit dispatches a job to the resources in the free map which match the resource mapping requirements of the job and fall within the free map. | 2016-11-17 |
20160335122 | AUTOMATED CAPACITY PROVISIONING METHOD USING HISTORICAL PERFORMANCE DATA - The method may include collecting performance data relating to processing nodes of a computer system which provide services via one or more applications, analyzing the performance data to generate an operational profile characterizing resource usage of the processing nodes, receiving a set of attributes characterizing expected performance goals in which the services are expected to be provided, and generating at least one provisioning policy based on an analysis of the operational profile in conjunction with the set of attributes. The at least one provisioning policy may specify a condition for re-allocating resources associated with at least one processing node in a manner that satisfies the performance goals of the set of attributes. The method may further include re-allocating, during runtime, the resources associated with the at least one processing node when the condition of the at least one provisioning policy is determined as satisfied. | 2016-11-17 |
20160335123 | DYNAMICALLY MODIFYING PROGRAM EXECUTION CAPACITY - Techniques are described for managing program execution capacity, such as for a group of computing nodes that are provided for executing one or more programs for a user. In some situations, dynamic program execution capacity modifications for a computing node group that is in use may be performed periodically or otherwise in a recurrent manner, such as to aggregate multiple modifications that are requested or otherwise determined to be made during a period of time, and with the aggregation of multiple determined modifications being able to be performed in various manners. Modifications may be requested or otherwise determined in various manners, including based on dynamic instructions specified by the user, and on satisfaction of triggers that are previously defined by the user. In some situations, the techniques are used in conjunction with a fee-based program execution service that executes multiple programs on behalf of multiple users of the service. | 2016-11-17 |
20160335124 | Systems and Methods for Task Scheduling - Disclosed herein is a computer implemented method for scheduling a new task. The method comprises: receiving task data in respect of the new task, the task data comprising at least information enabling the new task to be uniquely identified and a target runtime for the new task; recording the received task data in a data structure and determining if a new job needs to be registered with an underlying job scheduler. | 2016-11-17 |
20160335125 | METHOD AND PROCESSOR FOR IMPLEMENTING THREAD AND RECORDING MEDIUM THEREOF - A processor and corresponding method are described including cores having a thread set allocated based on a pre-set implementation order, and a controller configured to receive scheduling information determined based on an implementation pattern regarding the allocated thread set from one of the cores and transmit the scheduling information to another of the cores. The one of the cores determines the scheduling information according to characteristics of an application when implementation of the thread set is completed. Each of the cores re-determines an implementation order regarding the allocated thread set based on the determined scheduling information. | 2016-11-17 |
20160335126 | DETERMINISTIC REAL TIME BUSINESS APPLICATION PROCESSING IN A SERVICE-ORIENTED ARCHITECTURE - Methods, apparatus, and products for deterministic real time business application processing in a service-oriented architecture (‘SOA’), the SOA including SOA services, each SOA service carrying out a processing step of the business application, where each SOA service is a real time process executable on a real time operating system of a generally programmable computer. Deterministic real time business application processing according to embodiments of the present invention includes configuring the business application with real time processing information and executing the business application in the SOA in accordance with the real time processing information. | 2016-11-17 |
20160335127 | SYSTEM AND METHOD FOR DYNAMIC GRANULARITY CONTROL OF PARALLELIZED WORK IN A PORTABLE COMPUTING DEVICE (PCD) - Systems and methods for dynamic granularity control of parallelized work in a heterogeneous multi-processor portable computing device (PCD) are provided. During operation a first parallelized portion of an application executing on the PCD is identified. The first parallelized portion comprises a plurality of threads for parallel execution on the PCD. Performance information is obtained about a plurality of processors of the PCD, each of the plurality of processors corresponding to one of the plurality of threads. A number M of workload partition granularities for the plurality of threads is determined, and a total execution cost for each of the M workload partition granularities is determined. An optimal granularity comprising the one of the M workload partition granularities with a lowest total execution cost is determined, and the first parallelized portion is partitioned into a plurality of workloads having the optimal granularity. | 2016-11-17 |
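The granularity sweep in 20160335127 amounts to evaluating a cost function at M candidate chunk sizes and taking the minimum. The sketch below assumes a stand-in cost model (per-chunk dispatch overhead plus a makespan term over P processors); the described method would use measured performance information instead.

```python
# Illustrative granularity selection: evaluate a total execution cost for each
# of M candidate chunk sizes and pick the cheapest. The cost model is assumed.
def pick_granularity(num_items, granularities, num_procs,
                     per_chunk_overhead, per_item_cost):
    def total_cost(g):
        chunks = -(-num_items // g)              # ceil division
        rounds = -(-chunks // num_procs)         # chunks execute P at a time
        return rounds * (g * per_item_cost + per_chunk_overhead)
    return min(granularities, key=total_cost)

# Coarse chunks amortize dispatch overhead but idle processors in the final
# round; the sweep trades the two off (here 256 wins).
best = pick_granularity(10_000, [16, 64, 256, 1024], num_procs=8,
                        per_chunk_overhead=50.0, per_item_cost=1.0)
```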
20160335128 | ALLOCATION OF JOB PROCESSES TO HOST COMPUTING SYSTEMS BASED ON ACCOMMODATION DATA - Systems, methods, and software described herein facilitate the allocation of large scale processing jobs to host computing systems. In one example, a method of allocating job processes to a plurality of host computing systems in a large scale processing environment includes identifying a job process for the large scale processing environment, and obtaining accommodation data for a plurality of host computing systems in the large scale processing environment. The method further provides identifying a host computing system in the plurality of host computing systems for the job process based on the accommodation data, and initiating a virtual node on the host computing system for the job process. | 2016-11-17 |
20160335129 | LOGICAL PROCESSING FOR CONTAINERS - Some embodiments provide a local network controller that manages a first managed forwarding element (MFE) operating to forward traffic on a host machine for several logical networks and configures the first MFE to forward traffic for a set of containers operating within a container virtual machine (VM) that connects to the first MFE. The local network controller receives, from a centralized network controller, logical network configuration information for a logical network to which the set of containers logically connect. The local network controller receives, from the container VM, a mapping of a tag value used by a second MFE operating on the container VM to a logical forwarding element of the logical network to which the set of containers connect. The local network controller configures the first MFE to apply the logical network configuration information to data messages received from the container VM that are tagged with the tag value. | 2016-11-17 |
20160335130 | INTERCONNECT STRUCTURE TO SUPPORT THE EXECUTION OF INSTRUCTION SEQUENCES BY A PLURALITY OF ENGINES - A global interconnect system. The global interconnect system includes a plurality of resources having data for supporting the execution of multiple code sequences and a plurality of engines for implementing the execution of the multiple code sequences. A plurality of resource consumers are within each of the plurality of engines. A global interconnect structure is coupled to the plurality of resource consumers and coupled to the plurality of resources to enable data access and execution of the multiple code sequences, wherein the resource consumers access the resources through a per cycle utilization of the global interconnect structure. | 2016-11-17 |
20160335131 | Dynamic Resource Scheduling - Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically corresponding to a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include but are not limited to those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance. | 2016-11-17 |
20160335132 | PROCESSOR THREAD MANAGEMENT - Provided are a computer program product, system, and method for managing processor threads of a plurality of processors. In one embodiment, a parameter of performance of the computing system is measured, and the configurations of one or more processor nodes are dynamically adjusted as a function of the measured parameter of performance. In this manner, the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system may be dynamically adjusted in real time as the system operates to improve the performance of the system as it operates under various operating conditions. It is appreciated that systems employing processor thread management in accordance with the present description may provide other features in addition to or instead of those described herein, depending upon the particular application. | 2016-11-17 |
20160335133 | TASKS_RCU Detection Of Tickless User Mode Execution As A Quiescent State - A TASKS_RCU grace period is detected whose quiescent states comprise a task undergoing a voluntary context switch, a task running in user mode, and a task running in idle-mode. A list of all runnable tasks is built. The runnable task list is scanned in one or more scan passes. Each scan pass through the runnable task list searches to identify tasks that have passed through a quiescent state by either performing a voluntary context switch, running in user mode, or running in idle-mode. If found, such quiescent state tasks are removed from the runnable task list. Searching performed during a scan pass includes identifying quiescent state tickless user mode tasks that have been running continuously in user mode on tickless CPUs that have not received a scheduling clock interrupt since commencement of the TASKS_RCU grace period. If the runnable task list is empty, the TASKS_RCU grace period is ended. | 2016-11-17 |
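The scan-pass logic of 20160335133 (and its companion 20160335136 below) can be modeled compactly. The following sketch uses illustrative task attributes in place of kernel state; a real implementation operates on the kernel's task structures and scheduling-clock tick bookkeeping.

```python
# Simplified model of one TASKS_RCU scan pass. Task attributes are
# illustrative stand-ins for kernel state and tick bookkeeping.
def is_quiescent(task, gp_start_time):
    return (task.voluntary_context_switches > 0   # voluntary context switch
            or task.in_user_mode                  # includes tickless user mode:
            or task.in_idle_loop)                 # running on a tickless CPU with
                                                  # no tick since gp_start_time

def scan_pass(runnable_list, gp_start_time):
    """One pass over the runnable-task list: drop every task already observed
    in a quiescent state. The grace period ends once repeated passes empty
    the list."""
    return [t for t in runnable_list if not is_quiescent(t, gp_start_time)]
```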
20160335134 | DETERMINING STORAGE TIERS FOR PLACEMENT OF DATA SETS DURING EXECUTION OF TASKS IN A WORKFLOW - Provided are a computer program product, system, and method for determining storage tiers for placement of data sets during execution of tasks in a workflow. A representation of a workflow execution pattern of tasks for a job indicates a dependency of the tasks and data sets operated on by the tasks. A determination is made of an assignment of the data sets for the tasks to a plurality of the storage tiers based on the dependency of the tasks indicated in the workflow execution pattern. A moving is scheduled of a subject data set of the data sets operated on by a subject task of the tasks that is subject to an event to an assigned storage tier indicated in the assignment for the subject task. The moving of the data set is scheduled to be performed in response to the event with respect to the subject task. | 2016-11-17 |
20160335135 | METHOD FOR MINIMIZING LOCK CONTENTION AMONG THREADS WHEN TASKS ARE DISTRIBUTED IN MULTITHREADED SYSTEM AND APPARATUS USING THE SAME - A method for minimizing lock contention among threads in a multithreaded system is disclosed. The method includes the steps of: (a) a processor causing a control thread, if information on a task is acquired by the control thread, to acquire a lock to thereby put the information on a task into a specific task queue which satisfies a certain condition among multiple task queues; and (b) the processor causing a specified worker thread corresponding to the specific task queue among multiple worker threads, if the lock held by the control thread is released, to acquire a lock to thereby get a task stored in the specific task queue. | 2016-11-17 |
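The queue-placement idea in 20160335135 can be illustrated with Python's per-queue locking. In this sketch the placement condition is simply "shortest queue", an assumption; the point is that the control thread and each dedicated worker only ever contend on a single queue's lock, never on one global lock.

```python
# One dedicated worker per task queue; each queue.Queue has its own lock.
import queue, threading

task_queues = [queue.Queue() for _ in range(4)]

def control_thread(tasks):
    for task in tasks:
        # Placement condition: the least-loaded queue. Only that queue's
        # internal lock is acquired, so workers draining the other queues
        # never contend with the control thread.
        min(task_queues, key=lambda q: q.qsize()).put(task)

def worker(my_queue):
    while True:
        task = my_queue.get()      # contends only on this queue's lock
        if task is None:           # sentinel: shut down
            return
        task()

workers = [threading.Thread(target=worker, args=(q,)) for q in task_queues]
for w in workers:
    w.start()
control_thread([lambda i=i: print("task", i) for i in range(8)])
for q in task_queues:
    q.put(None)                    # one sentinel per dedicated worker
for w in workers:
    w.join()
```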
20160335136 | TASKS_RCU Detection Of Tickless User Mode Execution As A Quiescent State - A TASKS_RCU grace period is detected whose quiescent states comprise a task undergoing a voluntary context switch, a task running in user mode, and a task running in idle-mode. A list of all runnable tasks is built. The runnable task list is scanned in one or more scan passes. Each scan pass through the runnable task list searches to identify tasks that have passed through a quiescent state by either performing a voluntary context switch, running in user mode, or running in idle-mode. If found, such quiescent state tasks are removed from the runnable task list. Searching performed during a scan pass includes identifying quiescent state tickless user mode tasks that have been running continuously in user mode on tickless CPUs that have not received a scheduling clock interrupt since commencement of the TASKS_RCU grace period. If the runnable task list is empty, the TASKS_RCU grace period is ended. | 2016-11-17 |
20160335137 | Preemptible-RCU CPU Hotplugging While Maintaining Real-Time Response - A grace period detection technique for a preemptible read-copy update (RCU) implementation that uses a combining tree for quiescent state tracking. When a leaf level bitmask indicating online/offline CPUs is fully cleared due to all of its assigned CPUs going offline as a result of hotplugging operations, the bitmask state is not immediately propagated to the root level of the combining tree as in prior art RCU implementations. Instead, propagation is deferred until all tasks are removed from an associated leaf level task list tracking tasks that were preempted inside an RCU read-side critical section. Deferring bitmask propagation obviates the need to migrate the task list to the combining tree root level in order to prevent premature grace period termination. The task list can remain at the leaf level. In this way, CPU hotplugging is accommodated while avoiding excessive degradation of real-time latency stemming from the now-eliminated task list migration. | 2016-11-17 |
20160335138 | DIGITAL ASSISTANT EXTENSIBILITY TO THIRD PARTY APPLICATIONS - A digital assistant includes an extensibility client that interfaces with application extensions that are built by third-party developers so that various aspects of application user experiences, content, or features may be integrated into the digital assistant and rendered as native digital assistant experiences. Application extensions can use a variety of services provided from cloud-based and/or local sources such as language/vocabulary, user preferences, and context services that add intelligence and contextual relevance while enabling the extensions to plug in and operate seamlessly within the digital assistant context. Application extensions may also access and utilize general digital assistant functions, data structures, and libraries exposed by the services and implement application domain-specific context and behaviors using the programming features captured in the extension. Such extensibility to third party applications can broaden the scope of the database of information that the digital assistant may use to answer questions and perform actions for the user. | 2016-11-17 |
20160335139 | ACTIVITY TRIGGERS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for action items, user defined actions, and triggering activities. In one aspect, a method includes receiving, at a user device, input of a user defined action, the user defined action including a plurality of terms; receiving, by the user device, a selection of a user defined trigger activity, the trigger activity indicating user performance of an activity to trigger the user defined action to be presented; determining at least one environmental condition of an environment in which the user device is located; determining, based on user information and the at least one environmental condition, a user performance of the activity indicated by the trigger activity; and presenting, by the user device, a notification of the user defined action to the user device of the user. | 2016-11-17 |
20160335140 | ACTIVATING DEVICE FUNCTIONS BASED ON CONFIGURATIONS OF DEVICE MODULES - Embodiments are provided for managing operation of an electronic device based on the connection(s) of hardware module(s) to the electronic device via a support housing. According to certain aspects, the electronic device may activate and identify a hardware module that is connected to a controlling position of the support housing. The electronic device may identify a function associated with the hardware module, where the function may be a built-in function of the hardware module itself or of the electronic device. The electronic device may accordingly activate the identified function. | 2016-11-17 |
20160335141 | COMMAND-BASED STORAGE SCENARIO PREDICTION - An apparatus, method, system, and program product are disclosed for command-based storage scenario prediction. A registration module registers a listener to receive notifications associated with a scenario, which comprises a predefined sequence of a plurality of commands. A command module determines an initial scenario sequence comprising a subset of the plurality of commands of the scenario. A monitor module detects execution of commands on a device. A notification module sends a notification to the listener in response to detecting execution of a sequence of commands comprising the initial scenario sequence. The notification includes a hint indicating to the listener to prepare for one or more remaining commands of the scenario. | 2016-11-17 |
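The notification flow in 20160335141 pairs a registered scenario with a monitor that watches executed commands. A minimal sketch, with an assumed prefix length defining the initial scenario sequence and illustrative names throughout:

```python
# Scenario prediction sketch: notify a listener, with a hint about the
# remaining commands, once the initial scenario sequence is observed.
class ScenarioMonitor:
    def __init__(self, scenario, prefix_len, listener):
        self.scenario = scenario               # full predefined command sequence
        self.prefix = scenario[:prefix_len]    # initial scenario sequence
        self.listener = listener
        self.recent = []

    def on_command_executed(self, command):
        self.recent.append(command)
        self.recent = self.recent[-len(self.prefix):]   # sliding window
        if self.recent == self.prefix:
            remaining = self.scenario[len(self.prefix):]
            self.listener(hint={"prepare_for": remaining})

monitor = ScenarioMonitor(
    scenario=["open", "seek", "read", "read", "close"],
    prefix_len=2,
    listener=lambda hint: print("prefetch for:", hint["prepare_for"]),
)
for cmd in ["open", "seek"]:       # initial sequence detected -> notification
    monitor.on_command_executed(cmd)
```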
20160335142 | Notification Service Processing Method for Business Process Management and Business Process Management Engine - The present disclosure discloses a notification service processing method for business process management and a business process management engine. The method includes parsing a definition of a business process when the business process starts running, and creating a business process instance for a business activity when execution reaches the business activity, where an event listener is configured for the business activity, and where at least one notification service is configured for the event listener; parsing, based on the created business process instance, the event listener configured for the business activity; and invoking the notification service configured for the event listener, to send a notification message to a corresponding party, when the event listener learns by listening that a notification service trigger condition is met. In this way, complexity of notification service processing in business process management is reduced. | 2016-11-17 |
20160335143 | SYSTEM AND METHOD FOR DETERMINING CONCURRENCY FACTORS FOR DISPATCH SIZE OF PARALLEL PROCESSOR KERNELS - Disclosed is a method of determining concurrency factors for an application running on a parallel processor. Also disclosed is a system for implementing the method. In an embodiment, the method includes running at least a portion of the kernel as sequences of mini-kernels, each mini-kernel including a number of concurrently executing workgroups. The number of concurrently executing workgroups is defined as a concurrency factor of the mini-kernel. A performance measure is determined for each sequence of mini-kernels. From the sequences, a particular sequence is chosen that achieves a desired performance of the kernel, based on the performance measures. The kernel is executed with the particular sequence. | 2016-11-17 |
20160335144 | ADAPTIVE READ DISTURB RECLAIM POLICY - Memory systems may include a memory including a plurality of memory blocks, and a controller suitable for incrementing a first counter corresponding to a block of the plurality of blocks when the block is read, incrementing a second counter when the first counter reaches a predefined count number, determining an error count of the block when the second counter is incremented, and initiating a reclaim function when the error count exceeds an error threshold. | 2016-11-17 |
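The two-counter policy of 20160335144 translates directly into code. A minimal sketch with illustrative interval and threshold values; the measurement and reclaim steps are stand-ins for controller-specific operations.

```python
# Minimal sketch of the two-counter read-disturb reclaim policy.
class ReadDisturbReclaim:
    def __init__(self, check_interval=10_000, error_threshold=32):
        self.check_interval = check_interval   # the predefined count number
        self.error_threshold = error_threshold
        self.read_count = 0                    # first counter (per block)
        self.check_count = 0                   # second counter

    def on_block_read(self, block):
        self.read_count += 1
        if self.read_count >= self.check_interval:
            self.read_count = 0
            self.check_count += 1              # second counter ticks
            if self.measure_error_count(block) > self.error_threshold:
                self.reclaim(block)

    def measure_error_count(self, block):
        # Stand-in: re-read the block and count ECC-corrected bits.
        return 0

    def reclaim(self, block):
        # Stand-in: relocate still-valid data, then erase the disturbed block.
        pass
```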
20160335145 | Programmable Device, Error Storage System, and Electronic System Device - The present invention aims to provide a programmable device with a configuration memory that can hold, even during power off, the state of an abnormal occurrence that is difficult to anticipate, such as a failure in the programmable device caused by terrestrial radiation striking the configuration memory, in order to improve reproducibility in device testing based on the held error information. The programmable device with the configuration memory includes: an error detection section for detecting an error in the configuration memory, and outputting the detected error, as well as the address at which the error occurred, as error information; and an error information holding section provided with a non-volatile memory to store the output error information. | 2016-11-17 |
20160335146 | NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM FOR SIGN DETECTION, SIGN DETECTION DEVICE, AND SIGN DETECTION METHOD - A non-transitory computer-readable recording medium having stored therein a program for causing a computer to execute a process for detecting a sign, the process includes obtaining message information output from one or a plurality of information processing devices; obtaining configuration information in the one or the plurality of information processing devices; storing the obtained message information and the obtained configuration information in a common format; and outputting predetermined message information and predetermined configuration information according to comparison of a predetermined pattern described in the common format and the message information and the configuration information stored in the common format. | 2016-11-17 |
20160335147 | EXTENDED INTERFRAME SPACE (EIFS) EXEMPTIONS - Certain aspects of the present disclosure relate to selecting a deferral period after detecting an error in a received packet by an apparatus for wireless communications. The apparatus generally includes an interface configured to obtain a frame received over a medium, and a processing system configured to detect an occurrence of an error when processing the frame, determine an intended recipient of the frame based on information included in the frame, and select a deferral period, after detecting the occurrence of the error, during which the apparatus refrains from transmitting on the medium, wherein the selection is based, at least in part, on the determination. | 2016-11-17 |
20160335148 | LIVE ERROR RECOVERY - A packet is identified at a port of a serial data link, and it is determined that the packet is associated with an error. Entry into an error recovery mode is initiated based on the determination that the packet is associated with the error. Entry into the error recovery mode can cause the serial data link to be forced down. In one aspect, forcing the data link down causes all subsequent inbound packets to be dropped and all pending outbound requests and completions to be aborted during the error recovery mode. | 2016-11-17 |
20160335149 | Peripheral Watchdog Timer - In some embodiments, a circuit may include a plurality of peripherals and a peripheral watchdog timer circuit coupled to at least one of the plurality of peripherals. The peripheral watchdog timer circuit may be configured to count clock cycles and concurrently to detect activity associated with at least one of the plurality of peripherals. The peripheral watchdog timer circuit may be configured to reset a count in response to detecting the activity. In some embodiments, the peripheral watchdog timer circuit may be configured to generate an alert signal when the count exceeds a threshold count before detecting the activity. In some embodiments, the peripheral watchdog timer circuit is configured to initiate a reset operation when the alert is not serviced within a period of time. | 2016-11-17 |
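The counting behavior in 20160335149 can be modeled as a small state machine. In this sketch the service path (service_alert) and the reset consequence (a flag) are assumptions standing in for hardware signals.

```python
# State-machine model of the peripheral watchdog described above.
class PeripheralWatchdog:
    def __init__(self, threshold_count, service_deadline):
        self.threshold_count = threshold_count     # cycles of silence tolerated
        self.service_deadline = service_deadline   # cycles allowed to service alert
        self.count = 0
        self.alert_age = None                      # None: no alert pending
        self.reset_requested = False

    def on_peripheral_activity(self):
        self.count = 0                             # activity detected: reset count

    def service_alert(self):
        self.alert_age = None                      # software acknowledged the alert

    def on_clock_cycle(self):
        self.count += 1
        if self.alert_age is not None:
            self.alert_age += 1
            if self.alert_age > self.service_deadline:
                self.reset_requested = True        # alert unserviced: reset
        elif self.count > self.threshold_count:
            self.alert_age = 0                     # raise the alert signal

wd = PeripheralWatchdog(threshold_count=3, service_deadline=2)
for _ in range(7):
    wd.on_clock_cycle()                            # no activity at all
assert wd.reset_requested
```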
20160335150 | PERSONALIZING ERROR MESSAGES BASED ON USER BEHAVIOR - An approach is provided for personalizing an error message for a user. Corrective actions performed by the user are monitored. The corrective actions include the user visiting online forums. The corrective actions cause a resolution of an instance of an error condition described in the error message and which specifies an error in an operation of a software application. Based on the monitored corrective actions, sources of information accessed by the user to resolve the error condition instance are determined. The sources include online forums visited by the user. After resolution of the error condition instance, another instance of the same error condition is detected, and in response, the error message is augmented with a summary of the sources including the online forums and/or hyperlinks that access the sources including the online forums. The augmented error message is presented to the user. | 2016-11-17 |
20160335151 | SYSTEMS AND METHODS FOR PROVIDING SERVICE AND SUPPORT TO COMPUTING DEVICES - Systems and methods for providing service and support to computing devices. In some embodiments, an Information Handling System (IHS) includes a Basic I/O System (BIOS) and a memory coupled to the BIOS, the memory including program instructions stored thereon that, upon execution by the IHS, cause the IHS to: determine that the IHS is operating in a degraded state; and initiate one or more support, diagnostics, or remediation operations in response to the determination. | 2016-11-17 |
20160335152 | Self-Stabilizing Network Nodes in Mobile Discovery System - The disclosure relates to cloud-based mobile discovery networks. For example, a mobile discovery network may include a network responsive to successful watermark detection or fingerprint extraction. One claim recites a cloud-based computing resolver cell in a mobile discovery network, the mobile discovery network having a cloud-based traffic router for forwarding requests from remote devices. The resolver cell includes: memory for storing response information; and one or more processors programmed to: combine results from a third party inquiry, a traffic router health check, and an internal component or processing check within a certain time period to determine whether to enter a stabilization mode; enter the stabilization mode when a determination indicates stabilization is warranted; and verify, for a predetermined period, the status of the resolver cell before exiting the stabilization mode. Of course other claims and combinations are provided as well. | 2016-11-17 |
20160335153 | RECOVERY MECHANISMS ACROSS STORAGE NODES THAT REDUCE THE IMPACT ON HOST INPUT AND OUTPUT OPERATIONS - Provided are a method, a system, and a computer program product in which a storage controller determines one or more resources that are impacted by an error. A cleanup of tasks associated with the one or more resources that are impacted by the error is performed, to recover from the error, wherein host input/output (I/O) operations continue to be processed, and wherein tasks associated with other resources continue to execute. | 2016-11-17 |
20160335154 | ERROR CORRECTION CODING REDUNDANCY BASED DATA HASHING - Storage infrastructures and methods that generate hash values based on error correction codes. A system is provided that includes: a code retrieval system implemented on a host having logic for issuing a redundancy read command to a storage system to retrieve a redundancy code for an identified data block; and a hashing system implemented on the host for hashing the redundancy code to generate a hash value based on the redundancy code. A storage system is also provided that includes: a memory for storing data blocks and associated redundancy codes; and a controller having: an input/output for receiving a hash value read command for a specified data block from a host and returning a hash value; a decoding system that extracts a redundancy code associated with the specified data block; and an in-memory hashing system for computing a hash operation on the redundancy code. | 2016-11-17 |
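The host-side flow of 20160335154 needs only two steps: issue the redundancy read, then hash the returned code. In the sketch below, redundancy_read is a hypothetical command simulated with XOR parity over fake block data; a real storage system would return its ECC or RAID redundancy bytes. Because the redundancy code is a deterministic function of the block contents, hashing the small code can stand in for hashing the whole block.

```python
# Hash-from-redundancy sketch; redundancy_read is a hypothetical command.
import hashlib

def redundancy_read(block_id: int) -> bytes:
    """Hypothetical storage-side command: return the redundancy bytes kept
    for a block. Simulated here as one-byte XOR parity over fake data."""
    data = (b"%d" % block_id) * 512
    parity = 0
    for b in data:
        parity ^= b
    return bytes([parity])

def block_hash_via_redundancy(block_id: int) -> str:
    # The redundancy code is small but data-dependent, so its hash can serve
    # as a cheap fingerprint of the stored block.
    return hashlib.sha256(redundancy_read(block_id)).hexdigest()

print(block_hash_via_redundancy(7))
```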
20160335155 | Method and Device for Storing Data, Method and Device for Decoding Stored Data, and Computer Program Corresponding Thereto - A method is provided for storing data. The method implements an error-correction code defining a set of variables linked by constraints, each variable being associated with source data and/or redundancy data. The method implements the following steps: determining variables forming at least one stopping set of said code, determining a scheme for allocating said variables, allocating a distinct storage carrier to each variable forming a stopping set, distributing said variables, or data associated with said variables, to said storage carriers according to said allocation scheme. | 2016-11-17 |
20160335156 | ASYMMETRIC ERROR CORRECTION AND FLASH-MEMORY REWRITING USING POLAR CODES - Techniques are disclosed for generating codes for representation of data in memory devices that may avoid the block erasure operation in changing data values. Data values comprising binary digits (bits) can be encoded and decoded using the generated codes, referred to as codewords, such that the codewords may comprise a block erasure-avoiding code, in which the binary digits of a data message m can be encoded such that the encoded data message can be stored into multiple memory cells of a data device and, once a memory cell value is changed from a first logic value to a second logic value, the value of the memory cell may remain at the second logic value, regardless of subsequently received messages, until a block erasure operation on the memory cell. Similarly, a received data message comprising an input codeword, in which source data values of multiple binary digits have been encoded with the disclosed block erasure-avoiding code, can be decoded in the data device to recover an estimated source data message. | 2016-11-17 |
20160335157 | SEMICONDUCTOR MEMORY SYSTEM AND DATA WRITING METHOD - On the basis of data addresses indicative of write bit positions of each of the write data pieces in each of blocks, write page addresses indicative of pages having each of the write data pieces written thereto in each of the blocks are detected. At least one write data piece is incorporated into each of the page data pieces indicated by the write page addresses among k page data pieces corresponding to the k pages, the page data pieces having the write data pieces incorporated therein are used as write page data pieces, and an error-correction encoding process is applied to each of the write page data pieces to obtain encoded write data pieces. Then, a voltage based on the encoded write data pieces is applied to each of the memory cells belonging to the pages indicated by the write page addresses. | 2016-11-17 |
20160335158 | HYBRID DISTRIBUTED STORAGE SYSTEM - There is provided a distributed object storage system that includes several performance optimizations with respect to efficiently storing data objects when coping with a desired concurrent failure tolerance of concurrent failures of storage elements which is greater than two and with respect to optimizing encoding/decoding overhead and the number of input and output operations at the level of the storage elements. | 2016-11-17 |
20160335159 | MEMORY CONTROLLER UTILIZING AN ERROR CODING DISPERSAL FUNCTION - A computing core includes memory and a memory controller. The memory is partitioned into a user section and a kernel section. The user section is divided into a first set of pillars and the kernel section is divided into a second set of pillars. The memory controller is operable to dispersed storage error encode a data segment of user data into a set of encoded user data slices. The memory controller is further operable to store the set of encoded user data slices in the first set of pillars of the user section. The memory controller is further operable to dispersed storage error encode a data segment of system data into a set of encoded system data slices. The memory controller is further operable to store the set of encoded system data slices in the second set of pillars of the kernel section. | 2016-11-17 |
20160335160 | MEMORY DEVICE WITH SPECULATED BIT FLIP THRESHOLD - Technologies are described for systems, devices and methods effective to decode data read from a memory. Coded data may be stored in a buffer. A parity check syndrome vector may be calculated by a bit flip module, based on the coded data and based on a parity matrix. The parity check syndrome vector may include unsatisfied bits. The parity check syndrome vector may be stored in the buffer. The bit flip module may calculate a speculated bit flip threshold based on a feature of the parity matrix. The bit flip module may determine, based on the parity check syndrome vector, a number of unsatisfied parity checks participated in by a particular bit of the coded data. The bit flip module may flip the particular bit in response to the number of unsatisfied parity checks for the particular bit being greater than or equal to the speculated bit flip threshold. | 2016-11-17 |
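The decoding loop in 20160335160 is a bit-flip, LDPC-style iteration. The sketch below derives the speculated threshold from one plausible "feature of the parity matrix" (half the maximum column weight, an assumption) and flips every bit whose count of unsatisfied parity checks meets it.

```python
# Compact bit-flip decoding sketch: compute the syndrome s = H·c (mod 2),
# count unsatisfied checks per bit, and flip bits at or above the threshold.
import numpy as np

def bit_flip_decode(H, codeword, max_iters=50):
    c = codeword.copy()
    threshold = max(1, int(H.sum(axis=0).max()) // 2)   # speculated threshold
    for _ in range(max_iters):
        syndrome = H @ c % 2                 # 1s mark unsatisfied parity checks
        if not syndrome.any():
            return c                         # all checks satisfied: done
        unsatisfied = H.T @ syndrome         # per-bit count of failed checks
        c = (c + (unsatisfied >= threshold)) % 2
    return c                                 # give up after max_iters

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
received = np.array([1, 0, 1, 0, 1, 0])      # possibly corrupted read
decoded = bit_flip_decode(H, received)
```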
20160335161 | A PATTERN BASED CONFIGURATION METHOD FOR MINIMIZING THE IMPACT OF COMPONENT FAILURES - A configuration is generated for a software that is to be deployed for providing high service availability to satisfy configuration requirements. One or more configuration patterns are identified, each of which specifies a set of attribute values and an actual recovery action for a failed component as a configuration option of the software. The unchangeable attribute values of the software are matched with the configuration patterns to obtain a matching configuration pattern, whose actual recovery action incurs a smallest component failure recovery impact zone. The matching configuration pattern is selected as at least a portion of the configuration of the software. Then the changeable attribute values of the software are set to the corresponding attribute values of the matching configuration pattern to satisfy the configuration requirements. | 2016-11-17 |
20160335162 | OPTIMIZING THE NUMBER AND TYPE OF DATABASE BACKUPS TO ACHIEVE A GIVEN RECOVERY TIME OBJECTIVE (RTO) - A method of optimizing the number and type of database backups to achieve a given RTO is provided and may include receiving a RTO and receiving a heuristic for determining an amount of unencumbered processing time. A type of the next backup is determined, wherein the type of next backup is an incremental backup when the sum of the heuristic and the times to: restore the latest full backup, restore zero or more incremental backups, complete a current incremental backup, and perform a full backup is less than the received RTO; else the type of the next backup is a full backup. A time for the next backup is scheduled based on the received RTO being a total of an amount of time to: complete the type of next backup; rollforward zero or more transaction log records; and restore at least one backup. | 2016-11-17 |
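The scheduling rule in 20160335162 is a single inequality. A worked form, with all durations in the same unit and purely illustrative numbers:

```python
# The next backup may be incremental only if restoring (last full + existing
# incrementals + the new incremental), plus a full-backup window and the
# slack heuristic, still fits inside the RTO.
def next_backup_type(rto, heuristic_slack, t_restore_full,
                     t_restore_incrementals, t_new_incremental, t_full_backup):
    projected_recovery = (heuristic_slack + t_restore_full +
                          t_restore_incrementals + t_new_incremental +
                          t_full_backup)
    return "incremental" if projected_recovery < rto else "full"

# e.g. RTO of 120 min: 10 + 40 + 25 + 5 + 30 = 110 < 120 -> incremental
assert next_backup_type(120, 10, 40, 25, 5, 30) == "incremental"
```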
20160335163 | IMPORTATION, PRESENTATION, AND PERSISTENT STORAGE OF DATA - Described are methods, systems and computer readable media for the importation, presentation, and persistent storage of data. | 2016-11-17 |
20160335164 | RECOVERING A VOLUME TABLE AND DATA SETS - Provided are a computer program product, system, and method for recovering a volume table and data sets from a volume. Content from a backup volume table comprising a valid backup of a volume table from backup of the volume is processed to generate a recovery volume table for a recovery volume. The data sets in the volume are processed to determine whether they are valid. The valid data sets are moved to the recovery volume. A data recovery operation is initiated for the data sets determined not to be valid. | 2016-11-17 |
20160335165 | SYSTEM AND METHOD FOR PROVIDING SERVER APPLICATION SERVICES WITH HIGH AVAILABILITY AND A MANY-TO-ONE HARDWARE CONFIGURATION - A suite of network-based services, such as the services corresponding to the server application distributed by Microsoft® SharePoint™, may be provided to users with high availability. The suite of network-based services may include browser-based collaboration functions, process management functions, index and search functions, document-management functions, help and help search functions, and/or other functions. A plurality of computing devices functioning as servers may be backed up by a single computing device. | 2016-11-17 |
20160335166 | SMART STORAGE RECOVERY IN A DISTRIBUTED STORAGE SYSTEM - Embodiments include obtaining at least one system metric of a distributed storage system, generating one or more recovery parameters based on the at least one system metric, identifying at least one policy associated with data stored in a storage node of a plurality of storage nodes in the distributed storage system, and generating a recovery plan for the data based on the one or more recovery parameters and the at least one policy. In more specific embodiments, the recovery plan includes a recovery order for recovering the data. Further embodiments include initiating a recovery process to copy replicas of the data from a second storage node to a new storage node, wherein the replicas of the data are copied according to the recovery order indicated in the recovery plan. | 2016-11-17 |
20160335167 | STEPPING AND APPLICATION STATE VIEWING BETWEEN POINTS - Various technologies and techniques are disclosed for providing stepping and state viewing in a debugger application. A start and end breakpoint are assigned. Source code execution begins, and upon reaching the start breakpoint, a logging feature begins storing one or more values that may be impacted upon execution of code between the start breakpoint and an end breakpoint. More lines of source code are executed until the end breakpoint is reached. When the end breakpoint is reached, the debugger is put into break mode. While in break mode, a playback feature is provided to allow a user to play back a path of execution that occurred between the start breakpoint and the end breakpoint. The playback feature uses at least some of the values that were stored with the logging feature to show how each referenced variable changed in value. | 2016-11-17 |
20160335168 | REAL-TIME ANALYSIS OF APPLICATION PROGRAMMING INTERFACES - Systems and methods disclosed herein may include real-time analysis of application programming interfaces (APIs). The method may include detecting that the programming code input is associated with at least a portion of an application programming interface (API). At least one coding error associated with the API may be detected based on static analysis of the code. The static analysis may include receiving an indication of a browser version, and comparing the received code with programming code for the API verified for the browser version, to detect the at least one coding error. Information identifying at least a first remediation action for correcting the at least one coding error may be received based at least in part on the at least one browser version. The at least a first remediation action may be provided for display to a user of the computing device. | 2016-11-17 |
20160335169 | APPLICATION-CENTRIC ANALYSIS OF LEAK SUSPECT OPERATIONS - To identify a source of a memory leak in an application, a pattern of objects is identified in an object hierarchy of a heap dump, the pattern including an indication of the memory leak. The pattern is matched with a metadata of the application. A static entry in the metadata describes a relationship between a component of the application and an object of a class used in the component. A flow entry in the metadata describes a relationship between a pattern of instantiation of a set of objects corresponding to a set of classes and an operation performed using the application. When the pattern matches the flow entry in the flow section of the metadata, a conclusion is drawn that the memory leak is caused in the operation identified in the flow entry. A portion of a code that participates in the operation is selected for modification. | 2016-11-17 |
20160335170 | MODEL CHECKING DEVICE FOR DISTRIBUTED ENVIRONMENT MODEL, MODEL CHECKING METHOD FOR DISTRIBUTED ENVIRONMENT MODEL, AND MEDIUM - A model checking device for a distributed-environment-model according to the present invention includes: a distributed-environment-model search unit that, when obtaining information indicating a distributed-environment-model, adopts a first state as a start point, searches the states attained by the distributed-environment-model by executing straight line movements from the first state to a second state, which is an end position, and determines whether each searched state satisfies a predetermined property; a searched state management unit that stores the states searched in the past; a searched-transition-history management unit that stores the order of the transitions of the straight line movements made in the past; and a searched state transition association information management unit that stores each transition made when moving to another state in a past search in such a manner that the transition is associated with the corresponding searched state. | 2016-11-17 |
20160335171 | TEST AUTOMATION MODELING - A method of modeling elements in an automated test of a software application is disclosed. An attribute is created for a set of user-interface elements of a selected species. The attribute defines interactions for the selected species. The user-interface elements are reduced to a primitive type. An application program interface can be used to apply the attribute to the software application. | 2016-11-17 |
20160335172 | DEBUGGING SYSTEM - A method is disclosed of generating program analysis data for analysing the operation of a computer program. The method includes running a first instrumented version of machine code representing the program, wherein said running defines a reference execution of said program; capturing a log of non-deterministic events during the reference execution so that states of the processor and memory can be reproduced when the program is re-run; generating a second instrumented version of the machine code to replay execution of said machine code representing the program and to capture and store program state information, wherein said program state information comprises one or both of one or more values of registers of said processor and one or more values of memory locations used by said program; running said instrumented machine code whilst reproducing said non-deterministic events so as to reproduce said reference execution; and capturing said program state information whilst reproducing said reference execution. | 2016-11-17 |
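The record-and-replay core of such a debugger can be sketched as follows; the `EventLog` class and the toy program whose only non-determinism is a random-number source are assumptions made for illustration, not the patented instrumentation:

```python
import random

# Sketch: capture non-deterministic events during a reference execution,
# then replay them so a second run reproduces the same program states.

class EventLog:
    def __init__(self):
        self.events = []
        self.cursor = 0

    def record(self, value):
        self.events.append(value)
        return value

    def replay(self):
        value = self.events[self.cursor]
        self.cursor += 1
        return value

def program(next_random):
    # Stand-in for the instrumented program; its only non-determinism
    # is the value returned by next_random().
    state = 0
    for _ in range(3):
        state += next_random()
    return state

log = EventLog()
reference = program(lambda: log.record(random.randint(0, 9)))  # reference execution
replayed = program(log.replay)                                 # deterministic re-run
assert reference == replayed
```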
20160335173 | Translating Machine Codes to Store Metadata and to Propagate Metadata for Run Time Checking of Programming Errors - A method translates native machine codes that do not allocate memory for metadata, do not store metadata, and do not propagate metadata, by augmenting them with extra instructions that allocate memory for metadata, store metadata, and propagate metadata, such that the metadata are readily available at run time for checking programming errors. | 2016-11-17 |
20160335174 | GENERALIZED SNAPSHOTS BASED ON MULTIPLE PARTIAL SNAPSHOTS - Example embodiments relate to generalized snapshots based on multiple partial snapshots. An example method may include accessing multiple partial snapshots, each from a different client. The method may include creating a generalized snapshot from the multiple partial snapshots. The generalized snapshot includes multiple target pixels, and the color of each of the multiple target pixels may be determined by considering colors of multiple source pixels, each from a different partial snapshot. | 2016-11-17 |
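One plausible way to combine source pixels, sketched in Python; the per-channel median rule is an assumption (the abstract says only that multiple source pixels are "considered"), as are the image and pixel representations:

```python
from statistics import median

# Sketch: build a generalized snapshot by choosing, for each target pixel,
# the per-channel median of the corresponding source pixels across the
# partial snapshots (the median suppresses transient occlusions).

def generalized_snapshot(partials):
    # partials: list of same-sized images; image = list of rows of (r, g, b)
    height, width = len(partials[0]), len(partials[0][0])
    result = []
    for y in range(height):
        row = []
        for x in range(width):
            sources = [img[y][x] for img in partials]
            row.append(tuple(int(median(ch)) for ch in zip(*sources)))
        result.append(row)
    return result

a = [[(255, 0, 0), (10, 10, 10)]]
b = [[(250, 5, 0), (200, 200, 200)]]  # transient bright object at pixel (0, 1)
c = [[(252, 0, 5), (12, 12, 8)]]
print(generalized_snapshot([a, b, c]))  # [[(252, 0, 0), (12, 12, 10)]]
```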
20160335175 | DETERMINING VALID INPUTS FOR AN UNKNOWN BINARY PROGRAM - A method to determine valid input sequences for an unknown binary program is provided. The method includes obtaining multiple input sequences, which each include two or more different inputs, for an unknown binary program. The inputs for the input sequences may be valid inputs for the unknown binary program. The method may further include executing an instrumented version of the unknown binary program separately for each input sequence. For each execution of the instrumented version of the unknown binary program, a set of execution traces may be generated by recording execution traces generated by the execution of the instrumented version of the unknown binary program. The method may further include comparing the sets of execution traces and determining which of the input sequences the unknown binary program accepts as valid based on the comparison of the sets of execution traces. | 2016-11-17 |
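A toy sketch of the trace-comparison idea; the stand-in `instrumented_run` function and the heuristic of treating the most common trace as the rejection path are assumptions made so the example is self-contained:

```python
from collections import Counter

# Sketch: execute an instrumented program once per input sequence, record the
# basic-block trace of each run, and flag sequences whose traces diverge from
# the common (rejection) path as candidates the program accepts as valid.

def instrumented_run(inputs):
    # Toy stand-in for the unknown binary: it "accepts" the sequence
    # ("open", "write") and takes a distinct execution path for it.
    trace = ["entry"]
    if inputs[:2] == ("open", "write"):
        trace += ["parse_ok", "handle_write", "exit_ok"]
    else:
        trace += ["parse_fail", "exit_err"]
    return tuple(trace)

sequences = [("open", "write"), ("write", "open"), ("open", "close")]
traces = {seq: instrumented_run(seq) for seq in sequences}

# Sequences whose trace differs from the most common trace are likely valid.
common = Counter(traces.values()).most_common(1)[0][0]
valid = [seq for seq, trace in traces.items() if trace != common]
print(valid)  # [('open', 'write')]
```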
20160335176 | MECHANISMS FOR REPRODUCING STORAGE SYSTEM METADATA INCONSISTENCIES IN A TEST ENVIRONMENT - Mechanisms for recreating a first inconsistency in storage system metadata encountered by an installation module during a software installation process on a first computing device are provided. A test computing device accesses on a remote storage device inconsistent storage system metadata associated with the first computing device. The inconsistent storage system metadata includes a plurality of storage system metadata segments, location information that identifies corresponding locations of the respective storage system metadata segments on at least one storage device of the first computing device, and length information that identifies corresponding lengths of the respective storage system metadata segments. For each respective storage system metadata segment of the plurality of storage system metadata segments, the respective storage system metadata segment is stored at the corresponding location on a first test storage device of a test computing device. | 2016-11-17 |
20160335177 | Cache Management Method and Apparatus - A cache management method and apparatus are disclosed in order to improve cache resource utilization, where the method includes receiving an access request, determining, according to the access request, the data that needs to be accessed, determining a strength level of spatial locality of the data to be accessed, and allocating, according to the strength level of the spatial locality of the data to be accessed, a cache subunit corresponding to the level to the data to be accessed. The method is applicable to the communications field and may be used to implement cache management. | 2016-11-17 |
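A rough sketch of level-based subunit allocation; the three locality levels, the stride heuristic, and the subunit line sizes are invented for illustration and are not taken from the patent:

```python
# Sketch: map a spatial-locality strength level to a cache subunit whose
# line size suits that level.

SUBUNITS = {
    "weak":   {"line_bytes": 64},    # small lines: little nearby reuse expected
    "medium": {"line_bytes": 256},
    "strong": {"line_bytes": 1024},  # large lines: neighbours likely accessed soon
}

def locality_level(stride_bytes):
    # Smaller access strides suggest stronger spatial locality.
    if stride_bytes <= 8:
        return "strong"
    if stride_bytes <= 64:
        return "medium"
    return "weak"

def allocate(access_request_stride):
    level = locality_level(access_request_stride)
    return level, SUBUNITS[level]

print(allocate(4))    # ('strong', {'line_bytes': 1024})
print(allocate(128))  # ('weak', {'line_bytes': 64})
```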
20160335178 | Systems and Methods for Utilizing Wear Leveling Windows with Non-Volatile Memory Systems - Systems and methods for utilizing wear leveling windows with non-volatile memory systems are disclosed. In one implementation, a memory management module of a non-volatile memory system compares a metric reflecting wear of a memory block to a wear leveling window and determines whether a wear leveling indicator associated with the memory block restricts performing a wear leveling operation on the memory block. The memory management module performs a wear leveling operation on the memory block in response to determining that the metric reflecting wear of the memory block falls outside the wear leveling window and determining that the wear leveling indicator does not restrict performing a wear leveling operation on the memory block. After performing the wear leveling operation, the memory management module places the memory block on a free block list. | 2016-11-17 |
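The window-and-indicator check might look like the following sketch; the block fields, the window bounds, and the placeholder relocation step are assumptions:

```python
# Sketch: perform wear leveling on a block only if (a) its wear metric falls
# outside the wear leveling window and (b) its indicator does not restrict
# the operation; afterwards, place the block on the free block list.

def perform_wear_leveling(block):
    block["relocated"] = True  # placeholder for the actual data move

def maybe_wear_level(block, window_low, window_high, free_list):
    outside = not (window_low <= block["erase_count"] <= window_high)
    restricted = block["wear_leveling_restricted"]
    if outside and not restricted:
        perform_wear_leveling(block)
        free_list.append(block["id"])  # block goes on the free block list
        return True
    return False

free_blocks = []
blk = {"id": 7, "erase_count": 950, "wear_leveling_restricted": False}
maybe_wear_level(blk, window_low=100, window_high=800, free_list=free_blocks)
print(free_blocks)  # [7]
```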
20160335179 | DATA SEPARATION BY DELAYING HOT BLOCK GARBAGE COLLECTION - Memory systems may include a memory including a plurality of blocks, and a controller suitable for determining a pool of blocks from the plurality of blocks as garbage collection (GC) victim block candidates based on a number of valid pages left in each of the plurality of blocks, and selecting a block from the pool of blocks having a minimum number of valid pages as a victim block for garbage collection. | 2016-11-17 |
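The victim-selection step reduces to a filter plus a minimum, as in this sketch; the candidate threshold and block representation are assumed:

```python
# Sketch: build a pool of garbage-collection victim candidates from blocks
# with few valid pages, then pick the block with the minimum valid-page count.

def select_gc_victim(blocks, candidate_threshold):
    pool = [b for b in blocks if b["valid_pages"] <= candidate_threshold]
    if not pool:
        return None
    return min(pool, key=lambda b: b["valid_pages"])

blocks = [
    {"id": 0, "valid_pages": 120},
    {"id": 1, "valid_pages": 8},
    {"id": 2, "valid_pages": 3},   # fewest valid pages: cheapest to collect
]
victim = select_gc_victim(blocks, candidate_threshold=16)
print(victim["id"])  # 2
```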
20160335180 | DISTRIBUTED AND OPTIMIZED GARBAGE COLLECTION OF REMOTE AND EXPORTED TABLE HANDLE LINKS TO UPDATE PROPAGATION GRAPH NODES - Described are methods, systems and computer readable media for distributed and optimized garbage collection of remote and exported object handle links to update propagation graph nodes. | 2016-11-17 |
20160335181 | Shared Row Buffer System For Asymmetric Memory - An architecture for improved memory access in asymmetric memories provides a set of shared row buffers that may be freely allocated between slow and fast memory banks of the asymmetric memory. This permits allocation of row buffers dynamically between the slow and fast memory banks to improve execution speeds and also permits a lightweight memory swap procedure for moving data between the slow and fast memory banks with low processor and memory channel overheads. | 2016-11-17 |
20160335182 | COMPUTER DATA DISTRIBUTION ARCHITECTURE - Described are methods, systems and computer readable media for computer data distribution architecture. | 2016-11-17 |
20160335183 | Preemptible-RCU CPU Hotplugging While Maintaining Real-Time Response - A grace period detection technique for a preemptible read-copy update (RCU) implementation that uses a combining tree for quiescent state tracking. When a leaf level bitmask indicating online/offline CPUs is fully cleared due to all of its assigned CPUs going offline as a result of hotplugging operations, the bitmask state is not immediately propagated to the root level of the combining tree as in prior art RCU implementations. Instead, propagation is deferred until all tasks are removed from an associated leaf level task list tracking tasks that were preempted inside an RCU read-side critical section. Deferring bitmask propagation obviates the need to migrate the task list to the combining tree root level in order to prevent premature grace period termination. The task list can remain at the leaf level. In this way, CPU hotplugging is accommodated while avoiding excessive degradation of real-time latency stemming from the now-eliminated task list migration. | 2016-11-17 |
20160335184 | Method and Apparatus for History-Based Snooping of Last Level Caches - A method and apparatus for snooping caches is disclosed. In one embodiment, a system includes a number of processing nodes and a cache shared by each of the processing nodes. The cache is partitioned such that each of the processing nodes utilizes only one assigned partition. If a query by a processing node to its assigned partition of the cache results in a miss, a cache controller may determine whether to snoop other partitions in search of the requested information. The determination may be made based on history of where requested information was obtained from responsive to previous misses in that partition. | 2016-11-17 |
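A sketch of one possible history mechanism; the fixed-depth history window and the hit-rate threshold are assumptions, since the abstract does not specify how the history is evaluated:

```python
from collections import deque

# Sketch: after a miss in a node's own cache partition, consult a small
# history of where earlier misses were satisfied to decide whether snooping
# the other partitions is likely to pay off.

class SnoopPredictor:
    def __init__(self, depth=8, threshold=0.5):
        self.history = deque(maxlen=depth)  # True = snoop hit, False = snoop miss
        self.threshold = threshold

    def should_snoop(self):
        if not self.history:
            return True  # no history yet: snoop by default
        hit_rate = sum(self.history) / len(self.history)
        return hit_rate >= self.threshold

    def update(self, snoop_hit):
        self.history.append(snoop_hit)

pred = SnoopPredictor()
for outcome in [True, False, False, False, False]:
    pred.update(outcome)
print(pred.should_snoop())  # False: recent snoops rarely found the data
```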
20160335185 | MEMORY MANAGEMENT METHOD AND APPARATUS - A memory management method includes determining a stride value for stride access by referring to a size of two-dimensional ( | 2016-11-17 |
20160335186 | PREFETCH TAG FOR EVICTION PROMOTION - Various embodiments provide for a system that prefetches data from a main memory to a cache and then evicts unused data to a lower level cache. The prefetching system will prefetch data from a main memory to a cache, and data that is not immediately usable, or is part of a data set too large to fit in the cache, can be tagged for eviction to a lower level cache, which keeps the data available with a shorter latency than if the data had to be loaded from main memory again. This lowers the cost of prefetching usable data too far ahead and prevents cache thrashing. | 2016-11-17 |
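A toy model of tag-guided demotion; the two dictionary-backed cache levels, the capacity, and the evict-tagged-first policy are illustrative assumptions:

```python
# Sketch: prefetch lines into the L1 cache and tag those not immediately
# usable for demotion to L2 instead of dropping them entirely.

def prefetch(address, immediately_usable, l1, l2, l1_capacity):
    line = {"addr": address, "evict_to_lower": not immediately_usable}
    if len(l1) >= l1_capacity:
        # Evict a tagged line first; it stays warm in L2 rather than
        # forcing a full reload from main memory later.
        tagged = [a for a, ln in l1.items() if ln["evict_to_lower"]]
        victim = tagged[0] if tagged else next(iter(l1))
        l2[victim] = l1.pop(victim)
    l1[address] = line

l1_cache, l2_cache = {}, {}
prefetch(0x100, immediately_usable=True, l1=l1_cache, l2=l2_cache, l1_capacity=2)
prefetch(0x200, immediately_usable=False, l1=l1_cache, l2=l2_cache, l1_capacity=2)
prefetch(0x300, immediately_usable=True, l1=l1_cache, l2=l2_cache, l1_capacity=2)
print(sorted(l2_cache))  # [512] -> line 0x200 was demoted, not discarded
```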
20160335187 | CREATE PAGE LOCALITY IN CACHE CONTROLLER CACHE ALLOCATION - Integrated circuits are provided which create page locality in cache controllers that allocate entries to set-associative cache, which includes data storage for a plurality of Sets of Ways. A plurality of cache controllers may be interleaved with a processor and device(s), and allocate to any pages in the cache. A cache controller may select a Way from a Set to which to allocate new entries in the set-associative cache and bias selection of the Way according to a plurality of upper address bits (or other function). These bits may be identical at the cache controller during sequential memory transactions. A processor may determine the bias centrally, and inform the cache controllers of the selected Set and Way. Other functions, algorithms or approaches may be chosen to influence bias of Way selection, such as based on analysis of metadata belonging to cache controllers used for making Way allocation selections. | 2016-11-17 |
20160335188 | CACHE DATA PLACEMENT FOR COMPRESSION IN DATA STORAGE SYSTEMS - A technique for managing data storage in a data storage system is disclosed. Data blocks are written to a data storage system cache, with pluralities of the data blocks organized into input/output (IO) cache macroblocks having a fixed size. Access requests for the data blocks are processed, wherein processing includes generating block access statistics. Using the access statistics, data blocks stored in the cache macroblocks that have overlapping access times are identified. Data blocks identified as having overlapping access times are rearranged into one or more overlap cache macroblocks. Data storage system cache memory is arranged into multiple IO cache macroblocks, where a first set of IO cache macroblocks are configured as compressed IO cache macroblocks, each storing a plurality of variable sized compressed IO data blocks, and a second set of IO cache macroblocks are configured as non-compressed IO cache macroblocks, each storing a plurality of fixed sized non-compressed IO data blocks. A write request is received at the data storage system. If the IO data associated with the write request is determined to be compressible, the IO data is compressed in-line and written to an IO data block in a compressed IO cache macroblock; otherwise the non-compressed IO data is written to an IO data block in a non-compressed IO cache macroblock. | 2016-11-17 |
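The in-line compressibility decision can be sketched as follows; the zlib codec and the 0.75 ratio cutoff are stand-ins, not the patent's policy:

```python
import os
import zlib

# Sketch: compress incoming IO data in-line; if it shrinks enough, place it
# in a compressed macroblock, otherwise in a non-compressed one.

COMPRESSION_RATIO_CUTOFF = 0.75  # assumed policy

def write_io_block(data, compressed_mb, noncompressed_mb):
    packed = zlib.compress(data)
    if len(packed) <= len(data) * COMPRESSION_RATIO_CUTOFF:
        compressed_mb.append(packed)   # variable sized compressed block
        return "compressed"
    noncompressed_mb.append(data)      # fixed sized non-compressed block
    return "non-compressed"

compressed, noncompressed = [], []
print(write_io_block(b"abab" * 1024, compressed, noncompressed))      # compressed
print(write_io_block(os.urandom(4096), compressed, noncompressed))   # non-compressed
```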
20160335189 | LOCKING A CACHE LINE FOR WRITE OPERATIONS ON A BUS - Provided are a computer program product, system, and method for locking a cache line for burst write operations on a bus. A cache line is allocated in a cache for a target address. A lock is set for the cache line, wherein setting the lock prevents the data in the cache line from being cast out. Data is written to the cache line. All the data in the cache line is flushed to the target address over a bus in response to completing writing to the cache line. | 2016-11-17 |
20160335190 | Method and Apparatus for Virtualized Control of a Shared System Cache - Aspects include computing devices, systems, and methods for implementing a cache maintenance or status operation for a component cache of a system cache. A computing device may generate a component cache configuration table, assign at least one component cache indicator of a component cache to a master of the component cache, and map at least one control register to the component cache indicator by a centralized control entity. The computing device may store the component cache indicator such that the component cache indicator is accessible by the master of the component cache for discovering a virtualized view of the system cache and issuing a cache maintenance or status command for the component cache bypassing the centralized control entity. The computing device may receive the cache maintenance or status command by a control register associated with a cache maintenance or status command and the component cache bypassing the centralized control entity. | 2016-11-17 |
20160335191 | CACHE CLEANING METHOD AND APPARATUS, CLIENT - The present disclosure provides a cache cleaning method, a cache cleaning apparatus and a client, which improve cache cleaning efficiency in a client and effectively improve the user experience. The method includes: detecting an amount of used caches in a mobile terminal; if the amount of used caches is larger than a preset threshold, sending a cache application request to an operating system of the mobile terminal so as to trigger a preset cache release rule in the operating system; and after the operating system releases corresponding caches according to the preset cache release rule, sending a cache release request to the operating system such that the operating system releases the caches allocated for the cache application request according to the cache release request. The present disclosure may be used in a cache management technique of a mobile terminal. | 2016-11-17 |
20160335192 | COMPUTER SYSTEM AND MEMORY ALLOCATION MANAGEMENT METHOD - A computer system includes: a physical resource including a memory; a virtualization mechanism that provides a virtual computer to which the physical resource is allocated; and a cache state management mechanism that manages a cache state of the virtual computer. The virtualization mechanism provides a first virtual computer and a second virtual computer. The cache state management mechanism manages the cache state of each of the first virtual computer and the second virtual computer. When the cache state management mechanism detects transition of the cache state in a state where a memory area allocated to a cache of the first virtual computer and a memory area allocated to a cache of the second virtual computer include duplicated areas storing same data, the virtualization mechanism releases the duplicated area in one of the first virtual computer and the second virtual computer. | 2016-11-17 |
20160335193 | METHODS AND SYSTEMS FOR PERFORMING A COPY FORWARD OPERATION - A storage device made up of multiple storage media is configured such that one such media serves as a cache for data stored on another of such media. The device includes a controller configured to manage the cache by consolidating information concerning obsolete data stored in the cache with information concerning data no longer desired to be stored in the cache, and erase segments of the cache containing one or more of the blocks of obsolete data and the blocks of data that are no longer desired to be stored in the cache to produce reclaimed segments of the cache. | 2016-11-17 |
20160335194 | PROCESSOR INCLUDING LOAD EPT INSTRUCTION - A processor including an extended page table (EPT) translation mechanism that is enabled for virtualization, and a load EPT instruction. When executed by the processor, the load EPT instruction directly invokes the EPT translation mechanism to directly convert a provided guest physical address into a corresponding true physical address. The EPT translation mechanism may include an EPT paging structure and an EPT tablewalk engine. The EPT paging structure is generated and stored in an external system memory when the EPT translation mechanism is enabled. The EPT tablewalk engine is configured to access the EPT paging structure for the physical address conversion. The EPT tablewalk engine may perform relevant checks to trigger EPT misconfigurations and EPT violations during execution of the load EPT instruction. | 2016-11-17 |
20160335195 | STORAGE DEVICE - The present invention provides a storage device adopting, as its storage medium, a semiconductor device that has a nonvolatile property and must be erased before data can be written, wherein the device divides and manages a logical storage space provided to a higher level device in logical page units, and manages a virtual address space, which is a linear address space to which multiple physical blocks of the semiconductor device are mapped. The storage device uses a page mapping table managing a correspondence between a logical page and an address in the virtual address space, and virtual address configuration information managing a correspondence between an area in the virtual address space and a physical block, in order to manage the correspondence between the respective logical pages and storage areas of the semiconductor device. | 2016-11-17 |
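The two-step lookup described above might be modeled like this; the area size, the table contents, and the dictionary representation are assumptions:

```python
# Sketch: two-step translation -- a page mapping table takes a logical page
# to a virtual address, and virtual-address configuration information takes
# the virtual-address area to a physical flash block.

VIRT_AREA_SIZE = 0x10000  # assumed size of one virtual-address area

page_mapping_table = {0: 0x00000, 1: 0x10000, 2: 0x00400}  # logical page -> virtual addr
virtual_area_to_block = {0: 42, 1: 7}                       # area index -> physical block

def resolve(logical_page):
    vaddr = page_mapping_table[logical_page]
    area, offset = divmod(vaddr, VIRT_AREA_SIZE)
    return virtual_area_to_block[area], offset  # (physical block, offset within it)

print(resolve(1))  # (7, 0)
print(resolve(2))  # (42, 1024)
```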
20160335196 | VIRTUAL ONE-TIME PROGRAMMABLE MEMORY MANAGEMENT - A virtual memory including virtual addresses may be generated. A first virtual address of the virtual memory may be mapped to a first physical address of a one-time programmable (OTP) memory of a device. Furthermore, a second virtual address of the virtual memory may be mapped to a second physical address of a static memory of the device. The virtual memory that is mapped to the OTP memory and the static memory may be provided for accessing of the data of the OTP memory of the device. | 2016-11-17 |
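A small model of the virtual view over OTP and static backing stores; the address map and the write-once enforcement shown here are illustrative assumptions:

```python
# Sketch: a virtual memory view whose addresses are mapped partly onto a
# one-time programmable (OTP) region and partly onto ordinary static memory.

class VirtualOTP:
    def __init__(self):
        self.otp = {}      # physical OTP cells: write-once
        self.static = {}   # physical static memory: freely rewritable
        # virtual address -> (backing store, physical address)
        self.map = {0x0: (self.otp, 0x100), 0x1: (self.static, 0x200)}

    def write(self, vaddr, value):
        store, paddr = self.map[vaddr]
        if store is self.otp and paddr in store:
            raise PermissionError("OTP cell already programmed")
        store[paddr] = value

    def read(self, vaddr):
        store, paddr = self.map[vaddr]
        return store.get(paddr)

vm = VirtualOTP()
vm.write(0x0, 0xAB)   # programs the OTP cell once; a second write would raise
vm.write(0x1, 0xCD)   # static memory can be rewritten later
print(vm.read(0x0))   # 171
```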
20160335197 | METHOD AND SYSTEM FOR MAINTAINING RELEASE CONSISTENCY IN SHARED MEMORY PROGRAMMING - A method and system for maintaining release consistency in shared memory programming on a computing device having multiple processing units includes, in response to a page fault, initiating a transfer, from one processing unit to another, of data associated with more than one but less than all of the pages of shared memory. | 2016-11-17 |
20160335198 | METHODS AND SYSTEM FOR MAINTAINING AN INDIRECTION SYSTEM FOR A MASS STORAGE DEVICE - Disclosed herein are techniques for maintaining an indirection manager for a mass storage device. According to some embodiments, the indirection manager is configured to implement different algorithms that orchestrate a manner in which data is read from and written into memory sectors when handling I/O requests output by a computing device that is communicatively coupled to the mass storage device. Specifically, the algorithms utilize a mapping table that is limited to two levels of hierarchy: a first tier and a second tier, which constrains the overall size and complexity of the mapping table and can increase performance. The embodiments also set forth a memory manager that is configured to work in conjunction with the indirection manager to provide a mechanism for efficiently allocating and de-allocating variably-sized groups of sectors. | 2016-11-17 |
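The strictly two-tier mapping can be sketched as follows; the group size and the dictionary-backed tiers are assumptions chosen to keep the example small:

```python
# Sketch: a strictly two-level mapping table for a mass storage device.
# The first tier maps a group of logical sectors to a second-tier table,
# which maps each sector in the group to its physical location.

SECTORS_PER_GROUP = 256  # assumed group size

first_tier = {}  # group index -> second-tier dict

def map_sector(logical_sector, physical_location):
    group, slot = divmod(logical_sector, SECTORS_PER_GROUP)
    first_tier.setdefault(group, {})[slot] = physical_location

def lookup(logical_sector):
    group, slot = divmod(logical_sector, SECTORS_PER_GROUP)
    second_tier = first_tier.get(group)
    return None if second_tier is None else second_tier.get(slot)

map_sector(1000, "die0/block5/page12")
print(lookup(1000))  # die0/block5/page12
print(lookup(1001))  # None -- unmapped sector
```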
20160335199 | EXTENDING A CACHE OF A STORAGE SYSTEM - Embodiments of the present disclosure provide a method and system for extending a cache of a storage system, by obtaining information on data in a storage system frequently accessed by a plurality of clients of the storage system; determining, based on the obtained information, storage information related to storage of cacheable data in the storage system, the cacheable data comprising a set of the data frequently accessed by the plurality of clients; and synchronizing the storage information amongst the plurality of clients so that a respective client of the plurality of clients locally caches, based on the storage information, data frequently accessed by the respective client. | 2016-11-17 |
20160335200 | MEMORY CIRCUIT USING DYNAMIC RANDOM ACCESS MEMORY ARRAYS - A memory circuit using dynamic random access memory (DRAM) arrays. The DRAM arrays can be configured as CAMs or RAMs on the same die, with the control circuitry for performing comparisons located outside of the DRAM arrays. In addition, DRAM arrays can be configured for secure authentication where, after the first authentication performed with a non-volatile secure element, subsequent authentications can be performed by the DRAM array. Input patterns can be loaded into a DRAM array by loading logic state ones (“1”) into each of the plurality of input data bit lines in each of the columns in the DRAM array and shunting one or more of the plurality of input data bit lines in the DRAM array corresponding to logic state zeroes (“0”) in the input pattern. | 2016-11-17 |
20160335201 | DATA AND INSTRUCTION SET ENCRYPTION - According to an example, data and instruction set encryption may include generating keys to encrypt data and instructions. The instructions may be executable by a CPU. The keys may be mapped to memory ranges of a PM including a flat address space. The flat address space of the PM may be partitioned according to the memory ranges. The keys and the memory ranges mapped to the keys may be stored in a keymap array. The data and the instructions may be encrypted based on the keys. | 2016-11-17 |
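A sketch of a keymap array keyed by address range; the range size, the lazy key creation, and the XOR placeholder cipher are assumptions (a real implementation would use an actual block cipher):

```python
import secrets

# Sketch: partition a flat address space into ranges, give each range its own
# key in a keymap array, and encrypt writes with the key for their range.

RANGE_SIZE = 0x1000
keymap = []  # entries of (start, end, key)

def key_for(address):
    for start, end, key in keymap:
        if start <= address < end:
            return key
    # Lazily create a key for a new range.
    start = (address // RANGE_SIZE) * RANGE_SIZE
    key = secrets.token_bytes(16)
    keymap.append((start, start + RANGE_SIZE, key))
    return key

def encrypt(address, data):
    key = key_for(address)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = encrypt(0x2040, b"mov eax, 1")
print(ciphertext != b"mov eax, 1")  # True: stored form is encrypted
```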
20160335202 | POLICY-BASED STORAGE IN A DISPERSED STORAGE NETWORK - A method for execution by a dispersed storage and task (DST) processing unit operates to receive a write threshold number of slices of a data object and an access policy; determine a current timestamp that indicates a current time value; and store the write threshold number of slices, the access policy, and the timestamp in a plurality of storage units of a dispersed storage network (DSN). | 2016-11-17 |
20160335203 | INTERFACE UNIT FOR ROUTING PRIORITIZED INPUT DATA TO A PROCESSOR - An interface unit for data exchange between a first processor of a computer system and a peripheral environment. The interface unit has a number of input data channels for receiving input data from the peripheral environment and a first access management unit. The access management unit is configured to receive a request for providing the input data, stored in the number of input data channels, from a first interface processor contained in the interface unit and from a second interface processor contained in the interface unit, and to provide or not to provide the input data, stored in the number of input data channels, to the first interface processor and the second interface processor. A first priority and a second priority can be stored in the first access management unit. | 2016-11-17 |
20160335204 | APPARATUSES AND METHODS FOR ASYMMETRIC INPUT/OUTPUT INTERFACE FOR A MEMORY - Apparatuses and methods for asymmetric input/output interfaces for memory are disclosed. An example apparatus may include a receiver and a transmitter. The receiver may be configured to receive first data signals having a first voltage swing and having a first slew rate. The transmitter may be configured to provide second data signals having a second voltage swing and having a second slew rate, wherein the first and second voltage swings are different, and wherein the first and second slew rates are different. | 2016-11-17 |