Entries |
Document | Title | Date |
20080209416 | Workflow Definition and Management System - A workflow can be managed by presenting one or more questions to a user, wherein the questions are associated with a present status of an entity being processed through a workflow; receiving input from the user corresponding to the presented questions; evaluating the received input to determine whether one or more tasks associated with the present status have been completed; determining to advance the entity to a subsequent status in the workflow if each of the tasks associated with the present status has been completed; and executing an action mapping to advance the entity. Further, it can be determined not to advance the entity to the subsequent status in the workflow if each of the one or more tasks associated with the present status has not been completed. Thus, the entity can be retained in the present status or transferred to a previous status in the workflow. | 08-28-2008 |
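The advance-or-retain decision described in 20080209416 can be sketched in a few lines. The class and method names below are illustrative, not drawn from the filing; the "action mapping" is reduced to a simple index increment.

```python
class WorkflowEntity:
    """Tracks an entity's present status and the tasks required at that status."""

    def __init__(self, statuses, tasks_by_status):
        self.statuses = statuses                # ordered list of status names
        self.tasks_by_status = tasks_by_status  # status -> list of task names
        self.index = 0
        self.completed = set()

    @property
    def status(self):
        return self.statuses[self.index]

    def record_answers(self, answers):
        """Mark tasks complete based on the user's answers to the questions."""
        for task, done in answers.items():
            if done:
                self.completed.add(task)

    def try_advance(self):
        """Advance only if every task at the present status is complete."""
        pending = [t for t in self.tasks_by_status[self.status]
                   if t not in self.completed]
        if pending:
            return False        # retain the entity in the present status
        if self.index + 1 < len(self.statuses):
            self.index += 1     # execute the "action mapping": move forward
        return True
```

An entity created with statuses `["draft", "review"]` stays in `draft` until its `write` task is reported complete, then advances on the next evaluation.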
20080209417 | Method and system of project management and task collaboration over instant messenger - A method and apparatus for allowing for the exchange of tasks, over an instant messenger (“IM”) infrastructure, are disclosed. An IM application, running on an electronic device, may allow creation, assigning, tracking, viewing, exporting, importing and managing tasks. IM applications may include, but not be limited to, stand-alone applications, browser plug-ins, on-screen widgets and gadgets, PDA and cellular phone modules, server-sided applications rendered on a client machine, etc. Personal Information Management (“PIM”) applications may use IM infrastructures to exchange of tasks or task information. Project management applications (“PMA”) may be used to define projects, containing tasks with complex sets of rules and inter-dependencies, and leverage IM networks for disseminating these projects and tasks among users. Tasks exchanged on an IM network may be imported into PMAs and PIMs. Tasks may be exchanged in a peer-to-peer IM network, which may span multiple IM service providers. Tasks may be transported in XML data structures which may contain data pertaining to users for whom tasks are intended, the progress made on tasks, documents attached to tasks, etc. User roles and privileges may be defined within tasks structures such that some users are the assignees of a task, while other users may only view task progress and be notified of milestones as tasks are worked on. Users may create task groups and communities, allowing them to control who may assign tasks to members of the group. | 08-28-2008 |
20080209418 | Method of dynamically adjusting number of task request - A method of dynamically adjusting the number of task requests is provided, which is applicable to an Internet Small Computer System Interface (iSCSI) protocol. When a target receives a task request transmitted by an initiator or the target completes the task request, the number of transmissible tasks is calculated according to an average access data volume, a current access data volume, and an allowable access data volume in the target, and returned to the initiator, such that the number of the task requests transmitted simultaneously by the initiator does not exceed the number of transmissible tasks, thereby achieving flow control. The allowable access data volume is obtained through interactive and dynamic adjustment between the target and the initiator. | 08-28-2008 |
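The flow-control arithmetic in 20080209418 might look like the following sketch. The abstract names the three inputs but not the formula, so dividing the remaining headroom by the average per-task volume is an assumption, as is the `max_tasks` cap.

```python
def transmissible_tasks(avg_volume, current_volume, allowable_volume,
                        max_tasks=32):
    """Estimate how many task requests the target can accept right now.

    The remaining headroom (allowable minus current access data volume)
    is divided by the average per-task data volume to yield a task count,
    capped at max_tasks. The result is returned to the initiator so it
    never has more than this many requests in flight.
    """
    if avg_volume <= 0:
        return max_tasks            # no history yet: allow the full window
    headroom = allowable_volume - current_volume
    if headroom <= 0:
        return 0                    # target saturated: initiator must wait
    return min(max_tasks, headroom // avg_volume)
```

With an average task volume of 10, a current volume of 50, and an allowable volume of 100, the target would advertise a window of 5 tasks.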
20080209419 | Push-type pull printing system, pull printing method, and image forming apparatus - A push-type pull printing system comprising a server and an image forming apparatus, the server sending, to the image forming apparatus, a print job including print data and a print condition instruction command for the print data, and the image forming apparatus executing a print process based on the print job. Here, the image forming apparatus comprises an input receiver operable to receive an input of a print condition, a converter operable to convert the inputted print condition, before being transmitted to the server, to an instruction command in a description language interpretable by the image forming apparatus, and a transmitter operable to transmit the converted instruction command to the server; and the server comprises a job transmitter operable to receive the converted instruction command from and send the print job to the image forming apparatus, the print job including the instruction command as the print condition instruction command. | 08-28-2008 |
20080209420 | PROCESSING SYSTEM, STORAGE DEVICE, AND METHOD FOR PERFORMING SERIES OF PROCESSES IN GIVEN ORDER - Provided is a technology capable of managing the processing status of hardware blocks with a smaller number of registers. A processing system includes a buffer composed of a plurality of segments which store data, which is to be input to the processing system, in transactions in the order of inputting, respectively; a plurality of processing units which perform a series of processes in a given order for the data; a plurality of first tables corresponding to the plurality of processing units, respectively, the first tables each storing beginning information which indicates a beginning segment among a plurality of segments at continuous addresses completed in the process by the corresponding processing unit, end information which indicates an end segment among them, and existence information which indicates the presence or absence of segments completed in the process by the corresponding processing unit; and a management unit which manages a data transfer between the buffer and the plurality of processing units so that the series of processes are performed in a given order on the basis of the processing status of the series of processes retained in the plurality of first tables. | 08-28-2008 |
20080216072 | TRANSITION BETWEEN PROCESS STEPS - Among other disclosure, a data flow is an entity that completely or substantially encapsulates all or substantially all aspects of a flow of data from a preceding object instance part into a succeeding object instance part. A set of several single flows of data can provide the complete flow of data of an entire process step. | 09-04-2008 |
20080216073 | Apparatus for executing programs for a first computer architecture on a computer of a second architecture - Executing programs coded in an instruction set of a first computer on a computer of a second, different architecture. An operating system maintains an association between each one of a set of concurrent threads and a set of computer resources of the thread's context. Without modifying a pre-existing operating system of the computer, an entry exception is established to be raised on each entry to the operating system at a specified entry point or on a specified condition. The entry exception has an associated entry handler programmed to save a context of an interrupted thread and modify the thread context before delivering the modified context to the operating system. A resumption exception is established to be raised on each resumption from the operating system complementary to one of the specified entries. The resumption exception has an associated exit handler programmed to restore the context saved by a corresponding execution of the entry handler. The entry exception, resumption exception, entry handler, and exit handler are cooperatively designed to maintain an association between one of the threads and an extended context of the thread through a context change induced by the operating system, the extended context including resources of the computer associated with the thread beyond those resources whose association with the thread is maintained by the operating system. | 09-04-2008 |
20080216074 | ADVANCED PROCESSOR TRANSLATION LOOKASIDE BUFFER MANAGEMENT IN A MULTITHREADED SYSTEM - An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner. | 09-04-2008 |
20080216075 | PROGRAM CREATION SUPPORT APPARATUS, CREATION SUPPORT PROGRAM AND CREATION SUPPORT METHOD FOR THE SAME - An apparatus for supporting creation of a program includes: a program execution module ( | 09-04-2008 |
20080216076 | METHODS FOR DISTRIBUTING PROGRAMS FOR GENERATING TEST DATA - Described herein are methods and systems for distributed execution of circuit testing algorithms, or portions thereof. Distributed processing can result in faster processing. Algorithms or portions of algorithms that are independent from each other can be executed in a non-sequential manner (e.g., parallel) over a network of a plurality of processors. The network comprises a controlling processor that can allocate tasks to other processors and conduct the execution of some tasks on its own. Dependent algorithms, or portions thereof, can be performed on the controlling processor or one of the controlled processors in a sequential manner. To ensure consistency between distributed and non-distributed execution of algorithms, or portions thereof, results are processed in a pre-determined order, for instance the order in which they would have been processed during a non-distributed (e.g., sequential) execution. For algorithms that are highly sequential in nature, portions of algorithms can be modified to delay the need for dependent results between algorithm portions by creating a rolling window of independent tasks that is iterated. | 09-04-2008 |
20080222634 | PARALLEL PROCESSING FOR ETL PROCESSES - A technique for parallel processing of data from a plurality of data sources in conjunction with an Extract-Transform-Load (ETL) process, the data being part of a related data set, which comprises the following: staging a unit of extracted data from each of the plurality of data sources, thereby generating a plurality of units of staged data; identifying a plurality of tasks relating to transforming the staged data; assigning a subset of the tasks to each of a plurality of child processes being managed by a master process, such that dependent tasks are assigned to a same child process; concurrently executing the subsets of tasks assigned to the child processes, thereby generating a plurality of units of transformed data from the plurality of units of staged data; and publishing the transformed data after all tasks are completely executed, thereby ensuring that the published data represent the related data set. | 09-11-2008 |
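One way to honor the constraint in 20080222634 that dependent tasks be assigned to the same child process is a union-find grouping pass, sketched below. The function name and data shapes are illustrative; the master process would then hand each returned group to one child and run the groups concurrently.

```python
def group_tasks(tasks, dependencies):
    """Partition tasks so that any two dependent tasks land in one group.

    tasks: iterable of task names.
    dependencies: iterable of (a, b) pairs meaning task a depends on task b.
    Returns a list of sets; each set would be assigned to one child process.
    """
    parent = {t: t for t in tasks}

    def find(t):
        # Follow parent links to the group representative, with path halving.
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    for a, b in dependencies:
        parent[find(a)] = find(b)          # merge the two groups

    groups = {}
    for t in tasks:
        groups.setdefault(find(t), set()).add(t)
    return list(groups.values())
```

Since groups share no dependencies with each other, they can be executed by the child processes in parallel, and publishing waits until every group has finished.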
20080222635 | Method for Runtime Execution of One or More Tasks Defined in a Workflow Process Language - Runtime execution of one or more tasks defined in a workflow process language. The method may include obtaining a description of the task from a process ontology (PO). The PO may define a hierarchical taxonomy of executable tasks, where each task refers to at least one frame of a hierarchical frame taxonomy of the PO. The method may further include identifying at least one parameter as described in the frame description to which the task refers, resolving the value of the at least one parameter, and executing the most specific applicable version of the task contained in the task taxonomy of the process ontology. | 09-11-2008 |
20080222636 | SYSTEM AND METHOD OF REAL-TIME MULTIPLE-USER MANIPULATION OF MULTIMEDIA THREADS - Embodiments of mechanisms for configuring elements of a media processing system are described generally herein. Other embodiments may be described and claimed. | 09-11-2008 |
20080222637 | Self-Optimizable Code - Methods, systems, and media to increase efficiency of tasks by observing the performance of generally equivalent code paths during execution of the task are disclosed. Embodiments involve a computer system with software, or hard-coded logic that includes reflexive code paths. The reflexive code paths may be identified by a software or hardware designer during the design of the computer system. For that particular computer system, however, one of the code paths may offer better performance characteristics so a monitor collects performance data during execution of the reflexive code paths and a code path selector selects the reflexive code with favorable performance characteristics. One embodiment improves the performance of memory allocation by selectively implementing a tunable, linear, memory allocation module in place of a default memory allocation module. | 09-11-2008 |
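The monitor-and-select loop of 20080222637 can be approximated as below. Timing the two reflexive paths with `time.perf_counter` is a simplification of whatever performance data the described monitor actually collects; the function name and arguments are assumptions for illustration.

```python
import time

def pick_faster_path(path_a, path_b, sample_args, trials=5):
    """Run two generally equivalent code paths and return the faster one.

    Both callables must produce the same result for the same input; only
    their performance characteristics on this particular system differ.
    sample_args is a list of argument tuples used as the workload sample.
    """
    def measure(fn):
        start = time.perf_counter()
        for args in sample_args * trials:
            fn(*args)
        return time.perf_counter() - start

    # The code path selector keeps whichever path showed the better numbers.
    return path_a if measure(path_a) <= measure(path_b) else path_b
```

After the measurement phase, the program would simply call the returned path for all subsequent work, analogous to swapping in the tunable memory allocator mentioned in the abstract.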
20080222638 | Systems and Methods for Dynamically Managing Virtual Machines - Techniques for dynamic management of virtual machine environments are disclosed. For example, a technique for automatically managing a first set of virtual machines being hosted by a second set of physical machines comprises the following steps/operations. An alert is obtained that a service level agreement (SLA) pertaining to at least one application being hosted by at least one of the virtual machines in the first set of virtual machines is being violated. Upon obtaining the SLA violation alert, the technique obtains at least one performance measurement for at least a portion of the machines in at least one of the first set of virtual machines and the second set of physical machines, and a cost of migration for at least a portion of the virtual machines in the first set of virtual machines. Based on the obtained performance measurements and the obtained migration costs, an optimal migration policy is determined for moving the virtual machine hosting the at least one application to another physical machine. | 09-11-2008 |
20080229305 | WORKFLOW MANAGEMENT SYSTEM - A disclosed workflow management system is capable of dynamically constructing a model while executing a workflow. The workflow management system includes a task page management unit that manages a task page corresponding to a task or a task model of a workflow independently from a workflow engine; an attachment folder management unit that manages an attachment folder of the task page independently from the workflow engine; a registering unit that generates index information of an attached document of the task of the workflow and the task page and registers the index information into a search database; and a presenting unit that, when displaying the task page, presents related information through a search in the attachment folder and in the search database. | 09-18-2008 |
20080229306 | DATA DELIVERY SYSTEM, DATA DELIVERY METHOD, AND COMPUTER PROGRAM PRODUCT - In a first data delivery apparatus, a user-input receiving unit receives data and a request for executing a workflow, a first data processing unit processes the data based on the workflow, a destination obtaining unit obtains a destination from the workflow, and a transferring unit transfers the data, the workflow, and a progress of the workflow to the destination. In a second data delivery apparatus, a transfer receiving unit receives the data, the workflow, and the progress of the workflow, a workflow executing unit executes the workflow from a non-executed part based on the progress of the workflow, and a data delivery unit delivers data to other destination. | 09-18-2008 |
20080229307 | WORKFLOW MANAGEMENT SYSTEM - A workflow management system in which a workflow model is dynamically constructed when a workflow is executed, its method, and its computer-executable program are disclosed. The workflow management system includes | 09-18-2008 |
20080229308 | Monitoring Processes in a Non-Uniform Memory Access (NUMA) Computer System - A monitoring process for a NUMA system collects data from multiple monitored threads executing in different nodes of the system. The monitoring process executes on different processors in different nodes. The monitoring process intelligently collects data from monitored threads according to the node in which it is executing to reduce the proportion of inter-node data accesses. Preferably, the monitoring process has the capability to specify a node to which it should be dispatched next to the dispatcher, and traverses the nodes while collecting data from threads associated with the node in which the monitor is currently executing. By intelligently associating the data collection with the node of the monitoring process, the frequency of inter-node data accesses for purposes of collecting data by the monitoring process is reduced, increasing execution efficiency. | 09-18-2008 |
20080229309 | REALTIME-SAFE READ COPY UPDATE WITH LOCK-FREE READERS - A technique for realtime-safe detection of a grace period for deferring the destruction of a shared data element until pre-existing references to the data element have been removed. A pair of counters is established for each of one or more processors. A global counter selector determines which counter of each per-processor counter pair is a current counter. When reading a shared data element at a processor, the processor's current counter is incremented. Following counter incrementation, the processor's counter pair is tested for reversal to ensure that the incremented counter is still the current counter. If a counter reversal has occurred, such that the incremented counter is no longer current, the processor's other counter is incremented. Following referencing of the shared data element, any counter that remains incremented is decremented. Following an update to the shared data element wherein a pre-update version of the element is maintained, the global counter selector is switched to establish a new current counter of each per-processor counter pair. The non-current counter of each per-processor counter pair is tested for zero. The shared data element's pre-update version is destroyed upon the non-current counter of each per-processor counter pair being zero. | 09-18-2008 |
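The counter-pair scheme of 20080229309 is easier to follow in code. This single-threaded Python sketch models the bookkeeping only; the class name is invented, and the memory-ordering and atomicity concerns a real lock-free implementation must handle are deliberately ignored.

```python
class GracePeriodDetector:
    """Per-processor counter pairs plus a global counter selector."""

    def __init__(self, nprocs):
        self.counters = [[0, 0] for _ in range(nprocs)]
        self.current = 0                      # global counter selector

    def reader_enter(self, proc):
        """Reader increments the processor's current counter.

        Returns the indices incremented, so the exit path can decrement
        exactly the counters this reader touched.
        """
        idx = self.current
        self.counters[proc][idx] += 1
        incremented = [idx]
        if idx != self.current:               # selector flipped underneath us
            self.counters[proc][self.current] += 1
            incremented.append(self.current)
        return incremented

    def reader_exit(self, proc, incremented):
        """Any counter that remains incremented is decremented."""
        for idx in incremented:
            if self.counters[proc][idx] > 0:
                self.counters[proc][idx] -= 1

    def start_grace_period(self):
        """Updater switches the selector; new readers use the other counter."""
        self.current ^= 1

    def grace_period_done(self):
        """Pre-update version may be destroyed once every non-current
        counter of each per-processor pair has drained to zero."""
        old = self.current ^ 1
        return all(pair[old] == 0 for pair in self.counters)
```

A reader that entered before the selector flip holds the old counter up, so the pre-update version of the data element survives until that reader exits.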
20080235682 | DEFINING AND EXECUTING PROCESSES USING DECLARATIVE PROGRAMMING LANGUAGE CONSTRUCTS - A computer-implemented technique for executing a process is provided. The technique includes providing a class having at least one annotation that defines at least a portion of the process. The annotation is a run-time-readable, non-executable declarative programming construct that is associated with a first method of the class, and specifies at least one transition rule and a second method of the class associated with the transition rule. A process engine, which runs on a computer and is not an instance of the class, parses the annotation to extract the transition rule. The process engine receives a message from a source external to the process engine, and evaluates whether the transition rule is satisfied, responsively to the message. Upon finding that the transition rule is satisfied, the process engine invokes the second method, so as to generate an output with respect to the message. Other embodiments are also described. | 09-25-2008 |
20080235683 | Data Processing System And Method - A method of producing a compartment specification for an application, the method comprising executing the application; determining resource requests made by the executing application; and recording the resource requests in the compartment specification. | 09-25-2008 |
20080235684 | Heuristic Based Affinity Dispatching for Shared Processor Partition Dispatching - A mechanism is provided for determining whether to use cache affinity as a criterion for software thread dispatching in a shared processor logical partitioning data processing system. The server firmware may store data about when and/or how often logical processors are dispatched. Given these data, the operating system may collect metrics. Using the logical processor metrics, the operating system may determine whether cache affinity is likely to provide a significant performance benefit relative to the cost of dispatching a particular logical processor to the operating system. | 09-25-2008 |
20080235685 | METHOD AND SYSTEM FOR DYNAMIC APPLICATION COMPOSITION IN STREAMING SYSTEMS - A system and method for dynamically building applications for stream processing includes providing processing elements with a flow specification describing each input and a stream description describing each output such that the flow specification indicates a stream or streams which are to be received based on processing information and the stream descriptions indicate the processing information. Processing elements that can be reused are identified by determining equivalence between the processing elements. Processing elements that are new and are not reusable are instantiated in a flow graph. An application is dynamically composed, using the instantiated processing elements by routing available streams to the instantiated processing elements in accordance with the flow specifications. | 09-25-2008 |
20080244578 | Managing and Supporting Multithreaded Resources For Native Code in a Heterogeneous Managed Runtime Environment - A computer implemented method and apparatus to manage multithread resources in a multiple instruction set architectures environment comprising initializing a first thread from a first context. The initialization of the first thread is suspended at a position in response to an operating system request call to create the first thread. A second thread from a host environment is created based on the position. After the second thread is created, completion of the initialization of the first thread based on the position is then performed. Other embodiments are described in the claims. | 10-02-2008 |
20080244579 | Method and system for managing virtual and real machines - Managing virtual and real machines through a provisioning system. The provisioning system allows a user to create and manage machines through a “self-service” approach. The provisioning system interacts with one or more agents that manage the lifecycle of a machine. The system may provide templates that enable a user to readily create a virtual machine. The system may also include interfaces for administrators to manage virtual and real machine resources. | 10-02-2008 |
20080244580 | Redundant configuration method of a storage system maintenance/management apparatus - Provided is a method of managing a computer system including a plurality of storage systems and a plurality of management appliances for managing the plurality of storage systems. A first management appliance and a second management appliance hold an identifier of a first storage system and management data obtained from the first storage system. The method includes the steps of: selecting a third management appliance from the plurality of management appliances when a failure occurs in the first management appliance; transmitting the identifier held in the second management appliance from the second management appliance to the selected third management appliance; and holding the identifier transmitted from the second management appliance in the selected third management appliance. Thus, it is possible to prevent, after failing-over due to an abnormality of a maintenance/management appliance, a single point of failure from occurring to reduce reliability of the maintenance/management appliance. | 10-02-2008 |
20080244581 | APPLICATION COLLABORATION SYSTEM, COLLABORATION METHOD AND COLLABORATION PROGRAM - An application collaboration system for allowing a portal application executed on a Web server and a client application executed on a client terminal to collaborate with each other, the application collaboration system including: | 10-02-2008 |
20080244582 | WEB-Based Task Management System and Method - A task management system and method integrates rich functionality into a web-browser based application. An efficient request for an update enables a user to quickly generate a completely customizable email message to intended recipient(s). By introducing a client side, in-memory database, the client component becomes less susceptible to network connectivity glitches and enables user interfaces to be redrawn without server interaction. Additionally, the task management system and method provides flexibility by enabling tasks to be grouped and organized. Specifically, a task may be associated with multiple task sheets and a task sheet may include multiple tasks in a many-to-many manner. Also, templates may be created that enable a user to start with a base template and to add (or remove) one or more columns. Further, the task management system allows multiple users to access and manipulate task data concurrently. In addition, the task management system provides a means for viewing the change history of task data within a task sheet by highlighting task data within a task sheet that has been changed by another user of the task management system. | 10-02-2008 |
20080250408 | PEER TO PEER SHARING OF FUNCTIONALITY OF MOBILE DEVICES - Systems and methodologies for sharing functionality among mobile devices in a peer to peer manner are described herein. A mobile device can include a plurality of functional components that can each perform respective functionality. Examples of the functionalities can include transceiver communications, processing, power, memory, input and output for the mobile device. Further, the mobile device can include a sharing component that enables sharing a particular third party functional component to replace or supplement operation of a corresponding functional component of the mobile device. The third party functional component, for instance, can be made available for sharing by at least one of a disparate mobile device or a stand alone functional component. Moreover, a host component can allow a disparate mobile device to use an available one or more of the plurality of functional components of the mobile device. | 10-09-2008 |
20080250409 | Information processing system and computer readable recording medium storing an information processing program - A unit: (a) sets event largest possible values used for limitation of the number of tasks and/or the size of areas in a buffer; (b) generates a task without reaching the event largest possible values when receiving an event, and reserves an area in the buffer for the task; (c) determines whether another event has been received before process of the task is completed; (d) deletes the task and releases the area if another new event has not been received; (e1) if it has been received and both the number of tasks and the size of the areas do not reach the event largest possible values, generates a new task and reserves another area in the buffer; and (e2) if it has been received and any of them reaches any of the event largest possible values, reuses the task and the area for the new event. | 10-09-2008 |
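The limit-and-reuse behavior in 20080250409 might be sketched as follows. The class name, the fixed 64-byte area, and the choice to reuse the oldest task are illustrative assumptions; the abstract only specifies that a task and its buffer area are reused once the limits are reached.

```python
class EventTaskPool:
    """Generates a task per event, bounded by a largest possible task count."""

    def __init__(self, max_tasks):
        self.max_tasks = max_tasks
        self.tasks = []          # each task carries its reserved buffer area

    def on_event(self, event):
        if len(self.tasks) < self.max_tasks:
            # Below the limit: generate a task and reserve an area for it.
            task = {"event": event, "buffer": bytearray(64)}
            self.tasks.append(task)
            return task
        # Limit reached: reuse an existing task and its area for the new event.
        task = self.tasks[0]
        task["event"] = event
        return task

    def on_complete(self, task):
        """If no newer event arrived, delete the task and release its area."""
        if task in self.tasks:
            self.tasks.remove(task)
```

With a limit of one, a second event arriving before the first task completes reuses that task rather than allocating a new one, which is the overload behavior the abstract describes.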
20080250410 | METHOD FOR CONSOLIDATED LAUNCHING OF MULTIPLE TASKS - A general purpose mechanism is provided for consolidating the launching of multiple tasks, wherein a task is launched when an associated software component is run or executed. In one embodiment, launch descriptions of individual tasks and composition parameters are respectively read, wherein the parameters indicate relationships between the launchings of different tasks, such as launch order. A composite launch description is constructed, by selectively processing the individual launch descriptions and composition parameters, and the tasks are launched according to the composite launch description. In a further embodiment, multiple individual launch descriptions are delivered to a tool, each launch description being usable to launch a corresponding component to perform a corresponding task. The tool includes a set of launch relationships that specify the relationship between launchings of different components. The tool generates a single composite launch description that defines launching of the components in accordance with the launch relationships. | 10-09-2008 |
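Constructing the composite launch description of 20080250410 from individual launch descriptions and launch-order relationships is essentially a topological sort. The sketch below uses Kahn's algorithm; the function name and the dict/pair representations are assumptions for illustration.

```python
from collections import defaultdict, deque

def compose_launch(descriptions, before):
    """Build one composite launch list from individual launch descriptions.

    descriptions: dict mapping task name -> its individual launch description.
    before: iterable of (a, b) pairs meaning task a must launch before task b.
    Returns (name, description) pairs in an order satisfying every pair.
    """
    succs = defaultdict(list)
    indeg = {name: 0 for name in descriptions}
    for a, b in before:
        succs[a].append(b)
        indeg[b] += 1

    ready = deque(n for n in descriptions if indeg[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append((n, descriptions[n]))
        for m in succs[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)

    if len(order) != len(descriptions):
        raise ValueError("cyclic launch-order constraints")
    return order
```

The tool would then walk the composite list and launch each component with its own description, so callers trigger one consolidated launch instead of several individual ones.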
20080256539 | Fault Tolerant and Hang Resistant Media Processing Applications - Techniques for playing a media file in a multimedia application include launching a multimedia application as one process and automatically launching a pipeline of one or more media processing components as one or more isolated processes. In this manner, any untrustworthy components can be executed in an isolated process that is separate from the execution process of the multimedia application, thereby improving fault tolerance and hang resistance. | 10-16-2008 |
20080256540 | IDENTIFY INDICATORS IN A DATA PROCESSING SYSTEM - A data processing system employing identify indicators associated with various components of the system. The indicator may be activated whenever a corresponding component requires maintenance, field testing, installation, replacement, and the like. The user may specify global and local conditions under which an activated identify indicator is reset. After the indicator is activated, the system monitors for the satisfaction of one of the conditions. When one of the conditions is satisfied, the system deactivates the indicator automatically. The global conditions apply across logical partitions in a logically partitioned system thereby reducing the occurrence of stale identify indicators on all partitions. | 10-16-2008 |
20080263545 | SIGNAL DELIVERY TO A PROCESS IN A PROCESS GROUP - A method of handling a signal for delivery to a process in a process group along with an apparatus and computer-readable medium storing instructions therefor are described. The method comprises obtaining a lock on a portion of a process group management structure and storing a signal to the process group management structure, wherein the signal is to be delivered to one or more processes of a process group, wherein an operating system manages the process group management structure. The method further comprises transmitting a wakeup signal to a signal daemon and releasing the obtained lock. A method of delivering a signal to a process in a process group is also described. The method comprises obtaining a signal from a process group management structure, obtaining a lock on a process list, transmitting the signal to a process specified in the process list; and releasing the lock on the process list. | 10-23-2008 |
20080263546 | Image Formation Apparatus and Program - An image formation apparatus that has a webpage viewing function includes a job receiver that receives a job execution instruction from a user terminal, a job analyzer that analyzes the received job execution instruction, a job executor that executes a job based on a result of the analysis, and a job registration part that, if the received job execution instruction includes URL information specifying a webpage, registers user identification information pertaining to a user who issued the job execution instruction and the URL information included therein in correspondence with each other such that the webpage can be viewed with use of the URL information. | 10-23-2008 |
20080263547 | Providing a Service to a Service Requester - A data processing system for providing a service to a service requester is provided. The data processing system includes a filtering module to receive a request for a service from the service requester, and a ticket module to create a ticket. The ticket includes a risk profile level which is one of a predefined number of levels. The system further includes at least a first and second rule. The first rule specifies a control and the risk profile level response to a result obtained from the service requester. The second rule specifies a maximum acceptable risk profile level required to serve the service. An interface includes an output module to output the control to an agent and an input module to allow input of the service requester's response to the control. A modifier modifies the risk profile level according to the first rule and the service requester's response. The modifier compares the risk profile level with the maximum acceptable risk profile level. A service provision module allows the service to be performed if the risk profile level is less than or equal to the maximum acceptable risk profile level. | 10-23-2008 |
20080263548 | Process and Implementation for Dynamically Determining Probe Enablement Using Out of Process Correlating Token - The present invention addresses the problem of linking cross-process and cross-thread subtransactions into a single user transaction. The mechanism of the present invention employs bytecode inserted probes to dynamically detect out of process correlating tokens in an inbound request. The bytecode inserted probes retrieve the correlating token in the inbound request. Based on the correlating token retrieved, the bytecode inserted probes are then used to dynamically determine if the inbound user request should be recorded and linked to a transaction that began in another thread or process. | 10-23-2008 |
20080263549 | MANAGING LOCKS AND TRANSACTIONS - Under control of a first agent, a resource controlled by a second agent is locked with a first operation identifier. Under control of the second agent: a request is received to lock the resource controlled by the second agent with a second operation identifier for a client request for a client application, wherein the resource is already locked with the first operation identifier; it is determined whether the first operation identifier and the second operation identifier are the same identifier; if it is determined that the first operation identifier and the second operation identifier are the same identifier, the request is responded to with an indication that the resource is locked with the same operation identifier; and, if it is determined that the first operation identifier and the second operation identifier are not the same identifier, the lock request is denied. | 10-23-2008 |
20080271021 | MULTI CORE OPTIMIZATIONS ON A BINARY USING STATIC AND RUN TIME ANALYSIS - An apparatus and method provide for profile optimizations at a binary level. Thread specific data may be used to lay out a procedure in a binary. In one example, a hot thread may be identified and a layout may be generated based on the identified hot thread. Also, threads of an application may be ranked according to frequency of execution of the corresponding threads. The layout may be created based on the different threads of differing frequency of execution and conflicts between a hottest thread and each of the other threads of the application. In another example, different threads of the application may conflict. For example, two threads may contain operations that overlap temporally to create a race condition. A layout of the application threads may be created based on conflicting threads. | 10-30-2008 |
20080271022 | UTILIZING GRAPHS TO DETECT AND RESOLVE POLICY CONFLICTS IN A MANAGED ENTITY - A method and system are disclosed for changing the structure of one or more policies and/or the order of application of one or more policies to resolve conflicts among a set of policies using graph-theoretic techniques. Policies are used to govern the states of managed entities (e.g., resources and services). The set of states of the set of managed entities are represented as nodes of a graph. The output of the set of applicable policies governing all or part of the nodes is then used to control the transition between some or all nodes in the graph. | 10-30-2008 |
20080271023 | DEVICE MANAGEMENT - A framework whereby mobile terminals are configured and managed by a central server. In accordance with one aspect of the present invention, there is provided a mobile telecommunications terminal including a first execution environment and a second execution environment, each execution environment being arranged to execute a respective device management agent and each agent issuing, in accordance with instructions from a device management server, management actions that act upon one or more respective management entities running within one or more of the execution environments; wherein the management entities of the second execution environment are grouped into a management structure, the management structure being one of the management entities within the first execution environment, whereby the first and second execution environments permit the device management server to manage applications and/or services running within both. | 10-30-2008 |
20080271024 | Information processing apparatus, information processing system and information processing method for processing tasks in parallel - In a micro processor unit, when processing to be requested of another processor unit connected via a network occurs during task processing in a task processing unit in an application SPU, a communication controller in a PU specifies a network to which the request-destination processor unit connects. An interface selector in the application SPU selects one of the specified networks, in view of communication capability or the like, and writes that information in a look-aside buffer. When processing for the same processing target is subsequently requested, a system SPU or the PU transmits the processing request depending on the required communication capability. | 10-30-2008 |
20080276236 | DATA PROCESSING DEVICE WITH LOW-POWER CACHE ACCESS MODE - A processor can operate in three different modes. In an active mode, a first voltage is provided to the processor, where the first voltage is sufficient to allow the processor to execute instructions. In a low-power mode, a retention voltage is provided to the processor. The processor consumes less power in the low-power mode than in the active mode. In addition, the processor can operate in a third mode, where a voltage is provided to the processor sufficient to allow the processor to process cache messages, such as coherency messages, but not to execute other normal operations, or to perform normal operations at a very low speed relative to their performance in the active mode. | 11-06-2008 |
20080276237 | Method For Processing Work Items Of a Workflow System - A method for processing work items of a workflow system proceeds as follows. Information identifying work items is retrieved from a server responsible for handling work items, based at least on a set of configuration rules. The information is stored in a cache. In response to a work item request from an application, matching work items are searched for in the cache, and a piece of information identifying the requested work item is delivered to the application in response to finding at least one work item matching the request. Statistics on work item requests are maintained, and the set of configuration rules is modified according to the statistics. | 11-06-2008 |
20080276238 | Use of Metrics to Control Throttling and Swapping in a Message Processing System - A system and method of using metrics to control throttling and swapping in a message processing system is provided. A workload status of a message processing system is determined, and the system polls for a new message according to the workload status. The message processing system identifies a blocked instance and calculates an expected idle time for the blocked instance. The system dehydrates the blocked instance if the expected idle time exceeds a predetermined threshold. | 11-06-2008 |
20080282244 | Distributed transactional deadlock detection - Aspects of the subject matter described herein relate to deadlock detection in distributed environments. In aspects, nodes that are part of the environment each independently create a local wait-for graph. Each node transforms its local wait-for graph to remove non-global transactions that do not need resources from multiple nodes. Each node then sends its transformed local wait-for graph to a global deadlock monitor. The global deadlock monitor combines the local wait-for graphs into a global wait-for graph. Phantom deadlocks are detected and removed from the global wait-for graph. The global deadlock monitor may then detect and resolve deadlocks that involve global transactions. | 11-13-2008 |
20080288943 | MANAGEMENT SYSTEM AND MANAGEMENT METHOD - A system for managing a component or application determines whether to allow changeover of a component used by an application, or launch of an application, in accordance with the amount of resources set for the component and used by the component. | 11-20-2008 |
20080288944 | Consistent Method System and Computer Program for Developing Software Asset Based Solutions - A method, computer program and system for consuming reusable software assets, said assets being described with elements and attributes, said assets containing at least one variable element (VP), each variable element containing at least one variant. The user executes a program on a computer by first choosing the asset to be consumed. A decision tree corresponding to the asset is traversed, each decision point corresponding to a variable element. A decision point is processed by asking the user for inputs to modify the variants of the corresponding variable element. The modified variable elements are stored. The dependency of a decision point is indicated by a dependency attribute in the variable element. | 11-20-2008 |
20080295097 | Techniques for sharing resources among multiple devices in a processor system | 11-27-2008 |
20080301677 | APPARATUS AND METHOD FOR PARALLEL PROCESSING - A parallel processing apparatus and method are provided. The parallel processing apparatus includes: a control unit determining whether one or more threads can access one or more control blocks of a first container that exists in a direction in which at least one of the one or more threads performs a task; a container generating unit generating a second container that includes one or more control blocks on the basis of the result of the determination; and a container management unit connecting the one or more control blocks of the first container, or the one or more control blocks of the first container and the one or more control blocks of the second container, and control blocks in which the one or more threads perform tasks, in a ring shape. | 12-04-2008 |
20080301678 | Method, Computer Program and Device for Generation of Individualized Print Media Copies - In a method or system for generation of customer-specific, individualized media print copies of at least one media title, running data are generated in an editing computer, the running data being associated with a media page, the running data comprising reference information regarding media pages of the media title. The running data are transferred to a job system. In the job system, job data of an individual job are generated from the running data and from customer-specific data that correspond to media categories of the media title. Individual page frame data are formed from the job data that correspond to a job-individual layout of the media print copy. Control information is attained from the job data, and with the control information the page frame data are merged with page content data of pages filled with editorial information generated by an editorial department according to the running data, to form print data for printing of the media print copy. | 12-04-2008 |
20080301679 | TECHNIQUE OF DETERMINING PERFORMING ORDER OF PROCESSES - The present invention provides a technique of determining a performing order of processes. In particular, the present invention relates to a technique of optimizing the performing order of processes in a case where a result of performing a previous process could be modified later depending on the performing order of processes. The invention further provides a method of determining a performing order of processes so as to minimize the time required for modifying a result of an already performed process based on a result of a process performed later. | 12-04-2008 |
20080301680 | INFORMATION PROCESSING APPARATUS - To reduce the number of inspections performed on a stack used as a variable storage area while improving CPU use efficiency and still inspecting the stack appropriately. In a cellular phone according to an embodiment of the present invention, a main control section controls execution of an intermittent operation in the cellular phone; when the cellular phone returns from the intermittent operation in accordance with the control, a timer for a stack inspection is set in a first state, and each time the timer for the stack inspection times out, the stack inspection is executed. | 12-04-2008 |
20080301681 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER PROGRAM - An information processing apparatus including: a plurality of data processing functional blocks each used for carrying out individual data processing; a flow control section configured to execute control of data flows among the data processing functional blocks; and a control section configured to carry out a setting process to set the data processing functional blocks and the flow control section. The control section acquires configuration information in accordance with a task list for data processing to be carried out; carries out the setting process to set the data processing functional blocks and the flow control section on the basis of the acquired configuration information; and constructs a data processing configuration adapted to various kinds of data processing to be carried out. | 12-04-2008 |
20080307416 | DEVICE MANAGEMENT APPARATUS, DEVICE MANAGEMENT METHOD, AND STORAGE MEDIUM - A management apparatus that improves the productivity of managing a plurality of devices and a plurality of functions of each device. The management apparatus manages a plurality of functions for use in a device by a user, wherein each function has one or more functional elements. The apparatus includes a first memory module that stores correspondence data describing a relationship between a functional element and unit configuration information of the device, the unit configuration information indicating units of the device used by the functional element; a second memory module that stores use authority data corresponding to the user; and a management module that selects an available function based on the correspondence data and the use authority data, and sends the functional elements corresponding to the selected function to the device. | 12-11-2008 |
20080313632 | METHODS, DEVICES, AND PRODUCTS FOR PROVIDING ACCESS TO SYSTEM-ADMINISTRATION FUNCTIONS OF A COMPUTER OR RELATED RESOURCES - Methods, devices, and products relating to displaying, in a shared user-interface object, system-administration-related content from a heterogeneous group of resources associated with a computing device and/or executing software operable to cause a system-administration task to be performed. In one embodiment, a user interface for a local program executing on a computing device is presented as a part of a Web page that is supplied from a remote server. | 12-18-2008 |
20080313633 | Software feature usage analysis and reporting - Described is a technology for analyzing usage of a software program's features. Software instrumentation data is collected during actual user program usage sessions. The collected data is then processed to determine various feature usage counts and other information, cross-feature usage (e.g., among users who use a feature, how many use another feature or program), and characteristics of feature users, e.g., how long, how much, how often and how extensively feature users use a program. Session analysis may be performed to provide information about the number of sessions in which a set of features occur. Feature usage trends over time may also be determined via analysis. A user interface is described for facilitating selection of one or more features to analyze, for facilitating selection of a group of users, and/or for outputting results corresponding to the analysis. | 12-18-2008 |
20080313634 | WORKFLOW MANAGEMENT SERVER AND METHOD - Even when an error has occurred in a device that is executing an activity, the workflow is continued as much as possible. When an error has occurred in an activity that is in progress in a device X, a server notifies the device X of an alternative device. The name of the alternative device (device Y) is displayed on the device X. The operator can send a job catalog request from the device Y to the server and select an aborted job from the provided job catalog. The server is notified of the selected job. When password authentication has succeeded, the device Y is permitted to continue the activity. | 12-18-2008 |
20080320475 | Switching user mode thread context - Various technologies and techniques are disclosed for switching user mode thread context. A user mode portion of a thread can be switched without entering a kernel by using execution context directly based on registers. Upon receiving a request to switch a user mode part of a thread to a new thread, user mode register contexts are switched, as well as a user mode thread block by changing an appropriate register to point at the user mode thread block of the new thread. Switching is available in environments using segment registers with offsets. Each user mode thread block in a process has a descriptor in a local descriptor table. When switching a user mode thread context to a new thread, a descriptor is located for a user mode thread block of the new thread. A shadow register is updated with a descriptor base address of the new thread. | 12-25-2008 |
20090007113 | SYSTEM AND METHOD FOR INITIATING THE EXECUTION OF A PROCESS - A method and computer program product for defining a plurality of tags, each of which is associated with a discrete process executable on activity content. At least one of the plurality of tags is associated with a piece of content within an activity, thus defining one or more associated tags. | 01-01-2009 |
20090007114 | ESTIMATION METHOD AND SYSTEM - A time estimation method and system. The method comprises performing a loop of one or more iterations. Each iteration is for calculating a remaining time duration (RD) for completing a process for performing tasks. The loop is performed until the RD equals zero. Each iteration comprises receiving first data related to a plurality of objects associated with the process. A time to complete each object of the plurality of objects (POT) is calculated based on the first data. A number of objects of the plurality of objects remaining in the process (OR) is calculated based on the first data. Second data related to a plurality of work units is received. The plurality of work units is comprised by the plurality of objects. Each work unit is associated with a different task of the tasks. The RD is calculated based on the POT, the OR, and the second data. | 01-01-2009 |
20090007115 | Method and apparatus for parallel XSL transformation with low contention and load balancing - A method for parallel transformation of an XML document by a plurality of execution modules and the serialization of output according to semantic order of the XML document. | 01-01-2009 |
20090007116 | Adjacent data parallel and streaming operator fusion - Various technologies and techniques are disclosed for handling data parallel operations. Data parallel operations are composed together to create a more complex data parallel operation. A fusion plan process is performed on a particular complex operation dynamically at runtime. As part of the fusion plan process, an analysis is performed of a structure of the complex operation and input data. One particular algorithm that best preserves parallelism is chosen from multiple algorithms. The structure of the complex operation is revised based on the particular algorithm chosen. A nested complex operation can also be fused, by inlining its contents into an outer complex operation so that parallelism is preserved across nested operation boundaries. | 01-01-2009 |
20090007117 | METHOD AND APPARATUS FOR PERFORMING RELATED TASKS ON MULTI-CORE PROCESSOR - A method and apparatus for performing related tasks in a multi-core processor are provided. The method of performing at least one related task on the multi-core processor including a plurality of cores, includes: determining whether data and address information which are required for performing the at least one related task are loaded in the cores of the multi-core processor; and controlling the multi-core processor based on a result of the determining so that the cores concurrently start to perform the at least one related task. | 01-01-2009 |
20090013322 | EXECUTION AND REAL-TIME IMPLEMENTATION OF A TEMPORARY OVERRUN SCHEDULER - The automatic generation of a real-time scheduler for scheduling the execution of tasks on a real-time system is disclosed. The scheduler may allow task overruns in the execution of the tasks on the real-time system. A task overrun occurs when the execution of a task for a current sample hit is not completed before the next sample hit. When a task overrun occurs, the scheduler may delay the execution of the task for the next sample hit until the execution of the task for the current sample hit is completed. The execution of the task for the next sample hit is performed after the execution of the task for the current sample hit is completed. The present invention may enable users to input information relating to the real-time execution behavior of graphical programs or models, and may simulate the graphical programs or models using this information. | 01-08-2009 |
20090019438 | METHOD AND APPARATUS FOR SELECTING A SYSTEM MANAGEMENT PRODUCT FOR PERFORMANCE OF SYSTEM MANAGEMENT TASKS - A computer implemented method, apparatus, and computer program product for managing a system. The process stores information regarding performance of a system management task to form a task execution history in response to performing a system management task. After receiving a request to perform a subsequent system management task, the process determines whether a task execution history is present for the subsequent system management task. The process then presents the task execution history for the subsequent task to a user for use in selecting a system management product from a plurality of system management products in response to the task execution history being present. | 01-15-2009 |
20090019439 | THREAD POOL MANAGEMENT APPARATUS AND METHOD - A thread pool management apparatus and method are provided. The thread pool management method includes setting a management policy for managing a thread pool; and managing the thread pool according to the management policy. | 01-15-2009 |
20090019440 | PROGRAM DETERMINING APPARATUS AND PROGRAM DETERMINING METHOD - A disclosed program determining apparatus includes a log recording unit configured to record, in response to at least one of a use request for use of a predetermined function of the image forming apparatus from a program for use in the image forming apparatus and consumption of a predetermined resource of the image forming apparatus by the program, content of said at least one of use request and consumption as log information; and a determining unit configured to determine whether said at least one of use of the predetermined function requested by the program and consumption of the predetermined resource by the program satisfies a predetermined restriction. | 01-15-2009 |
20090019441 | METHOD, SYSTEM, AND COMPUTER PROGRAM FOR MONITORING PERFORMANCE OF APPLICATIONS IN A DISTRIBUTED ENVIRONMENT - A method, system, and computer program include receiving a request string, and mapping the received request string to a distinguishable request string and a collapsible request string. The received request string may be in the form of a JSP, a servlet, and remote Enterprise Java Bean calls. A user may be prompted to create rules for mapping of a received request string to a distinguishable request string and a collapsible request string. | 01-15-2009 |
20090024996 | Blade server and service start method therefor - A blade server includes a management blade and managed blade. The management blade manages service data necessary for the service of an application. In the managed blade, the application is activated. The managed blade includes a service data list creation unit and service data list transmission unit. The service data list creation unit creates a service data list representing service data necessary for the service of the application. The service data list transmission unit transmits the service data list created by the service data list creation unit to the management blade. The management blade includes a service data transmission unit. The service data transmission unit transmits, to the managed blade, service data in the service data list transmitted from the service data list transmission unit before the service of the application starts. A service start method for a blade server is also disclosed. | 01-22-2009 |
20090031305 | Modeling Homogeneous Parallelism - A model of a process is created using novel “fan-out” and “fan-in” symbols. A fan-out symbol represents a point in the process flow where a variable number of homogeneous parallel outgoing threads are being split out from a single incoming thread. The fan-in symbol represents a point in the process flow where a variable number of parallel incoming threads with homogeneous output are combined into one or more outgoing threads. | 01-29-2009 |
20090031306 | METHOD AND APPARATUS FOR DATA PROCESSING USING QUEUING - A computing device is provided having a central processing unit, random access memory, and read only memory interconnected by a bus. The central processing unit is configured to execute a plurality of programming instructions representing a plurality of software objects. The software objects comprise a read queue for storing unprocessed packets and a write queue for storing processed packets. The software objects include a reader thread for reading packets from the read queue and a lock free queue for receiving packets received via the reader thread. The software objects also include at least one processor thread for performing an operation on the packets in the lock free queue. The software objects include a writer thread for writing packets that have been processed by the at least one processor thread to the write queue. | 01-29-2009 |
20090031307 | MANAGING A VIRTUAL MACHINE - Management of a virtual machine is enhanced by establishing an initial availability policy for the machine. Once the virtual machine is invoked, the real environment for the virtual machine is monitored for the occurrence of predetermined events. If a real environment event is detected that could affect the availability of the virtual machine, the availability policy of the virtual machine is automatically adjusted to reflect the new or predicted state of the real environment. | 01-29-2009 |
20090037910 | METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS IN DISTRIBUTED AND PARALLEL ENVIRONMENTS - Automated techniques are disclosed for coordinating request or transaction processing in a data processing system. For example, a technique for handling compound requests, in a system comprising multiple nodes for executing requests in which an individual request is associated with a particular node, comprises the following steps. A compound request comprising at least two individual requests associated with a same node is received. It is determined if both of the at least two individual requests are executable. The compound request is executed if it is determined that all individual requests of the compound request can execute. | 02-05-2009 |
20090037911 | ASSIGNING TASKS TO PROCESSORS IN HETEROGENEOUS MULTIPROCESSORS - Methods and arrangements of assigning tasks to processors are discussed. Embodiments include transformations, code, state machines or other logic to detect an attempt to execute an instruction of a task on a processor not supporting the instruction (non-supporting processor). The method may involve selecting a processor supporting the instruction (supporting physical processor). In many embodiments, the method may include storing data about the attempt to execute the instruction and, based upon the data, making another assignment of the task to a physical processor supporting the instruction. In some embodiments, the method may include representing the instruction set of a virtual processor as the union of the instruction sets of the physical processors comprising the virtual processor and assigning a task to the virtual processor based upon the representing. | 02-05-2009 |
20090037912 | DISTRIBUTED TASK HANDLING - Various embodiments described herein provide systems, methods, software, and data structures that may be used in distributed task handling. Some embodiments include a generic architecture for loosely coupled associations of globally managed tasks and artifacts within user defined task descriptions. As a result, such embodiments provide a flexible and adaptable task model. | 02-05-2009 |
20090044188 | METHOD AND SYSTEM FOR PERFORMING REAL-TIME OPERATION - An information processing system performs a real-time operation periodically at specific time intervals. The system includes a unit for performing a scheduling operation of assigning the real-time operation to a processor to perform the real-time operation periodically at the specific time intervals by the processor, a unit for computing a ratio of an execution time of the real-time operation to be performed by the processor at a first operating speed, based on the specific time intervals and cost information concerning a time required to perform the real-time operation by the processor at the first operating speed, and a unit for performing an operating speed control operation to operate the processor at a second operating speed that is lower than the first operating speed, the second operating speed being determined based on the computed ratio. | 02-12-2009 |
20090049443 | Multicore Distributed Processing System - A distributed processing system delegates the allocation and control of computing work units to agent applications running on computing resources including multi-processor and multi-core systems. The distributed processing system includes at least one agent associated with at least one computing resource. The distributed processing system creates work units corresponding with execution phases of applications. Work units can be associated with concurrency data that specifies how applications are executed on multiple processors and/or processor cores. The agent collects information about its associated computing resources and requests work units from the server using this information and the concurrency data. An agent can monitor the performance of executing work units to better select subsequent work units. The distributed processing system may also be implemented within a single computing resource to improve processor core utilization of applications. Additional computing resources can augment the single computing resource and execute pending work units at any time. | 02-19-2009 |
20090055823 | System and method for capacity planning for systems with multithreaded multicore multiprocessor resources - A method for expressing a hierarchy of scalabilities in complex systems, including a discrete event simulation and an analytic model, for analysis and prediction of the performance of multi-chip, multi-core, multi-threaded computer processors is provided. Further provided is a capacity planning tool for migrating data center systems from a source configuration which may include source systems with multithreaded, multicore, multichip central processing units to a destination configuration which may include destination systems with multithreaded, multicore and multichip central processing units, wherein the destination systems may be different than the source systems. Apparatus and methods are taught for the assembling of and utilization of linear and exponential scalability factors in the capacity planning tool when a plurality of active processor threads populate processors with multiple chips, multiple cores per chip and multiple threads per core. | 02-26-2009 |
20090064139 | Method for Data Processing Using a Multi-Tiered Full-Graph Interconnect Architecture - A method is provided for implementing a multi-tiered full-graph interconnect architecture. In order to implement a multi-tiered full-graph interconnect architecture, a plurality of processors are coupled to one another to create a plurality of processor books. The plurality of processor books are coupled together to create a plurality of supernodes. Then, the plurality of supernodes are coupled together to create the multi-tiered full-graph interconnect architecture. Data is then transmitted from one processor to another within the multi-tiered full-graph interconnect architecture based on an addressing scheme that specifies at least a supernode and a processor book associated with a target processor to which the data is to be transmitted. | 03-05-2009 |
20090064140 | System and Method for Providing a Fully Non-Blocking Switch in a Supernode of a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for transmitting data from a first processor of a data processing system to a second processor of the data processing system. In one or more switches, a set of virtual channels is created, the one or more switches comprising, for each processor, a corresponding switch in the one or more switches. The data is transmitted from the first processor to the second processor through a path comprising a subset of processors of a set of processors in the data processing system. In each processor of the subset of processors, the data is stored in a virtual channel of a corresponding switch before transmitting the data to a next processor. The virtual channel of the corresponding switch in which the data is stored corresponds to a position of the processor in the path through which the data is transmitted. | 03-05-2009 |
20090064141 | EFFICIENT UTILIZATION OF TRANSACTIONS IN COMPUTING TASKS - A method of performing a computing transaction is disclosed. In one disclosed embodiment, during performance of a transaction, if an operation in a transaction can currently be performed, then a result for the operation is received from a transaction system. On the other hand, if the operation in the transaction cannot currently be performed, then a message indicating that the operation would fail is received from the transaction system. The transaction ends after receiving for each operation in the transaction a result or a message indicating that the operation would fail. | 03-05-2009 |
20090064142 | INTELLIGENT RETRY METHOD USING REMOTE SHELL - Method for issuing and monitoring a remote batch job, method for processing a batch job, and system for processing a remote batch job. The method for issuing and monitoring a remote batch job includes formatting a command to be sent to a remote server to include a sequence identification composed of an issuing server identification and a time stamp, forwarding the command from the issuing server to the remote server for processing, and determining success or failure of the processing of the command at the remote server. When the failure of the processing of the command at the remote server is determined, the method further includes instructing the remote server to retry the command processing. | 03-05-2009 |
20090064143 | Subscribing to Progress Indicator Threshold - Methods and apparatus, including computer program products, implementing and using techniques for providing a notification to a user about the progress of a task running on a digital processing device. A user input identifying a progress indicator for the task running on the digital processing device is received. A user input selecting a threshold value is received. The threshold value indicates a point on the progress indicator at which the user is to be notified about the progress of the task. A notification is provided to the user when the threshold value is reached. | 03-05-2009 |
20090064144 | Community boundaries in a geo-spatial environment - A method and system of community boundaries in a geo-spatial environment are disclosed. In one embodiment, a method of organizing a community network includes obtaining a location on a geo-spatial map, determining a representative in the community network associated with the location, obtaining a community boundary selection associated with a community from the representative, determining a region corresponding to the community boundary selection on the geo-spatial map, and creating a community boundary associated with the community on the geo-spatial map from the community boundary selection. The method may further include determining a residence of a member of the community network in the region, and associating the member with the community based on the residence. The method may also include obtaining a privacy preference corresponding to the community, and hiding a profile associated with the member from a public view of the community network based on the privacy preference. | 03-05-2009 |
20090064145 | Computer System and Method for Activating Basic Program Therein - A computer system capable of executing a basic program for providing a program execution environment. The system has a storage device for storing data that is necessary to the basic program during startup and, for each basic program, configuration data that indicates information relating to the data necessary during startup. In the computer system, data relating to the basic program that is to be started is read from the storage device, the data necessary during startup is acquired from the storage device on the basis of information written in the configuration data, the data necessary during startup is stored in memory space that is in the memory device and that can be accessed from the basic program that is to be started, and a process for starting the designated basic program is executed. | 03-05-2009 |
20090064146 | INSTRUCTION GENERATING APPARATUS, DOCUMENT PROCESSING SYSTEM AND COMPUTER READABLE MEDIUM - An instruction generating apparatus includes a receiving section and a generating section. The receiving section receives job information including a plurality of jobs determined in a given order. Each job includes a process of a document by a processing device. The generating section generates instruction information based on the job information received by the receiving section. The instruction information includes, in the given order, a plurality of sets of (i) a document corresponding to each job and (ii) a detailed process of the document, so as to instruct the processing device to perform each document process. | 03-05-2009 |
20090077553 | PARALLEL PROCESSING OF PLATFORM LEVEL CHANGES DURING SYSTEM QUIESCE - Various embodiments described herein provide one or more of systems, methods, and software/firmware that provide increased efficiency in implementing configuration changes during system quiesce time. Some embodiments may separate a quiesce data buffer into small slices wherein each slice includes configuration change data or instructions. These slices may be individually distributed by a system bootstrap processor, or other processor, to other processors or logical processors of a multi-core processor in the system. In some such embodiments, the system bootstrap processor and application processors may change system configuration in parallel while a system is in a quiesce state so as to minimize time spent in the quiesce state. Furthermore, typical system configuration changes become local operations, such as local hardware register modifications, which suffer much less transaction delay than the remote hardware register accesses previously performed. These embodiments, and others, are described in greater detail herein. | 03-19-2009 |
20090083737 | Device, System, and Method of Classifying a Workload of a Software Service - Some embodiments include, for example, devices, systems, and methods of classifying a workload of a software service. A method of classifying a workload of a software service may include, for example, sampling a plurality of values of at least one parameter of the software service by performing out-of-band monitoring of the at least one parameter; and classifying the workload of the software service by selecting a workload classification from a plurality of predefined workload classifications based on the plurality of values. Other embodiments are described and claimed. | 03-26-2009 |
20090083738 | AUTOMATED DATA OBJECT SET ADMINISTRATION - Modern computer systems may comprise massive sets of data objects of various types, such as data files, application binaries, database objects, proprietary objects managed by applications such as email systems, and system configuration information. Applying complex operations, such as archiving and synchronization operations, to many and varied data objects may be difficult to perform manually or through a script. A more advantageous technique involves applying data object managers to the data object set, where such data object managers are configured to apply, to the various data object types in the data object set, various rules each comprising a task to be performed on the data object set in furtherance of the operation. Additionally, the data object set may be modeled as a hierarchical data object set map, to which the rules may be applied through the data object managers in a more uniform manner. | 03-26-2009 |
20090089782 | METHOD AND SYSTEM FOR POWER-MANAGEMENT AWARE DISPATCHER - In general, the invention relates to a system. The system includes a plurality of processors, each having a processing state. The system further includes a dispatcher operatively connected to the plurality of processors and configured to: receive a first thread to dispatch, select one of the processors to dispatch the thread to based on the processing state of the processors and a power management policy, and dispatch the thread to the selected one of the plurality of processors. | 04-02-2009 |
20090089783 | PARTIAL ORDER REDUCTION USING GUARDED INDEPENDENCE RELATIONS - A system and method for conducting symbolic partial order reduction for concurrent systems includes determining a guarded independence relation which includes transitions from different threads that are independent for a set of states, when a condition or predicate holds. Partial order reduction is performed using the guarded independence relation to permit automatic pruning of redundant thread interleavings when the guarded independence condition holds. | 04-02-2009 |
20090094605 | METHOD, SYSTEM AND PROGRAM PRODUCTS FOR A DYNAMIC, HIERARCHICAL REPORTING FRAMEWORK IN A NETWORK JOB SCHEDULER - The present invention employs a master node for each job to be scheduled, and in turn the master node distributes job start information and executable tasks to a plurality of nodes configured in a hierarchical node tree of a multinode job scheduling system. The tasks executing at the leaf nodes and other nodes of the tree report status back up the same hierarchical tree structure used to start the job, not to a scheduling agent but rather to the master node, which has been established by the scheduling agent as the focal point not only for job starting but also for the reporting of status information from the leaf and other nodes in the tree. | 04-09-2009 |
20090094606 | Method for fast XSL transformation on multithreaded environment - An XSLT method is used in a multi-thread environment. In the XSLT method, an XML file is analyzed in view of XSLT templates. Relationships between the transforming processes of the XSLT templates and the tree nodes of the XML file are built. The time for executing the transforming process of each of the XSLT templates and the number of its related tree nodes are calculated. Threads are scheduled for the transforming processes of the XSLT templates. The transforming processes of the XSLT templates are executed. | 04-09-2009 |
20090100426 | METHODS AND SYSTEMS OF RECONCILING SOURCES OF PRINT JOB PROCESSING INFORMATION IN A PRINT PROCESSING ENVIRONMENT - A method of processing a print job in a document production environment includes receiving a job ticket having job ticket parameters, identifying a process plan template having processing instructions for processing the print job and performing a parameter value resolving process for each job ticket parameter. The resolving process may include identifying candidate values, identifying the source associated with each of the candidate values and determining whether a candidate value has a source having precedence. If a source has precedence, the corresponding candidate value may be selected as a resolved parameter value. A user may be presented with a representation of a set of the resolved parameter values and may be permitted to modify at least one of the resolved parameter values. A first portion of the print job may be processed using the identified process plan template, the resolved parameter values, and any user-modified parameter values. | 04-16-2009 |
20090100427 | Search-Based User Interaction Model for Software Applications - Data is received that characterizes one or more terms within a task initiation request. These terms are then associated with a task template. At least a portion of such a task template is populated based on the terms so that the populated task template can be presented to the user to enable a user to conduct one or more actions associated with the presented populated task template. Related techniques, apparatus, systems, and methods are also described. | 04-16-2009 |
20090100428 | RFID SYSTEM AND METHOD - A method and computer program product for obtaining a token identifier from a token device using a token reading system coupled to a local computing device. A determination is made concerning whether the token identifier obtained is associatable with a defined workflow. If the token identifier obtained is associatable with a defined workflow, at least a portion of the defined workflow is executed on the local computing device. | 04-16-2009 |
20090106755 | Programmable Controller with Multiple Processors Using Scanning and Data Acquisition Architectures - Operating a programmable controller with a plurality of processors. The programmable controller may utilize a first subset of the plurality of processors for a scanning architecture. The first subset of the plurality of processors may be further subdivided for execution of periodic programs or asynchronous programs. The programmable controller may utilize a second subset of the plurality of processors for a data acquisition architecture. Execution of the different architectures may occur independently and may not introduce significant jitter (e.g., for the scanning architecture) or data loss/response time lag (e.g., for the data acquisition architecture). However, the programmable controller may operate according to any combination of the divisions and/or architectures described herein. | 04-23-2009 |
20090106756 | Automatic Workload Repository Performance Baselines - Techniques are provided that improve the manageability of systems, including techniques for creating different types of baselines that are more flexible and dynamic in nature. A future-based baseline may be created defining a period of time, wherein at least a portion of the period of time is in the future. A baseline may be created that is a composite of multiple baselines. In general, baselines may be specified having one or more periods of time that are either contiguous or non-contiguous. A template for creating a set of baselines based on a set of time periods may also be created, where the template can be used to create a baseline for each of the set of time periods. A moving window baseline may be created having an associated time window that changes with passage of time, where accordingly the data associated with the baseline may also dynamically change with passage of time. | 04-23-2009 |
20090106757 | WORKFLOW SYSTEM, INFORMATION PROCESSING APPARATUS, DATA APPROVAL METHOD, AND PROGRAM - An electronic approval system capable of improving the reliability of approval in a workflow is provided. The electronic approval system comprises a server and a plurality of multifunction peripherals (MFPs). The server manages the status of data to be handled in the workflow. An MFP performs a visual output of the data to be handled in the workflow, and transmits configuration information, containing the output configuration for the visual output of the data by the MFP and/or information for identifying the MFP, to the server. The server manages approval permitting conditions as information indicative of conditions for making the data approvable, in association with the workflow, and determines whether or not to manage the data as approvable, based on the configuration information transmitted from the MFP and the approval permitting conditions. | 04-23-2009 |
20090113427 | Program Management Effectiveness - The present invention provides for a system and method for consistently evaluating program management effectiveness against established or historical benchmarks, involving defining specific performance areas by subfactors, weighting the subfactors, scoring the subfactors, and totaling all weighted subfactor scores to obtain a performance area score. By evaluating all performance area scores, a composite score for an evaluated program may be obtained. Scores may be compared to historical values and optimized based on such values. | 04-30-2009 |
20090113428 | METHOD AND APPARATUS FOR FACILITATING A LOCATION-BASED, DISTRIBUTED TO-DO LIST - One embodiment of the present invention provides a system that facilitates a location-based, distributed to-do list. During operation, the system receives a request at a task-management system to create a task, wherein the request specifies a location for the task and an assignee for the task. In response to the request, the system creates the task. Next, the system receives a status update at the task management system, wherein the status update indicates a location of the assignee. Finally, when the location of the assignee substantially matches the location for the task, the system sends the task to the assignee. | 04-30-2009 |
20090113429 | Processing Signals in a Wireless Network - Systems are described that reduce or obviate the impact of limited processing resources and/or limit the power consumption in a receiver having signal processing functions at least partially implemented in software. A wireless receiver includes reception means for receiving a signal over a wireless channel in a wireless external environment. The receiver includes storage means, and a processor configured to perform a plurality of signal processing functions for extracting processed data from said signal, each of said signal processing functions having a plurality of alternative software implementations requiring different levels of usage of a processing resource. The processor estimates at least one parameter relating to the external environment and selects and executes one of the software alternatives for each of the respective signal processing functions to apply a set of implementations adapted to a required quality of said processed data. Related methods and computer program products are described. | 04-30-2009 |
20090119666 | APPARATUS FOR COOPERATIVE DISTRIBUTED TASK MANAGEMENT IN A STORAGE SUBSYSTEM WITH MULTIPLE CONTROLLERS USING CACHE LOCKING - The present invention provides an apparatus for cooperative distributed task management in a storage subsystem with multiple controllers using cache locking. The present invention distributes a task across a set of controllers acting in a cooperative rather than a master/slave nature to perform discrete components of the subject task on an as-available basis. This minimizes the amount of time required to perform incidental data manipulation tasks, thus reducing the duration of instances of degraded system performance. | 05-07-2009 |
20090125905 | METHOD, APPARATUS AND COMPUTER PROGRAM FOR MODIFYING A MESSAGE - There is disclosed a method, apparatus and computer program for modifying a message. A message is received from a first entity. The message contains a first level of detail appropriate to the first entity and the message is for communication to a second entity. It is determined whether the message contains a scope sensitive field. Once it has been determined that the message does contain a scope sensitive field, information is accessed indicating how to transform the scope sensitive field to a second level of detail appropriate to the second entity. The scope sensitive field is then transformed to produce the second level of detail. | 05-14-2009 |
20090133019 | EVALUATION OF SYNCHRONIZATION GATEWAYS IN PROCESS MODELS - A system may include a thread monitor that is arranged and configured to monitor progress of multiple threads of a workflow process at a synchronization point with each of the threads having a state, and configured to generate at least one inspection trigger for inspection of the threads. A thread inspector may inspect the threads at the synchronization point for a change in the state in any of the threads in response to the inspection trigger. A firing rules engine may determine whether or not the synchronization point should fire based at least in part on the change in the state of at least one of the threads. | 05-21-2009 |
20090133020 | Method for Managing Hardware Resource Usage by Application Programs Within a Computer System - A method for managing the usage of hardware resources by application programs within a computer system is disclosed. A use cost value is set for a device within a computer system. A number of tickets associated with a process is held. Upon execution of the process, the use cost value is compared to the number of tickets held by the process. The process is permitted to use the device based on the result of the comparison. | 05-21-2009 |
20090144734 | METHOD AND IMPLEMENTATION OF AUTOMATIC PROCESSOR UPGRADE - A method for automatically adding capacity to a computer for a workload is provided. Metric information, defined in a policy, is received about a workload running on a computer. Capacity information for the computer is retrieved and serialized into a list in accordance with the policy. A demand for the workload, which is a request for additional capacity of the computer, is received. The demand for the workload is analyzed to determine whether the demand is characterized as a speed demand or a general purpose demand. A speed demand includes an increase in a speed level for the computer, and a general purpose demand includes an increase in a speed level and/or the number of processors for the computer. An appropriate capacity to add to the computer for the workload is determined from the serialized list based on the analysis. The appropriate capacity of the computer is activated for the workload. | 06-04-2009 |
20090144735 | APPARATUS AND METHOD FOR GENERATING USER INTERFACE BASED ON TASK SERVICE - An apparatus for generating a task-based UI includes a task ontology unit for maintaining task information with respect to the task, a device ontology unit for maintaining device information with respect to a device, a UI description generation unit for reading the task information and/or the device information using the task ontology unit and/or the device ontology unit, respectively, and generating UI description information from the read task information and/or the read device information, the UI description information being made by a task-based language, and a UI description parsing unit for parsing the UI description information to output the task-based UI. | 06-04-2009 |
20090144736 | Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - A method for evaluating performance of DMA-based algorithmic tasks on a target multi-core processing system includes the steps of: inputting a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; evaluating performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and providing results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system. | 06-04-2009 |
20090144737 | DYNAMIC SWITCHING OF MULTITHREADED PROCESSOR BETWEEN SINGLE THREADED AND SIMULTANEOUS MULTITHREADED MODES - An apparatus and program product utilize a multithreaded processor having at least one hardware thread among a plurality of hardware threads that is capable of being selectively activated and deactivated responsive to a control circuit. The control circuit additionally provides the capability of controlling how an inactive thread can be activated after the thread has been deactivated, e.g., by enabling or disabling reactivation in response to an interrupt. | 06-04-2009 |
20090150886 | Data Processing System And Method - A method of producing a compartment specification for an application, the method comprising executing the application; determining resource requests made by the executing application; and recording the resource requests in the compartment specification. | 06-11-2009 |
20090158276 | DYNAMIC DISTRIBUTION OF NODES ON A MULTI-NODE COMPUTER SYSTEM - A method and apparatus dynamically distribute I/O nodes on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job and then dynamically adjusts the I/O node distribution based on the I/O performance of the executing job. | 06-18-2009 |
20090158277 | METHODS AND SYSTEMS FOR USER CONTROLLED REPRODUCTION JOB REMOVAL - A driver module can be configured to generate a driver interface. The driver module can be configured to include, in the interface, various menus, selectors, and buttons to allow the user to specify the parameters and settings of the job. The driver module can be configured to include, in the interface, an option for the user to remove a job, sent to the reproduction device, after the job is processed by the reproduction device. | 06-18-2009 |
20090158278 | System, method, and apparatus for multi-channel user interaction - A system and method for receiving a communication from a user over a communication channel, and selecting and executing a bot, flow or procedure. The bot, flow or procedure to execute is selected by parameters such as the communication channel used, the device used by the user, or a user profile. | 06-18-2009 |
20090158279 | Information Processing Method and Information Processing Apparatus | 06-18-2009 |
20090164995 | MANAGING TASKS IN A DISTRIBUTED SYSTEM - Apparatuses, systems, methods, and computer program products for facilitating the management of tasks in a distributed system with modular service architecture and distributed control functions are provided. The system includes an Application Manager, an Application Node, a Service Manager, and a number of Service Nodes that are capable of executing certain services. Upon receiving a task request from the Application Manager, the Application Node generates a task identifier associated with the particular task. The Application Node may then communicate with the Service Manager using the task identifier to receive a designation of a Service Node capable of executing the service required to complete the requested task. The Application Node can then communicate the service to the designated Service Node, including the task identifier. Once completed services are received from the various Service Nodes involved, they are assembled into a completed task using the common task identifier. | 06-25-2009 |
20090164996 | Weak Dependency - The subject matter disclosed herein provides methods and apparatus, including computer program products, for providing a weak dependency linking two tasks of a workflow of tasks. In one aspect, there is provided a computer-implemented method. The method receives, from a user interface, an indication representing a link between a first task and a second task, the link being a weak dependency linking the first and second tasks. The weak dependency represents that one or more tasks may be inserted between the first and second tasks. The first and second tasks, including the link representing the weak dependency, may be provided to the user interface, where the link is presented to enable identification of the weak dependency. Related apparatus, systems, methods, and articles are also described. | 06-25-2009 |
20090164997 | SYSTEM AND METHOD FOR PROCESSING WORKFLOW TASKS - A computer-implemented method for processing workflow tasks is disclosed. The method includes receiving a user operation to start a task, determining if the user operation starts a new task according to a task status table, obtaining a version number of a task processing program from a configuration file or the task status table according to a determined result. The method further includes sending the version number to a task processing server, the task processing server comprising one or more task processing programs with different version numbers, loading the task processing program for processing the user operation according to the version number, recording processed results, and storing the processed results in the task status table. | 06-25-2009 |
20090172668 | CONDITIONAL COMPUTER RUNTIME CONTROL OF AN INFORMATION TECHNOLOGY ENVIRONMENT BASED ON PAIRING CONSTRUCTS - Management of an Information Technology (IT) environment is conditionally controlled based on runtime analysis of resource pairing constructs. Resource pairings are provided, and evaluated based on the current state of the environment. This real-time information is then used in performing managerial tasks, the results of which are effected by the runtime conditions. | 07-02-2009 |
20090172669 | USE OF REDUNDANCY GROUPS IN RUNTIME COMPUTER MANAGEMENT OF BUSINESS APPLICATIONS - A Redundancy Group includes one or more functionally equivalent resources, and is employed in the dynamic reconfiguration of resources. This enables a business application associated with the resources to be actively managed during runtime. | 07-02-2009 |
20090172670 | DYNAMIC GENERATION OF PROCESSES IN COMPUTING ENVIRONMENTS - Workflows to be used in managing a computing environment are dynamically and programmatically created and/or activities are invoked, based on the current state of the environment. In creating a workflow, activities are conditionally included in the workflow based on the state of the environment. Different types of workflows may be created. | 07-02-2009 |
20090172671 | ADAPTIVE COMPUTER SEQUENCING OF ACTIONS - A recommended sequence of tasks to complete a complex task is programmatically defined. The recommended sequence is adaptive in that the sequence can be altered based on the completion status of one or more of the tasks. | 07-02-2009 |
20090172672 | SYSTEM AND METHOD FOR APPROVING A TASK FILE VIA A MOBILE PHONE - An application server is included in a system for approving a task file via a mobile phone. The application server provides a storage system for storing at least one task, each of the at least one task includes at least one task file. The application server is configured for providing message sending function, task selecting function, information analyzing function, and file processing function to the mobile phone to approve a task file in the storage system. | 07-02-2009 |
20090178040 | METHOD AND SYSTEM FOR CONTROLLING NETWORK DEVICE AND RECORDING MEDIUM STORING PROGRAM FOR EXECUTING THE METHOD - A task-based device control method and system for easily providing a service desired by a user in a home network using universal plug and play (UPnP), and a recording medium storing a program for executing the method are provided. The method includes, if a service is selected using a first device connected to the network, the first device generating a task according to the service; the first device notifying another device of the generation of the task via the network; if the first device receives a command to browse or search for the task from a second device via the network, the first device transmitting information on the task to the second device; and if the first device receives a request to fetch the task from the second device via the network, the first device transferring the task to the second device. | 07-09-2009 |
20090178041 | Method for Counting Events - The invention relates to a method for counting events in an information technology system which performs one or more threads of execution, wherein an event counter is introduced comprising: | 07-09-2009 |
20090187905 | METHOD AND APPARATUS FOR WORKFLOW EXECUTION - A method of executing a workflow in a computer includes representing a main workflow of at least one computer as a series of steps which are linked to define the workflow, the steps being grouped in at least two groups. The method includes representing a transition workflow as at least one transition step, the transition step not being linked to the steps of the main workflow and defining a rule for executing the transition workflow based on transitions between steps of different groups. The method includes executing in at least one computer the steps of the main workflow in an order defined by the links. The method includes evaluating the rule to determine whether the rule applies when, during the executing, a step from one of the groups is followed by a step from another of the groups and executing in at least one computer the steps of the transition workflow when the rule applies. | 07-23-2009 |
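The transition-rule mechanism of entry 20090187905 can be illustrated with a minimal Python sketch (all names here are hypothetical, not taken from the patent): main-workflow steps are partitioned into groups, and a separate transition workflow fires whenever execution crosses from one group into another.

```python
def run_workflow(steps, groups, transition_steps, log):
    """Execute main-workflow steps in order; whenever execution crosses
    from one group of steps into another, run the transition workflow
    first. `groups` maps each step name to its group id."""
    prev_group = None
    for step in steps:
        group = groups[step]
        # Transition rule: fire when a step from one group is followed
        # by a step from a different group.
        if prev_group is not None and group != prev_group:
            for t in transition_steps:
                log.append(t)
        log.append(step)
        prev_group = group
    return log
```

Note that the transition steps are not linked into the main workflow at all; they are interposed only when the rule (a group boundary crossing) evaluates true, matching the abstract's description.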
20090193415 | DEVICE AND METHOD FOR EXECUTING A POSITIONAL CONDITION TASK BASED ON A DEVICE POSITION AND POSITIONAL DERIVATIVES - A device that executes and a method for executing a positional condition executable task which includes a position determining unit that determines a position of the device, a position derivative generation unit that generates one of a distance-based and a time-based derivative from the position of the device, an input interface that receives a user-defined positional trigger condition, a memory unit that stores a positional condition executable task, and a processor that executes the positional condition executable task based on one of the determined position of the device and the generated derivative from the position of the device with respect to the user-defined positional trigger condition. | 07-30-2009 |
20090193416 | DECIDABILITY OF REACHABILITY FOR THREADS COMMUNICATING VIA LOCKS - A system and method for deciding reachability includes inputting a concurrent program having threads interacting via locks for analysis. Bounds on lengths of paths that need to be explored are computed to decide reachability for lock patterns by assuming bounded lock chains. Reachability is determined for a pair of locations using a bounded model checker. The program is updated in accordance with the reachability determination. | 07-30-2009 |
20090193417 | TRACTABLE DATAFLOW ANALYSIS FOR CONCURRENT PROGRAMS VIA BOUNDED LANGUAGES - A system and method for dataflow analysis includes inputting a concurrent program comprised of threads communicating via synchronization primitives and shared variables. Synchronization constraints imposed by the primitives are captured as an intersection problem for bounded languages. A transaction graph is constructed to perform dataflow analysis. The concurrent program is updated in accordance with the dataflow analysis. | 07-30-2009 |
20090193418 | High level operational support system - A high level Operational Support System (OSS) framework provides the infrastructure and analytical system to enable all applications and systems to be managed dynamically at runtime regardless of platform or programming technology. Applications are automatically discovered and managed. Java applications have the additional advantage of auto-inspection (through reflection) to determine their manageability. Resources belonging to application instances are associated and managed with that application instance. This provides operators the ability to not only manage an application, but its distributed components as well. They are presented as belonging to a single application instance node that can be monitored, analyzed, and managed. The OSS framework provides the platform-independent infrastructure that heterogeneous applications require to be monitored, controlled, analyzed and managed at runtime. New and legacy applications written in C++ or Java are viewed and manipulated identically with zero coupling between the applications themselves and the tools that scrutinize them. | 07-30-2009 |
20090193419 | DYNAMIC PERFORMANCE AND RESOURCE MANAGEMENT IN A PROCESSING SYSTEM - A system may monitor, store, and retrieve resource requirements to improve system resources, including energy resources, when executing one or more applications. | 07-30-2009 |
20090199179 | System and method for terminating workflow instances in a workflow application - The illustrative embodiments described herein provide a method, apparatus, and computer program product for terminating workflow instances in a workflow application. In one embodiment, the process receives a set of identifiers. The process identifies a set of workflow instance identifiers associated with the set of identifiers. The process terminates a set of workflow instances identified by the set of workflow instance identifiers. | 08-06-2009 |
20090199180 | RESOURCE SHARING FOR DOCUMENT PRODUCTION - A system for resource sharing for document production. Available resources are determined for performing at least one document production task. The capacity of an available resource is determined. A task is selected to send to the resource based at least partly on the resource's availability. An identifier is associated with a set of task data, which can include document data and insertion data. The identifier can be used to track the progress and completion of the task. The task can be sent to the resource for processing. The completed task is received from the resource. | 08-06-2009 |
20090199181 | Use of a Helper Thread to Asynchronously Compute Incoming Data - A set of helper thread binaries is created from a set of main thread binaries. The helper thread monitors software or hardware ports for incoming data events. When the helper thread detects an incoming event, the helper thread asynchronously executes instructions that calculate incoming data needed by the main thread. | 08-06-2009 |
20090199182 | Notification by Task of Completion of GSM Operations at Target Node - A method for providing global notification of completion of a global shared memory (GSM) operation during processing by a target task executing at a target node of a distributed system. The distributed system has at least one other node on which an initiating task that generated the GSM operation is homed. The target task receives the GSM operation from the initiating task, via a host fabric interface (HFI) window assigned to the target task. The task initiates execution of the GSM operation on the target node. The task detects completion of the execution of the GSM operation on the target node, and issues a global notification to at least the initiating task. The global notification indicates the completion of the execution of the GSM operation to one or more tasks of a single job distributed across multiple processing nodes. | 08-06-2009 |
20090199183 | Wake-and-Go Mechanism with Hardware Private Array - A wake-and-go mechanism is provided for a data processing system. When a thread is waiting for an event, rather than performing a series of get-and-compare sequences, the thread updates a wake-and-go array with a target address associated with the event. The wake-and-go mechanism may save the state of the thread in a hardware private array. The hardware private array may comprise a plurality of memory cells embodied within the processor or pervasive logic associated with the bus, for example. Alternatively, the hardware private array may be embodied within logic associated with the wake-and-go storage array. | 08-06-2009 |
20090199184 | Wake-and-Go Mechanism With Software Save of Thread State - A wake-and-go mechanism is provided for a data processing system. When a thread is waiting for an event, rather than performing a series of get-and-compare sequences, the thread updates a wake-and-go array with a target address associated with the event. Software may save the state of the thread. The thread is then put to sleep. When the wake-and-go array snoops a kill at a given target address, logic associated with the wake-and-go array may generate an exception, which may result in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. In this case, the trap causes other software, such as the operating system or a background sleeper thread, to reload the thread from thread state storage and to continue processing of the active threads on the processor. | 08-06-2009 |
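The wake-and-go entries above (20090199183 and 20090199184) describe hardware, but the core idea — replacing a get-and-compare spin loop with a registry keyed by target address — can be sketched in software. The class below is a hypothetical analogue, not the patented implementation:

```python
import threading

class WakeAndGoArray:
    """Software analogue of the hardware wake-and-go array: instead of
    spinning in a get-and-compare loop, a thread registers the target
    address it cares about and sleeps until a write ("kill") hits it."""

    def __init__(self):
        self._events = {}              # target address -> Event
        self._lock = threading.Lock()

    def wait_on(self, address):
        # Register interest in the address, then sleep on its event.
        with self._lock:
            ev = self._events.setdefault(address, threading.Event())
        ev.wait()

    def snoop_kill(self, address):
        # A snooped write to a watched address wakes every waiter on it.
        # setdefault makes the wake safe even if it races the wait.
        with self._lock:
            ev = self._events.setdefault(address, threading.Event())
        ev.set()
```

The hardware versions differ in where thread state lives (a private array versus software save), but the address-keyed wake-up is the shared mechanism.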
20090199185 | Affordances Supporting Microwork on Documents - Microwork customers may create microtasks and publish them with a microwork broker. Microwork providers may discover published microtasks and complete them in exchange for specified compensation. Microtask creation, publication, discovery and workflow facilities may be integrated into document editors, productivity tools, and the like. To facilitate trust and efficiency, particularly in the context of large public microwork brokers, public reputations may be maintained for microwork participants by the microwork broker. Distinct reputations may be maintained with respect to particular microwork categories. Discovery of microtasks, access to microtasks, selection of providers, and even compensation may be based on reputation. Microwork confidentiality mechanisms, such as controlled access to specified portions of a microtask, or anonymization of portions of a workpiece not salient to a particular microtask, may be employed to protect potentially sensitive information while still taking advantage of the services of public microwork providers. | 08-06-2009 |
20090199186 | Method for controlling a batch process recipe - A method for controlling a batch process recipe, having a first recipe phase and a recipe phase step enabling condition, is described, wherein a function module assigned to the first recipe phase is executed by a programmable controller and wherein a first setpoint value and a first actual value are stored in the first recipe phase. Measures are proposed whereby, in the context of recipe creation, particular functionalities are implemented graphically, and therefore visibly to the user, without any need to adapt a function module running in the programmable controller. | 08-06-2009 |
20090204966 | UTILITY FOR TASKS TO FOLLOW A USER FROM DEVICE TO DEVICE - A "follow-me" utility runs on each of a plurality of devices a person may typically use. This utility monitors applications running on a device and intelligently saves the state of tasks a user is performing. When the follow-me utility detects that the user has initialized another device having the follow-me utility and connectivity to the original device, the utility automatically and transparently creates an environment on the new device so that the user may continue the task at the same point as when he or she last performed the task on the original device. When the user continues a task or starts a new task, the follow-me utility automatically and transparently updates files and task states on any devices having the follow-me utility and connectivity. The follow-me utility may make intelligent task migration decisions based on conditions such as network bandwidth, security policy, location, and device capability. | 08-13-2009 |
20090204967 | Reporting of information pertaining to queuing of requests - Various approaches for capturing context data in a data processing arrangement are described. In one approach, a method controls shared access to an object for a plurality of requestors. An access control routine receives an access control request from a first routine of one of the requestors. The access control request specifies a type of control over access to the object and specifies a role descriptor that describes a processing context in the first routine of the requested access control to the object. The context is not visible to the access control routine without the role descriptor. The method determines whether the type of control can or cannot be immediately granted to the requestor. If the type of control cannot be immediately granted to the one of the requestors, the specified type of requested control and role descriptor are stored by the access control routine. | 08-13-2009 |
20090204968 | SYSTEM AND METHOD FOR MONOTONIC PARTIAL ORDER REDUCTION - A system and method for analyzing concurrent programs that guarantees optimality in the number of thread inter-leavings to be explored. Optimality is ensured by globally constraining the inter-leavings of the local operations of its threads so that only quasi-monotonic sequences of thread operations are explored. For efficiency, a SAT/SMT solver is used to explore the quasi-monotonic computations of the given concurrent program. Constraints are added dynamically during exploration of the concurrent program via a SAT/SMT solver to ensure quasi-monotonicity for model checking. | 08-13-2009 |
20090210876 | Pull-model Workload Management with Synchronous-Asynchronous-Synchronous Bridge - A method, computer program product and computer system for workload management that distributes job requests to a cluster of servers in a computer system, which includes queuing job requests to the cluster of servers, maintaining a processing priority for each of the job requests, and processing job requests asynchronously on the cluster of servers. The method, computer program product and computer system can further include monitoring the job requests and dynamically adjusting parameters of the workload management. | 08-20-2009 |
20090210877 | Mobile Communications Device Application Processing System - A system and method of pre-linking classes for use by one or more applications. The system and method may also be used where the runtime processing is split between a host system and a target system. At the host system at least several classes are loaded and linked. At least one host-linked module is generated from the linked classes. The host-linked module is made available for use by the one or more applications operating on the target system. | 08-20-2009 |
20090217266 | STREAMING ATTACHMENT OF HARDWARE ACCELERATORS TO COMPUTER SYSTEMS - A method of streaming attachment of hardware accelerators to a computing system includes receiving a stream for processing, identifying a stream handler based on the received stream, activating the identified stream handler, and steering the stream to an associated hardware accelerator. | 08-27-2009 |
20090217267 | Dynamic Resizing of Applications Running on Virtual Machines - Methods and apparatus, including computer program products, are provided for sizing an application running on a virtual machine. In one aspect, there is provided a computer-implemented method. The method may include registering, at a monitor, one or more controllers associated with one or more corresponding applications. Configuration information may be received for one or more corresponding applications. Event information may be provided to the one or more controllers to enable the one or more controllers to adjust one or more aspects of the corresponding applications. The event information may represent changes in resources (e.g., at the physical machine hosting the virtual machine and application). The aspects may be adjusted based on the changes. Related apparatus, systems, methods, and articles are also described. | 08-27-2009 |
20090217268 | MULTI-TIERED CONSTRAINT CHECKING FOR MANAGING PRINT JOBS - A method implemented in a print job management apparatus for processing print jobs in a multiple-printer print shop environment is described. When an operator manually assigns a print job to a printer, the print job requirements are compared with capabilities of the printer to detect any constraints (i.e. incompatibilities between printer capabilities and print job requirements). The job is printed if no constraint is detected. If a constraint of a first category is detected (e.g. incompatible color capabilities, paper size and type, layout, etc.), printing will not proceed and an error message is displayed. If a constraint of a second category is detected (e.g. inadequate finishing capabilities), a warning message is displayed with a request for operator instruction regarding whether to proceed with printing. If the operator chooses to proceed, the job will be printed, and a banner page containing instructions regarding uncompleted job requirements is generated. | 08-27-2009 |
20090217269 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PROVIDING MULTIPLE QUIESCE STATE MACHINES - A system, method and computer program product for providing multiple quiesce state machines. The system includes a first controller including logic for processing a first quiesce request. The system also includes a second controller including logic for processing a second quiesce request. All or a portion of the processing of the second quiesce request overlaps in time with the processing of the first quiesce request. Thus, multiple quiesce requests may be active in the system at the same time. | 08-27-2009 |
20090217270 | NEGATING INITIATIVE FOR SELECT ENTRIES FROM A SHARED, STRICTLY FIFO INITIATIVE QUEUE - A computer program product, apparatus and method for negating initiative for select entries from a shared, strictly FIFO initiative queue in a multi-tasking multi-processor environment. An exemplary embodiment includes a computer program product for negating initiative for select entries from a shared initiative queue in a multi-tasking multi-processor environment, the computer program product including a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method including identifying an element within the environment that has failed and recovered, not removing the element from the shared initiative queue and entering a boundary element entry into the shared initiative queue. | 08-27-2009 |
20090217271 | METHOD AND APPARATUS FOR MANAGING DATA - A method and apparatus for managing data by a computer capable of selecting a storage destination of a file from a plurality of drives including a first drive and a second drive are provided. The method includes: editing a target file in the first drive in accordance with an input from a user; listing one or more tasks as unprocessed tasks to be processed on a task list, each of the one or more tasks comprising a content of an edit applied to the target file in association with a name of the target file; and creating a copy of the target file in the second drive by sequentially processing the unprocessed tasks listed on the task list independently of a flow of editing the target file. | 08-27-2009 |
20090222817 | Navigation in Simulated Workflows - An enabled task accessed by a user within a workflow is identified. The workflow is expressed as a Petri net and includes enabled tasks and non-enabled tasks. A non-enabled task selected by a user is identified, and a suitable state that enables the non-enabled state is determined based on the identified enabled task. A simulated workflow for the selected non-enabled task is generated based on the determined suitable state. The simulated workflow is expressed as a Petri net. The user is enabled to navigate through the simulated workflow. | 09-03-2009 |
20090222818 | FAST WORKFLOW COMPLETION IN A MULTI-SYSTEM LANDSCAPE - A method of performing a business reporting job process may include starting, on a central system, a distributed job process, including a plurality of jobs. The method may also include initiating one of the plurality of jobs to be performed by at least one assigned satellite system. Such initiating may include transmitting a central system job context, associated with the initiated job by the central system, from the central system to the satellite system. The method may further include processing the job on the assigned satellite system utilizing the central system job context. And, upon a completion of the job by the satellite system, reporting the completion of the job to the central system, and transmitting at least one result of the job from the satellite system to the central system. The method may further include checking whether or not the result of the job is acceptable, based upon a set of predetermined criteria. | 09-03-2009 |
20090222819 | USER OPERATION ACTING DEVICE, USER OPERATION ACTING PROGRAM, AND COMPUTER READABLE RECORDING MEDIUM | 09-03-2009 |
20090222820 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM - An information processing apparatus that processes a digital document having a hierarchical structure through a workflow made up of at least one process, according to the order of the processes of the workflow. The apparatus includes a generation unit that generates structural elements that make up the digital document in association with the respective processes that make up the workflow; a storage unit that stores results of the processes that make up the workflow in the structural elements corresponding to the respective processes; and a referring unit that refers to the results of the respective processes stored by the storage unit, from the root of the hierarchical structure of the digital document. | 09-03-2009 |
20090241115 | APPLICATION TRANSLATION COST ESTIMATOR - The invention provides a computer-implemented method for estimating the cost of translating a body of text associated with a software application, wherein the software application is configured to perform one or more tasks. In particular the method comprises: determining one or more content types associated with the body of text, wherein each content type has an average word count per content unit; assigning a number of tasks associated with the software application to each content type, wherein each task has an associated number of content units; generating an estimated word count for each content type based on the number of tasks assigned to each content type and the average word count per unit for each content type; summing the estimated word count for each content type to generate an estimated word count for the body of text; and calculating an estimated translation cost based on the estimated word count. | 09-24-2009 |
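The estimate in entry 20090241115 is simple arithmetic: per content type, multiply the number of tasks by the content units per task and the average words per unit, sum across types, and price the total. The function below is a sketch of one plausible reading of that abstract; the parameter names and example figures are illustrative assumptions.

```python
def estimate_translation_cost(content_types, tasks_per_type, cost_per_word):
    """content_types maps a content-type name to (content units per task,
    average words per content unit); tasks_per_type maps the same name to
    the number of tasks assigned to that content type. Returns the
    estimated word count and the estimated translation cost."""
    total_words = 0
    for name, (units_per_task, words_per_unit) in content_types.items():
        # Estimated words for this content type = tasks x units x words/unit.
        total_words += tasks_per_type.get(name, 0) * units_per_task * words_per_unit
    return total_words, total_words * cost_per_word
```

For example, 3 dialog-type tasks of 2 units averaging 50 words each, plus 2 help-type tasks of 1 unit averaging 200 words, give an estimated 700 words to translate.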
20090241116 | Systems and Methods for Automating Tasks Associated with an Application Packaging Job - A system and method for generating a configurable workflow for application packaging jobs are disclosed. A method may include receiving input from a user interface by a packaging application configured to manage an application packaging job. The method may also include creating a plurality of workflow states based on at least the received input, each workflow state associated with a particular step in the application packaging job. The method may further include associating at least one action with at least one workflow state based on at least the received input, each action defining a transition from its associated workflow state to a target workflow state. Additionally, the method may include associating an assignee type with at least one action based on at least the received input, the assignee type defining at least one assignee that may be assigned to the application packaging job for the particular action. | 09-24-2009 |
20090249338 | METHOD AND SYSTEM TO MODIFY TASK IN A WORKFLOW SYSTEM - A delegatee of a task in a workflow system receives data related to the task to be performed. This invention realizes a method and system that gives the delegatee the capability to initiate modification of tasks which resemble the task delegated to the delegatee. This capability is realized by creating modification task data which includes rules that identify the resembling tasks in the workflow system, information regarding the proposed modification to the resembling tasks, and information which identifies the delegatee of the modification task data. | 10-01-2009 |
20090249339 | ASSOCIATING COMMAND SURFACES WITH MULTIPLE ACTIVE COMPONENTS - The same command surface on a page may be associated with unrelated components and applications. Each of the components registers the commands associated with a shared command surface that they will be utilizing. Each component may utilize an arbitrary number of commands that are associated with the command surface. The command manager acts as a message broker between the components on the page and the command surfaces. When a command that is associated with a command surface is received, the command manager dispatches the command message to the appropriate components. | 10-01-2009 |
20090249340 | Managing the Progress of a Plurality of Tasks - Computer systems, methods and computer program products for managing progress of a plurality of tasks associated with respective configuration items. The configuration items are included in a system defined by a digital design specification. In one embodiment of the invention, a computer system includes a repository for holding a data set representative of at least one predetermined attribute of a configuration item and a relation between the configuration item and another configuration item, data regarding tasks, and association data for associating the configuration item and at least one task; and a discovery unit for detecting data regarding the configuration item. | 10-01-2009 |
20090249341 | PROCESSING ELEMENT, CONTROL UNIT, PROCESSING SYSTEM INCLUDING PROCESSING ELEMENT AND CONTROL UNIT, AND DISTRIBUTED PROCESSING METHOD - A processing system has a processing element and a control unit. The processing element has a processing section which carries out a specific function, a communication section which outputs to an outside, function information related to the specific function according to a request from the outside, and a data holding section which holds the function information. The control unit has a communication section which outputs the function information of the processing element connected, according to a request from the outside. | 10-01-2009 |
20090254902 | METHOD FOR IMPROVING ACCESS EFFICIENCY OF SMALL COMPUTER SYSTEM INTERFACE STORAGE DEVICE - A method for improving an access efficiency of a small computer system interface (SCSI) storage device is used to process a plurality of access requests for a physical storage device from a request end. The task processing method includes setting a task queue in each virtual disk, for receiving a plurality of disk access tasks sent from a server; writing the disk access tasks to storage addresses in the virtual disk; executing a storage address recording, for recording the disk access tasks having the same storage address, and sending the remaining disk access tasks in sequence to the physical storage device; saving the disk access tasks into a request queue of the physical storage device; executing a program sequence optimization on the disk access tasks in the request queue; and sending back the disk access tasks after the program optimization process to the virtual disk. | 10-08-2009 |
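One way to read the "storage address recording" step of entry 20090254902 is as coalescing: writes that target the same storage address are recorded and superseded, so only the most recent write per address is forwarded to the physical device. The sketch below is that interpretation only, with hypothetical names; the patent's actual dispatch policy may differ.

```python
from collections import OrderedDict

def dispatch_disk_tasks(tasks):
    """tasks: queued (address, data) writes from a virtual disk's task
    queue. Writes to the same storage address are coalesced so only the
    latest one per address is dispatched, preserving arrival order of
    the surviving tasks."""
    latest = OrderedDict()
    for address, data in tasks:
        if address in latest:
            latest.pop(address)        # drop the superseded write
        latest[address] = data         # re-insert at the end of the queue
    return list(latest.items())
```

Per-address coalescing of this kind reduces redundant I/O before the physical device's own request-queue sequence optimization runs.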
20090254903 | Open framework to interface business applications and content management in media production and distribution environment - An open framework to interface at least one business applications and content management in a media production and distribution environment utilizes a standard messaging protocol and a reliable communication bus. The business application creates a work package in a workflow by sending appropriate messages to a workflow engine. The workflow engine generates a work package template corresponding to the intake work orders at the business application. Devices connected to the communication bus are managed and their respective services are exposed to the workflow engine through the illustrative embodiment of the present principles of managed device interfaces. The work package enables the triggering of a complex sequence of actions via the standard messaging protocol. | 10-08-2009 |
20090254904 | Intent-Based Ontology for Grid Computing Using Autonomous Mobile Agents - A Grid application framework uses semantic languages to describe the tasks and resources used to complete them. A Grid application execution framework comprises a plurality of mobile agents operable to execute one or more tasks described in an intent based task specification language; VO circuitry operable to receive input that describes a task in the task specification language; an analysis engine for generating a solution to the described task; and an intent knowledge base operable to store information contained within tasks of the plurality of mobile agents. | 10-08-2009 |
20090260009 | CONTINUATION BASED RUNTIMES IN TRANSACTIONS - A continuation based runtime that participates in transactions that are not generated by the continuation based runtime, but rather are generated externally to the continuation based runtime. The continuation based runtime marshals in transaction data related to the pre-existing externally generated transaction. In one embodiment, the continuation based runtime itself may not do this, but perhaps may use a transaction enabled activity. Once the activity marshals in the data, the activity may request that the continuation based runtime enlist in the transaction, whereupon the continuation based runtime may then register and the transaction may be performed in the context of the continuation based runtime. | 10-15-2009 |
20090260010 | ELECTRONIC DEVICE WORKSPACE RESTRICTION - The drive for multi-tasking and/or availability of numerous applications can interfere with productivity. Constant interruptions from email and real-time online communication can lead to decreased productivity. In addition, attempting to tackle a massive number of different projects with different applications can impede progress on any one of the projects. Functionality can be implemented in a workspace to focus interaction with one or more applications in the workspace. Focused interaction allows a user to limit distractions (e.g., email notifications, instant message notifications, etc.) and restrict activities not related to his or her current task. | 10-15-2009 |
20090271788 | WEB BASED TASK COMPLETENESS MEASUREMENT - A system, method and program product for measuring the completeness of a task in a web-based environment and for providing dynamic marketing and other adaptive behavior based on how far a user has progressed in completing the task. A system is provided that includes: a task definition system for associating subsets of documents available via a content delivery system with a plurality of tasks; a tracking system for tracking which documents have been viewed by a user; a task determination system for determining which of the plurality of tasks the user is engaged in performing; and a progress analysis system for analyzing the progress the user has achieved towards completing the task. | 10-29-2009 |
20090271789 | METHOD, APPARATUS AND ARTICLE OF MANUFACTURE FOR TIMEOUT WAITS ON LOCKS - Embodiments of the invention provide techniques for performing timeout waits of process threads. Generally, a thread requesting access to a locked resource sends a timeout request to a timeout handler process, and then goes to sleep. The timeout request is received by a receiving thread of the timeout handler process. The receiving thread may insert the timeout request into a minimum heap of timeout requests, and may determine whether the inserted request is due earlier than any of the existing timeout requests. If so, the receiving thread may interrupt a timing thread of the timeout handler process. The timing thread may then wait until reaching the requested timeout, and then send a wakeup message to the sleeping thread. | 10-29-2009 |
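The mechanism this entry describes (a min-heap of timeout requests serviced by a timing thread, which is interrupted whenever a newly inserted request is due earlier than everything already queued) can be sketched as follows. All class and method names here are hypothetical, and Python's threading primitives stand in for the abstract's process-level machinery:

```python
import heapq
import itertools
import threading
import time

class TimeoutHandler:
    """Sketch of the described timeout handler: requests sit in a
    min-heap keyed by deadline; a timing thread sleeps until the
    earliest deadline and is interrupted when a new request is due
    sooner than anything already queued."""

    def __init__(self):
        self._heap = []                       # min-heap of (deadline, seq, event)
        self._seq = itertools.count()         # tie-breaker for equal deadlines
        self._cv = threading.Condition()      # used to interrupt the timing thread
        threading.Thread(target=self._timing_loop, daemon=True).start()

    def request_timeout(self, seconds):
        """Called by a thread about to sleep; the returned event is set
        (the 'wakeup message') once the timeout expires."""
        wakeup = threading.Event()
        with self._cv:
            deadline = time.monotonic() + seconds
            due_earlier = not self._heap or deadline < self._heap[0][0]
            heapq.heappush(self._heap, (deadline, next(self._seq), wakeup))
            if due_earlier:
                self._cv.notify()             # interrupt the timing thread
        return wakeup

    def _timing_loop(self):
        while True:
            with self._cv:
                if not self._heap:
                    self._cv.wait()           # nothing queued; wait for a request
                    continue
                remaining = self._heap[0][0] - time.monotonic()
                if remaining > 0:
                    self._cv.wait(timeout=remaining)
                    continue
                _, _, wakeup = heapq.heappop(self._heap)
            wakeup.set()                      # wake the sleeping requester
```

A thread waiting on a lock would call `request_timeout(t).wait()` rather than spinning, matching the request-then-sleep sequence in the abstract.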
20090271790 | COMPUTER ARCHITECTURE - A computer processor comprises a memory and logic and control circuitry utilizing instructions and operands used thereby. The logic and control circuitry includes: an execution buffer each location of which can contain an instruction or data together with a tag indicating the status of the information in the location; means for executing the instructions in the buffer in dependence on the statuses of the current instruction and the operands in the buffer used by that instruction, and a program counter for fetching instructions sequentially from the memory. The tags include data, instruction, reserved, and empty tags. The processor may execute instructions as parallel tasks subject to their data dependencies, and a system may include several such processors. | 10-29-2009 |
20090276775 | PCI Function South-Side Data Management - A hypervisor, during device discovery, has code which can examine the south-side management data structure in an adapter's configuration space and determine the type of device which is being configured. The hypervisor may copy the south-side management data structure to a hardware management console (HMC) and the HMC can populate the data structure with south-side data and then pass the structure to the hypervisor to replace the data structure on the adapter. In another embodiment the hypervisor may copy the data structure to the HMC and the HMC can instruct the hypervisor to fill-in the data structure, a virtual function at a time, with south-side management data associations. The administrator can assign south-side data, such as a MAC address for a virtual instance of an Ethernet device, to LPARs sharing the adapter. Thus, a standard way to manage the south-side data of virtual functions is provided. | 11-05-2009 |
20090276776 | System and Method for Automatic Throttling of Resources in an Information Handling System Chassis - Systems and methods for automatic throttling of resources in an information handling system are disclosed. A method may include determining whether a first throttling condition exists, the first throttling condition existing when a chassis management controller fails to communicate a clock or synchronization signal to one or more devices in an information handling system chassis for a particular duration of time. The method may also include determining whether a second throttling condition exists, the second throttling condition existing when the chassis management controller fails to communicate data to one or more devices in the information handling system chassis. The method may further include throttling a resource in the information handling system chassis if at least one of the first throttling condition and the second throttling condition exists. | 11-05-2009 |
20090282405 | System and Method for Integrating Best Effort Hardware Mechanisms for Supporting Transactional Memory - Systems and methods for integrating multiple best effort hardware transactional support mechanisms, such as Read Set Monitoring (RSM) and Best Effort Hardware Transactional Memory (BEHTM), in a single transactional memory implementation are described. The best effort mechanisms may be integrated such that the overhead associated with support of multiple mechanisms may be reduced and/or the performance of the resulting transactional memory implementations may be improved over those that include any one of the mechanisms, or an un-integrated collection of multiple such mechanisms. Two or more of the mechanisms may be employed concurrently or serially in a single attempt to execute a transaction, without aborting or retrying the transaction. State maintained or used by a first mechanism may be shared with or transferred to another mechanism for use in execution of the transaction. This transfer may be performed automatically by the integrated mechanisms (e.g., without user, programmer, or software intervention). | 11-12-2009 |
20090282406 | Method and System for Transaction Resource Control - A method of controlling resource consumption of running processes, sub-processes and/or threads (such as a database or an application transaction) in a computerized system, in which resources consumed by less important processes are freed by periodically suspending (by causing them to demand fewer resources) and resuming these processes transparently to other entities of the computerized system and externally to the OS without intervention in its inherent resource allocation mechanism and allowing the OS of the computerized system to allocate the free resources to other running processes. | 11-12-2009 |
20090282407 | TASK SWITCHING APPARATUS, METHOD AND PROGRAM - A method of assigning task management blocks for first type tasks to time slot information on a one-by-one basis, assigning a plurality of task management blocks for second type tasks to time slot information, selecting a task management block according to a priority classification when switching to the time slot of the time slot information, and switching to the time slot except the time slot information. Additionally, a task switching apparatus selects the task management block assigned to the time slot and executes the task. | 11-12-2009 |
20090282408 | SYSTEMS AND METHODS FOR MULTI-TASKING, RESOURCE SHARING, AND EXECUTION OF COMPUTER INSTRUCTIONS - In a multi-tasking pipelined processor, consecutive instructions are executed by different tasks, eliminating the need to purge an instruction execution pipeline of subsequent instructions when a previous instruction cannot be completed. The tasks do not share registers which store task-specific values, thus eliminating the need to save or load registers when a new task is scheduled for execution. If an instruction accesses an unavailable resource, the instruction becomes suspended, allowing other tasks' instructions to be executed instead until the resource becomes available. Task scheduling is performed by hardware; no operating system is needed. Simple techniques are provided to synchronize shared resource access between different tasks. | 11-12-2009 |
20090288085 | Scaling and Managing Work Requests on a Massively Parallel Machine - A method, computer program product and computer system for scaling and managing requests on a massively parallel machine, such as one running in MIMD mode on a SIMD machine. A submit mux (multiplexer) is used to federate work requests and to forward the requests to the management node. A resource arbiter receives and manages these work requests. A MIMD job controller works with the resource arbiter to manage the work requests on the SIMD partition. The SIMD partition may utilize a mux of its own to federate the work requests and the compute nodes. Instructions are also provided to control and monitor the work requests. | 11-19-2009 |
20090293059 | AUTOMATICALLY CONNECTING ITEMS OF WORKFLOW IN A COMPUTER PROGRAM - A workflow design system receives a set of parameters that are to be used in a workflow, as well as an indication of a function that is to be performed in the workflow. The workflow design system uses a mapping component to map the parameters to inputs of the identified function. The workflow design system then outputs suggested mappings of the parameters to the function inputs, and optionally waits for user confirmation. Once user confirmation is received (if it is required), either the workflow design system or the mapping component automatically generates the connections between the parameters and the function inputs. | 11-26-2009 |
20090300615 | METHOD FOR GENERATING A DISTRIBUTED STREAM PROCESSING APPLICATION - Techniques for generating a distributed stream processing application are provided. The techniques include obtaining a declarative description of one or more data stream processing tasks, wherein the declarative description expresses at least one stream processing task, and generating one or more execution units from the declarative description of one or more data stream processing tasks, wherein the one or more execution units are deployable across one or more distributed computing nodes, and comprise a distributed data stream processing application. | 12-03-2009 |
20090300616 | AUTOMATED TASK EXECUTION FOR AN ANALYTE MONITORING SYSTEM - In one aspect, method and apparatus including providing one or more scheduled tasks associated with an analyte monitoring device and executing the scheduled one or more tasks in accordance with a predetermined execution sequence are provided. | 12-03-2009 |
20090300617 | SYSTEM AND METHOD OF GENERATING AND MANAGING COMPUTING TASKS - A method, computer program product, and system of managing computing tasks includes storing at least one build information element within at least one attribute of a configuration management tool. A computing task is generated from within the configuration management tool based upon, at least in part, the at least one build information element. The computing task is initiated from within the configuration management tool. The computing task is deployed on a computing device. | 12-03-2009 |
20090300618 | Method and Apparatus to Facilitate Negotiation of a Selection of Capabilities to Be Employed When Facilitating a Task - Determine ( | 12-03-2009 |
20090300619 | Product independent orchestration tool - A self-replicating machine includes a virtualization tool, a provisioning tool, and a configuration tool, stored in a distributable self-contained repository of the machine. The machine is able to automatically rebuild itself solely from the tools stored in the distributable self-contained repository. The virtualization tool is configured to build one or more virtual machines on the machine. Each virtual machine has a corresponding operating system and environment. The provisioning tool is configured to provision the one or more virtual machines. The configuration tool is to configure the one or more provisioned virtual machines. A custom configuration management tool further customizes and configures the physical machine for specific users. A configuration management tool is configured to orchestrate and automate a deployment process, and to interface with an underlying product having a corresponding functionality. | 12-03-2009 |
20090300620 | CONTROL DEVICE AND METHOD FOR PROVIDING USER INTERFACE (UI) THEREOF - A control device which displays menus generated based on tasks is provided. The control device includes an input unit which receives a user command for performing a task, and a control unit which, if a task to be performed is selected via the input unit, generates a menu list showing menus for each of a plurality of apparatuses available to perform the selected task. Therefore, it is possible for a user to conveniently perform a desired task. | 12-03-2009 |
20090300621 | Local and Global Data Share - A graphics processing unit is disclosed, the graphics processing unit having a processor having one or more SIMD processing units, and a local data share corresponding to one of the one or more SIMD processing units, the local data share comprising one or more low latency accessible memory regions for each group of threads assigned to one or more execution wavefronts, and a global data share comprising one or more low latency memory regions for each group of threads. | 12-03-2009 |
20090307691 | COORDINATION AMONG MULTIPLE MEMORY CONTROLLERS - Systems and methods that coordinate operations among a plurality of memory controllers to make a decision for performing an action based in part on state information. A control component facilitates exchange of information among memory controllers, wherein exchanged state information of the memory controllers are further employed to perform computations that facilitate the decision making process. | 12-10-2009 |
20090307692 | SYSTEM AND METHOD TO DYNAMICALLY MANAGE APPLICATIONS ON A PROCESSING SYSTEM - A method and system in accordance with the present invention provides an intelligent prediction approach for populating and depopulating multiple applications at the system level across applications. The detection and management of user behavior patterns to anticipate the user's next request is provided. Further, the present invention accounts for situations in which user behavior changes dynamically, adjusting so as to more accurately produce the result desired by the user. The present invention in various implementations provides an intelligent prediction scheme for populating and depopulating multiple applications at the system level across a diversity of applications. | 12-10-2009 |
20090307693 | SYSTEM AND METHOD TO DYNAMICALLY MANAGE APPLICATIONS ON A PROCESSING SYSTEM - A method and system in accordance with the present invention provides an intelligent prediction approach for populating and depopulating multiple applications at the system level across applications. The detection and management of user behavior patterns to anticipate the user's next request is provided. Further, the present invention accounts for situations in which user behavior changes dynamically, adjusting so as to more accurately produce the result desired by the user. The present invention in various implementations provides an intelligent prediction scheme for populating and depopulating multiple applications at the system level across a diversity of applications. | 12-10-2009 |
20090307694 | USING DATA IN ELEMENTS OF A SINGLY LINKED LIST WITHOUT A LOCK IN A MULTITHREADED ENVIRONMENT - A method and system for validating a scan of a chain in a multithreaded environment. A modification counter and an anchor address are atomically copied from the chain's header into a first variable (browse counter) and second variable, respectively. The second variable is set to a next address stored in a current element of the chain. The next address references a next element of the chain. The browse counter is incremented. If the browse counter is greater than a current value of the modification counter (M.Counter) and if the second variable includes a valid address, then the scan is valid up to the current element, the scan continues with the next element as the current element, and the process repeats starting with setting the second variable to the next address. Otherwise, if the browse counter is less than or equal to M.Counter, then the scan is invalid. | 12-10-2009 |
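A much-simplified variant of the scan-validation idea in this entry can be sketched in Python. All names are hypothetical, and this sketch snapshots the modification counter once at the start of the browse and invalidates the scan if the counter has moved, rather than maintaining the per-element browse counter the abstract describes:

```python
import threading

class Node:
    """One element of the singly linked chain."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

class Chain:
    """Chain header: an anchor address plus a modification counter
    that writers bump on every structural change. Readers never lock;
    they detect concurrent modification via the counter."""

    def __init__(self):
        self.anchor = None
        self.mod_counter = 0
        self._writer_lock = threading.Lock()  # writers serialize among themselves

    def push(self, value):
        with self._writer_lock:
            self.anchor = Node(value, self.anchor)
            self.mod_counter += 1             # invalidate in-flight scans

    def scan(self):
        """Return the chain's values, or None when a concurrent
        modification invalidated the browse."""
        snapshot = self.mod_counter           # copy of the header field
        node, values = self.anchor, []
        while node is not None:
            if self.mod_counter != snapshot:
                return None                   # chain changed under us: invalid scan
            values.append(node.value)
            node = node.next
        return values
```

An invalid scan would simply be retried from the header, which is the usual recovery for optimistic lock-free traversals.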
20090313622 | SYNCHRONIZING QUEUED DATA ACCESS BETWEEN MULTIPLE GPU RENDERING CONTEXTS - Synchronized access to a shared surface from multiple rendering contexts is provided. Only one rendering context is allowed to access a shared surface at a given time to read from and write to the surface. Other non-owning rendering contexts are prevented from accessing and rendering to the shared surface while the surface is currently owned by another rendering context. A non-owning rendering context makes an acquire call and waits for the surface to be released. When the currently owning rendering context finishes rendering to the shared surface, it releases the surface. The rendering context that made the acquire call then acquires access and renders to the shared surface. | 12-17-2009 |
20090313623 | MANAGING THE PERFORMANCE OF A COMPUTER SYSTEM - Some embodiments of the present invention provide a system that manages a performance of a computer system. During operation, a current expert policy in a set of expert policies is executed, wherein the expert policy manages one or more aspects of the performance of the computer system. Next, a set of performance parameters of the computer system is monitored during execution of the current expert policy. Then, a next expert policy in the set of expert policies is dynamically selected to manage the performance of the computer system, wherein the next expert policy is selected based on the monitored set of performance parameters to improve an operational metric of the computer system. | 12-17-2009 |
20090313624 | UNIFIED AND EXTENSIBLE ASYNCHRONOUS AND SYNCHRONOUS CANCELATION - A cancelation registry provides a cancelation interface whose implementation registers cancelable items such as synchronous operations, asynchronous operations, type instances, and transactions. Items may be implicitly or explicitly registered with the cancelation registry. A consistent cancelation interface unifies cancelation management for heterogeneous items, and allows cancelation of a group of items with a single invocation of a cancel-registered-items procedure. | 12-17-2009 |
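The unifying idea in this entry, heterogeneous cancelable items registered behind one consistent interface and canceled as a group by a single call, can be sketched as below. Class and method names are hypothetical stand-ins for the abstract's registry and cancel-registered-items procedure:

```python
class CancelationRegistry:
    """Unifies cancelation for heterogeneous items: anything that can
    supply a cancel callback (a synchronous or asynchronous operation,
    a type instance, a transaction) registers it here, and a single
    invocation of cancel_registered_items() cancels the whole group."""

    def __init__(self):
        self._callbacks = []

    def register(self, cancel_callback):
        """Explicit registration of one cancelable item."""
        self._callbacks.append(cancel_callback)

    def cancel_registered_items(self):
        """Cancel every registered item; returns how many were canceled."""
        canceled = len(self._callbacks)
        for cancel in self._callbacks:
            cancel()
        self._callbacks.clear()
        return canceled
```

The payoff of the consistent interface is that callers need not know what kind of item they are canceling; each item supplies its own teardown behind the same callback shape.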
20090313625 | Workload management, control, and monitoring - A first computer program runs in user memory space of a computing environment, and a second computer program runs in kernel memory space of the computing environment. The first computer program determines processes that constitute a workload. The second computer program creates a workload identifier corresponding to the workload, and associates the processes with the workload identifier. The first computer program requests metrics regarding the workload. In response, the second computer program collects such metrics by collecting metrics regarding the processes that constitute the workload and that are associated with the workload identifier. The second computer program reports the metrics regarding the workload to the first computer program. | 12-17-2009 |
20090313626 | Estimating Recovery Times for Data Assets - Estimating a recovery time for a data asset is provided. A request is received to project a recovery time for a data asset that uses a repository. A determination is made as to whether there are one or more existing recovery times for other data assets and other repositories that have characteristics similar to the data asset and the repository of the request. The recovery time for the data asset is projected using the one or more existing recovery times in response to an existence of the one or more existing recovery times. | 12-17-2009 |
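The projection step described here, reusing existing recovery times from data assets and repositories with similar characteristics, reduces to a similarity lookup. The function below is a hypothetical sketch that averages the matching measurements:

```python
def project_recovery_time(asset, history, similar):
    """Project a recovery time for `asset` from the recovery times of
    previously measured assets whose characteristics are similar.

    asset:   characteristics of the asset in the request
    history: list of (other_asset, recovery_time) pairs
    similar: predicate deciding whether two assets' characteristics match

    Returns the mean of matching recovery times, or None when no
    similar asset exists (the 'no existing recovery times' case)."""
    matches = [t for other, t in history if similar(asset, other)]
    if not matches:
        return None
    return sum(matches) / len(matches)
```

A richer implementation might weight matches by degree of similarity instead of averaging them uniformly; the abstract leaves that choice open.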
20090313627 | TECHNIQUE FOR PERFORMING A SYSTEM SHUTDOWN - Technique for expediting a shutdown process in a computerized system, comprising a number of software modules MUC, a number of functional components and at least one user entity U. A user entity applies requests to a MUC, which serves as an access provider for accessing the functional components. The method performs accelerated shutting down of the software module MUC, by the following steps: initiating shut down of the MUC (by a user entity U); making the MUC software module opaque so as to stop managing the functional components; shutting down the software module MUC. | 12-17-2009 |
20090320021 | DIAGNOSIS OF APPLICATION PERFORMANCE PROBLEMS VIA ANALYSIS OF THREAD DEPENDENCIES - A “Performance Evaluator” provides various techniques for tracking system events to diagnose root causes of application performance anomalies. In general, traces of system events involved in inter-thread interactions are collected at application runtime. These traces are then used to construct inter-thread dependency patterns termed “control patterns.” Control patterns are then evaluated to determine root causes of performance anomalies. Where an application terminates abnormally or full traces cannot be collected for some reason, partial control patterns are constructed for that application. In various embodiments, “fingerprints” are then generated from full or partial control patterns and are matched to fingerprints corresponding to operations in other control patterns extracted from reference traces collected on the same or similar systems. Matched fingerprints or control patterns are then used to deduce the root cause of application performance anomalies associated with full or partial traces. | 12-24-2009 |
20090320022 | File System Object Node Management - Embodiments of the invention provide a method for assigning a home node to a file system object and using information associated with file system objects to improve locality of reference during thread execution. Doing so may improve application performance on a computer system configured using a non-uniform memory access (NUMA) architecture. Thus, embodiments of the invention allow a computer system to create a nodal affinity between a given file system object and a given processing node. | 12-24-2009 |
20090320023 | Process Migration Based on Service Availability in a Multi-Node Environment - A process on a highly distributed parallel computing system is disclosed. When a first compute node in a first pool is ready to hand off a task to a second pool for further processing, the first compute node may first determine whether a node is available in the second pool. If no node is available from the second pool, then the first compute node may begin performing a primary task assigned to the second pool of nodes, up to the point where a service available exclusively to the nodes of the second pool is required. In the interim, however, one of the nodes of the second pool may become available. Alternatively, an application program running on a compute node may be configured with an exception handling routine that catches exceptions and migrates the application to a compute node where a necessary service is available, as such exceptions occur. | 12-24-2009 |
20090320024 | CONTROL DEVICE AND CONTROL METHOD THEREOF - A control device for displaying a menu generated based on the media included in the devices within a home network. The control device includes a media information collection unit which collects information on media that can be processed by devices within a home network, a control unit which generates a task list for tasks that can be performed based on the collected information, and a display unit which displays the generated task list. A user thereby may control the devices within a home network using the menu generated based on the media included in the devices, making it possible to improve convenience to the user. | 12-24-2009 |
20090320025 | ENTERPRISE TASK MANAGER - An enterprise task management system ( | 12-24-2009 |
20090320026 | METHODS AND SYSTEM FOR EXECUTING A PROGRAM IN MULTIPLE EXECUTION ENVIRONMENTS - A system and methods are disclosed for executing a technical computing program in parallel in multiple execution environments. A program is invoked for execution in a first execution environment and from the invocation the program is executed in the first execution environment and one or more additional execution environments to provide for parallel execution of the program. New constructs in a technical computing programming language are disclosed for parallel programming of a technical computing program for execution in multiple execution environments. It is also further disclosed a system and method for changing the mode of operation of an execution environment from a sequential mode to a parallel mode of operation and vice-versa. | 12-24-2009 |
20090328039 | Deterministic Real Time Business Application Processing In A Service-Oriented Architecture - Methods, apparatus, and products for deterministic real time business application processing in a service-oriented architecture (‘SOA’), the SOA including SOA services, each SOA service carrying out a processing step of the business application where each SOA service is a real time process executable on a real time operating system of a generally programmable computer and deterministic real time business application processing according to embodiments of the present invention includes configuring the business application with real time processing information and executing the business application in the SOA in accordance with the real time processing information. | 12-31-2009 |
20090328040 | Deterministic Real Time Stateful Business Application Processing In An Otherwise Stateless Service-Oriented Architecture - Methods, apparatus, and products for deterministic real time stateful business application processing in an otherwise stateless service-oriented architecture (‘SOA’), the SOA including SOA services with each SOA service carrying out a processing step of the business application, each SOA service is a real time process executable on a real time operating system of a generally programmable computer and business application processing according to embodiments of the present invention includes: configuring each service of the SOA to record state information describing the state of the service upon completion of a processing step in the business application and provide the state information to a subsequent service, the state information including real time processing information; and executing the business application in the SOA in real time, including sending requests for data processing among the services, each such request comprising a specification of the state of the executing business application. | 12-31-2009 |
20090328041 | Shared User-Mode Locks - Technologies are described herein for implementing shared locks for controlling synchronized access to a shared resource. In one method, in a user mode of an operating system, a notification is received indicating that a first process begins execution. The first process is adapted to acquire the shared lock during execution of the first process. Upon receiving the notification, it is determined whether the first process terminates execution without releasing the shared lock. Upon determining that the first process terminates execution without releasing the shared lock, the shared lock is released for access by a second process. | 12-31-2009 |
20090328042 | DETECTION AND REPORTING OF VIRTUALIZATION MALWARE IN COMPUTER PROCESSOR ENVIRONMENTS - Methods and systems to detect virtualization of computer system resources, such as by malware, include methods and systems to evaluate information corresponding to a computer processor operating environment, outside of or secure from the operating environment, which may include one or more of a system management mode of operation and a management controller system. Information may include processor register values. Information may be obtained from within the operating environment, such as with a host application running within the operating environment. Information may be obtained outside of the operating environment, such as from a system state map. Information obtained from within the operating environment may be compared to corresponding information obtained outside of the operating environment. Direct memory address (DMA) translation information may be used to determine whether an operating environment is remapping DMA accesses. Page tables, interrupt tables, and segmentation tables may be used to reconstruct a view of linear memory corresponding to the operating environment, which may be scanned for malware or authorized code and data. | 12-31-2009 |
20100005466 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR ASYNCHRONOUS RESUMPTION OF A DATAFLOW - A method, system, and computer program product for providing asynchronous resumption of a dataflow are provided. The method includes building an executable directed graph from a dataflow that includes multiple interconnected nodes, where at least one of the interconnected nodes is an asynchronous node. The method further includes creating an event flow that includes the asynchronous node and interconnections subsequent to the asynchronous node. The method also includes invoking execution of the executable directed graph, and creating a state object with an identifier associated with the event flow in response to reaching the asynchronous node. The method additionally includes continuing execution of the executable directed graph while avoiding the asynchronous node and the interconnections subsequent to the asynchronous node, and resuming execution of the event flow as identified via the state object upon receiving a response for the asynchronous node. | 01-07-2010 |
20100005467 | THREAD SYNCHRONIZATION METHODS AND APPARATUS FOR MANAGED RUN-TIME ENVIRONMENTS - Thread synchronization methods and apparatus for managed run-time environments are disclosed. An example method to maintain state information for optimistically balanced synchronization of a lock of an object in a managed runtime environment disclosed herein comprises storing state information comprising a state of each pending optimistically balanced release operation corresponding to each pending optimistically balanced synchronization to be performed on the lock of the object, each pending optimistically balanced synchronization comprising respective paired acquisition and release operations between which an unknown number of unpaired locking operations are to occur, and modifying a first stored state of a first pending optimistically balanced release operation when a subsequent unpaired locking operation is performed on the lock, but not modifying any stored state of any pending optimistically balanced release, including the first stored state of a first pending optimistically balanced release operation, when a subsequent optimistically balanced synchronization is performed on the lock. | 01-07-2010 |
20100011360 | Lock Windows for Reducing Contention - Methods and arrangements to assign locks to threads are discussed. Embodiments include transformations, code, state machines or other logic to assign locks to threads. Embodiments may include setting a window of time at the end of a time slice of a thread. The embodiment may also involve prohibiting the thread from acquiring a lock during the window of time, based upon determining that the thread is within the window of time and determining that the thread does not hold any locks. Other embodiments include an apparatus to assign locks to threads and a computer program product to assign locks to threads. | 01-14-2010 |
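The admission rule in this entry is small enough to state directly: a thread inside the end-of-slice window that holds no locks must defer acquisition, while a thread already holding a lock is allowed to proceed. A hypothetical sketch of that predicate:

```python
def should_defer_acquisition(now, slice_end, window, locks_held):
    """Return True when a lock acquisition must be deferred: the
    thread is within `window` seconds of the end of its time slice
    and currently holds no locks. Deferring in that case avoids the
    thread being descheduled while holding a freshly acquired lock,
    which would leave the lock contended for a whole scheduling gap."""
    in_window = (slice_end - now) <= window
    return in_window and locks_held == 0
```

A scheduler-integrated version would fold this test into the lock's fast path; the check itself is just the two conditions the abstract names.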
20100011361 | Managing Task Requests - Systems and methods are disclosed herein for managing task requests. An end user device includes one of several possible implementations for managing task requests. Specifically, the end user device comprises a processing device and a memory device, which is configured to store a task request managing program. The processing device is configured to execute the task request managing program. The processing device is configured to analyze a string of characters of a natural language request from a user to extract a requested task and a requested object. The processing device is further configured to check whether the user is permitted to initiate the requested task on the requested object. In addition, the processing device is configured to perform the requested task on the requested object when it is determined that the user is permitted to initiate the requested task on the requested object. | 01-14-2010 |
20100011362 | METHODS FOR SINGLE-OWNER MULTI-CONSUMER WORK QUEUES FOR REPEATABLE TASKS - There are provided methods for single-owner multi-consumer work queues for repeatable tasks. A method includes permitting a single owner thread of a single owner, multi-consumer, work queue to access the work queue using atomic instructions limited to only a single access and using non-atomic operations. The method further includes restricting the single owner thread from accessing the work queue using atomic instructions involving more than one access. The method also includes synchronizing amongst other threads with respect to their respective accesses to the work queue. | 01-14-2010 |
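The ownership discipline this entry describes can be sketched with a double-ended queue. Names are hypothetical, and CPython's `deque` (whose individual operations happen to be atomic) stands in for the restricted single-access atomics of the abstract: the owner works one end with no inter-thread synchronization, while consumer threads synchronize only among themselves on the other end.

```python
import threading
from collections import deque

class SingleOwnerWorkQueue:
    """Single owner, multiple consumers: only the owner thread may
    call push()/pop(), so those paths need no multi-access atomic
    instructions; consumer threads steal from the opposite end and
    synchronize amongst themselves with a shared lock."""

    def __init__(self):
        self._tasks = deque()
        self._steal_lock = threading.Lock()   # consumers only

    # --- owner-thread operations (no locking, by the single-owner invariant) ---
    def push(self, task):
        self._tasks.append(task)

    def pop(self):
        try:
            return self._tasks.pop()          # LIFO end, owner-local
        except IndexError:
            return None

    # --- consumer-thread operation ---
    def steal(self):
        with self._steal_lock:                # consumers synchronize here
            try:
                return self._tasks.popleft()  # FIFO end, shared
            except IndexError:
                return None
```

Keeping the owner's end lock-free is the point of the design: the common push/pop path pays nothing for synchronization, and only the comparatively rare steals take the consumers' lock.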
20100017803 | WORKFLOW PROCESSING APPARATUS AND WORKFLOW PROCESSING METHOD - The present invention allows for storing new document data immediately after the details of processing associated with a box are changed. The present invention provides a workflow processing apparatus executing processing procedures in sequence for data existing in a storage location based on setting information, where the workflow processing apparatus includes a data registration unit changing the setting information, a data acquisition unit determining whether or not data exists in the first storage location when the setting information is changed, and a box operation unit that changes the original name of the first storage location to a new name and that generates the second storage location having the original name when it is determined that the data exists in the first storage location. | 01-21-2010 |
20100023944 | Suspend Profiles and Hinted Suspending - Methods, systems and computer program products for suspend profiles and hinted suspending. Exemplary embodiments include a suspend mode management method, including determining a task to perform in the computer system during a suspend period of the computer system, detecting a suspend event in the computer system, the suspend event initiating the suspend period and performing the task during the suspend period. | 01-28-2010 |
20100031260 | Object-Oriented Thread Abort Mechanism for Real Time C++ Software - A method and architecture are disclosed for gracefully handling aborted threads in object-oriented systems. In accordance with the illustrative embodiment, a platform adaptation software layer intercepts calls from an application to the operating system and checks for a request to abort a thread. When such a request is detected, the platform adaptation software layer throws an exception and allows the intercepted call to reach its destination (i.e., the operating system). When the exception is caught at the application layer, the appropriate object instances' destructors can be invoked, and resources associated with the thread can be released and cleaned up. | 02-04-2010 |
20100031261 | VIRTUAL SPACE PROVIDING SYSTEM, METHOD FOR CONTROLLING IMAGE FORMING APPARATUS, AND MEDIUM STORING PROGRAM THEREOF - It is determined whether a print command to a virtual image forming apparatus defined in the virtual space is a command to create a virtual output product by the virtual image forming apparatus, or a command to print an output product by the image forming apparatus that is linked to the virtual image forming apparatus. A virtual space providing system simulates printing of an output product by the image forming apparatus by creating virtual printed matter in the virtual space, when it is determined that the command is to create a virtual output product by the virtual image forming apparatus as a result of the determination. The virtual space providing system outputs the print command to the image forming apparatus, when it is determined that the command is to print an output product by the image forming apparatus that is linked to the virtual image forming apparatus. | 02-04-2010 |
20100037222 | Detecting the starting and ending of a task when thread pooling is employed - Starting and ending of a task is detected, where thread pooling is employed. Threads performing a wait operation on a given object are monitored, as are threads performing a notify/notify-all operation on the given object. A labeled directed graph is constructed. Each node of the graph corresponds to one of the threads. Each edge of the graph has a label and corresponds to performance of the wait or notify/notify-all operation. An identifier of the given object is a label of a number of the edges. A set of nodes is selected that each has an edge having the same label. The threads of these nodes are worker threads of a thread pool. The threads of the nodes that are connected to the set of nodes are master threads. An object having an identifier serving as the label of the edges to the set of nodes is a monitoring mechanism. | 02-11-2010 |
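The graph construction above can be sketched as follows. The event format `(thread, op, obj_id)` and the function shape are assumptions made for illustration; the abstract only specifies that monitored wait and notify/notify-all operations yield a directed graph whose edges are labeled with the object identifier, from which worker and master threads are read off.

```python
from collections import defaultdict

def classify_threads(events):
    """Sketch: build a labeled directed graph from monitored wait and
    notify/notify-all events, then identify pool workers and masters.
    'events' is a list of (thread, op, obj_id) with op in {'wait', 'notify'}."""
    waiters = defaultdict(set)    # obj_id -> threads that waited on it
    notifiers = defaultdict(set)  # obj_id -> threads that notified it
    for thread, op, obj in events:
        (waiters if op == "wait" else notifiers)[obj].add(thread)

    # Edge (notifier -> waiter) labeled with the object's identifier.
    edges = {(n, w, obj)
             for obj in waiters
             for n in notifiers.get(obj, ())
             for w in waiters[obj]}

    # Nodes whose incoming edges share one label are the pool's workers;
    # the nodes on the other end of those edges are the masters.
    by_label = defaultdict(set)
    for _, w, obj in edges:
        by_label[obj].add(w)
    workers = max(by_label.values(), key=len, default=set())
    masters = {n for n, w, _ in edges if w in workers}
    return workers, masters
```

The object whose identifier labels the edges into the worker set plays the role of the monitoring mechanism.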
20100037223 | METHOD FOR CONTROLLING STORAGE APPARATUS AND STORAGE APPARATUS - A method for controlling a storage apparatus includes rearranging an order of execution of commands supplied from an external apparatus and queued in the storage apparatus so as to optimize execution of commands, and adjusting a maximum retry time of a command on the basis of a passage time from receipt of the command, the maximum retry time being defined for each command. The adjusting includes at least one of a first adjustment and a second adjustment. The first adjustment reduces the maximum retry time of at least one of the commands queued in the storage apparatus on the basis of the passage time of said at least one of the commands at a first timing when the order of execution of commands is rearranged. The second adjustment reduces the maximum retry time of a selected command to be executed in the storage apparatus on the basis of the passage time of the selected command at a second timing before the selected command is executed in the storage apparatus. | 02-11-2010 |
20100037224 | APPLICATION PLATFORM - Activation and deactivation of multiple applications on a multi-function peripheral are more appropriately performed to enable users to comfortably use the multi-function peripheral at a workplace having various operating environments. At a time of a logout, control is performed to select and activate at least one of applications in a shutdown state, the selected application being likely to be used at the next login. Further, the number of applications being activated is limited to a predetermined number or less. | 02-11-2010 |
20100042995 | DATA PROCESSING DEVICE AND METHOD FOR MONITORING CORRECT OPERATION OF A DATA PROCESSING DEVICE - A method for monitoring the correct operation of a data processing device includes changing a subsystem from an authorized state to an unauthorized state, executing the partial operating sequence, and resetting any subsystem state from the unauthorized state to the authorized state. | 02-18-2010 |
20100042996 | UTILIZATION MANAGEMENT - In an illustrative embodiment, a computer implemented method for utilization management is provided. The computer implemented method initiates a utilization monitor to monitor a set of processes, records utilization data for an identified process of the set of processes to form recorded utilization data, and determines whether the recorded utilization data exceeds a utilization threshold. The computer implemented method, responsive to a determination that the recorded utilization data exceeds a utilization threshold, performs an action to manage utilization. | 02-18-2010 |
20100042997 | CONDITIONED SCALABLE NON-ZERO INDICATOR - Apparatus, methods, and computer-program products are disclosed for performing an Arrive operation on a concurrent hierarchical Scalable Non-Zero Indicator (SNZI) object wherein the concurrent hierarchical SNZI object is a conditioned-SNZI (CSNZI) object that includes a parent CSNZI node. The method invokes a parent Arrive operation on the parent CSNZI node and returns an arrive failure status if the CSNZI object is disabled. | 02-18-2010 |
20100050175 | IMAGE FORMING APPARATUS AND RESOURCE SAVING MODE CONTROL METHOD THEREOF - An image forming apparatus that saves resources supports a resource saving mode having at least one function, and includes a button to select the resource saving mode; a storage unit to store a preset function which is applied when the resource saving mode is selected; and a controller to execute the resource saving mode by applying the preset function to a corresponding job when the button is pressed, wherein the preset function is set by default or editable. Hence, resources may be saved while simultaneously increasing convenience to the user. | 02-25-2010 |
20100050176 | PROCESS AUTO-RESTART SYSTEMS AND METHODS - Systems and methods for auto-restarting abnormally terminated processes are disclosed. An auto-restart system can include a parent task control block, a child process, and a shared resource. The parent task control block can spawn the child process. The child process can operate on the shared resource. When the child process finds the shared resource locked, the child process can terminate abnormally. The parent task control block can recognize the abnormal termination of the child process, and can automatically rollback and restart the child process. Accordingly, the child process can be restarted to operate on the shared resource without human intervention. | 02-25-2010 |
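The parent-side loop described above can be sketched as follows. This is an illustrative in-process model, not the patented system: plain functions stand in for real spawned processes, the exception type and the dict-snapshot rollback are assumptions, and releasing the lock before retry is one hypothetical reason the restart eventually succeeds.

```python
class ResourceLockedError(Exception):
    pass

def child_process(shared):
    """Child operating on a shared resource; it terminates abnormally
    (raises) when it finds the resource locked."""
    if shared["locked"]:
        raise ResourceLockedError
    shared["value"] += 1
    return "done"

def parent_tcb(shared, max_restarts=3):
    """Sketch of the parent task control block: spawn the child,
    recognize abnormal termination, roll back, and restart the child
    automatically, with no human intervention."""
    for _ in range(max_restarts + 1):
        snapshot = dict(shared)                    # state for rollback
        try:
            return child_process(shared)
        except ResourceLockedError:
            shared.clear(); shared.update(snapshot)  # rollback
            shared["locked"] = False  # assumed: lock released before retry
    return "failed"
```

A real implementation would spawn an operating-system process and inspect its exit code rather than catching an exception, but the recognize/rollback/restart cycle is the same.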
20100058343 | WORKFLOW TRACKING SYSTEM, INTEGRATION MANAGEMENT APPARATUS, METHOD AND INFORMATION RECORDING MEDIUM HAVING RECORDED IT - To provide a workflow tracking apparatus that is scalable and can be operated efficiently, a framework of management using a hierarchical structure such as DNS is introduced. A simple decentralized structure fails to establish semantic consistency of the Wf definition and meta-information, resulting in an inability to manage them. Therefore, three measures are additionally prepared: (1) an element measure that manages, as a predetermined designated standard form, the descriptions of element workflows to be decentralized; (2) an element measure that transforms the descriptions of element workflows to be decentralized into the standard form; and (3) a workflow combining measure that combines the descriptions of element workflows to be decentralized, thereby reproducing the description of the entire workflow. These measures are used to combine the descriptions of the decentralized element workflows, generating consistent Wf definition and meta-information, replicas of which are then copied to a workflow monitor system that manages the domain. | 03-04-2010 |
20100070973 | GENERIC WAIT SERVICE: PAUSING A BPEL PROCESS - A generic wait service for facilitating the pausing of service-oriented applications. In one set of embodiments, the generic wait service receives, from a paused instance of an application, an initiation message comprising a set of key attributes and an exit criterion. The key attributes uniquely identify the paused instance, and the exit criterion identifies a condition that should be satisfied before the paused instance is allowed to proceed. The generic wait service then receives, from one or more event producers, notification messages comprising status information (e.g., statuses of business events) and information correlating the notification messages to particular instances. If a notification message is determined to be correlated to the paused instance, the generic wait service evaluates the exit criterion based on the status information included in the message. If the exit criterion is satisfied, the paused instance is notified of the status information and is allowed to proceed. | 03-18-2010 |
20100083252 | Controlling Access to Physical Indicators in a Logically Partitioned Computer System - A low-level logical partitioning function associates partitions, partitionable entities, and location codes. Partitionable entities are hardware components, not necessarily individually replaceable. The location code reflects the physical topology of the system packaging. It is preferably a string of concatenated elements, each element representing a device in a hierarchical level of devices which may contain other devices. A respective location code is likewise associated with each of multiple physical indicators. The location code of partitionable entities allocated to a partition is compared with the indicator's location code to determine whether a process executing within a partition can access the indicator. Preferably, each partition has a virtual indicator corresponding to a physical indicator, the state of the physical indicator being derived as a function of the states of multiple virtual indicators for multiple partitions. | 04-01-2010 |
20100083253 | TASK MANAGEMENT SYSTEM - A device may receive, over a network, a message that describes a task, create a new task object based on the message, determine whether the task includes performing a follow up task or a new task based on the message, discard the new task object when the task is neither a follow up task nor a new task, perform a follow up to verify a performance of another task when the task is a follow up task, and assign the task to one of multiple queues for processing when the task is a new task. | 04-01-2010 |
20100083254 | FLEXIBLE AND SCALABLE OPERATING SYSTEM ACHIEVING A FAST BOOT AND RELIABLE OPERATION - Systems and methods are provided for a flexible and scalable operating system achieving a fast boot. A computing system is described that includes a reserved static object memory configured to store predefined static threads, and a secure kernel configured to be executed in a fast boot mode. The secure kernel further may be configured to chain the static threads to a secure kernel thread queue stored in a secure kernel work memory, and to create temporary threads in the secure kernel work memory during the fast boot mode. The computing system may include a main kernel configured to be initialized by creating dynamic threads in a main kernel work memory during the fast boot mode. The main kernel may be configured to chain the static threads to a main kernel thread queue, and to assume control of the static threads from the secure kernel. | 04-01-2010 |
20100088700 | SUB-DISPATCHING APPLICATION SERVER - Multiple sub-dispatched application server threads are provided in a single local process, where the multiple sub-dispatched application server threads carry out their own task dispatching. The multiple sub-dispatched application server threads are linked in the single local process using a distributed programming model. Scope-aware access is managed by the multiple sub-dispatched application server threads to shared memory content. It is determined if an application request is eligible to execute at a local sub-dispatched application server thread. | 04-08-2010 |
20100088701 | COMPOSING AND EXECUTING SERVICE PROCESSES - A computer-implemented method for automatically and dynamically composing and executing workflow-based service processes may include receiving a request, the request including a user-selected service type, guided by one or more rules for questionnaire creation, dynamically generating a sequence of one or more electronic inquiries in accordance with the user-selected service type, receiving information based on the sequence of the one or more electronic inquiries, based on the information received, creating a goal for the request by constructing logical state representations of a current state constituting a pre-condition of the goal and of a target state constituting a post-condition of the goal and generating a service process by determining a sequence of services which together fulfill the goal, where the services are selected from a plurality of services such that pre-conditions and post-conditions associated with the selected services together match the pre-condition and the post-condition of the goal. | 04-08-2010 |
20100095298 | SYSTEM AND METHOD FOR ADDING CONTEXT TO THE CREATION AND REVISION OF ARTIFACTS - A system includes a process-related-data handling component operative to handle process-related data corresponding to an operation associated with an artifact, such as the creation or revision of the artifact. An application component is operatively coupled to the process-related-data handling component and is operative to interact with the artifact. A storage element is also operatively coupled to the process-related-data handling component and is operative to store the process-related data. The process-related data may be displayed, created, or otherwise manipulated through a data management tool, which may include a calendar interface, a task interface, and/or a media capture module. A method is also directed towards establishing process-related context concerning at least one artifact. | 04-15-2010 |
20100107164 | Method, System, and Apparatus for Process Management - Process management involves facilitating the application of a user action to an electronic document that changes a state of a thread. The thread includes data that collectively describes states and relationships of interrelated tasks of a process. Metadata of the electronic document is changed to reflect the changed state of the thread. The changed metadata is communicated via an electronic messaging operation of the process to update the changed state of the thread. | 04-29-2010 |
20100107165 | METHOD, SYSTEM, AND APPARATUS FOR PROCESS MANAGEMENT - Process management involves determining a thread from metadata embedded in an electronic document that is used in the performance of a process via an electronic messaging operation. The thread includes data that collectively describes states and relationships of interrelated tasks of the process. User role data is determined from the thread, and processing of the electronic document by a participant of the process is facilitated. Processing of the electronic document is governed by the user role data relative to a user role of the participant in the process. | 04-29-2010 |
20100115515 | NETWORK EXECUTION PATTERN - A plurality of nodes may be arranged within a hierarchy to perform actions, each node performing a task associated with an action. A dependency evaluator may determine, based on a request to perform an action, a first subset of the nodes configured to perform the action, wherein a first node of a higher level of the hierarchy is dependent upon a response from a second node of a lower level of the hierarchy to perform a task associated with the action. A request engine may provide the request to a lowest level of the hierarchy, wherein the second node of the lowest level may perform a task associated with the requested action and respond to the dependent first node. A response engine may receive the response from one of the nodes on a highest level of the hierarchy, including a performance of the tasks and the requested action. | 05-06-2010 |
20100115516 | METHOD AND SYSTEM FOR STORING AND REFERENCING PARTIAL COMPLEX RESOURCES USING OBJECT IDENTIFIERS IN A PRINTING SYSTEM - A print control unit coupled with a printer, the print control unit having a host to provide partial resource components to a complex resource generator, the partial resource components including printing instructions. The complex resource generator is to generate a shell representing a complex resource, generate a partial complex resource having the partial resource components, the shell to hold the partial complex resource, and store the partial complex resource to be referenced later. | 05-06-2010 |
20100115517 | DOCUMENT PROCESSING APPARATUS AND CONTROLLING METHOD THEREOF AND DOCUMENT MANAGEMENT SYSTEM AND DATA PROCESSING METHOD THEREFOR - A method for controlling a document processing apparatus which registers input document data in a document management server includes acquiring, from the document management server, information about input items necessary for registration of the document data into the document management server, determining whether each of the input items necessary for the registration of the document data is input, based on the acquired information, and performing control to complete the registration of the document data into the document management server when it is determined that the input items are input, and to temporarily register the document data into the document management server when it is determined that at least one of the input items is not input. | 05-06-2010 |
20100115518 | BEHAVIORAL MODEL BASED MULTI-THREADED ARCHITECTURE - Multiple parallel passive threads of instructions coordinate access to shared resources using “active” and “proactive” semaphores. The active semaphores send messages to execution and/or control circuitry to cause the state of a thread to change. A thread can be placed in an inactive state by a thread scheduler in response to an unresolved dependency, which can be indicated by a semaphore. A thread state variable corresponding to the dependency is used to indicate that the thread is in inactive mode. When the dependency is resolved a message is passed to control circuitry causing the dependency variable to be cleared. In response to the cleared dependency variable the thread is placed in an active state. Execution can proceed on the threads in the active state. A proactive semaphore operates in a similar manner except that the semaphore is configured by the thread dispatcher before or after the thread is dispatched to the execution circuitry for execution. | 05-06-2010 |
20100122251 | REALIZING JUMPS IN AN EXECUTING PROCESS INSTANCE - A method for realizing jumps in an executing process instance can be provided. The method can include suspending an executing process instance, determining a current wavefront for the process instance and computing both a positive wavefront difference for a jump target relative to the current wavefront and also a negative wavefront difference for the jump target relative to the current wavefront. The method also can include removing activities from consideration in the process instance and also adding activities for consideration in the process instance both according to the computed positive wavefront difference and the negative wavefront difference, creating missing links for the added activities, and resuming executing of the process instance at the jump target. | 05-13-2010 |
20100122252 | Scalable system and method thereof - A scalable system and method thereof may improve scalability and Quality of Service. The scalable system may include a first scalability adapter (scaldapter) configured to manage resource consumption of at least one component based on measured data received from the at least one component, where the component is configured to run an application, and a scalability manager (scalator) configured to modify strategies of the first scaldapter for managing the resource consumption based on measured data received from a plurality of processes, each of the plurality of processes including the first scaldapter. | 05-13-2010 |
20100122253 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PROGRAMMING A CONCURRENT SOFTWARE APPLICATION - A system, method and computer program product for programming a concurrent software application includes a plurality of shared resources. An ordered list of the shared resources is used by each thread of the application for maintaining a strict total ordering of the shared resources, where each of the shared resources includes a unique identifier. A plurality of mutexes enables a thread to acquire exclusive access to the shared resources. A mutex lock list is associated with a shared resource for maintaining a list of mutex locks acquired during access of the shared resource by the thread. The list is comprised of the mutex associated with the shared resource and all mutexes of shared resources preceding the shared resource in the ordered list, wherein each of the mutex locks has been acquired in an order corresponding to the strict total ordering of the shared resources. | 05-13-2010 |
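The ordering discipline above can be sketched as follows. This is a minimal illustration under assumed names (`OrderedResource`, `access`): shared resources carry unique identifiers, and a thread accessing a resource takes the mutexes of that resource and of every resource preceding it in the strict total order, recording each lock as it is acquired.

```python
import threading

class OrderedResource:
    """Sketch: a shared resource with a unique identifier and its mutex."""
    def __init__(self, uid):
        self.uid = uid
        self.mutex = threading.Lock()

def access(resources, acquired_log):
    """Acquire every mutex in ascending identifier order (the strict
    total ordering), logging the mutex locks taken, then release in
    reverse order."""
    ordered = sorted(resources, key=lambda r: r.uid)
    for r in ordered:
        r.mutex.acquire()
        acquired_log.append(r.uid)   # the resource's mutex lock list
    for r in reversed(ordered):
        r.mutex.release()
```

Because every thread acquires mutexes in the same global order, no cycle of waiting threads can form, which rules out deadlock.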
20100125846 | AD HOC TASK CREATION AND PROCESS MODIFICATION - The invention provides a method, system, and program product for modifying a computer-executed process. In one embodiment, the invention includes creating an ad hoc task for inclusion in an existing process, accessing the existing process, and adding the ad hoc task to the existing process. | 05-20-2010 |
20100131951 | DYNAMIC PROCESSING OF EMBEDDED COMPILED PROGRAMMING LANGUAGE CODE - Development using the JavaScript programming language can be limited since JavaScript code is interpreted. Compiling code at a client may interfere with the dynamicity and portability of web pages. Dynamicity and portability of web pages can be preserved while providing the features of a compiled programming language. A compiled programming language code can be embedded within an interpreted programming language code. The embedded compiled programming language code can be extracted and compiled with resources of a server to deliver the robustness and flexibility of the compiled programming language without burdening a client with compiling. | 05-27-2010 |
20100131952 | Assistance In Performing Action Responsive To Detected Event - Assistance is provided in performing an action for a detected event on a monitoring target resource whose connection is not always-on, so that an appropriate action can be performed as soon as possible in response to occurrence of a failure. The assistance device stores, in association with an occurrence pattern of an event, information related to plural tasks for determining whether a predetermined condition is fulfilled, and an action to be performed by a corresponding device. Then, the assistance device calculates an index value for determining the level of probability of the occurrence pattern of the event, determines whether the calculated index value is larger than a predetermined value, and sends, to a device to perform the action, the occurrence pattern of the event whose index value is determined to be larger than the predetermined value, and information related to the plural tasks and the action corresponding to the occurrence pattern. | 05-27-2010 |
20100138833 | RESOURCE COVERAGE AND ANALYSIS - User interfaces called by a target application can be quickly and efficiently identified and stored for future resource coverage analysis. User interfaces of the target application that are accessed by users executing the target application on their computing device can be automatically tracked during execution of the target application. Information gathered on user interfaces of a target application and accessed user interfaces of the target application can be employed to generate one or more reports. Generated reports on user interface usage can be used to, e.g., identify the application resources to localize, prioritize the application resources to localize, discern application resource trends for, e.g., maintenance and upgrade activities, detect unused application resources, and select appropriate application resources for test scenarios. | 06-03-2010 |
20100138834 | APPLICATION SWITCHING IN A SINGLE THREADED ARCHITECTURE FOR DEVICES - A method and system are provided for launching multiple applications simultaneously on a device under the control of an application switching framework so that the operating system runs only one task for all the applications. A single task is run under the control of an operating system. An application manager is run within the task. One or more applications are launched within the task under the control of the application manager. One of the applications is made the current application by switching, under user control, among the launched applications. A list of application descriptors is maintained for all the launched applications, and when switching, the application descriptor of one of the applications is used for displaying the application to a user on a screen. Each application descriptor contains forms of the launched applications. Each of the application descriptors contains a tree of forms with one root or parent form. A form represents an image to be displayed to the user. The image consists of text, pictures, bitmaps, or menus. | 06-03-2010 |
20100138835 | Workflow information generation unit, method of generating workflow information, image processing apparatus, control program, and storage medium - A workflow information generation unit is used for constructing a workflow configured with a plurality of processes. Information of the processes is storable in a workflow information storage. The workflow information generation unit includes a process-designation information obtaining unit, an advance notice output unit, an implementation-determination information obtaining unit, a process information output unit, and a result information output unit. The process-designation information obtaining unit obtains information designating a process to be included in the workflow. The advance notice output unit outputs advance notice information to notify that information of the designated process is to be stored in the workflow information storage. The implementation-determination information obtaining unit obtains implementation-determination information indicating whether the designated process is allowed to be included in the workflow. The process information output unit stores information of the designated process to the workflow information storage. The result information output unit outputs result information for the designated process. | 06-03-2010 |
20100146508 | NETWORK DRIVEN ACTUATOR MAPPING AGENT AND BUS AND METHOD OF USE - A system and method for a network driven actuator mapping agent and bus. The system includes at least one sensor configured to sense an event in a first environment. The system also includes an actuator configured to perform an action in a second environment. Moreover, the system further includes a mapping manager configured to map the sensed event to the actuator to provide a custom interaction throughout a plurality of second environments. | 06-10-2010 |
20100153950 | POLICY MANAGEMENT TO INITIATE AN AUTOMATED ACTION ON A DESKTOP SOURCE - A method, apparatus, and system of policy management to initiate an automated action on a desktop source are disclosed. In one embodiment, a machine-readable medium embodying a set of instructions is disclosed. An event is detected. The event associated with a desktop source is automatically determined. A category of the event is determined. A policy is associated to the event based on the category. The policy is applied to the desktop source. Desktop sources may be reshuffled based on the policy. The internal event may be determined as a load balancing issue in which the desktop source may reside in a pool having maximum utilization. The desktop source may be transferred to another pool having less utilization based on the policy. | 06-17-2010 |
20100153951 | OPERATING SYSTEM SHUTDOWN REVERSAL AND REMOTE WEB MONITORING - A method is disclosed for reversing operating system shutdown, including: detecting, by a monitoring program, an attempt by a user to log off, shut down, or restart a computer containing an operating system capable of running a plurality of program windows; determining if any program window is still open in the operating system; automatically cancelling, by the monitoring program, the logoff, shutdown, or restart request if it is determined that a program window is still open; and attempting to close any open program window by the monitoring program. | 06-17-2010 |
20100162243 | Context based virtualization - Methods, systems, apparatuses and program products are disclosed for managing device virtualization in hypervisor and hypervisor-related environment which include both pass-thru I/O and emulated I/O. | 06-24-2010 |
20100162244 | COMPUTER WORK CHAIN AND A METHOD FOR PERFORMING A WORK CHAIN IN A COMPUTER - A computerized work chain and methods are provided. The work chain comprises at least one processing device configured to perform the computerized work chain, M work queues implemented in the one or more processing devices, and a work queue handler implemented in the one or more processing devices, where M is a positive integer that is greater than or equal to one. Each work queue comprises a queue monitor, an exception monitor, a pool of worker threads, a logger, and a data queue. The work queue handler forms the work chain by linking the M work queues together such that respective outputs of a first one of the work queues through an M | 06-24-2010 |
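The linking of M work queues into a chain can be sketched as follows. This is an illustrative wiring under assumed names (`make_work_chain`, a `None` shutdown sentinel), not the patented structure, and each queue is served here by a single worker thread rather than a pool.

```python
import queue
import threading

def make_work_chain(stages, num_queues):
    """Sketch: link M work queues so that each queue's output feeds the
    next queue's input. 'stages' maps queue index to that stage's
    processing function."""
    queues = [queue.Queue() for _ in range(num_queues + 1)]
    threads = []
    for i in range(num_queues):
        def worker(i=i):          # default arg binds this stage's index
            while True:
                item = queues[i].get()
                if item is None:  # shutdown sentinel: forward and exit
                    queues[i + 1].put(None)
                    return
                queues[i + 1].put(stages[i](item))
        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)
    return queues[0], queues[-1], threads
```

Work items put on the head queue flow through every stage and emerge on the tail queue; per-queue monitors, exception handling, and logging from the abstract are omitted for brevity.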
20100186013 | Controlling Access to a Shared Resource in a Computer System - A computer system and method are provided that control access to shared resources using a plurality of locks (e.g. mutex locks or read-write locks). A locking unit grants the locks to a plurality of threads of execution of an application in response to lock access requests. A guardian unit monitors the lock access requests and records the locks that are granted to each of the threads. The guardian unit selectively blocks the lock access requests when, according to a predetermined locking protocol, a requested lock must not be acquired after any of the locks which have already been granted to the requesting thread. | 07-22-2010 |
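The guardian described above can be illustrated with the classic lock-ordering protocol: assign each lock a rank and block any request to acquire a lock whose rank is not strictly higher than every lock the requesting thread already holds. This is a minimal sketch under that assumed protocol; the class and method names are illustrative, not the patent's.

```python
import threading

class LockGuardian:
    """Grant or block lock requests per a fixed lock-ordering protocol."""

    def __init__(self, rank):
        self.rank = rank                    # lock name -> protocol rank
        self.held = {}                      # thread id -> set of held lock names
        self.locks = {name: threading.Lock() for name in rank}

    def acquire(self, name):
        tid = threading.get_ident()
        held = self.held.setdefault(tid, set())
        # Block the request if the protocol forbids acquiring `name`
        # after any lock this thread has already been granted.
        if any(self.rank[h] >= self.rank[name] for h in held):
            return False
        self.locks[name].acquire()
        held.add(name)
        return True

    def release(self, name):
        self.held[threading.get_ident()].discard(name)
        self.locks[name].release()
```

Recording which locks each thread holds is what lets the guardian evaluate the protocol per request, exactly as the abstract's monitoring step requires.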
20100199278 | JOB EXECUTION APPARATUS, JOB EXECUTION METHOD, AND COMPUTER-READABLE STORAGE MEDIUM FOR COMPUTER PROGRAM - An image forming apparatus is provided with the following functional portions: a printable/unprintable determination portion that determines whether or not a print job can be executed based on any of a plurality of conditions specified by a user; and a first printing process portion that executes, if it has been determined that the print job can be executed based on any of the plurality of conditions, the print job based on any of executable conditions among the plurality of conditions, the executable conditions being conditions based on which the print job can be executed. | 08-05-2010 |
20100199279 | USER CONNECTIVITY PROCESS MANAGEMENT SYSTEM - A system is disclosed according to the present invention that manages the process of providing a client access to a secured service. In the exemplary embodiment, the secured service is a computer system that allows the client to trade financial instruments. Management of this process includes managing execution of tasks that can be automatically executed and delegating tasks that require manual execution; communicating with entities outside of the process management system; and handling “demands,” or unexpected problems that arise in the middle of the client connectivity process. | 08-05-2010 |
20100205603 | SCHEDULING AND DISPATCHING TASKS IN AN EMULATED OPERATING SYSTEM - Approaches for dispatching routines in an emulated operating system. A method includes executing a first operating system (OS) on an instruction processor of a data processing system. The first OS includes instructions of a first instruction set that are native to the instruction processor. A second OS is emulated on the first OS and includes instructions of a second instruction set that are not native to the instruction processor. A first plurality of tasks is created by the emulated second OS. The first OS individually schedules the first plurality of tasks and dispatches the first plurality of emulated tasks for emulation according to the scheduling. | 08-12-2010 |
20100211948 | METHOD AND SYSTEM FOR ALLOCATING A RESOURCE TO AN EXECUTION ENTITY - A method for allocating a resource to a requesting execution entity may include deriving at least one independently accessible resource head from the global resource, assigning the at least one resource head to the execution entity, and allocating resources from the assigned resource head to the execution entity. | 08-19-2010 |
20100211950 | Automated Termination of Selected Software Applications in Response to System Events - The illustrative embodiments disclose a computer implemented method, apparatus, and computer program product for managing a set of applications. In one embodiment, the process registers a system management event in an application configuration database. Responsive to detecting the registered system management event during execution of one application of the set of applications, the process identifies applications of the set of applications associated with the registered system management event that are executing. The process then terminates the applications of the set of applications associated with the registered system management event that are executing. Responsive to terminating the applications of the set of applications associated with the registered system management event that are executing, the process then executes a handler that processes the registered system management event. | 08-19-2010 |
20100211951 | IMAGE PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - An image processing apparatus that is capable of more reliably synchronizing an execution state of a job flow set in a plurality of machines without using a management server. In the digital multi-function peripheral, a job flow list management section receives a job flow setting file in which a job flow to be executed is described, and a job execution section executes the job flow based on the received job flow setting file. A communication section notifies other digital multi-function peripherals of execution of the job flow when the job flow is executed, and notifies the other digital multi-function peripherals of termination of the job flow when the execution of the job flow is terminated. | 08-19-2010 |
20100218185 | Implementation of a User-Controlled Transactional Resource - In one embodiment, a mechanism for implementation of a user as a transactional resource in a telecommunications platform is disclosed. In one embodiment, a method includes initiating a transaction as part of a transactional application in a transaction processing architecture, performing one or more transaction operations as part of the transaction on one or more transactional resources of the transaction processing architecture, contacting a user of the transactional application as one of the transaction operations performed on a user-controlled transactional resource, and storing a result of contacting the user in at least one of the user-controlled transactional resource or a transaction manager overseeing the transaction. | 08-26-2010 |
20100218186 | Data Centers Task Mapping - A data center management system may include a processor coupled to a network. The network may be further coupled to a primary data center and a secondary data center located at a physical location remote from the primary data center. The processor may be adapted to execute computer implemented instructions to determine a first transition point for the primary data center with respect to a secondary data center on the basis of one or more financial indicators, transfer one or more data center tasks from the primary data center to the secondary data center at substantially the first transition point, and execute the one or more transferred data center tasks at the secondary data center. | 08-26-2010 |
20100218187 | TECHNIQUES FOR CONTROLLING DESKTOP STATE - Techniques for controlling desktop state are provided. Processing events are associated with desktop states and are associated with resource actions. When a desktop encounters the processing events and a known state is established, automated actions are forced on the resources to customize the known state. | 08-26-2010 |
20100218188 | POLICY DRIVEN AUTONOMIC PERFORMANCE DATA COLLECTION - An autonomic method, apparatus, and program product are provided for performance data collection. A start time and a stop time are monitored for an application. The start time is compared with the stop time to determine whether or not the application is meeting a performance target of the application. If the application is not meeting the performance target for the application, performance data collection is autonomically started for the application. | 08-26-2010 |
20100218189 | Method for managing java applications - The present invention relates to a method for managing java applications executable in a user device. The present invention provides an expandability for and a continuity between java applications by changing states of the java applications in execution and sharing information between the java applications. | 08-26-2010 |
20100223616 | REMOVING OPERATING SYSTEM JITTER-INDUCED SLOWDOWN IN VIRTUALIZED ENVIRONMENTS - Techniques for eradicating operating system jitter-induced slowdown are provided. The techniques include allocating one or more computing resources to one or more logical partitions of one or more parallel programs in proportion of one or more cycles consumed by one or more sources of operating system jitter in each compute phase in each of the one or more logical partitions. | 09-02-2010 |
20100223617 | METHOD AND COMPUTER SYSTEM FOR DESIGNING AND/OR PROVIDING COMPUTER-AIDED TASKS FOR MEDICAL TASK FLOWS - A method and a computer system are disclosed for designing and/or providing computer-aided tasks for medical task flows. In at least one embodiment, the method includes providing one or more tasks of at least one task flow, which can exchange data with one or a number of other tasks, in so far as they comply with at least one requirement for exchanging data; providing task flow management, which manages requirements in respect of a task and grants a task access for a task flow according to at least one of the requirements; providing at least one task container, which is made available as host for a task, in so far as the task complies with at least one requirement for access to the host; and providing at least one domain platform, which is used to convert the functionality and logic of at least one task, in so far as the task complies with at least one requirement in respect of the conversion. | 09-02-2010 |
20100229172 | THREAD LIVELOCK UNIT - Method, apparatus and system embodiments to assign priority to a thread when the thread is otherwise unable to proceed with instruction retirement. For at least one embodiment, the thread is one of a plurality of active threads in a multiprocessor system that includes memory livelock breaker logic and/or starvation avoidance logic. Other embodiments are also described and claimed. | 09-09-2010 |
20100235838 | METHOD, COMPUTER PROGRAM PRODUCT, AND APPARATUS FOR ENABLING TASK AGGREGATION IN AN ENTERPRISE ENVIRONMENT - A method for enabling access task aggregation in an enterprise environment may include receiving indications of a plurality of task related events including at least a first task related event received from a first enterprise application and a second task related event received from a second enterprise application that is different from the first enterprise application, aggregating task related events associated with a particular individual into aggregated information, providing for a display of the aggregated information including indications regarding the task related events associated with the particular individual at a client device associated with the particular individual, enabling receipt of data defining an action taken via the client device in which the action taken defines a response of the particular individual to the aggregated information, and providing information related to the action taken to a corresponding one of the first and second enterprise applications based on to which one of the task related events the action taken corresponds. | 09-16-2010 |
20100235839 | APPARATUS AND METHOD FOR AUTOMATION OF A BUSINESS PROCESS - An apparatus for automation of a business process, the business process comprising a plurality of tasks. The apparatus comprises a diagram editor for creating and editing a business process diagram, the business process diagram including the plurality of tasks. The apparatus also comprises an implementation editor for creating and editing an implementation of at least one of the plurality of tasks in the business process diagram, the implementation comprising a number of activities. The business process diagram and the implementation together form an executable business process definition. | 09-16-2010 |
20100251239 | Component Lock Tracing - Methods, systems, and products for lock tracing at a component level. The method includes associating one or more locks with a component of the operating system; initiating lock tracing for the component; and instrumenting the component-associated locks with lock tracing program instructions in response to initiating lock tracing. The locks are selected from a group of locks configured for use by an operating system and individually comprise locking code. The component lock tracing may be static or dynamic. | 09-30-2010 |
20100251240 | ADAPTABLE MANAGEMENT IN SYNC ENGINES - Synchronization of two or more items can be optimized through the use of parallel execution of synchronization tasks and adaptable processing that monitors and adjusts for system loading. Two or more synchronization tasks required to be performed for an item can, if not inherently serial in nature, be performed in parallel, optimizing synchronization of the item. Even if multiple synchronization tasks required for one item must be serially executed, e.g., download the item prior to translating the item, these synchronization tasks can be executed in parallel for different items, optimizing a download request involving two or more items. Moreover, multiple threads for one or more synchronization tasks can be concurrently executed when supportable by the current operating system resources. Rules can be established to ensure synchronization activity is not degraded by the overextension of system resources. | 09-30-2010 |
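The pattern in the sync-engine abstract — serial steps per item, parallel execution across items, bounded by a worker limit — can be sketched with a standard thread pool. The `download` and `translate` stand-ins are hypothetical placeholders for the real synchronization tasks.

```python
from concurrent.futures import ThreadPoolExecutor

def download(item):
    return f"bytes:{item}"          # stand-in for a network fetch

def translate(blob):
    return blob.upper()             # stand-in for a format conversion

def sync_items(items, max_workers=4):
    """Run the download -> translate pipeline for each item.

    The two steps are inherently serial for any one item, but
    independent items run in parallel across a bounded pool, which is
    one way to honor the abstract's rule against overextending
    system resources.
    """
    def pipeline(item):
        return translate(download(item))

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(pipeline, items))
```

Lowering `max_workers` is the crude form of the adaptable processing the abstract describes; a fuller version would adjust it from live load measurements.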
20100251241 | MANAGING JOB EXECUTION - This disclosure describes monitoring the execution of jobs in a work plan. In an embodiment, a system maintains a risk level associated with the critical job to represent whether the execution of a job preceding the critical job has a problem, and it maintains the list associated with the critical job so as to quickly identify the preceding job which may cause a delay to the critical job execution. | 09-30-2010 |
20100251242 | Control Service for Relational Data Management - Aspects of a data environment, such as the creation, provisioning, and management of data stores and instances, are managed using a separate control environment. A user can call into an externally-facing interface of the control environment, the call being analyzed to determine actions to be performed in the data environment. A monitoring component of the control plane also can periodically communicate with the data environment to determine any necessary actions to be performed, such as to recover from faults or events in the data environment. A workflow can be instantiated that includes tasks necessary to perform the action. For each task, state information can be passed to a component in the data environment operable to perform the task, until all tasks for an action are completed. Data in the data environment can be accessed directly using an externally-facing interface of the data environment, without accessing the control plane. | 09-30-2010 |
20100251243 | SYSTEM AND METHOD OF MANAGING THE EXECUTION OF APPLICATIONS AT A PORTABLE COMPUTING DEVICE AND A PORTABLE COMPUTING DEVICE DOCKING STATION - A method of managing applications within a portable computing device (PCD) and a PCD docking station is disclosed. The method may include determining whether the PCD is docked with the PCD docking station when an application is selected and determining whether a first application version is available when the PCD is not docked. Further, the method may include executing a second application version when the first application version is unavailable and executing the first application version when the first application version is available. | 09-30-2010 |
20100251244 | STATUS NOTIFICATION SYSTEM, STATUS NOTIFICATION DEVICE, STATUS MONITORING DEVICE, STATUS DETECTOR, METHOD FOR STATUS NOTIFICATION, AND STORAGE MEDIUM INCLUDING STATUS NOTIFICATION PROGRAM - A status notification system and method including acquiring status information of a monitor target, and performing a process in response to the status information representing a status of the monitor target. A process is performed based on first status information, information of a process execution period from a reception of the first status information to a completion of the process is retrieved, and notification pertaining to second status information is controlled based on the retrieved process execution period. | 09-30-2010 |
20100251245 | PROCESSOR TASK AND DATA MANAGEMENT - Task and data management systems, methods, and apparatus are disclosed. A processor event that requires more memory space than is available in a local storage of a co-processor is divided into two or more segments. Each segment has a segment size that is less than or the same as an amount of memory space available in the local storage. The segments are processed with one or more co-processors to produce two or more corresponding outputs. The two or more outputs are associated into one or more groups. Each group is less than or equal to a target data size associated with a subsequent process. | 09-30-2010 |
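The segmentation step above is simple chunking: split the event's data so that no segment exceeds the co-processor's local store. A sketch, with the function name as an illustrative assumption; grouping outputs under a target size would reuse the same loop.

```python
def segment_event(data, local_storage_size):
    """Split an event's data into segments that fit local storage.

    Each segment is no larger than the space available in the
    co-processor's local store, per the abstract.
    """
    return [data[i:i + local_storage_size]
            for i in range(0, len(data), local_storage_size)]
```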
20100262965 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - It is determined whether a workflow includes a process to be executed by an information processing apparatus. Upon determining that the workflow includes a process to be executed by the information processing apparatus, it is determined whether the workflow includes a process to be executed by an external apparatus in accordance with an instruction of the information processing apparatus. Upon determining that such process is not included, display is controlled to display a parameter for only the process to be executed by the information processing apparatus. Upon determining that such process is included, the function information of the external apparatus is acquired. After the function information has been acquired, display is controlled to display the parameters while reflecting the function information on the parameters of the processes of the workflow. | 10-14-2010 |
20100269110 | EXECUTING TASKS THROUGH MULTIPLE PROCESSORS CONSISTENTLY WITH DYNAMIC ASSIGNMENTS - A developer can declare one or more tasks as being replicable. A library manages all tasks that are accessed by an application, including replicable tasks, and further establishes a task manager during requested task execution. During execution, the library generates a plurality of worker threads, and each of the worker threads is assigned to be processed on one of a plurality of different central processing units. When one or more worker threads have finished processing assigned tasks, and other threads are still busy processing other tasks, the one or more idle worker threads can copy over and process replicable tasks assigned to the other, busier worker thread(s) to help with processing. The system can also synchronize processing of the replicable task by the plurality of different worker threads and different processors to ensure no processing discrepancies. | 10-21-2010 |
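The distinctive step here is that idle workers copy replicable tasks rather than moving them, so the busy owner keeps its own assignment. A minimal bookkeeping sketch, with all names illustrative and the actual thread scheduling and synchronization omitted:

```python
from collections import deque

class Worker:
    def __init__(self, name):
        self.name = name
        self.queue = deque()        # tasks assigned to this worker

def steal_replicable(idle, busy):
    """Copy (not move) replicable tasks from a busy worker's queue.

    Only tasks the developer declared replicable are copied; the busy
    worker's queue is left untouched, mirroring the abstract.
    Returns the number of tasks copied.
    """
    stolen = [t for t in busy.queue if t.get("replicable")]
    idle.queue.extend(dict(t) for t in stolen)   # independent copies
    return len(stolen)
```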
20100269111 | TASK MANAGEMENT - Techniques for controlling the execution of tasks in a data center are generally described. In some examples, a management system may include a processor and a memory coupled to the processor. The processor, in some examples, may be adapted to execute computer implemented instructions to retrieve one or more data center tasks, determine execution parameters for the one or more data center tasks based at least in part on one or more financial indicators, and communicate the execution parameters to the data center to execute the one or more data center tasks in accordance with the execution parameters. | 10-21-2010 |
20100269112 | SYSTEM AND METHOD FOR THE DYNAMIC DEPLOYMENT OF DISTRIBUTED TREATMENTS - System enabling the dynamic deployment and/or the creation of tasks within a network comprising at least one master node ( | 10-21-2010 |
20100269113 | METHOD FOR CONTROLLING AT LEAST ONE APPLICATIONS PROCESS AND CORRESPONDING COMPUTER PROGRAM PRODUCT - A method and apparatus are provided for controlling at least one application process comprising a plurality of application services, which are executed in an application environment in order to provide a service. One such method includes steps enabling a globally asynchronous implementation of the aforementioned application services without generating a timeout in relation to a client that requested the implementation of said application process. | 10-21-2010 |
20100275206 | STANDALONE SOFTWARE PERFORMANCE OPTIMIZER SYSTEM FOR HYBRID SYSTEMS - Standalone software performance optimizer systems for hybrid systems include a hybrid system having a plurality of processors, memory operably connected to the processors, an operating system including a dispatcher loaded into the memory, a multithreaded application read into the memory, and a static performance analysis program loaded into the memory; wherein the static performance analysis program instructs at least one processor to perform static performance analysis on each of the threads, the static performance analysis program instructs at least one processor to assign each thread to a CPU class based on the static performance analysis, and the static performance analysis program instructs at least one processor to store each thread's CPU class. An embodiment of the invention may also include the dispatcher optimally mapping threads to processors using thread CPU classes and remapping threads to processors when a runtime performance analysis classifies a thread differently from the static performance analysis. | 10-28-2010 |
20100281480 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR DECOMPOSING A SAMPLING TASK INTO A PLURALITY OF JOBS - A system, method, and computer program product are provided for decomposing a sampling task into a plurality of jobs. In operation, a sampling task is identified. Additionally, the sampling task is decomposed into a plurality of jobs. Further, each of the plurality of jobs are processed in parallel. Still yet, each of the plurality of jobs are allowed to terminate independently of the other plurality of jobs. | 11-04-2010 |
20100281481 | APPARATUS AND METHOD FOR PROVIDING A USER INTERFACE WITHIN A COMPUTING DEVICE - A user interface for simultaneously representing tasks and notifications in a computing device. The user interface presents the tasks as reduced size representations of the output of the corresponding tasks which are continually updated. The user interface allows a user to bring a selected task to the foreground or to close the task, both by interacting with the representations of the tasks. The user interface further associates notifications with corresponding tasks by superimposing an icon of the notification on the representation of the corresponding task. The user interface orders and arranges the task representations and icons of the notifications according to certain layout rules. | 11-04-2010 |
20100287550 | Runtime Dependence-Aware Scheduling Using Assist Thread - A runtime dependence-aware scheduling of dependent iterations mechanism is provided. Computation is performed for one or more iterations of computer executable code by a main thread. Dependence information is determined for a plurality of memory accesses within the computer executable code using modified executable code using a set of dependence threads. Using the dependence information, a determination is made as to whether a subset of a set of uncompleted iterations in the plurality of iterations is capable of being executed ahead-of-time by the one or more available threads in the data processing system. If the subset of the set of uncompleted iterations in the plurality of iterations is capable of being executed ahead-of-time, the main thread is signaled to skip the subset of the set of uncompleted iterations and the set of assist threads is signaled to execute the subset of the set of uncompleted iterations. | 11-11-2010 |
20100287551 | Establishment of Task Automation Guidelines - A system and method of linking task automation guidelines to the tasks and activities of a process provide for receiving a definition of a process from a user. A task automation input may also be received from the user, where the task automation input corresponds to a task included in the process. Task automation guidelines can be added to the definition of the process based on the task automation input. | 11-11-2010 |
20100287552 | METHOD FOR PROVIDING INTEGRATED APPLICATION MANAGEMENT - The present invention relates to a method for providing integrated application management, and more particularly, to a method which can provide a convenient usage environment by managing various types of content and application programs in an integrated manner. To achieve this, the method for providing integrated application management using an integrated application service module and a plurality of execution/management engines includes: managing, in an integrated manner, a plurality of applications driven in different application deployments, and providing a single user interface irrespective of the application deployments; and executing at least one of the applications. The present invention enables various types of applications driven in different application deployments to be executed and managed in an integrated manner through a single common interface, irrespective of each application deployment. | 11-11-2010 |
20100293546 | METHOD AND SYSTEM FOR WEB PAGE BREADCRUMB - A breadcrumb method, system and computer program product for a website. In response to a request for visiting the website, a breadcrumb root node is generated in a tree structure. In response to receiving a request for visiting a first web task associated with the website, a first task node is generated in the tree structure at the breadcrumb root node. In response to sequentially receiving requests for multiple subtasks of the first web task, multiple subtask nodes of the first task node are sequentially established in the tree structure. The subtask nodes of the multiple subtask nodes of the first task node are sequentially connected to the first task node according to a sequential order of the sequentially received requests for the multiple subtasks of the first web task. The multiple subtask nodes of the first task node are processed based on a policy of the first web task. | 11-18-2010 |
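The breadcrumb structure described above is an ordinary ordered tree: one root per site visit, a child per web task, and grandchildren appended in request order. A minimal sketch, with the class name and labels as illustrative assumptions:

```python
class BreadcrumbNode:
    """Node in a breadcrumb tree for a website visit.

    A root is made for the site, a task node is attached to the root
    for each web task, and subtask nodes are appended to their task
    node in the order their requests arrive.
    """
    def __init__(self, label):
        self.label = label
        self.children = []          # ordered: request arrival order

    def add(self, label):
        node = BreadcrumbNode(label)
        self.children.append(node)
        return node

root = BreadcrumbNode("site")
task = root.add("checkout")          # first web task
for step in ["cart", "address", "payment"]:
    task.add(step)                   # subtasks in request order
```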
20100306775 | ROLE BASED DELEGATED ADMINISTRATION MODEL - Embodiments disclosed herein extend to the use of administrative roles in a multi-tenant environment. The administrative roles define administrative tasks defining privileged operations that may be performed on the resources or data of a particular tenant. In some embodiments, the administrative tasks are a subset of administrative tasks. The administrative role also defines target objects which may be subjected to the administrative tasks. In some embodiments, the target objects are a subset of target objects. An administrator may associate a user or group of users of the particular tenant with a given administrative role. In this way, the user or group of users are delegated permission to perform the subset of administrative tasks on the subset of target objects without having to be given permission to perform all administrative tasks on all target objects. | 12-02-2010 |
20100313202 | AUTOMATICALLY CORRELATING TRANSACTION EVENTS - An API can be extended to automatically correlate events based on context. Started events for each context (e.g. threads of execution) are maintained on independent stacks. When an instrumented application starts a new transaction, the API generates a started event. A transaction correlation unit within the API can determine if the new transaction started during a previous transaction. If there is a previous started event on the stack, the new transaction started during the previous transaction. The transaction correlation unit can insert an outbound indicator into the new started event to associate the new transaction and the previous transaction. Then, the new started event can be pushed on the stack. | 12-09-2010 |
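The per-context correlation above comes down to a thread-local stack of started events: a non-empty stack at start time means the new transaction began inside the one on top, so an outbound indicator naming that parent is inserted before pushing. A sketch with illustrative names and event shapes:

```python
import threading

_stacks = threading.local()         # one stack of started events per thread

def start_transaction(name):
    """Emit a started event, correlating it with any enclosing one.

    If this thread's stack is non-empty, the new transaction started
    during the transaction on top, so the new started event carries an
    outbound indicator naming that parent.
    """
    stack = getattr(_stacks, "stack", None)
    if stack is None:
        stack = _stacks.stack = []
    event = {"name": name, "outbound": None}
    if stack:
        event["outbound"] = stack[-1]["name"]   # link to the parent
    stack.append(event)
    return event

def end_transaction():
    """Pop and return the most recent started event for this thread."""
    return _stacks.stack.pop()
```

Because each thread of execution gets an independent stack, concurrent transactions in different threads never correlate with each other, matching the abstract's per-context design.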
20100318994 | Load Balanced Profiling - A conventional load regulator ( | 12-16-2010 |
20100333091 | HIGH PERFORMANCE IMPLEMENTATION OF THE OPENMP TASKING FEATURE - A method and system for creating and executing tasks within a multithreaded application composed according to the OpenMP application programming interface (API). The method includes generating threads within a parallel region of the application, and setting a counter equal to the quantity of the threads. The method also includes, for each one of the plurality of threads, assigning an implicit task, and executing the implicit task. Further, the method includes, upon encountering a task construct for an explicit asynchronous task during execution of the implicit task: generating the explicit asynchronous task; adding the explicit asynchronous task to a first task queue, where the first task queue corresponds to the one of the plurality of threads; and incrementing the counter by one. | 12-30-2010 |
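The counting scheme described above can be sketched as plain bookkeeping: the counter starts at the thread count (one implicit task each), each explicit asynchronous task enqueued on its thread's own queue increments it, and each completed task decrements it, so the parallel region finishes at zero. This is an illustrative model of the counter, not an OpenMP runtime.

```python
from collections import deque

class ParallelRegion:
    """Bookkeeping sketch of the abstract's task-counting scheme."""

    def __init__(self, num_threads):
        self.queues = [deque() for _ in range(num_threads)]
        self.counter = num_threads          # one implicit task per thread

    def spawn_explicit(self, thread_id, task):
        """Record an explicit asynchronous task on its thread's queue."""
        self.queues[thread_id].append(task)
        self.counter += 1

    def complete(self):
        """Record one finished task; True when the region is done."""
        self.counter -= 1
        return self.counter == 0
```

Giving each thread its own queue keeps the common case (a thread running its own tasks) contention-free, which is the usual motivation for this layout.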
20100333092 | DYNAMIC DEFINITION FOR CONCURRENT COMPUTING ENVIRONMENTS - Exemplary embodiments allow a user to create configurations for use in distributed computing environments. Configurations can be arranged in hierarchies in which elements of the hierarchy can inherit characteristics from elements in other layers of the hierarchy. Embodiments also allow a user to flatten a hierarchical configuration to remove hierarchical dependencies and/or inheriting capabilities of elements in the hierarchy. Exemplary embodiments further allow users to deploy a distributed computing configuration on their desktop to evaluate performance of the configuration and then deploy the configuration in a distributed computing environment without having to change programming code run on the desktop/distributed computing environment. | 12-30-2010 |
20110010715 | Multi-Thread Runtime System - A runtime system implemented in accordance with the present invention provides an application platform for parallel-processing computer systems. Such a runtime system enables users to leverage the computational power of parallel-processing computer systems to accelerate/optimize numeric and array-intensive computations in their application programs. This enables greatly increased performance of high-performance computing (HPC) applications. | 01-13-2011 |
20110016469 | DELETING DATA STREAM OVERLOAD - A system and method to delete overload in a data stream are described. | 01-20-2011 |
20110023032 | Processor-Implemented Systems And Methods For Event Handling - Processor-implemented systems and methods are provided for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution. A processor-implemented system and method can include a wait data structure which stores event conditions in order to determine when the thread should continue execution. Event objects, executing on one or more data processors, allow for thread synchronization. A pointer is stored with respect to a wait data structure in order to provide visibility of event conditions to the event objects. The thread continues execution when the stored event conditions are satisfied. | 01-27-2011 |
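Stripped to its core, the wait data structure above records which event conditions a thread still needs; posting an event marks its condition satisfied, and the thread may continue only once none remain. A deterministic sketch with illustrative names, leaving out the actual blocking and the pointer plumbing between event objects and the structure:

```python
class WaitStructure:
    """Minimal sketch of a wait data structure for thread wake-up.

    Event objects would hold a pointer to this structure so that
    posting an event can mark its condition satisfied; the waiting
    thread continues only when every stored condition is met.
    """
    def __init__(self, required):
        self.pending = set(required)    # conditions not yet satisfied

    def post(self, event):
        self.pending.discard(event)     # an event object reporting in

    def can_continue(self):
        return not self.pending
```

A blocking version would pair this with a condition variable, waking the waiter whenever `post` empties the pending set.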
20110023033 | SCHEDULING OF THREADS BY BATCH SCHEDULING - In accordance with the disclosed subject matter there is provided a method for segregating threads running in a computer system, and executing the threads according to this categorization. | 01-27-2011 |
20110023034 | REDUCING PROCESSING OVERHEAD AND STORAGE COST BY BATCHING TASK RECORDS AND CONVERTING TO AUDIT RECORDS - Systems, methods and articles of manufacture are disclosed for processing documents for electronic discovery. A request may be received to perform a task on documents, each document having a distinct document identifier. A task record may be generated to represent the requested task. The task record may include information specific to the request task. However, the task record need not include any document identifiers. At least one batch record may be generated that includes the document identifier for each of the documents. The task record may be associated with the at least one batch record. The requested task may be performed according to the task record and the at least one batch record. An audit record may be generated for the performed task. The audit record may be associated with the at least one batch record. | 01-27-2011 |
20110023035 | Command Synchronisation - The order in which commands issued by a process to one or more hardware processing units should be executed is determined based upon whether the commands are issued to just one hardware processing unit, or to more than one. When the commands are issued to just the one hardware processing unit, the hardware processing unit is allowed to determine their order of execution itself. However, when the commands are issued to more than one hardware processing unit, their order of execution is determined externally to the hardware processing units. This is of particular use in scheduling the execution of commands issued by multi-threaded processes. | 01-27-2011 |
20110023036 | SWITCHING PROCESS TYPES IN A PROCESS ENGINE - A method and system are provided for switching process types in a process engine. The system includes a process engine for running a process, wherein the process includes invoking one or more external services. An event manager is provided having one or more defined policies and including: a comparison component for comparing runtime metrics of the process with the one or more defined policies and determining if the process infringes one or more defined policies; and a switching component for switching the process from an uninterruptible process to a long running process including copying state information on the process from in-memory storage to persistent storage. The system also includes a storage mechanism that acts as a storage façade to ensure the copying of state information on the process from in-memory storage to persistent storage is transparent to the process engine and a connection manager through which exchanges with clients and external services takes place. | 01-27-2011 |
20110029975 | COORDINATION OF TASKS EXECUTED BY A PLURALITY OF THREADS - To coordinate tasks executed by a plurality of threads that each includes plural task sections, a call of a mark primitive to mark a first point after a first of the plural task sections is provided. Also, a call of a second primitive is provided to indicate that a second of the plural task sections is not allowed to begin until after the plurality of threads have each reached the first point. | 02-03-2011 |
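The mark/wait coordination above can be sketched as a simple barrier; the primitive names (`mark`, `wait_marked`) are illustrative, not the patent's:

```python
import threading

class SectionBarrier:
    """Each thread calls mark() after finishing its first task section;
    wait_marked() blocks the second section until every thread has
    reached the mark point."""

    def __init__(self, n_threads):
        self._n = n_threads
        self._marked = 0
        self._cond = threading.Condition()

    def mark(self):
        with self._cond:
            self._marked += 1
            if self._marked == self._n:
                self._cond.notify_all()

    def wait_marked(self):
        with self._cond:
            while self._marked < self._n:
                self._cond.wait()

results = []
barrier = SectionBarrier(3)

def worker(i):
    results.append(("section1", i))  # first task section
    barrier.mark()                   # mark the first point
    barrier.wait_marked()            # second section may not begin earlier
    results.append(("section2", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the join, every `section1` entry precedes every `section2` entry, which is exactly the ordering the second primitive enforces.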
20110035746 | JOB NETWORK AUTO-GENERATION APPARATUS, A METHOD AND A PROGRAM RECORDING MEDIUM - In conventional arts, migration of jobs that are described in the JCL language used in mainframes and the like into various open systems cannot be supported. | 02-10-2011 |
20110035747 | VIRTUAL MACHINE PACKAGE GENERATION SYSTEM, VIRTUAL MACHINE PACKAGE GENERATION METHOD, AND VIRTUAL MACHINE PACKAGE GENERATION PROGRAM - Provided is a virtual machine package generation method that decreases the dependency relationships between virtual machine packages when generating a virtual machine package from a distributed system. | 02-10-2011 |
20110041127 | Apparatus and Method for Efficient Data Processing - Efficient data processing apparatus and methods include hardware components which are pre-programmed by software. Each hardware component triggers the other to complete its tasks. After the final pre-programmed hardware task is complete, the hardware component issues a software interrupt. | 02-17-2011 |
20110041128 | Apparatus and Method for Distributed Data Processing - An apparatus and method for distributed data processing is described herein. A main processor programs a mini-processor to process an incoming data stream. The mini-processor is located in close proximity to hardware components operating on the input data stream. A copy engine is also provided for copying data from multiple protocol data units in a single copy operation. | 02-17-2011 |
20110041129 | PROCESS MANAGEMENT APPARATUS, TERMINAL APPARATUS, PROCESS MANAGEMENT SYSTEM, COMPUTER READABLE MEDIUM AND PROCESS MANAGEMENT METHOD - A process management apparatus includes a receiving unit and a processing unit. The receiving unit receives identification information read by plural readers corresponding to plural terminal apparatuses when a medium is received, a first notification notifying that processing in a process where the identification information is read is completed, and a second notification notifying that the medium is received from a preceding process, from the plural terminal apparatuses. The processing unit executes predetermined processing, when the identification information and the second notification are not received from a terminal apparatus of a next process, within a predetermined time period after the first notification is received. | 02-17-2011 |
20110041130 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER READABLE MEDIUM - An information processing apparatus includes: a reliability decision unit that decides reliability, which is required in order to process a processing object, based on the processing object; a processing determination unit that determines whether or not to make a processing subject process the processing object by comparing the reliability decided by the reliability decision unit with reliability of the processing subject; and a processing request unit that requests processing of the processing object to the processing subject when the processing determination unit determines that the processing object is to be processed by the processing subject. | 02-17-2011 |
20110047549 | Manipulating a spin bit within the wait primitive - A method of avoiding unnecessary context switching in a multithreaded environment. A thread of execution of a process waiting on a lock protecting access to a shared resource may wait for the lock to be released by executing in a loop, or “spin”. The waiting thread may continuously check, in a user mode of an operating system, an indicator of whether the lock has been released. After a certain time period, the thread may stop spinning and enter a kernel mode of the operating system. Subsequently, before going to sleep which entails costly context switching, the thread may perform an additional check of the indicator to determine whether the lock has been released. If this is the case, the thread returns to user mode and the unnecessary context switching is avoided. | 02-24-2011 |
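A minimal sketch of the spin-then-recheck idea above (class and method names are illustrative, not the patented implementation): a waiter spins with cheap re-checks first, then makes one final re-check before paying for a blocking wait, so a lock released during the spin never costs a context switch.

```python
import threading

class SpinThenBlockLock:
    """Bounded spin phase followed by a final re-check before blocking."""

    def __init__(self, spin_limit=100):
        self._held = False
        self._cond = threading.Condition()
        self._spin_limit = spin_limit

    def acquire(self):
        # Spin phase: bounded number of non-blocking re-checks.
        for _ in range(self._spin_limit):
            with self._cond:
                if not self._held:
                    self._held = True
                    return
        # Final re-check, then block only if the lock is still held.
        with self._cond:
            while self._held:
                self._cond.wait()
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify()

lock = SpinThenBlockLock()
counter = [0]

def bump():
    for _ in range(200):
        lock.acquire()
        counter[0] += 1
        lock.release()

threads = [threading.Thread(target=bump) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Mutual exclusion is preserved either way; the spin limit only trades CPU time during the wait against the cost of sleeping and being rescheduled.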
20110047550 | SOFTWARE PROGRAM EXECUTION DEVICE, SOFTWARE PROGRAM EXECUTION METHOD, AND PROGRAM - The present invention provides a software program execution device which can accept a processing request even during a setting change of a software program without interrupting the processing being executed. An execution monitoring unit monitors, in processing units and based on an instruction of a software program control unit, the operation of a server application unit which executes a software program in a process, and notifies the software program control unit that the processing being executed in a first process is completed. Upon receiving the notification, the software program control unit stops and then restarts the first process, and the first process is restarted while reflecting the setting change recorded in a setting storage unit during the restart step. | 02-24-2011 |
20110047551 | PARALLELIZATION OF ELECTRONIC DISCOVERY DOCUMENT INDEXING - A system and method for parallelizing document indexing in a data processing system. The data processing system includes a primary processor for receiving a list of data having embedded data associated therewith, at least one secondary processor to process the data as provided by the primary processor, a data processor to determine a characteristic of the embedded data and process the embedded data based upon the characteristic, and a messaging module to exchange at least one status message between the primary processor and the at least one secondary processor. | 02-24-2011 |
20110055831 | PROGRAM EXECUTION WITH IMPROVED POWER EFFICIENCY - Program execution with improved power efficiency, including a computer program product for performing a method that includes determining a current power state of a processor. Low power state instructions of an application are executed on the processor in response to determining that the current power state of the processor is a low power state. Executing the low power state instructions includes collecting hardware state data, storing the hardware state data, and performing a task. High power state instructions of the application are executed on the processor in response to determining that the current power state of the processor is a high power state. Executing the high power state instructions includes performing the task using the stored hardware state data as an input. | 03-03-2011 |
20110055832 | HOST DEVICE, WORKFORM PERFORMING DEVICE, METHOD FOR GENERATING WORKFORM, AND METHOD FOR PERFORMING WORKFORM - A host device includes a communication interface unit to be connected to a workform performing device, a workform generation unit to perform a job and generate a workform to which a universal plug-in is applied, a storage unit to store at least one of the workform generated by the workform generation unit and a workform transmitted from an external device, and a control unit to control the communication interface unit to transmit the at least one stored workform to the workform performing device according to a command to perform a job. | 03-03-2011 |
20110055833 | CO-PROCESSOR SYSTEM AND METHOD FOR LOADING AN APPLICATION TO A LOCAL MEMORY - A co-processor system and a method for loading an application to a local memory of a co-processor system. In the method, re-locatable code and descriptive data are copied from a loading region to a non-loading region. An executable image to be loaded is loaded using the re-locatable code copied to the non-loading region according to the descriptive data. The local memory includes a loading region and a non-loading region, the loading region stores a loader and descriptive data of an executable image to be loaded of the application, and the loader includes re-locatable code. A system is provided for carrying out the steps of the method. In accordance with the system and method of the present invention, flexibility of co-processor system application development is improved without occupying additional storage space. | 03-03-2011 |
20110061051 | Dynamic Recommendation Framework for Information Technology Management - A method, system, and article are provided for managing performance of a computer system. Both implicit and explicit recommendations for processing of tasks are provided. System performance is tracked and evaluated based upon the actions associated with the task. Future recommendations of the same or other tasks are provided based upon implicit feedback pertaining to system performance, and explicit feedback solicited from a system administrator. | 03-10-2011 |
20110061052 | METHOD AND SYSTEM USING A TEMPORARY OBJECT HANDLE - A method and system are provided for using a temporary object handle. A method at a resource manager includes: receiving an open temporary handle request from an application for a resource object, wherein a temporary handle can be asynchronously invalidated by the resource manager at any time; and creating a handle control block at the resource manager for the object, including an indication that the handle is a temporary handle. The method then includes: responsive to receiving a request from an application to use a handle, which has been invalidated by the resource manager, sending a response to the application that the handle is invalidated. | 03-10-2011 |
20110067025 | AUTOMATICALLY GENERATING COMPOUND COMMANDS IN A COMPUTER SYSTEM - A computer system provides a way to automatically generate compound commands that perform tasks made up of multiple simple commands. A compound command generation mechanism monitors consecutive user commands and compares the consecutive commands a user has taken to a command sequence identification policy. If the user's consecutive commands satisfy the command sequence identification policy, the user's consecutive commands become a command sequence. If the command sequence satisfies the compound command policy, the compound command generation mechanism can generate a compound command for the command sequence automatically or prompt an administrator to allow the compound command to be generated. Generating a compound command can be done on a user by user basis or on a system wide basis. The compound command can then be displayed to the user to execute so that the command sequence is performed by the user selecting the compound command for execution. | 03-17-2011 |
20110067026 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, UTILIZATION CONSTRAINT METHOD, UTILIZATION CONSTRAINT PROGRAM, AND RECORDING MEDIUM STORING THE PROGRAM - A disclosed information processing system includes an information processing apparatus configured to carry out a function related to printing, an authentication management apparatus configured to carry out a user authentication for enabling a user to request to carry out the function of the information processing apparatus, and a predetermined data transmission path configured to connect the information processing apparatus to the authentication management apparatus, wherein the information processing apparatus carries out a process of the function in response to the request from the user and is enabled to report operation logs to an outside of the information processing apparatus, and any one of the information processing apparatus and the authentication management apparatus determines whether an execution mode of the process required by the user is to be changed based on the operation logs. | 03-17-2011 |
20110067027 | System and method of tracking and communicating computer states - The invention relates to a system and method of tracking and communicating computing states of a first computer device for registering said computing states by a second computer device. The first computer device is connected to the second computer device and configured for assuming a plurality of successive computing states. Jobs are assigned to a different set of jobs each time a state transition has been detected. New sets are defined only when a state transition has been detected and typically not when a snapshot is made resulting in saving storage space. | 03-17-2011 |
20110078683 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM - This invention provides an information processing apparatus which obtains screen information via a network from an external apparatus and displays an operation screen based on information registered in association with a specific application, when a predetermined key is operated while the specific application is in progress. To accomplish this, an MFP obtains screen information from a Web server based on URL information registered in association with a Web application, and displays the initial screen of the Web application, when a reset key is pressed while the Web application is in progress. | 03-31-2011 |
20110078684 | Method and System for Facilitating Memory Analysis - A method and system for facilitating runtime memory analysis. The method includes: assigning a unique ID for each task in a running program; recording memory access events occurring during the running program, including the IDs of the task performing the memory accesses; issuing a task termination notification in response to a task terminating, the task termination notification including the ID of the terminating task; and releasing all the memory access events having the ID of the terminating task in the memory, in response to the task termination notification. This method and system can ensure that the memory access events stored in the memory will not increase unlimitedly, so that the memory overhead is reduced remarkably and dynamic memory analysis can be faster and more efficient. | 03-31-2011 |
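The bookkeeping described above can be sketched as follows; the class and method names are illustrative assumptions, not the patent's. Keying events by task ID is what makes the release step a single operation when the termination notification arrives:

```python
import itertools
from collections import defaultdict

class MemoryAnalyzer:
    """Record memory access events per task ID and release them in bulk
    when the task's termination notification is issued."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self._events = defaultdict(list)

    def start_task(self):
        return next(self._next_id)          # unique ID per task

    def record_access(self, task_id, address, kind):
        self._events[task_id].append((address, kind))

    def terminate_task(self, task_id):
        # Releasing by ID keeps stored events from growing without bound.
        return self._events.pop(task_id, [])

    def pending_events(self):
        return sum(len(v) for v in self._events.values())

analyzer = MemoryAnalyzer()
tid_a, tid_b = analyzer.start_task(), analyzer.start_task()
analyzer.record_access(tid_a, 0x1000, "read")
analyzer.record_access(tid_a, 0x1004, "write")
analyzer.record_access(tid_b, 0x1000, "read")
released = analyzer.terminate_task(tid_a)   # task A's termination notification
```

After the termination notification for task A, only task B's events remain in memory, which is the bounded-overhead property the abstract claims.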
20110083133 | METHOD OF STREAMING REMOTE PROCEDURE INVOCATION FOR MULTI-CORE SYSTEMS - A method of streaming remote procedure invocation for multi-core systems to execute a transmitting thread and an aggregating thread of a multi-core system comprises the steps of: temporarily storing data to be transmitted; activating the aggregating thread if the amount of the temporarily stored data is equal to or greater than a threshold and the aggregating thread is at pause status; pausing the transmitting thread if there is no space to temporarily store the data to be transmitted; retrieving data to be aggregated; activating the transmitting thread if the amount of the data to be aggregated is less than a threshold and the transmitting thread is at pause status; and pausing the aggregating thread if there is no data to be retrieved. | 04-07-2011 |
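The pause/activate protocol between the transmitting and aggregating threads can be sketched with a condition variable; the class and thresholds here are illustrative assumptions, not the patent's interface:

```python
import threading

class BatchChannel:
    """The transmitter buffers items and pauses when the buffer is full;
    the aggregator pauses until at least `threshold` items are buffered."""

    def __init__(self, capacity=8, threshold=4):
        self._buf = []
        self._capacity = capacity
        self._threshold = threshold
        self._closed = False
        self._cond = threading.Condition()

    def send(self, item):
        with self._cond:
            while len(self._buf) >= self._capacity:
                self._cond.wait()          # transmitting thread pauses
            self._buf.append(item)
            if len(self._buf) >= self._threshold:
                self._cond.notify_all()    # activate the aggregator

    def close(self):
        with self._cond:
            self._closed = True
            self._cond.notify_all()

    def recv_batch(self):
        with self._cond:
            while len(self._buf) < self._threshold and not self._closed:
                self._cond.wait()          # aggregating thread pauses
            batch, self._buf = self._buf, []
            self._cond.notify_all()        # transmitter may resume
            return batch

received = []
ch = BatchChannel(capacity=4, threshold=2)

def transmit():
    for i in range(10):
        ch.send(i)
    ch.close()

def aggregate():
    while len(received) < 10:
        received.extend(ch.recv_batch())

t1 = threading.Thread(target=transmit)
t2 = threading.Thread(target=aggregate)
t1.start(); t2.start(); t1.join(); t2.join()
```

Batching wakeups at a threshold, rather than notifying per item, is what amortizes the cross-thread synchronization cost the abstract is aiming to reduce.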
20110088033 | PROVIDING THREAD SPECIFIC PROTECTION LEVELS - A method, system and computer program product is disclosed for providing thread specific protection levels in a multithreaded processing environment. The method comprises generating a group of threads in a process, one of the group of threads opening a thread entity, and that one of the group of threads specifying one or more levels of access to the thread entity for the other threads. In one embodiment, when a first of the threads attempts to perform a specified operation on the thread entity, the method of this invention determines whether that first thread is the one of the group of threads that opened the thread entity. When the first thread is not that one of the group of threads, the first thread is allowed to perform the specified operation if and only if that operation is permitted by the specified one or more levels of access. | 04-14-2011 |
20110088034 | METHOD AND SYSTEM FOR MANAGING RESOURCES - A method, computer program product, and computer system for generating a timing sequence for activating resources linked through time dependency relationships. A Direct Acyclic Graph (DAG) includes nodes and directed edges. Each node represents a unique resource and is a predefined Recovery Time Objective (RTO) node or an undefined RTO node. Each directed edge directly connects two nodes and represents a time delay between the two nodes. The nodes are topologically sorted to order the nodes in a dependency sequence of ordered nodes. A corrected RTO is computed for each ordered node after which an estimated RTO is calculated as a calculated RTO for each remaining undefined RTO node. The ordered nodes in the dependency sequence are reordered according to an ascending order of the corrected RTO of the ordered nodes to form a timing sequence for activating the unique resources represented by the multiple nodes. | 04-14-2011 |
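The ordering described above can be sketched with Kahn's topological sort; the data layout (`edges` as `(u, v) -> delay`, `defined_rto` for predefined-RTO nodes) is an assumption for illustration. Undefined-RTO nodes get an estimate of the largest predecessor corrected RTO plus the connecting edge delay, and the result is the nodes sorted by corrected RTO ascending:

```python
from collections import defaultdict, deque

def timing_sequence(edges, defined_rto):
    """Return (activation order, corrected RTO per node) for a DAG."""
    succ = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for (u, v), delay in edges.items():
        succ[u].append((v, delay))
        indegree[v] += 1
        nodes.update((u, v))

    # Kahn's algorithm: every predecessor is processed (and so has a
    # corrected RTO) before any of its successors.
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    estimate = defaultdict(float)
    corrected = {}
    while ready:
        u = ready.popleft()
        corrected[u] = defined_rto.get(u, estimate[u])
        for v, delay in succ[u]:
            estimate[v] = max(estimate[v], corrected[u] + delay)
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    return sorted(corrected, key=corrected.get), corrected

# Hypothetical resources: a database must recover before the app server,
# which must recover before the web tier; DNS feeds the web tier too.
edges = {("db", "app"): 5, ("app", "web"): 3, ("dns", "web"): 1}
sequence, rto = timing_sequence(edges, {"db": 10, "dns": 2})
```

Here `app` gets an estimated RTO of 10 + 5 = 15 and `web` gets max(15 + 3, 2 + 1) = 18, so the activation order is dns, db, app, web.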
20110093851 | LOW SYNCHRONIZATION MEANS OF SCHEDULER FINALIZATION - Shutting down a computer work scheduler. The work scheduler includes a number of virtual processors, each of which is either active or inactive. An active processor executes work, searches for work, or is idle. An inactive processor has no context running atop it. The method includes determining that all processors controlled by the scheduler are idle. As a result of determining that all processors controlled by the scheduler are idle, the method proceeds to a first phase of a shutdown operation, which when successful, includes: performing a sweep of all collections searching for any work in the scheduler and determining that no work is found in the scheduler. As a result of determining that no work is found in the scheduler, the method proceeds to a second phase of a shutdown operation, which when successful includes messaging all contexts in the scheduler and telling them to exit. | 04-21-2011 |
20110093852 | CALIBRATION OF RESOURCE ALLOCATION DURING PARALLEL PROCESSING - A first performance measurement of an executing task may be determined, while the task is executed by a first number of nodes operating in parallel. A second performance measurement of the executing task may be determined, while the task is being executed by a second number of nodes operating in parallel. An overhead factor characterizing a change of a parallelism overhead of executing the task with nodes executing in parallel may then be calculated, relative to a change in a number of the nodes, based on the first performance measurement and the second performance measurement. Then, an optimal number of nodes to operate in parallel to continue executing the task may be determined, based on the overhead factor. | 04-21-2011 |
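The calibration idea above can be sketched from two (time, node-count) measurements. The linear-overhead model `t(n) = W/n + f*(n - 1)`, where `W` is the ideal serial work and `f` the per-node overhead factor, is an illustrative assumption rather than the patent's exact formula:

```python
def calibrate(t1, n1, t2, n2):
    """Solve t(n) = W/n + f*(n - 1) for W and f from two measurements."""
    a1, b1 = 1.0 / n1, n1 - 1
    a2, b2 = 1.0 / n2, n2 - 1
    det = a1 * b2 - a2 * b1          # 2x2 linear system in (W, f)
    work = (t1 * b2 - t2 * b1) / det
    factor = (a1 * t2 - a2 * t1) / det
    return work, factor

def optimal_nodes(work, factor, max_nodes=64):
    # Pick the node count minimizing the predicted execution time.
    return min(range(1, max_nodes + 1),
               key=lambda n: work / n + factor * (n - 1))

# First measurement: 100s on 1 node; second: 60s on 2 nodes.
work, factor = calibrate(100.0, 1, 60.0, 2)
best = optimal_nodes(work, factor)
```

With these numbers the implied overhead factor is 10s per added node, so the predicted time bottoms out at 3 nodes (about 53.3s) before overhead outweighs the extra parallelism.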
20110093853 | REAL-TIME INFORMATION TECHNOLOGY ENVIRONMENTS - Real-time data of business applications of an Information Technology environment is monitored to obtain information to be used in managing the environment. A business application includes processing collectively performed by a plurality of components of the environment. A component includes one or more resources, and therefore, in one example, the real-time data being monitored is associated with those resources. | 04-21-2011 |
20110099549 | METHODS, SYSTEMS AND COMPUTER PROGRAM PRODUCTS FOR A REMINDER MANAGER FOR PROJECT DEVELOPMENT - This disclosure details the implementation of apparatuses, methods and systems of a reminder manager for project development (hereinafter, “R-Manager”). In one embodiment, a R-Manager system may implement a daemon application to monitor a plurality of code development entities, maintain a list of reminders and associated tasks and send reminders to users. In one embodiment, the R-Manager allows a user to directly write a reminder in a segment of source code, and then locate and add the embedded reminder to the system by automatically scanning the body of the source file. The R-Manager system can also enforce the completion of a task if the reminder of the task has expired and the task has not been marked as completed. | 04-28-2011 |
20110107333 | POST FACTO IDENTIFICATION AND PRIORITIZATION OF CAUSES OF BUFFER CONSUMPTION - Some embodiments of the present invention provide systems and techniques for collecting task status information. During operation, the system can receive a status update for a task from a task manager through a GUI. Next, the system can determine whether the first status update for the task indicates that the task is delayed. If the status update indicates that the task is delayed, the system can request the task manager to indicate the help needed to resolve the task delay. Next, the system can receive a help needed descriptor from the task manager. Subsequently, the system can receive another status update for the task from the task manager, wherein the status update indicates that the help specified in the help needed descriptor is no longer required. Next, the system can determine an amount of delay associated with the help needed descriptor. | 05-05-2011 |
20110107334 | POST FACTO IDENTIFICATION AND PRIORITIZATION OF CAUSES OF BUFFER CONSUMPTION - Some embodiments of the present invention provide systems and techniques for determining a start delay and an execution delay for a task. During operation, the system can receive a status update for the task which indicates that the task has started execution. Next, the system can receive a second status update for the task which indicates that the task has completed execution. The system can then determine the start delay for the task by: determining an actual start time using the first status update; and determining a difference between the actual start time and the task's suggested start time. Next, the system can determine the execution delay for the task by: determining an actual execution duration using the first status update and the second status update; and determining a difference between the actual execution duration and the task's planned execution duration. | 05-05-2011 |
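The two delay computations described above reduce to simple subtraction once the status updates yield timestamps; the function name and plain-number times (e.g. epoch seconds) are illustrative assumptions:

```python
def task_delays(suggested_start, planned_duration, actual_start, actual_end):
    """Start delay: actual vs. suggested start time.
    Execution delay: actual vs. planned execution duration."""
    start_delay = actual_start - suggested_start
    execution_delay = (actual_end - actual_start) - planned_duration
    return start_delay, execution_delay

# A task suggested to start at t=100 for 50 time units actually ran 110..175:
delays = task_delays(100, 50, 110, 175)
```

Here the task started 10 units late and its actual duration of 65 units overran the 50-unit plan by 15.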
20110107335 | METHOD AND APPARATUS FOR ESTIMATING A TASK PROCESS STRUCTURE - A process-structure estimating method includes counting the number of events that are executed in parallel as added-numbers; generating virtual route data including a start point and an end point for the events for which no added-number attribute is set, a branch point coupled to the start point and branched into branch routes for corresponding pairs of the added-number attributes and the attribute values of the added-number attributes, and a merge point at which the branch routes are merged together, the merge point being coupled to the end point; determining the branch route on which the added-number-attribute-set event is to be placed on the virtual route data based on the added-number attributes, their values, and the processing times associated therewith; and updating the virtual route data based on the branch route. | 05-05-2011 |
20110107336 | Microprocessor - A microprocessor executes programs in a pipeline architecture that includes a task register management unit that switches a value of a task register to second register information that is used when a second task is executed after the execution of a first task is completed, if a switch instruction to the second task is issued when a plurality of units executes the first task, and a task manager that switches a value of a task identification information register to a second task identifier after the value is switched to the second register information, and grants each of the plurality of units permission to execute the second task. | 05-05-2011 |
20110126199 | Method and Apparatus for Communicating During Automated Data Processing - A number of items of data from a data source ( | 05-26-2011 |
20110131578 | SYSTEMS AND METHODS FOR CHANGING COMPUTATIONAL TASKS ON COMPUTATION NODES TO MINIMIZE PROCESSING TIME VARIATION - Systems and methods are disclosed to process streaming data units (tuples) for an application using a plurality of processing units, the application having a predetermined processing time requirement, by changing an operator-set applied to the tuple by a processing unit, on a tuple-by-tuple basis; estimating code requirement for potential operators based on processing unit capability; and assigning the potential operators to the processing units. | 06-02-2011 |
20110138388 | METHODS AND APPARATUSES TO IMPROVE TURBO PERFORMANCE FOR EVENTS HANDLING - Embodiments of an apparatus for improving performance for events handling are presented. In one embodiment, the apparatus includes a number of processing elements and task routing logic. If at least one of the processing elements is in a turbo mode, the task routing logic selects a processing element for executing a task based at least on a comparison of performance losses. | 06-09-2011 |
20110138389 | OBTAINING APPLICATION PERFORMANCE DATA FOR DIFFERENT PERFORMANCE EVENTS VIA A UNIFIED CHANNEL - A system for obtaining performance data for different performance events includes a first application monitoring performance of a second application executing on a computing system. The first application identifies the type of event to be measured with respect to the second application, issues a first system call identifying the type of event, receives an identifier corresponding to the event type, and causes the second application to begin execution. After the execution of the second application is completed, the first application issues a second system call including the identifier corresponding to the event type, and receives a value of a hardware counter corresponding to the event type from an operating system. | 06-09-2011 |
20110138390 | Information processing device, information processing method and program - There is provided an information processing device including a receiving unit for receiving a command to be input to a first operating system and a command to be input to a second operating system different from the first operating system, a storage unit for storing a table in which given information included in the given command received by the receiving unit and information for identifying an application are related to each other, a generation unit for generating an application selection command for selectively executing the application based on the given command received by the receiving unit and the table stored in the storage unit, and an execution unit for executing the application selection command generated by the generation unit to selectively execute the application. | 06-09-2011 |
20110145822 | GENERATING AND RECOMMENDING TASK SOLUTIONS - Systems and methods of the present invention provide for receiving an action item communication by a user, which is then imported by a task management engine into a task list of an electronic organizer. Keywords from the task may be used to search a user's preferences, partnerships or the Internet for suggested solutions to complete the task. The system may then be customized to the user's preferences. | 06-16-2011 |
20110145823 | TASK MANAGEMENT ENGINE - Systems and methods of the present invention provide for receiving an action item communication by a user, which is then imported by a task management engine into a task list of an electronic organizer. Keywords from the task may be used to search a user's preferences, partnerships or the Internet for suggested solutions to complete the task. The system may then be customized to the user's preferences. | 06-16-2011 |
20110145824 | SYSTEM AND METHOD FOR CONTROLLING CENTRAL PROCESSING UNIT POWER WITH REDUCED FREQUENCY OSCILLATIONS - A method of dynamically controlling power within a central processing unit is disclosed and may include entering an idle state, reviewing a previous busy cycle immediately prior to the idle state, and based on the previous busy cycle determining a CPU frequency for a next busy cycle. | 06-16-2011 |
20110145825 | INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM CONFIGURED TO STORE COMMAND EXECUTION DETERMINATION PROGRAM, AND COMMAND EXECUTION DETERMINATION METHOD - An information processing apparatus includes a memory that stores command execution right information including execution right information indicating whether a command is executable, and a command determination unit that determines whether an entered command is a target of a command execution determination where it is determined that whether a command is executable based on whether the entered command is invoked by a user command or a system command, and determines whether the entered command is executable with reference to the command execution right information stored in the memory when the entered command is determined as the target of the command execution determination. | 06-16-2011 |
20110154334 | METHOD AND SYSTEM FOR OFFLOADING PROCESSING TASKS TO A FOREIGN COMPUTING ENVIRONMENT - A method and apparatus for offloading processing tasks from a first computing environment to a second computing environment, such as from a first interpreter emulation environment to a second native operating system within which the interpreter is running. The offloading method uses memory queues in the first computing environment that are accessible by the first computing environment and one or more offload engines residing in the second computing environment. Using the queues, the first computing environment can allocate and queue a control block for access by a corresponding offload engine. Once the offload engine dequeues the control block and performs the processing task in the control block, the control block is returned for interrogation into the success or failure of the requested processing task. The offload engine is a separate process in a separate computing environment, and does not execute as part of any portion of the first computing environment. | 06-23-2011 |
20110154335 | Content Associated Tasks With Automated Completion Detection - An apparatus for scheduling a task with associated stored content defining at least one relevant characteristic is provided. Detected content, which defines at least one detected characteristic, may be detected and then compared to the relevant characteristic of the stored content in the form of a similarity factor. It may then be determined whether the task has been completed based at least in part on the similarity factor. Information relating to the status of the task may be shared with other devices. A corresponding method and computer program product are also provided. | 06-23-2011 |
20110154336 | CONSISTENT UNDEPLOYMENT SUPPORT AS PART OF LIFECYCLE MANAGEMENT FOR BUSINESS PROCESSES IN A CLUSTER-ENABLED BPM RUNTIME - A system, computer-implemented method, and computer program product for undeployment of a business process definition in a cluster-enabled business process management runtime environment are presented. A BPMS server executes, through a deployment container executing one or more business processes instances of a business process definition running across a cluster of nodes, a stop operation of a running process instance of the business process application. The BPMS server further executes a remove operation of the stopped running process instance from the deployment container. | 06-23-2011 |
20110154337 | Relational Modeling for Performance Analysis of Multi-Core Processors Using Virtual Tasks - A relational model may be used to encode primitives for each of a plurality of threads in a multi-core processor. The primitives may include tasks and parameters, such as buffers. Implicitly created tasks, like set render target, may be visualized by associating those implicitly created tasks with actual coded tasks. | 06-23-2011 |
20110154338 | TASK MANAGEMENT USING ELECTRONIC MAIL - A mail server based approach to task management. In an embodiment, a first user sends a task assignment email indicating a task sought to be assigned, a list of assignees and a list of recipients. The mail server forwards the email message to all the recipients, while maintaining information of a current status of the task. The assignees may send status updates and the current status is accordingly updated. The status information on the server can be accessed by various users. | 06-23-2011 |
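The mail-server flow in this entry (forward to all recipients, track current status, accept updates only from assignees) can be sketched minimally; the class and method names below are invented for illustration:

```python
class TaskMailServer:
    """Sketch of the server-side bookkeeping: relay a task email and keep
    the task's current status alongside it (hypothetical API)."""
    def __init__(self):
        self.tasks = {}     # task_id -> {"status": ..., "assignees": ...}
        self.outbox = []    # (recipient, task_id) pairs "forwarded" by the server

    def assign(self, task_id, assignees, recipients):
        self.tasks[task_id] = {"status": "assigned", "assignees": set(assignees)}
        for r in set(assignees) | set(recipients):
            self.outbox.append((r, task_id))    # forward to all recipients

    def update(self, task_id, sender, status):
        task = self.tasks[task_id]
        if sender in task["assignees"]:          # only assignees may post updates
            task["status"] = status

    def status(self, task_id):                   # any user can query current status
        return self.tasks[task_id]["status"]

server = TaskMailServer()
server.assign("T1", assignees=["alice"], recipients=["bob", "carol"])
server.update("T1", "alice", "in-progress")
server.update("T1", "bob", "done")               # ignored: bob is not an assignee
print(server.status("T1"))
```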
20110154339 | INCREMENTAL MAPREDUCE-BASED DISTRIBUTED PARALLEL PROCESSING SYSTEM AND METHOD FOR PROCESSING STREAM DATA - Disclosed herein is a system for processing large-capacity data in a distributed parallel processing manner based on MapReduce using a plurality of computing nodes. The distributed parallel processing system is configured to provide an incremental MapReduce-based distributed parallel processing function for large-capacity stream data which is being continuously collected even during the performance of the distributed parallel processing, as well as for large-capacity stored data which has been previously collected. | 06-23-2011 |
20110154340 | RECORDING MEDIUM STORING OPERATION MANAGEMENT PROGRAM, OPERATION MANAGEMENT APPARATUS AND METHOD - An operation management apparatus obtains a value Xi indicating the number of process requests being processed by an information processing apparatus during each sampling operation, from N samplings acquired during a specific time period from the information processing apparatus, wherein N is an integer satisfying a condition of 1≦N, and i is an integer satisfying a condition of 1≦i≦N. The apparatus determines, for a plurality of information processing apparatuses, a ratio of the sum of values Xi, each value Xi having a difference, from a maximum value of the values Xi, falling within a specific range, to the total sum of the values Xi. The apparatus detects an information processing apparatus having the ratio equal to or higher than a specific value. | 06-23-2011 |
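The ratio in this entry is concrete enough to compute directly: sum the samples Xi whose difference from the maximum falls within a given range, divide by the total sum, and flag any apparatus whose ratio meets the threshold. A sketch, with invented host names and sample data:

```python
def saturation_ratio(samples, spread):
    """Ratio of the sum of values within `spread` of the maximum
    to the total sum of all values."""
    peak = max(samples)
    near_peak = sum(x for x in samples if peak - x <= spread)
    return near_peak / sum(samples)

def detect(apparatus_samples, spread, threshold):
    """Return the apparatuses whose ratio is at or above the threshold."""
    return [name for name, xs in apparatus_samples.items()
            if saturation_ratio(xs, spread) >= threshold]

# A host whose request counts cluster near their own maximum is flagged;
# a host whose peak is a one-off outlier is not.
hosts = {
    "app-1": [10, 10, 9, 10],   # all samples near the peak
    "app-2": [10, 2, 1, 1],     # peak is an outlier
}
print(detect(hosts, spread=1, threshold=0.9))
```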
20110161958 | Method and system for managing business calculations using multi-dimensional data - A method and system for managing and executing business formulas using a single platform includes a storage function for storing a plurality of business formulas in a single repository, where each business formula is defined using multi-dimensional data. The multi-dimensional data is accessible by employee-users of the business through a server communicatively connected to the repository. The employee-user may alter the multi-dimensional data of a business formula to incorporate modifications, as well as to create new business formulas. The server functions to execute business formulas contained within the repository. By allowing employee-users to modify existing business formulas and to input new business formulas directly into the system without requiring a programmer to write new software code, the efficiency of executing business formulas and implementing modifications is increased. | 06-30-2011 |
20110173617 | SYSTEM AND METHOD OF DYNAMICALLY CONTROLLING A PROCESSOR - A method of executing a dynamic clock and voltage scaling (DCVS) algorithm in a central processing unit (CPU) is disclosed and may include monitoring CPU activity and determining whether a workload is designated as a special workload when the workload is added to the CPU activity. | 07-14-2011 |
20110173618 | METHOD AND APPARATUS FOR MOVING PROCESSES BETWEEN ISOLATION ENVIRONMENTS - A method for moving an executing process from a source isolation scope to a target isolation scope includes the step of determining that the process is in a state suitable for moving. The association of the process changes from a source isolation scope to a target isolation scope. A rule loads in association with the target isolation scope. | 07-14-2011 |
20110179419 | DEPENDENCY ON A RESOURCE TYPE - A clusterware manager on a cluster of nodes interprets a resource profile. The resource profile defines resource profile attributes. The attributes include at least one attribute that defines a cluster dependency based on resource type. The attribute does not identify any particular resource of that resource type. Dependencies between resources are managed based on the attribute that specifies the cluster dependency. | 07-21-2011 |
20110185358 | PARALLEL QUERY ENGINE WITH DYNAMIC NUMBER OF WORKERS - Partitioning query execution work of a sequence including a plurality of elements. A method includes a worker core requesting work from a work queue. In response, the worker core receives a task from the work queue. The task is a replicable sequence-processing task including two distinct steps: scheduling a copy of the task on the scheduler queue and processing a sequence. The worker core processes the task by: creating a replica of the task and placing the replica of the task on the work queue, and beginning processing the sequence. The acts are repeated for one or more additional worker cores, where receiving a task from the work queue is performed by receiving one or more replicas of tasks placed on the task queue by earlier performances of creating a replica of the task and placing the replica of the task on the work queue by a different worker core. | 07-28-2011 |
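The replicable task in this entry has two steps: re-queue a copy of itself, then begin processing the sequence. A toy version, where a shared cursor hands out elements and the worker pool grows only as workers actually pick up replicas (the partitioning scheme here is an assumption):

```python
import itertools
import queue
import threading

def make_replicable_task(data, results, cursor):
    """The task first schedules a replica of itself on the work queue, then
    claims and processes elements until the sequence is exhausted."""
    def task(work_q):
        work_q.put(task)                 # step 1: place a replica on the queue
        while True:                      # step 2: process the sequence
            i = next(cursor)
            if i >= len(data):
                return
            results[i] = data[i] * data[i]
    return task

data = list(range(8))
results = [None] * len(data)
cursor = itertools.count()               # shared element cursor (atomic under the GIL)
work_q = queue.Queue()
work_q.put(make_replicable_task(data, results, cursor))

def worker(work_q):
    try:
        task = work_q.get_nowait()       # a worker core requesting work
    except queue.Empty:
        return                           # no task available yet for this worker
    task(work_q)

threads = [threading.Thread(target=worker, args=(work_q,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(results)
```

Because each executing task leaves a fresh replica behind, any number of workers can join in, which is the "dynamic number of workers" property the title refers to.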
20110191773 | System and Method for Datacenter Power Management - A system and method for datacenter power management is disclosed. In particular embodiments, the method includes receiving, with a processor, a request for execution of an application. The method also includes for each of a plurality of datacenters, determining an amount of electricity required to execute the application at the respective datacenter. The method also includes, for each of the plurality of datacenters, determining a cost associated with executing the application at the respective datacenter based, at least in part, on the amount of electricity required to execute the application at the respective datacenter. The method further includes selecting one of the plurality of datacenters to execute the application based, at least in part, on the cost associated with executing the application at the respective datacenter and executing the application at the selected datacenter. | 08-04-2011 |
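The selection step of this entry reduces to a minimum-cost choice over the datacenters, once each site's electricity requirement and price are known. A toy sketch; the prices and PUE-style overhead factors are invented inputs, not data from the filing:

```python
def pick_datacenter(app_kwh, datacenters):
    """Choose the datacenter with the lowest cost of executing the application.
    `datacenters` maps name -> (price_per_kwh, overhead_factor)."""
    def cost(name):
        price, overhead = datacenters[name]
        # electricity required at this site, scaled by its cooling/overhead factor
        return app_kwh * overhead * price
    return min(datacenters, key=cost)

centers = {
    "east":  (0.12, 1.5),
    "west":  (0.10, 1.7),
    "north": (0.15, 1.2),
}
print(pick_datacenter(100, centers))
```

Here 100 kWh costs 18.0 at "east", 17.0 at "west", and 18.0 at "north", so "west" wins despite its higher overhead.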
20110191774 | NOC-CENTRIC SYSTEM EXPLORATION PLATFORM AND PARALLEL APPLICATION COMMUNICATION MECHANISM DESCRIPTION FORMAT USED BY THE SAME - Network-on-Chip (NoC) aims to solve the performance bottleneck of communication in System-on-Chip, and the performance of the NoC depends significantly on the application traffic. The present invention establishes a system framework across multiple layers, and defines the interface function behaviors and the traffic patterns of the layers. The present invention provides an application model in which the task-graph of parallel applications is described in a textual format, called the Parallel Application Communication Mechanism Description Format. The present invention further provides a system-level NoC simulation framework, called the NoC-centric System Exploration Platform, which defines the service spaces of the layers in order to separate the traffic patterns and enable independent designs of the layers. Accordingly, the present invention can simulate a new design without modifying the framework of the simulator or the interface designs. Therefore, the present invention increases the design spaces of NoC simulators, and provides a model to evaluate the performance of a NoC. | 08-04-2011 |
20110197193 | DEVICE AND METHOD FOR CONTROLLING COMMUNICATION BETWEEN BIOS AND BMC - A communication control device controls communication between a BIOS (Basic Input/Output System) and a BMC (Baseboard Management Controller). The device includes a processor that performs a process of an OS (Operating System) and a process of the BIOS. When a communication request for communication with the BMC occurs from the BIOS or OS, the processor performs a process associated with the communication request by dividing it into a first process and a second process. The first process is configured to store contents of the communication request and make the OS restart a process without performing the communication between the BIOS and the BMC in response to the communication request. The second process is configured to actually perform the communication between the BIOS and the BMC in response to the stored communication request. | 08-11-2011 |
20110202922 | SYSTEM RESOURCE INFLUENCED STAGED SHUTDOWN - The present invention provides a method for shutting down a computing device on which multiple software components are running. More specifically, the present invention provides a method for prioritising the shutdown of the software components according to whether the device will experience significant problems when restarting if the components are not provisioned sufficient resources to complete specific shutdown operations. | 08-18-2011 |
20110202923 | INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE MEDIUM STORING INFORMATION PROCESSING PROGRAM, AND INFORMATION PROCESSING METHOD - A processing function determination section included in an information processing apparatus determines a differential processing function which is a processing function that is realized by a first program and that is not realized by a second program. A control information determination section reads out relation information which associates a plurality of processing functions with one or more pieces of control information related to each of the plurality of processing functions from a storage section and determines a piece of control information related to the differential processing function on the basis of the relation information. | 08-18-2011 |
20110209149 | OPTIMIZATION OF INTEGRATION FLOW PLANS - Computer-based methods, computer-readable storage media and computer systems are provided for optimizing integration flow plans. An initial integration flow plan, one or more objectives and/or an objective function related to the one or more objectives may be received as input. A computing cost of the initial integration flow plan may be compared with the objective function. Using one or more heuristics, a set of close-to-optimal integration flow plans may be identified from all possible integration flow plans that are functionally equivalent to the initial integration flow plan. A close-to-optimal integration flow plan with a lowest computing cost may be selected from the set as a replacement for the initial integration flow plan. | 08-25-2011 |
20110209150 | AUTOMATIC METHOD AND SYSTEM FOR FORMULATING AND TRANSFORMING REPRESENTATIONS OF CONTEXT USED BY INFORMATION SERVICES - An information retrieval system for automatically retrieving information related to the context of an active task being manipulated by a user. The system observes the operation of the active task and user interactions, and utilizes predetermined criteria to generate a context representation of the active task that is relevant to the context of the active task. The information retrieval system then processes the context representation to generate queries or search terms for conducting an information search. The information retrieval system reorders the terms in a query so that they occur in a meaningful order, as they naturally occur in a document or active task being manipulated by the user. Furthermore, the information retrieval system may access a user profile to retrieve information related to the user, and then select information sources or transform search terms based on attributes related to the user, such as the user's occupation, position in a company, major in school, etc. | 08-25-2011 |
20110214125 | TASK MANAGEMENT CONTROL APPARATUS AND METHOD HAVING REDUNDANT PROCESSING COMPARISON - An input/output control apparatus including: a unit that controls input/output of data relating to a computation of a plurality of processors in response to an access request from a second input/output unit and an access request from a first input/output unit which requires higher reliability than said second input/output unit, and orders at least one of a plurality of processors to perform a computation relating to the access request from said first input/output unit away from the computation relating to the access request from said second input/output unit in the case that said first input/output unit has issued an access request, so that a same computation is made by said plurality of processors; a unit that compares the results of said computations relative to the access request from said first input/output unit provided from said plurality of processors; and a unit that allows the data associated with said computations of said processors to be output on the basis of said compared results. | 09-01-2011 |
20110214126 | BIDIRECTIONAL DYNAMIC OFFLOADING OF TASKS BETWEEN A HOST AND A MOBILE DEVICE - One or more functions are exposed by a mobile device to a host connected to the mobile device. A function of the one or more functions is executed at the mobile device in response to a request from the host, wherein the function is associated with a host task. The result of the function is returned to the host. | 09-01-2011 |
20110219375 | ENHANCED WORK-FLOW MODEL CAPABLE OF HANDLING EXCEPTIONS - A system and method for augmenting a work-flow model to handle all expected and unexpected exceptions during run-time. The system includes an Exception Handling Knowledge Base (EHKB), a Work-flow Manager for managing the execution of the work-flow model and automatically adding exception transitions from the EHKB to the model except those forbidden, and a Work-flow Monitor for monitoring the model execution. The Monitor generates alerts to a business manager when the exceptions are encountered. At build-time, a process analyst could define a process schema in a Process Schema Repository, specify a forbidden exception, or modify a schema to handle an exception based on guidance from the EHKB. At run-time, a user may initiate a forbidden exception with approval from the business manager. | 09-08-2011 |
20110219376 | Method, apparatus and trace module for generating timestamps - The present invention relates to the field of data processing, and in particular to a method, apparatus and trace module for generating timestamps. | 09-08-2011 |
20110225584 | MANAGING MODEL BUILDING COMPONENTS OF DATA ANALYSIS APPLICATIONS - Data analysis applications include model building components and stream processing components. To increase utility of the data analysis application, in one embodiment, the model building component of the data analysis application is managed. Management includes resource allocation and/or configuration adaptation of the model building component, as examples. | 09-15-2011 |
20110225585 | RETOOLING LOCK INTERFACES FOR USING A DUAL MODE READER WRITER LOCK - A method, system, and computer usable program product for retooling lock interfaces for using a dual mode reader writer lock (DML). An invocation of a method is received using an interface. The method is configured to operate on a lock associated with a resource in a data processing system. A determination is made whether the lock is an upgraded lock. The upgraded lock is the DML operating in an upgraded mode. An operation corresponding to the method is executed on the DML if the lock is the upgraded lock. | 09-15-2011 |
20110231845 | I/O AGENT ASSIGNMENT FOR JOBS USING AN MPI LIBRARY - An MPI library including selective I/O agent assignment from among executing tasks, provides improved performance. An MPI job is made up of a number of tasks. I/O operations in an MPI job are performed by tasks assigned as I/O agents. I/O agents are assigned such that the number of tasks assigned as I/O agents are less than the total number of tasks that make up the MPI job. In a dynamic MPI job, I/O agents may be selected from among tasks executing on a lead world or may be spread across multiple worlds. To perform I/O operations initiated by any tasks of an MPI job, including tasks not assigned as I/O agents, the MPI library instantiates worker threads within the tasks assigned as I/O agents. Once the tasks are assigned as I/O agents, identity information of the I/O agents may be stored so that a repeat assignment is not necessary. | 09-22-2011 |
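The assignment policy in this entry (fewer I/O agents than total tasks, with the mapping stored for reuse) can be sketched as a simple round-robin placement. The round-robin choice itself is an assumption for illustration, not the library's actual policy:

```python
def assign_io_agents(task_ids, num_agents):
    """Pick a subset of tasks as I/O agents and map every task to the agent
    that will perform its I/O. Agents must be fewer than the tasks."""
    assert num_agents < len(task_ids)
    agents = task_ids[:num_agents]
    # Round-robin: task i's I/O is performed by agent i mod num_agents.
    routing = {t: agents[i % num_agents] for i, t in enumerate(task_ids)}
    return agents, routing

tasks = [0, 1, 2, 3, 4, 5, 6, 7]
agents, routing = assign_io_agents(tasks, num_agents=2)
print(agents)          # storing these identities avoids repeat assignment
print(routing[5])      # task 5's I/O is routed through an agent task
```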
20110231846 | TECHNIQUES FOR MANAGING SERVICE DEFINITIONS IN AN INTELLIGENT WORKLOAD MANAGEMENT SYSTEM - Techniques for managing service definitions in an intelligent workload management system are provided. Workloads and software products are assembled as a single unit with custom configuration settings. The single unit represents a recallable and reusable service definition for a service that can be custom deployed within designated cloud processing environments. | 09-22-2011 |
20110231847 | MANAGEMENT OF MULTIPLE INSTANCES OF LEGACY APPLICATION TASKS - Methods, systems, and techniques for supporting access to multiple copies of a legacy task are provided. When there are multiple copies of a task present, then instead of showing the output from a single task, the task workspace area displays task representation pictograms that represent the state and inform the user regarding each particular instance of that legacy task running on the host. The user can use the interface to perform various operations, including to start a new copy of the task, to end a copy of the task, and to select one of the copies for viewing. Example embodiments provide a Role-Based Modernization System (“RBMS”), which uses these enhanced modernization techniques to provide role-based modernization of menu-based legacy applications. | 09-22-2011 |
20110239217 | PERFORMING A WAIT OPERATION TO WAIT FOR ONE OR MORE TASKS TO COMPLETE - A method of performing a wait operation includes creating a first plurality of tasks and a continuation task. The continuation task represents a second plurality of tasks. The continuation task and each of the tasks in the first plurality have an associated wait handle. The wait handles for the first plurality of tasks and the continuation task are stored in an array. A wait operation is performed on the array, thereby waiting for at least one of the tasks in the first and second pluralities to complete. | 09-29-2011 |
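The structure in this entry — a first plurality of tasks, a continuation task standing for a second plurality, and a wait over the array of their handles — can be sketched with `threading.Event` standing in for a wait handle (an assumption of this sketch, not the patent's handle type):

```python
import threading
import time

def run_task(delay, handle):
    time.sleep(delay)
    handle.set()                        # signal the task's wait handle

# First plurality of tasks, each with an associated wait handle.
first = [threading.Event() for _ in range(2)]

# The continuation task represents a second plurality: its handle is set
# only once every task it represents has completed.
second = [threading.Event() for _ in range(2)]
continuation = threading.Event()
def continuation_task():
    for h in second:
        h.wait()
    continuation.set()

handles = first + [continuation]        # the array the wait operation runs on

for i, h in enumerate(first + second):
    threading.Thread(target=run_task, args=(0.01 * i, h)).start()
threading.Thread(target=continuation_task).start()

for h in handles:                       # the wait operation over the array
    h.wait()
print("all tasks in both pluralities completed")
```

Folding the second plurality into one continuation handle keeps the array short while still covering every task.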
20110246991 | METHOD AND SYSTEM TO EFFECTUATE RECOVERY FOR DYNAMIC WORKFLOWS - A computer-implemented smart recovery system for dynamic workflows addresses a change to a data object during execution of an instance of a workflow by selectively re-executing workflow tasks that are affected by the change, without cancelling the instance and restarting a new instance of the workflow. A determination of whether a task is to be re-executed during the smart recovery process may include examining a re-evaluation label assigned to the task. | 10-06-2011 |
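The re-evaluation-label check in this entry reduces to intersecting each task's label with the set of changed fields and re-executing only tasks with a non-empty overlap. A sketch with invented task names and labels:

```python
def smart_recover(tasks, changed_fields):
    """Return only the tasks whose re-evaluation label references a changed
    field of the data object, rather than restarting the whole instance."""
    return [name for name, labels in tasks if labels & changed_fields]

# Each task carries a re-evaluation label: the data-object fields it depends on.
workflow = [
    ("validate_address", {"address"}),
    ("compute_tax",      {"address", "amount"}),
    ("send_receipt",     {"email"}),
]
print(smart_recover(workflow, {"address"}))
```

A change to `address` re-runs the two affected tasks; `send_receipt` keeps its prior result.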
20110246992 | Administration Of Virtual Machine Affinity In A Cloud Computing Environment - Administration of virtual machine affinity in a cloud computing environment, where the cloud computing environment includes a plurality of virtual machines (‘VMs’), the VMs composed of modules of automated computing machinery installed upon cloud computers disposed within a data center, the cloud computing environment also including a cloud operating system and a data center administration server operably coupled to the VMs, including installing, by the cloud operating system on at least one VM, an indicator that at least two of the VMs have an affinity requirement to be installed upon separate cloud computers; communicating, by at least one of the VMs, the affinity requirement to the data center administration server; and moving by the data center administration server the VMs having the affinity requirement to separate cloud computers in the cloud computing environment. | 10-06-2011 |
20110252422 | Opportunistic Multitasking - Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more operational restrictions may be enforced. As a consequence of such restrictions, a process may not be able to do in the background state what it would be able to do in the foreground state. In one embodiment, while a background task may be permitted to complete a first task, it may not be permitted to start a new task, being suspended after completion of the first task. Implementation of the disclosed services may be substantially transparent to the executing user applications. | 10-13-2011 |
20110252423 | Opportunistic Multitasking - Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. A consequence of such restrictions may be that a process will not be able to do in the background state what it would be able to do in the foreground state. By way of example, network-based applications may be suspended until a message is received for them. At that time, the suspended application may be moved into the background state, where it is permitted to respond to the message. In a similar fashion, an audio application may be permitted to execute in the background until suspended by user action. | 10-13-2011 |
20110252424 | SYSTEM AND METHOD FOR DETECTING DEADLOCK IN A MULTITHREAD PROGRAM - A system and method for detecting deadlock in a multithread program is provided. The method includes: selecting the thread to be detected; initiating a tracking program to track the thread running in a kernel; initiating a target multithread program; determining whether the selected thread is running; and dynamically inserting a probe in the database in order to detect the selected thread through the instrument function. The instrument function records the detected data, and when the recorded data goes beyond the threshold value of the kernel, the data is transmitted to the user space, which stores the data; the data stored in the user space is then analyzed to judge whether deadlock has been generated. Accordingly, it is possible to detect deadlock efficiently without the source code of the target program. This is beneficial to debugging of the multithread program and to analysis of resource usage by the multithread program. | 10-13-2011 |
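The analysis step of a detector like this typically reduces to finding a cycle in a wait-for graph built from the recorded data. A sketch of just that step (the kernel-probe plumbing is not modeled, and the graph representation is an assumption):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph, given as a mapping from each
    blocked thread to the thread it is waiting on."""
    for start in wait_for:
        seen = set()
        node = start
        # Follow the chain of waits; revisiting a node means a cycle.
        while node in wait_for:
            if node in seen:
                return True             # mutual waiting, i.e. deadlock
            seen.add(node)
            node = wait_for[node]
    return False

# A waits on B and B waits on A: a deadlock cycle exists.
print(has_deadlock({"A": "B", "B": "A", "C": "A"}))
# A straight chain of waits is not a deadlock.
print(has_deadlock({"A": "B", "B": "C"}))
```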
20110252425 | EXECUTING OPERATIONS VIA ASYNCHRONOUS PROGRAMMING MODEL - A method and a system execute operations, called jobs, via an APM model, in an MES system. The job execution is requested in an application defining an abstract Job class. The abstract Job class includes: an abstract method for job execution, called Execute, wherein a set of jobs to be executed is implemented, at engineering time, within the Execute method, when implementing a set of classes derived from the abstract Job class; a method for executing the job in asynchronous mode, called ExecuteAsync, the ExecuteAsync method runs the Execute method by following APM rules; and a method for executing the job in synchronous mode, called WaitForExecution, the WaitForExecution method runs the ExecuteAsync method waiting for its completion. At run time, the application requests the job execution in asynchronous mode by invoking the ExecuteAsync method directly, or in synchronous mode by invoking the WaitForExecution method. | 10-13-2011 |
20110258627 | Runtime Optimization Of An Application Executing On A Parallel Computer - Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session. | 10-20-2011 |
20110258628 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR TRANSPORTING A TASK TO A HANDLER, UTILIZING A QUEUE - In accordance with embodiments, there are provided mechanisms and methods for transporting a task to a handler, utilizing a queue. These mechanisms and methods for transporting a task to a handler, utilizing a queue can enable improved task management, increased efficiency, dynamic task processing, etc. | 10-20-2011 |
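The queue-based transport in this entry can be sketched with a dispatch table from task type to handler; the task tuple shape and handler names below are invented for illustration:

```python
import queue

def transport(task_q, handlers):
    """Dequeue each task and hand it to the handler registered for its type."""
    handled = []
    while not task_q.empty():
        kind, payload = task_q.get()
        handled.append(handlers[kind](payload))
    return handled

task_q = queue.Queue()
task_q.put(("email", "hello"))
task_q.put(("log", "disk full"))

handlers = {
    "email": lambda p: f"emailed: {p}",
    "log":   lambda p: f"logged: {p}",
}
out = transport(task_q, handlers)
print(out)
```

Decoupling producers from handlers through the queue is what permits the dynamic task processing the abstract mentions: new task types only need a new entry in the dispatch table.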
20110258629 | METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS IN DISTRIBUTED AND PARALLEL ENVIRONMENTS - Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. | 10-20-2011 |
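The placement idea in this entry — count how often request types co-occur in compound requests and put frequently co-occurring types on the same node — can be sketched with a pair counter. The grouping heuristic below is an assumption, chosen only to illustrate the bookkeeping:

```python
from collections import Counter
from itertools import combinations

def colocate(compound_requests, min_count):
    """Group request types that co-occur at least `min_count` times
    onto the same node, to minimize inter-node communication."""
    pair_freq = Counter()
    for reqs in compound_requests:
        pair_freq.update(combinations(sorted(set(reqs)), 2))

    node_of = {}
    next_node = 0
    for (a, b), n in pair_freq.most_common():
        if n < min_count:
            break                        # remaining pairs are too rare
        if a not in node_of and b not in node_of:
            node_of[a] = node_of[b] = next_node
            next_node += 1
        elif a in node_of and b not in node_of:
            node_of[b] = node_of[a]      # pull b onto a's node
        elif b in node_of and a not in node_of:
            node_of[a] = node_of[b]
    return node_of

history = [["read", "lock"], ["read", "lock"], ["read", "stat"], ["write"]]
placement = colocate(history, min_count=2)
print(placement)
```

`read` and `lock` co-occur twice, so they land on the same node; `stat` and `write` never reach the threshold and stay unplaced.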
20110271281 | REDUCING FEEDBACK LATENCY - A latency between an input and its corresponding feedback can be reduced by generating the feedback in a lower-layer software component instead of in an upper-layer software component. The lower-layer component generates the feedback based on one or more parameters associated with a given input type. The parameters were previously created based on, for example, one or more previous inputs. Generating feedback in a lower-layer component reduces the number of software layer boundaries that the input and feedback pass through, thus reducing the latency between the feedback and input. | 11-03-2011 |
20110276966 | Managing task dependency within a data processing system - A processing apparatus includes task manager circuitry | 11-10-2011 |
20110276967 | Method and System for Enabling Computer Systems to Be Responsive to Environmental Changes - The present invention discloses a method and system for automatically enabling a computer to preserve data in response to an environmental change, respond to motion or sound, or be responsive to the arrival or actions of a person. In one embodiment, the computer monitoring system comprises at least one sensor for determining the existence of motion external to a computer where, upon detecting motion, the sensor communicates a signal to a receiver, which is in data communication with the computing device, and a program coupled to the receiver where the program comprises routines for receiving user input defining what programs or files to close, open, play, minimize, or maximize upon receiving a signal from the receiver. | 11-10-2011 |
20110283281 | SYSTEM AND METHOD FOR PROVIDING COMPLEX ACCESS CONTROL IN WORKFLOWS - A system for providing complex access control in workflows. The system comprises a computer, including a computer readable storage medium and processor operating thereon. The system also comprises at least one business process which includes a plurality of tasks. Each task is associated with a task state which changes during execution of the task. The system further comprises a plurality of logical roles. Each logical role defines a responsibility based on the task state and a member of that logical role. Additionally, the system comprises a configurable matrix of access controls that is used to control access to the plurality of tasks based on the plurality of logical roles. | 11-17-2011 |
20110283282 | IMAGE FORMING APPARATUS, METHOD OF ACQUIRING IDENTIFICATION INFORMATION, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An image forming apparatus includes: a first processing section that, in an environment where a first operating system is running, executes a process for a first application program, and performs a generation process for generating identification information for identifying the process for the first application program; and a second processing section that, in an environment where a second operating system is running, executes a process for a second application program, and when the process for the second application program is instructed to be executed, performs an identification information acquisition process for acquiring identification information newly generated through the generation process of the first processing section as identification information for identifying the process for the second application program. | 11-17-2011 |
20110296414 | UPGRADING ROLES IN A ROLE-BASED ACCESS-BASED CONTROL MODEL - Management roles in a role-based framework may be upgraded by updating existing management roles, updating derived roles, and deprecating or reducing existing and derived roles in the role-based framework. The existing management roles may include a set of existing role entries for defining an action using parameters, scripts, application program interface calls, and a special permission for enabling performance of tasks defined by the management roles. The derived roles may include custom management roles derived from the existing management roles in the role-based framework. | 12-01-2011 |
20110296415 | TASKING SYSTEM INTERFACE METHODS AND APPARATUSES FOR USE IN WIRELESS DEVICES - Techniques are provided which may be implemented in various methods and/or apparatuses to provide a tasking system buffer interface capability for interfacing with a plurality of shared processes/engines. | 12-01-2011 |
20110296416 | METHOD AND APPARATUS FOR MANAGING AN APPLICATION BEING EXECUTED IN A PORTABLE TERMINAL - A method and an apparatus are provided for preventing battery power consumption and degradation of system performance due to the system resources being utilized by applications being executed, while providing a multi-tasking function through a plurality of applications. In the method, when a plurality of applications are executed, such execution of the plurality of applications is reported to the user, so as to enable the user to terminate one or more applications, thereby preventing unnecessary consumption of battery power. | 12-01-2011 |
20110296417 | METHOD AND APPARATUS FOR MANAGING AN APPLICATION BEING EXECUTED IN A PORTABLE TERMINAL - A method and an apparatus are provided for preventing battery power consumption and degradation of system performance due to the system resources being utilized by applications being executed, while providing a multi-tasking function through a plurality of applications. In the method, when a plurality of applications are executed, such execution of the plurality of applications is reported to the user, so as to enable the user to terminate one or more applications, thereby preventing unnecessary consumption of battery power. | 12-01-2011 |
20110296418 | METHOD AND APPARATUS FOR MANAGING AN APPLICATION BEING EXECUTED IN A PORTABLE TERMINAL - A method and an apparatus are provided for preventing battery power consumption and degradation of system performance due to the system resources being utilized by applications being executed, while providing a multi-tasking function through a plurality of applications. In the method, when a plurality of applications are executed, such execution of the plurality of applications is reported to the user, so as to enable the user to terminate one or more applications, thereby preventing unnecessary consumption of battery power. | 12-01-2011 |
20110307890 | UTILIZATION OF SPECIAL PURPOSE ACCELERATORS USING GENERAL PURPOSE PROCESSORS - A novel and useful system and method of improving the utilization of a special purpose accelerator in a system incorporating a general purpose processor. In some embodiments, the current queue status of the special purpose accelerator is periodically monitored using a background monitoring process/thread, and the current queue status is stored in a shared memory. A shim redirection layer, added a priori to a library function task, determines at runtime and in user space, based on the current queue status, whether to execute the library function task on the special purpose accelerator or on the general purpose processor. | 12-15-2011 |
20110307891 | METHOD AND DEVICE FOR ACTIVATION OF COMPONENTS - A method and electronic device for activating components based on predicted device activity. The method and device include maintaining a set of device activity information storing data collected from components in the device. The device activity information may be maintained over a predetermined time period and may include times associated with the collected component data. The device activity information may include data regarding scheduled events. Device activity and the appropriate activation state of a component on the device may be predicted based on the current time, current data collected from components in the device and data in the device activity information. | 12-15-2011 |
20110307892 | PORTABLE INFORMATION TERMINAL, COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREON PORTABLE INFORMATION TERMINAL CONTROL PROGRAM, PORTABLE INFORMATION SYSTEM, AND PORTABLE INFORMATION TERMINAL CONTROL METHOD - First, one or more tasks each representing process contents of transmission or reception of data are set. Next, an access point is searched for, and a connection to the access point is performed if the access point is found. Then, before the one or more tasks are executed via the access point, an execution control parameter, for at least one of the one or more tasks, which indicates at least one of an execution instruction, whether or not to execute the task, and an execution priority, is obtained in accordance with an access point identifier. Then, transmission or reception of data is performed via the access point by executing at least one of the one or more tasks on the basis of the execution control parameter. | 12-15-2011 |
20110321046 | PROCESS INFORMATION MANAGEMENT APPARATUS AND METHOD, IMAGE FORMING APPARATUS, AND COMPUTER READABLE MEDIUM STORING PROGRAM THEREFOR - A process information management apparatus includes a first processor, a management unit, a second processor, a generator, and a third processor. The first processor executes a first process based on a first application program in an environment where a first system is operating. The management unit manages information regarding a process executed in an environment where a second system is operating, based on a second application program in the environment. The second processor executes a second process based on the second application program in the environment. The generator generates, when the first process is executed, an execution instruction for a third process corresponding to the first process so that the first process is regarded as being executed in the environment, and, when the first process is completed, generates a completion instruction for the third process. Based on the second application program in the environment, the third processor executes and completes the third process. | 12-29-2011 |
20110321047 | APPLICATION PRE-LAUNCH TO REDUCE USER INTERFACE LATENCY - A device stores a plurality of applications and a list of associations for those applications. The applications are preferably stored within a secondary memory of the device, and once launched each application is loaded into RAM. Each application is preferably associated to one or more of the other applications. Preferably, no applications are launched when the device is powered on. A user selects an application, which is then launched by the device, thereby loading the application from the secondary memory to RAM. Whenever an application is determined to be associated with a currently active state application, and that associated application has yet to be loaded from secondary memory to RAM, the associated application is pre-launched such that the associated application is loaded into RAM, but is set to an inactive state. | 12-29-2011 |
20120005679 | APPARATUS AND METHOD FOR THREAD PROGRESS TRACKING USING DETERMINISTIC PROGRESS INDEX - Provided is a method and apparatus for measuring a performance or a progress state of an application program to perform data processing and execute particular functions in a computing environment using a micro architecture. A thread progress tracking apparatus may include a selector to select at least one thread constituting an application program; a determination unit to determine, based on a predetermined criterion, whether an instruction execution scheme corresponds to a deterministic execution scheme having a regular cycle or a nondeterministic execution scheme having an irregular delay cycle with respect to each of at least one instruction constituting a corresponding thread; and a deterministic progress counter to generate a deterministic progress index with respect to an instruction that is executed by the deterministic execution scheme, excluding an instruction that is executed by the nondeterministic execution scheme. | 01-05-2012 |
20120011511 | METHODS FOR SUPPORTING USERS WITH TASK CONTINUITY AND COMPLETION ACROSS DEVICES AND TIME - Concepts and technologies are described herein for providing task continuity and supporting task completion across devices and time. A task management application is configured to monitor one or more interactions between a user and a device. The interactions can include the use of the device, the use of one or more applications, and/or other tasks, subtasks, or other operations. Predictive models constructed from data or logical models can be used to predict the attention resources available or allocated to a task or subtask, as well as the attention and affordances available within a context for addressing the task, and these inferences can be used to mark or route the task for later reminding and display. In some embodiments, the task management application is configured to remind or execute a follow-up action when a session is resumed. Embodiments include providing users with easy-to-use gestures and mechanisms for providing input about desired follow-up on the same or other devices. | 01-12-2012 |
20120011512 | MINIMIZING OVERHEAD IN RESOLVING OPERATING SYSTEM SYMBOLS - A symbol resolution unit can be configured for resolving conflicting operating system symbols. A default symbol resolution data structure can be accessed to resolve a symbol associated with a client of an operating system. A first data entry that corresponds to the symbol is located in the default symbol resolution data structure. It is determined that the first data entry indicates that the symbol is marked special (e.g., as a conflicting operating system symbol). A secondary symbol resolution data structure is accessed in response to determining that the first data entry indicates that the symbol is marked special. A second data entry that corresponds to the symbol is located in the secondary symbol resolution data structure based, at least in part, on an identifier of the client. A memory location indicated in the second data entry that corresponds to the symbol is provided to the client. | 01-12-2012 |
20120011513 | IMPLEMENTING A VERSIONED VIRTUALIZED APPLICATION RUNTIME ENVIRONMENT - A workload partition, associated with a legacy operating system, is created on an instance of a base operating system implemented on a machine. The legacy operating system is an earlier version of the base operating system. An indication is detected to execute a first command associated with a process of the workload partition associated with the legacy operating system. It is determined that the first command associated with the process of the workload partition associated with the legacy operating system was overlaid with a reference to a runtime execution wrapper associated with the base operating system. The runtime execution wrapper is executed to access a runtime execution environment associated with the base operating system. The first command is executed using the runtime execution environment associated with the base operating system. | 01-12-2012 |
20120011514 | GENERATING AN ADVANCED FUNCTION USAGE PLANNING REPORT - An apparatus, system, and method for generating an advanced function usage planning report. One embodiment of the apparatus includes a detection module, a monitoring module, and a planning report module. The detection module detects use of an advanced function on a storage controller. The advanced function includes an optional storage function beyond a standard function set. The monitoring module monitors the use of the advanced function on the storage controller. The planning report module generates a planning report based at least in part on use information from the monitored use of the advanced function. | 01-12-2012 |
20120017213 | ULTRA-LOW COST SANDBOXING FOR APPLICATION APPLIANCES - The disclosed architecture facilitates the sandboxing of applications by taking core operating system components that normally run in the operating system kernel or otherwise outside the application process and on which a sandboxed application depends to run, and converting these core operating system components to run within the application process. The architecture takes the abstractions already provided by the host operating system and converts these abstractions for use by the sandbox environment. More specifically, new operating system APIs (application program interfaces) are created that include only the basic computation services, thus separating the basic services from rich application APIs. The code providing the rich application APIs is copied out of the operating system and into the application environment—the application process. | 01-19-2012 |
20120017214 | SYSTEM AND METHOD TO ALLOCATE PORTIONS OF A SHARED STACK - A system and method of managing a stack shared by multiple threads of a processor includes allocating a first portion of a shared stack to a first thread and allocating a second portion of the shared stack to a second thread. | 01-19-2012 |
20120017215 | TASK ENVIRONMENT GENERATION SYSTEM, TASK ENVIRONMENT GENERATION METHOD, AND STORAGE MEDIUM - In generating a task environment with a thin client system, it is desirable to reduce the man-hours required for system construction and setting operations. Specifically, a task environment setting table stores task environment conditions for every project or task force. A task environment setting section automatically performs the settings required for a task at the timing when the desktop environment generation section generates the desktop environment, in accordance with the settings in the task environment setting table. Before a user connects to a desktop environment via a session management section to start a task, the session management section performs settings for a task environment generation agent for each user. As a result, not only a simple desktop environment but also the environment for the task is automatically set. | 01-19-2012 |
20120042313 | SYSTEM HAVING TUNABLE PERFORMANCE, AND ASSOCIATED METHOD - A system having tunable performance includes: a plurality of units, wherein at least one unit includes a hardware circuit; at least one global/local busy level detector including at least one global busy level detector and/or at least one local busy level detector, wherein each global/local busy level detector is arranged to detect a global/local busy level of at least one portion of the units; and a global/local system performance manager arranged to tune the performance of the system according to at least one global/local busy level detected by the at least one global/local busy level detector, wherein based upon the at least one global/local busy level and at least one policy associated with the performance of the system, the global/local system performance manager adjusts at least one parameter of the system when needed, and the parameter corresponds to the performance of the system. An associated method is also provided. | 02-16-2012 |
20120047504 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR MAINTAINING A RESOURCE BASED ON A COST OF ENERGY - Methods and systems are described for maintaining a resource based on a cost of energy. In one aspect, a first energy cost, for accessing a resource via an energy consuming component in an execution environment, is detected. The first energy cost is measured according to a specified metric. A determination is made that a configured maintenance condition is met based on the energy cost. A maintenance operation is identified for configuring at least one of the resource and the energy consuming component for accessing the resource at a second energy cost that is less than the first energy cost. An indication to start the maintenance operation is sent in response to determining that the maintenance condition is met. | 02-23-2012 |
20120047505 | PREDICTIVE REMOVAL OF RUNTIME DATA USING ATTRIBUTE CHARACTERIZING - Techniques are described for selectively removing runtime data from a stream-based application in a manner that reduces the impact of any delay caused by the processing of the data in the stream-based application. In addition to removing the data from a primary processing path of the stream-based application, the data may be processed in an alternate manner, either using alternate processing resources, or by delaying the processing of the data. | 02-23-2012 |
20120047506 | RESOURCE ABSTRACTION VIA ENABLER AND METADATA - Embodiments of the invention provide systems and methods for managing an enabler and dependencies of the enabler. According to one embodiment, a method of managing an enabler can comprise requesting a management function via a management interface of the enabler. The management interface can provide an abstraction of one or more management functions for managing the enabler and/or dependencies of the enabler. In some cases, prior to requesting the management function, metadata associated with the management interface can be read and a determination can be made as to whether the management function is available or unavailable. Requesting the management function via the management interface of the enabler can be performed in response to determining the management function is available. In response to determining the management function is unavailable, one or more alternative functions can be identified based on the metadata and the one or more alternative functions can be requested. | 02-23-2012 |
20120060155 | METHOD, SYSTEM, AND COMPUTER READABLE MEDIUM FOR WORKFLOW COMMUNICATION WHEREIN INSTRUCTIONS TO A WORKFLOW APPLICATION ARE WRITTEN BY THE WORKFLOW APPLICATION - A method, apparatus, system, and computer readable medium for communicating between an apparatus hosting a workflow application and a device, by generating a template including a placeholder. The template is then sent to the device, and a device output is received from the device, the device output including device data generated by the device and inserted into the placeholder of the template. | 03-08-2012 |
20120060156 | METHOD, APPARATUS, SYSTEM, AND COMPUTER READABLE MEDIUM FOR UNIVERSAL DEVICE PARTICIPATION IN BUSINESS PROCESS AND WORKFLOW APPLICATION - A method, apparatus, system, and computer readable medium for hosting an application participating in a workflow communication which generates a workflow step and translates the workflow step into a device instruction, the device instruction being based on device specifications and instructing the device to perform a function. The device instruction is sent to the device and feedback is received from the device, the feedback corresponding to the device instruction. The feedback is then processed and the workflow communication is updated based on the processed feedback. | 03-08-2012 |
20120060157 | METHODS AND STRUCTURE FOR UTILIZING DYNAMIC CAPABILITIES IN CLIENT/SERVER SOFTWARE INTERACTION - Methods and structure for improved client/server program communication by transmitting dynamically maintained service capabilities information from the server program to the client program. The client program generates a service request based on the received service capabilities information. Since the service capabilities information is retrieved from the server program and is dynamically maintained by the server program, the client program need not be updated when available services from the server program are modified. In one exemplary embodiment, the client program may be a print application client program and the server program may be a print server program. The print client program retrieves the current printer device capabilities (service capabilities information) and generates a print job ticket (service request) based on the retrieved, dynamically maintained printer device capability information. The job ticket is then transmitted to the server program to cause the printing of the document specified by the job ticket. | 03-08-2012 |
20120060158 | COEXISTENCE MANAGER HARDWARE/SOFTWARE IMPLEMENTATION - A method of wireless communication includes partitioning coexistence tasks between short term policy setting tasks and policy implementing tasks, processing the short term policy setting tasks using a first set of computing resources, and processing the policy implementing tasks using a second set of computing resources. The first set may be software resources configured for slower execution of tasks and the second set may be hardware resources configured for just-in-time execution of tasks. The policy may determine a time after which a first radio event is not to be interrupted, and later events may be granted or denied based on whether they would begin before or after the do-not-interrupt time. The do-not-interrupt time may be based on a weighted priority of the first radio event. | 03-08-2012 |
20120060159 | METHOD AND APPARATUS FOR SCHEDULING THE PROCESSING OF COMMANDS FOR EXECUTION BY CRYPTOGRAPHIC ALGORITHM CORES IN A PROGRAMMABLE NETWORK PROCESSOR - A method and apparatus are provided for scheduling the processing of commands by a plurality of cryptographic algorithm cores in a network processor. | 03-08-2012 |
20120066682 | VIRTUAL AND PHYSICAL ENTERPRISE SYSTEM IMAGING - A web page behavior enhancement (WPBE) control element is provided on a rendered web page enabling a user to perform actions on at least a portion of the web page content such as customizing, editing, analyzing, forwarding, and/or annotating the content. The processed content may be presented on the original web page, on a locally stored version of the web page, or archived for subsequent use, where any changes to the original web page content may be tracked and the user notified about the changes. The WPBE control element(s) may be embedded into the web page at the source web application or at the local browser based on factors like web application capabilities, browser capabilities, user preferences, usage pattern, and comparable ones. | 03-15-2012 |
20120072912 | PERFORMING A COMPUTERIZED TASK WITH DIVERSE DEVICES - A main computer runs a primary program performing an ongoing task, the primary program being optimized for performance on a desktop computer. A computerized device remote from the main computer runs an adjunct program which is a modified version of the primary program and is optimized for performance in a hands free mode. Communication means provides communication between the main computer and computerized device, and the main computer and computerized device interact through the communication means so that each influences the operation of the other. | 03-22-2012 |
20120072913 | METHOD AND APPARATUS FOR PROCESS ALLOCATION WITHIN A MOBILE DEVICE - An approach is provided for managing processes for enabling execution of applications within a user device. One or more characteristics of an application are determined by a process monitor module. A process management module then determines a process of the device for executing the application based, at least in part, on the one or more characteristics. A process allocation policy is executed for enabling process allocation decisions. | 03-22-2012 |
20120072914 | CLOUD COMPUTING SYSTEM AND METHOD FOR CONTROLLING SAME - A management application refers to an application management table and acquires the operation states of all VMs included in an additional application whose priority relates to the execution of a job requested from the image forming apparatus. Based on the acquired operation states of the VMs, the management application detects an additional application that includes only VMs that are not executing processing, and deletes the VMs included in the detected additional application. | 03-22-2012 |
20120079482 | COORDINATING DEVICE AND APPLICATION BREAK EVENTS FOR PLATFORM POWER SAVING - Systems and methods of managing break events may provide for detecting a first break event from a first event source and detecting a second break event from a second event source. In one example, the event sources can include devices coupled to a platform as well as active applications on the platform. Issuance of the first and second break events to the platform can be coordinated based, at least in part, on runtime information associated with the platform. | 03-29-2012 |
20120079483 | Computer Implemented Automatic Lock Insertion in Concurrent Programs - The method provides a fully automatic lock insertion procedure to enforce critical sections that guarantees deadlock freedom and tries to minimize the lengths of the resulting critical sections. The method encapsulates regions of code meant to be executed atomically in a critical section induced by a pair of lock/unlock statements and enlarges the critical section of the first thread by propagating the newly introduced lock statement backward until it no longer participates in a deadlock. If the newly introduced lock statement still participates in a deadlock, the process terminates. If the lock statement of the second thread participates in a deadlock, the method enlarges the critical section of the second thread by propagating the newly introduced lock statement backward until it no longer participates in a deadlock. | 03-29-2012 |
20120079484 | SYSTEM, METHODS, AND MEDIA FOR PROVIDING IN-MEMORY NON-RELATIONAL DATABASES - Providing a first control process that executes in a hardware processor; providing a first server process that executes in a hardware processor, that responds to write requests by storing objects in an in-memory, non-relational data store, and that responds to read requests by providing objects from the in-memory, non-relational data store, wherein the objects each have an object size; forming a plurality of persistent connections between the first control process and the first server process; using the first control process, pipelining, using a pipeline having a pipeline size, requests that include the read requests and the write requests over at least one of the plurality of persistent connections; using the first control process, adjusting the number of persistent connections and the pipeline size based on an average of the object sizes; and using the first control process, prioritizing requests by request type based on anticipated load from the requests. | 03-29-2012 |
20120079485 | PROCESS DESIGN APPARATUS, AND PROCESS DESIGN METHOD - A non-transitory computer readable medium stores therein a program causing a computer to execute setting, in accordance with allocation of a task in a process which has at least two tasks as a predetermined processing unit, a constraint which is related to the process, the task, or a combination thereof. The program also causes the computer to execute generating a process that satisfies the set constraint, on the basis of constraint definition information that defines the constraint. | 03-29-2012 |
20120084779 | TRACKING REQUESTS THAT FLOW BETWEEN SUBSYSTEMS - The present invention extends to methods, systems, and computer program products for tracking requests that flow between subsystems. Embodiments of the invention facilitate following a user interaction/transaction from the point of entry through any subsystems that are called until the interaction/transaction is fulfilled. Generated information (e.g., log data) regarding a transaction can be aggregated across all subsystems, such as, for example, in a central repository. When failures occur, log and trace levels can be automatically increased for subsequent calls. | 04-05-2012 |
20120084780 | Mechanism for Customized Monitoring of System Activities - A mechanism for monitoring system activities using a performance monitor. A method of embodiments of the invention includes identifying a plurality of monitoring tools to monitor activities of a plurality of system components at the computer system, and each monitoring tool monitors activities of at least one system component of the plurality of system components. The method further includes generating a monitoring template to include monitoring capabilities of each of the plurality of monitoring tools, and customizing, via the monitoring template, the performance monitor to serve as a universal monitoring tool to facilitate the plurality of monitoring tools to monitor the activities of the plurality of system components. | 04-05-2012 |
20120084781 | JOB DISTRIBUTION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE AND COMPUTER-READABLE MEDIUM - An information processing device includes an image forming device, an execution sub-job determining unit, a sub-job execution destination determining unit and an execution instructing unit. The image forming device is designated as a destination and executes at least the N-th sub-job, which is the last of the N sub-jobs, in a case where a job is divided into N sub-jobs. The execution sub-job determining unit determines a sub-job other than the N-th sub-job within a range of the occupiable time of the image forming device in a case where there is available capacity in the image forming device. The sub-job execution destination determining unit determines the execution destination of each sub-job. The execution instructing unit instructs a calculation resource to execute the first to (i−1)-th sub-jobs and instructs the image forming device to execute the i-th to N-th sub-jobs. | 04-05-2012 |
20120089983 | ASSESSING PROCESS DEPLOYMENT - Systems and methods for assessing process deployment are described. In one implementation, the method includes collecting at least one metric value associated with at least one operating unit within an organization. Further, the method includes normalizing the at least one collected metric value to a common scale to obtain normalized metric values. The method further includes analyzing the normalized metric values to calculate a process deployment index which indicates the extent of deployment of the one or more processes within the organization. | 04-12-2012 |
20120096463 | System and Method for Integrated Workflow Scaling - A system is provided. The system comprises a first computer located in a first plant, a first memory, and a first object based process management application stored in the first memory. The system further comprises a second computer located in a location separate from the first plant, a second memory, and a second object based process management application stored in the second memory. When executed on the first computer, the first application invokes scripts in response to events and the scripts launch tasks. When executed on the second computer, the second application invokes scripts in response to events and the scripts launch tasks; one of the events acted on by the second application is a message received from the first application. | 04-19-2012 |
20120096464 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus includes an application and functional modules configured to collaborate with each other to provide an application function of the application. Each of the functional modules operates as a requester that requests a provider function and a provider that provides the provider function requested by the requester. Each of the functional modules includes a function availability query unit that, at the requester, queries the provider about whether the requested provider function is available, a function availability response unit that, at the provider, sends a response indicating whether the requested provider function is available to the requester, and a function execution determining unit that, at the requester, controls execution of a requester function of the requester based on the response sent from the function availability response unit. | 04-19-2012 |
20120096465 | IMAGE FORMING APPARATUS, LOG MANAGEMENT METHOD, AND STORAGE MEDIUM - An image forming apparatus includes application programs that generate logs, an interface information storing unit configured to store interface information for the respective application programs, and a log management unit. The interface information is used to obtain the logs generated by the corresponding application programs. The log management unit is configured to receive a log acquisition request, to obtain the logs of one or more of the application programs specified in the log acquisition request based on the corresponding interface information stored in the interface information storing unit, and to output the obtained logs as a response to the log acquisition request. | 04-19-2012 |
20120110579 | ENTERPRISE RESOURCE PLANNING ORIENTED CONTEXT-AWARE ENVIRONMENT - An Enterprise Resource Planning (ERP) context-aware environment may be provided. Upon receipt of an action request, a context state may be updated. The context state may be analyzed to determine whether the context state is associated with at least one predicted objective. If so, a suggested next action associated with the at least one predicted objective may be provided. | 05-03-2012 |
20120110580 | DYNAMIC AND INTELLIGENT PARTIAL COMPUTATION MANAGEMENT FOR EFFICIENT PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT - A method for verifying software includes determining the result of a bounding function, and using the result of the bounding function to apply one or more policies to the execution of the received job. The bounding function evaluates the execution of a received job, the received job indicating a portion of software to be verified. The result of the bounding function is based upon the present execution of the received job, one or more historical parameters, and an evaluation of the number of idle nodes available to process other jobs. | 05-03-2012 |
20120110581 | TASK CANCELLATION GRACE PERIODS - A command to perform a task can be received and the task can be started. A command to cancel the task can also be received. The task can be provided with a warning signal and a predetermined grace period of time before cancelling the task, which can allow the task to prepare for cancellation, such as by shutting down cleanly. If the task has not shut down within the grace period, then the task can be cancelled after the grace period expires. | 05-03-2012 |
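The warn-then-wait-then-cancel sequence in this abstract maps naturally onto an event-based sketch. The class and function names below are hypothetical, and cooperative shutdown via a polled event is only one way a task might "prepare for cancellation":

```python
import threading

class GracefulTask:
    """A toy task that polls a warning event so it can shut down cleanly."""
    def __init__(self):
        self.warned = threading.Event()    # warning signal sent on cancel request
        self.finished = threading.Event()  # set when the task shuts down cleanly
        self.forced = False                # True only if the grace period expired

    def run(self):
        while not self.warned.wait(timeout=0.01):
            pass                           # simulated work loop
        self.finished.set()                # clean shutdown on warning

def cancel_with_grace(task, grace_seconds):
    """Warn the task, wait up to grace_seconds, then force-cancel."""
    task.warned.set()                      # 1. provide the warning signal
    if not task.finished.wait(timeout=grace_seconds):
        task.forced = True                 # 2. grace period expired: hard cancel
    return task.forced

task = GracefulTask()
worker = threading.Thread(target=task.run)
worker.start()
forced = cancel_with_grace(task, grace_seconds=1.0)
worker.join()
```

Here the task responds within the grace period, so no forced cancellation occurs.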
20120117568 | Enforced Unitasking in Multitasking Systems - A computer system includes one or more devices that are capable of multitasking (performing at least two tasks in parallel or substantially in parallel). In response to detecting that one of the devices is performing a first one of the tasks, the system prevents the devices from performing at least one of the tasks other than the first task (such as all of the tasks other than the first task). In response to detecting that one of the devices is performing a second one of the tasks, the system prevents the devices from performing at least one of the tasks other than the second task (such as all of the tasks other than the second task). | 05-10-2012 |

20120124582 | Calculating Processor Load - A method, computer system, and computer program product for identifying a transient thread. A thread of a process is placed in a run queue associated with a processor. Data is added to the thread indicating a time that the thread was placed into the run queue. | 05-17-2012 |
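The enqueue-timestamp mechanism in this abstract can be sketched by tagging each run-queue entry with its insertion time and classifying threads by how long they waited. The threshold value and all names are illustrative assumptions, not taken from the patent:

```python
import time
from collections import deque

TRANSIENT_THRESHOLD = 0.005  # seconds; illustrative cutoff, not from the patent

run_queue = deque()

def enqueue(thread_id):
    # Add data to the thread entry indicating when it entered the run queue.
    run_queue.append((thread_id, time.monotonic()))

def dequeue():
    # Pop the next thread and classify it by time spent queued.
    thread_id, enqueued_at = run_queue.popleft()
    waited = time.monotonic() - enqueued_at
    return thread_id, waited < TRANSIENT_THRESHOLD

enqueue("t1")
tid, transient = dequeue()  # dequeued almost immediately: a transient thread
```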
20120124583 | APPARATUS AND METHOD FOR PARALLEL PROCESSING FLOW BASED DATA - Disclosed is an apparatus for parallel processing flow based data that may generate a first flow and a second flow based on data classified into lower layer information and upper layer information, may determine whether processing of the lower layer information or the upper layer information is required by analyzing the first flow or the second flow, and may process and output the lower layer information or the upper layer information using a flow unit based on the determination result. | 05-17-2012 |
20120131580 | APPLICATION TASK REQUEST AND FULFILLMENT PROCESSING - Techniques for fulfilling requests and providing results to a requesting entity. Embodiments may poll a flag to determine when a fulfillment request from a requesting entity is pending. Upon detecting the fulfillment request is pending, embodiments may retrieve the request and perform one or more actions associated with the request to produce a fulfillment result. Embodiments may then store the fulfillment result in a storage location and transmit a notification to the requesting entity to indicate the fulfillment request has been fulfilled. | 05-24-2012 |
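The poll-flag / fulfill / store / notify cycle described here can be sketched with a shared event and queue. The structures and the "action" performed (upper-casing the payload) are stand-ins chosen for illustration:

```python
import queue
import threading

pending_flag = threading.Event()  # flag polled to detect a pending request
requests = queue.Queue()
results = {}                      # storage location for fulfillment results
notifications = []                # notifications sent to requesting entities

def submit(request_id, payload):
    requests.put((request_id, payload))
    pending_flag.set()            # mark that a fulfillment request is pending

def fulfill_once():
    """One polling iteration: check the flag, retrieve, fulfill, store, notify."""
    if not pending_flag.is_set():
        return False
    request_id, payload = requests.get()
    results[request_id] = payload.upper()  # illustrative fulfillment action
    notifications.append(request_id)       # notify the requesting entity
    if requests.empty():
        pending_flag.clear()
    return True

submit("r1", "hello")
handled = fulfill_once()
```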
20120131581 | Controlled Sharing of Information in Virtual Organizations - In one embodiment, a method for extracting data items for a task requesting a set of data items in a virtual organization including a plurality of members is provided. A set of confidentiality sub-policies associated with the set of data items and an information utility sub-policy associated with the task are retrieved. At least a portion of the set of data items for the task are retrieved based on an analysis that optimally balances confidentiality and information utility using the set of confidentiality sub-policies and the information utility sub-policy. | 05-24-2012 |
20120137294 | Data Communications In A Parallel Active Messaging Interface Of A Parallel Computer - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (‘RTS’) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data. | 05-31-2012 |
20120137295 | METHOD FOR DISPLAYING CPU UTILIZATION IN A MULTI-PROCESSING SYSTEM - Various exemplary embodiments relate to a method of measuring CPU utilization. The method may include: executing at least one task on a multi-processing system having at least two processors; determining that a task is blocked because a resource is unavailable; starting a first timer for the task that measures the time the task is blocked; determining that the resource is available; resuming processing the task; stopping the first timer for the task; and storing the time interval that the task was blocked. The method may determine that a task is blocked when the task requires access to a resource, and a semaphore indicates that the resource is in use. The method may also include measuring the utilization time of each task, an idle time for each processor, and an interrupt request time for each processor. Various exemplary embodiments relate the above method encoded as instructions on a machine-readable medium. | 05-31-2012 |
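The blocked-time measurement in this abstract — start a timer when a task blocks on an unavailable resource, stop it when the resource becomes available — can be sketched with an instrumented semaphore. The wrapper class is hypothetical; real implementations would hook the scheduler rather than wrap acquires:

```python
import threading
import time

class InstrumentedSemaphore:
    """Semaphore wrapper that records how long each acquire was blocked."""
    def __init__(self, value=1):
        self._sem = threading.Semaphore(value)
        self.blocked_intervals = []    # stored per-acquire blocked durations

    def acquire(self):
        start = time.monotonic()       # start the timer for this task
        self._sem.acquire()            # blocks while the resource is in use
        self.blocked_intervals.append(time.monotonic() - start)

    def release(self):
        self._sem.release()

sem = InstrumentedSemaphore(value=1)
sem.acquire()                          # resource free: negligible blocking

def worker():
    sem.acquire()                      # blocks until the main thread releases

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)                       # hold the resource for ~50 ms
sem.release()                          # resource available: worker resumes
t.join()
```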
20120137296 | PORTABLE COMMUNICATION DEVICE OPERATING METHOD - A portable communication device operating method includes the following steps: receiving a first software opening command to open a first software. Then, a portable communication device opens the first software. The portable communication device stores several pre-load relations, wherein each of the pre-load relations records at least one pre-load software to be pre-loaded after a preset software is opened. At least one second software to be pre-loaded after the first software is opened is obtained by inquiring the pre-load relations according to the first software. The portable communication device pre-loads the second software. A second software opening command to open the second software is received. The portable communication device opens the pre-loaded second software. | 05-31-2012 |
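The pre-load relations described in this abstract amount to a lookup table consulted on each open. The table contents and function names below are invented for illustration:

```python
# Pre-load relations: after a preset software opens, pre-load its companions.
# The table contents are illustrative; the patent stores such relations on the device.
PRELOAD_RELATIONS = {
    "mail": ["contacts", "calendar"],
    "browser": ["pdf_viewer"],
}

preloaded = set()   # software already loaded into memory
opened = []         # software visibly opened for the user

def open_software(name):
    opened.append(name)
    # Inquire the pre-load relations for companions of the opened software.
    for companion in PRELOAD_RELATIONS.get(name, []):
        preloaded.add(companion)

open_software("mail")      # opening mail triggers pre-loading of companions
open_software("contacts")  # already pre-loaded, so this open is fast
```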
20120144392 | Resource Manager for Managing Hardware Resources - A resource manager is provided, which is configured to manage a plurality of hardware resources in a computing device. The resources are managed in dependence on a record of each of the plurality of hardware resources, and an indication of dependencies between the plurality of hardware resources. | 06-07-2012 |
20120151485 | Data Communications In A Parallel Active Messaging Interface Of A Parallel Computer - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint. | 06-14-2012 |
20120151486 | UTILIZING USER-DEFINED WORKFLOW POLICIES TO AUTOMATE CHANGES MADE TO COMPOSITE WORKFLOWS - Automating changes to a composite workflow using user-defined workflow policies can begin with the detection of a state change by a workflow policy handler for a record of an instance of a composite workflow running within a composite workflow system. User-defined workflow policies can be identified for the composite workflow in which the change was detected. A user-defined workflow policy can define policy actions to be performed if policy conditions are satisfied. For each identified user-defined workflow policy, the applicability to the instance of the composite workflow can be determined. If an identified user-defined workflow policy is determined to be applicable, the policy actions can be automatically performed on the instance of the composite workflow. | 06-14-2012 |
20120151487 | VIRTUAL PROCESSOR METHODS AND APPARATUS WITH UNIFIED EVENT NOTIFICATION AND CONSUMER-PRODUCED MEMORY OPERATIONS OR - The invention provides, in one aspect, a virtual processor that includes one or more virtual processing units. These virtual processing units execute on one or more processors, and each virtual processing unit executes one or more processes or threads (collectively, “threads”). While the threads may be constrained to executing throughout their respective lifetimes on the same virtual processing units, they need not be. The invention provides, in other aspects, virtual and/or digital data processors with improved dataflow-based synchronization. A process or thread (collectively, again, “thread”) executing within such a processor can execute a memory instruction (e.g., an “Empty” or other memory-consumer instruction) that permits the thread to wait on the availability of data generated, e.g., by another thread and to transparently wake up when that other thread makes the data available (e.g., by execution of a “Fill” or other memory-producer instruction). | 06-14-2012 |
20120159487 | IDENTIFYING THREADS THAT WAIT FOR A MUTEX - In an embodiment, a first thread of a plurality of threads of a program is halted. A subset of the plurality of threads are determined that are waiting for a mutex that is locked by the first thread while the first thread is halted. Identifiers of the subset of the plurality of threads are presented. The subset of the plurality of threads may have their execution directly blocked and/or indirectly blocked by a lock on the mutex by the first thread. In an embodiment, the first thread is halted in response to the first thread encountering a breakpoint, and the subset of the plurality of threads do not halt in response to the first thread encountering the breakpoint. | 06-21-2012 |
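The direct-blocking case in this abstract can be sketched with a mutex that records its current owner and the set of waiting threads, so a debugger-style report can list who is blocked on a halted thread. The class and its bookkeeping are hypothetical; a real debugger would read this from the runtime:

```python
import threading

class TrackedMutex:
    """Mutex that records its current owner and the threads waiting on it."""
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()
        self.owner = None
        self.waiters = set()

    def acquire(self, thread_name):
        self.waiters.add(thread_name)   # register as a waiter before blocking
        self._lock.acquire()
        self.waiters.discard(thread_name)
        self.owner = thread_name

    def release(self):
        self.owner = None
        self._lock.release()

def blocked_by(halted_thread, mutexes):
    """Identifiers of threads directly blocked on mutexes the halted thread holds."""
    blocked = set()
    for m in mutexes:
        if m.owner == halted_thread:
            blocked |= m.waiters
    return blocked

m = TrackedMutex("m")
m.acquire("first")                     # "first" is later halted holding m

t = threading.Thread(target=m.acquire, args=("second",))
t.start()
while "second" not in m.waiters:       # wait until "second" registers as a waiter
    pass
report = blocked_by("first", [m])      # report presented while "first" is halted
m.release()
t.join()
```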
20120159488 | MANAGING TASKS AND INFORMATION - A system, method, and computer program product are disclosed. These relate to monitoring a plurality of information sources for messages; storing the messages in a database; and transmitting a list of the messages, the list having a first message and including a set of actions associated with the first message, the set of actions including a create-task action and a create-and-delegate-task action. | 06-21-2012 |
20120159489 | Systems and Methods for Generating a Cross-Product Matrix In a Single Pass Through Data Using Single Pass Levelization - Systems and methods are provided for a data processing system having multiple executable threads that is configured to generate a cross-product matrix in a single pass through data to be analyzed. An example system comprises memory for receiving the data to be analyzed, a processor having a plurality of executable threads for executing code to analyze data, and software code for generating a cross-product matrix in a single pass through data to be analyzed. The software code includes threaded variable levelization code for generating a plurality of thread specific binary trees for a plurality of classification variables, variable tree merge code for combining a plurality of the thread-specific trees into a plurality of overall trees for the plurality of classification variables, effect levelization code for generating a plurality of sub-matrices of the cross-product matrix using the plurality of the overall trees for the plurality of classification variables, and cross-product matrix generation code for generating the cross-product matrix by storing and ordering the elements of the sub-matrices in contiguous memory space. | 06-21-2012 |
20120159490 | DYNAMIC SCENARIOS FOR MANAGING COMPUTER SYSTEM ENTITIES - Systems and methods for dynamic generation of scenarios for managing computer system entities are described herein. A number of management programs are deployed in an administrator framework as embedded plug-ins. One or more management descriptors are provided for the plug-ins. The management descriptors include a number of relationships between the deployed programs and a number of computer system entities. The relationships indicate that the management applications can administer one or more aspects of the corresponding entities. A first management program is selected from the number of deployed management programs to administer a related computer system entity. One or more other management programs are dynamically identified and presented to the user as possible management scenarios. The identification of the other management programs is based on correspondence defined in the management descriptors to the aspects or the types of the computer system entity. | 06-21-2012 |
20120159491 | DATA DRIVEN DYNAMIC WORKFLOW - A method, system and article of manufacture for workflow processing and, more particularly, for managing creation and execution of data driven dynamic workflows. One embodiment provides a computer-implemented method for managing execution of workflow instances. The method comprises providing a parent process template and providing a child process template. The child process template is configured to implement an arbitrary number of workflow operations for a given workflow instance, and the parent process template is configured to instantiate child processes on the basis of the child process template to implement a desired workflow. The method further comprises receiving a workflow configuration and instantiating an instance of the workflow on the basis of the workflow configuration. The instantiating comprises instantiating a parent process on the basis of the parent process template and instantiating, by the parent process template, one or more child processes on the basis of the child process template. | 06-21-2012 |
20120167091 | Invasion Analysis to Identify Open Types - The automated identification of open types of a multi-function input program. The automated identification of open types is performed without annotations in the input program, but rather by identifying a set of invading types of the program, with each of the invading types being an open type. The identification of invading types may be performed iteratively until the set of invading types no longer grows. The set of open types may be used for any purpose such as perhaps the de-virtualization of an input program during compilation. | 06-28-2012 |
20120167092 | SYSTEM AND METHOD FOR IMPROVED SERVICE ORIENTED ARCHITECTURE - Certain embodiments enable improved execution of service-oriented tasks by coordinating service providers that access service-input values from other service providers and generate service-output values that are accessible by other service providers. Improved performance results from distributed operations of service providers that do not require centralized exchange of all information. | 06-28-2012 |
20120167093 | WEATHER ADAPTIVE ENVIRONMENTALLY HARDENED APPLIANCES - Embodiments of the present invention provide a method, system and computer program product for weather adaptive environmentally hardened appliances. In an embodiment of the invention, a method for weather adaptation of an environmentally hardened computing appliance includes determining a location of an environmentally hardened computing appliance. Thereafter, a weather forecast including a temperature forecast can be retrieved for a block of time at the location. As a result, a cache policy for a cache of the environmentally hardened computing appliance can be adjusted to account for the weather forecast. | 06-28-2012 |
20120167094 | PERFORMING PREDICTIVE MODELING OF VIRTUAL MACHINE RELATIONSHIPS - An exemplary method may include collecting performance data of present operating conditions of network components operating in an enterprise network, extracting ontological component data of the network components from the collected performance data, comparing the collected performance data with predefined service tier threshold parameters, and determining if the ontological component data represents operational relationships between the network components, and establishing direct and indirect relationships between the network components based on the determined operational relationships and establishing a business application service group based on the ontological component data. | 06-28-2012 |
20120167095 | UTILIZING USER-DEFINED WORKFLOW POLICIES TO AUTOMATE CHANGES MADE TO COMPOSITE WORKFLOWS - Automating changes to a composite workflow using user-defined workflow policies can begin with the detection of a state change by a workflow policy handler for a record of an instance of a composite workflow running within a composite workflow system. User-defined workflow policies can be identified for the composite workflow in which the change was detected. A user-defined workflow policy can define policy actions to be performed if policy conditions are satisfied. For each identified user-defined workflow policy, the applicability to the instance of the composite workflow can be determined. If an identified user-defined workflow policy is determined to be applicable, the policy actions can be automatically performed on the instance of the composite workflow. | 06-28-2012 |
20120167096 | Managing the Processing of Processing Requests in a Data Processing System Comprising a Plurality of Processing Environments - Processing requests may be routed between a plurality of runtime environments, based on whether or not program(s) required for completion of the processing requests is/are loaded in a given runtime environment. Cost measures may be used to compare costs of processing a request in a local runtime environment and of processing the request at a non-local runtime environment. | 06-28-2012 |
20120174105 | Locality Mapping In A Distributed Processing System - Topology mapping in a distributed processing system that includes a plurality of compute nodes, including: initiating a message passing operation; including in a message generated by the message passing operation, topological information for the sending task; mapping the topological information for the sending task; determining whether the sending task and the receiving task reside on the same topological unit; if the sending task and the receiving task reside on the same topological unit, using an optimal local network pattern for subsequent message passing operations between the sending task and the receiving task; otherwise, using a data communications network between the topological unit of the sending task and the topological unit of the receiving task for subsequent message passing operations between the sending task and the receiving task. | 07-05-2012 |
20120174106 | MOBILE TERMINAL AND METHOD FOR MANAGING TASKS AT A PLATFORM LEVEL - A mobile terminal includes an execution unit to execute tasks, a determination unit to determine whether to manage the executed tasks and to generate a termination processing signal for terminating a first executed task if the executed tasks are to be managed, and a control unit to terminate the first executed task according to the termination processing signal generated by the determination unit. A method for managing tasks includes executing tasks, determining whether to manage the executed tasks, generating a termination processing signal for terminating a first executed task according to the determination result, and terminating the first executed task according to the termination processing signal. | 07-05-2012 |
20120174107 | ELECTRONIC DEVICE AND CONTROL METHOD FOR RUNNING APPLICATION - An electronic device capable of selecting an appropriate running mode for a to-be-run application is provided. The device is powered by a battery and runs a number of applications, which can be run in different running modes. The device includes a storage unit and a processor. The storage unit stores a relationship among the number of applications, running modes, and power consumption speeds. The processor detects current battery capacity of the battery, obtains the power consumption speeds corresponding to a to-be-run application being run in different modes respectively, and determines running times of the application in the different modes. The processor further compares each determined running time with a preset running time of the application and controls the application to be run in one mode according to the comparison result. A related control method is also provided. | 07-05-2012 |
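The mode-selection logic in this abstract — divide current battery capacity by each mode's consumption speed, then compare the resulting running time against a preset — can be sketched as follows. The mode names, consumption speeds, and the "fastest mode that still meets the preset" policy are illustrative assumptions:

```python
# Consumption speeds in capacity units per hour; illustrative values only.
CONSUMPTION_SPEED = {"high_performance": 20.0, "balanced": 10.0, "power_save": 5.0}

def pick_mode(battery_capacity, preset_hours):
    """Return the fastest mode whose predicted running time meets preset_hours."""
    for mode in ("high_performance", "balanced", "power_save"):
        running_time = battery_capacity / CONSUMPTION_SPEED[mode]  # hours
        if running_time >= preset_hours:
            return mode
    return "power_save"  # no mode meets the preset: fall back to the most frugal

# 60 units at 20/h gives 3 h (< 4 h preset), but 10/h gives 6 h, so "balanced" wins.
mode = pick_mode(battery_capacity=60.0, preset_hours=4.0)
```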
20120174108 | INTELLIGENT PRE-STARTED JOB AFFINITY FOR NON-UNIFORM MEMORY ACCESS COMPUTER SYSTEM - A method, apparatus, and program product select a pre-started job from among a plurality of pre-started jobs in which to perform a task in a computer system with a NUMA configuration. An attempt to perform a task is received as a connection. Information associated with the connection is compared to information associated with a plurality of pre-started jobs. In response to comparing the information, it is determined either that a pre-started job was previously used to perform the task or that no pre-started job was previously used to perform the task. In response to either determination, another pre-started job is determined in which to perform the task. The other pre-started job is determined based on affinity with the task, and may be reallocated to perform the task. | 07-05-2012 |
20120185857 | TECHNIQUES TO AUTOMATICALLY CLASSIFY PROCESSES - Techniques for automatically classifying processes are presented. Processes executing on a multicore processor machine are evaluated to determine shared resources between the processes, excluding shared system resources. A determination is then made based on the evaluation to group the processes as a single managed resource within an operating system of the multicore processor machine. | 07-19-2012 |
20120185858 | PROCESSOR OPERATION MONITORING SYSTEM AND MONITORING METHOD THEREOF - A processor includes a computation unit; a storage unit storing a program; and a data transmission circuit that transmits to an operation monitoring unit a signal corresponding to an instruction for reporting the execution stage of the program. The operation monitoring unit includes a transition operation identification circuit and a loop processing identification circuit. The transition operation identification circuit receives a start ID instruction with an attached ID that identifies a task; a termination ID instruction that identifies termination of task operation; and, if the task is execution of loop processing, a loop instruction that reports the maximum value of the number of times of this loop processing. The transition operation identification circuit identifies success of the transition operations of the tasks of the program, based on the ID instructions. The loop processing identification circuit identifies abnormality of the number of times of loop processing. | 07-19-2012 |
20120185859 | METHODS AND SYSTEMS FOR PROGRAM ANALYSIS AND PROGRAM CONVERSION - History memory | 07-19-2012 |
20120192186 | Computing Platform with Resource Constraint Negotiation - Various techniques are described for resource management on a computing platform. A computing platform can receive a query message that specifies an amount of a resource proposed for allocation. The computing platform can select a selected recommendation level from a plurality of recommendation levels, based on an evaluation of a request for the amount of the resource proposed for allocation. The computing platform can generate a resource allocation recommendation that includes the selected recommendation level with respect to the amount of the resource proposed for allocation. The computing platform can send the resource allocation recommendation. | 07-26-2012 |
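The selection of a recommendation level from an evaluation of a proposed allocation can be sketched as a threshold function. The level names and ratio thresholds below are invented for illustration; the patent leaves the evaluation criteria open:

```python
# Map an evaluated resource request onto one of several recommendation levels.
# Level names and thresholds are illustrative, not from the patent.
def recommend(amount_requested, amount_available):
    """Select a recommendation level for the proposed allocation."""
    ratio = amount_requested / amount_available
    if ratio <= 0.5:
        return "approve"   # comfortably within available resources
    if ratio <= 0.9:
        return "caution"   # allocation would leave little headroom
    return "deny"          # allocation would nearly exhaust the resource

recommendation = recommend(amount_requested=300, amount_available=1000)
```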
20120198454 | ADAPTIVE SPINNING OF COMPUTER PROGRAM THREADS ACQUIRING LOCKS ON RESOURCE OBJECTS BY SELECTIVE SAMPLING OF THE LOCKS - In the dynamic sampling or collection of data relative to locks for which threads attempting to acquire the lock may be spinning so as to adaptively adjust the spinning of threads for a lock, an implementation for monitoring a set of parameters relative to the sampling of data of particular locks and selectively terminating the sampling when certain parameter values or conditions are met. | 08-02-2012 |
20120198455 | SYSTEM AND METHOD FOR SUPPORTING SERVICE LEVEL QUORUM IN A DATA GRID CLUSTER - A system and method is described for use with a data grid cluster, for supporting service level quorum in the data grid cluster. The data grid cluster includes a plurality of cluster nodes that support performing at least one service action. A quorum policy, defined in a cache configuration file associated with the data grid cluster, can specify a minimum number of service members that are required in the data grid cluster for performing the service action. The data grid cluster uses the quorum policy to determine whether the service action is allowed to be performed, based on a present state of the plurality of cluster nodes in the data grid cluster. | 08-02-2012 |
20120204178 | MANAGEMENT OF COMPUTER SYSTEMS BY USING A HIERARCHY OF AUTONOMIC MANAGEMENT ELEMENTS - A method and system for managing a computing system by using a hierarchy of autonomic management elements are described. The autonomic management elements operate in a master-slave mode and negotiate a division of management responsibilities regarding various components of the computing system. | 08-09-2012 |
20120204179 | Method and apparatus for executing software applications - Consumer electronic devices, such as e.g. high-definition movie players for removable storage media such as optical discs, may provide possibilities for advanced interactivity for the user, implemented as software applications. A question arising generally with such software applications is what the life cycle of such an application is, and who may control it. The invention provides a method for executing software applications within a playback device for audio-video data, wherein data from a first removable storage medium are read for a software application to be executed within said playback device, and the data comprise an indication defining a termination condition for the application. Based on said termination condition and depending on how the medium holding the application is ejected, the application is terminated or may survive. | 08-09-2012 |
20120210320 | PREVENTING UNSAFE SHARING THROUGH CONFINEMENT OF MUTABLE CAPTURED VARIABLES - The disclosed embodiments provide a system that facilitates the development and execution of a software program. During operation, the system provides a mechanism for restricting a variable to a runtime context in the software program. Next, the system identifies the runtime context during execution of the software program. Finally, the system uses the mechanism to prevent incorrect execution of the software program by ensuring that a closure capturing the variable executes within the identified runtime context. | 08-16-2012 |
20120210321 | Dormant Background Applications on Mobile Devices - The subject disclosure is directed towards a technology in which a mobile device maintains an application in a dormant state in which the application's process is not terminated and remains in memory, but the application cannot execute code. Further, state and execution context are maintained for the application, allowing the application to be quickly and efficiently resumed into the running state. To prevent the application from executing code while dormant, thread activity is suspended, requests canceled, completed or paused, resources detached, and so forth. Resource usage may be monitored for dormant applications, to remove a misbehaving dormant application process from memory if improperly using resources. | 08-16-2012 |
20120210322 | METHODS FOR SINGLE-OWNER MULTI-CONSUMER WORK QUEUES FOR REPEATABLE TASKS - There are provided methods for single-owner multi-consumer work queues for repeatable tasks. A method includes permitting a single owner thread of a single owner, multi-consumer, work queue to access the work queue using atomic instructions limited to only a single access and using non-atomic operations. The method further includes restricting the single owner thread from accessing the work queue using atomic instructions involving more than one access. The method also includes synchronizing amongst other threads with respect to their respective accesses to the work queue. | 08-16-2012 |
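The access discipline in this abstract — the owner thread touches the queue with only single-access operations and no heavy atomics, while consumers synchronize solely amongst themselves — can be roughly modeled as below. This is a toy sketch: CPython's GIL makes individual `deque` operations effectively atomic single accesses, which stands in for the hardware atomics a real lock-free implementation would use, and all names are hypothetical:

```python
import threading
from collections import deque

class SingleOwnerQueue:
    """Toy single-owner / multi-consumer work queue.

    The owner pushes and pops at the tail without taking the consumer
    lock; consumers synchronize only with each other when stealing
    repeatable tasks from the head.
    """
    def __init__(self):
        self._items = deque()
        self._consumer_lock = threading.Lock()  # consumers sync amongst themselves

    # --- owner-only operations: single-access steps, no consumer lock ---
    def owner_push(self, task):
        self._items.append(task)

    def owner_pop(self):
        try:
            return self._items.pop()
        except IndexError:
            return None                          # queue empty

    # --- consumer operation: consumers serialize with each other only ---
    def consumer_steal(self):
        with self._consumer_lock:
            try:
                return self._items.popleft()
            except IndexError:
                return None                      # nothing to steal

q = SingleOwnerQueue()
for task in ("a", "b", "c"):
    q.owner_push(task)
stolen = q.consumer_steal()   # a consumer steals the oldest task
popped = q.owner_pop()        # the owner pops the newest task
```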
20120216200 | DYNAMIC POWER AND TEMPERATURE CAPPING THROUGH TELEMETRY DATA ANALYSIS - The disclosed embodiments provide a system that analyzes telemetry data from a computer system. During operation, the system obtains the telemetry data as a set of telemetric signals using a set of sensors in the computer system. Next, the system analyzes the telemetry data to estimate a value of a parameter associated with the computer system, wherein the parameter is at least one of a power utilization and a temperature. Finally, the system controls a subsequent value of the parameter by modulating a virtual duty cycle of a processor in the computer system based on the estimated value. | 08-23-2012 |
20120216201 | STATE MANAGEMENT OF OPERATING SYSTEM AND APPLICATIONS - A method and a processing device may be provided for state management of an operating system and applications. A framework may be provided for separating behaviorless state information from code or instructions for executing a method. Applications may have instances of state information derived from, or completely different from, instances of state information of an operating system. Instances of state information for an application may be layered over corresponding instances of state information of the operating system, such that the application and the operating system may have different views of the instances of the state information. At least one policy may be defined, which may include rules for resolving conflicts, information for providing a merged view of data from multiple repositories, default values for instances of data, as well as other information. In various implementations, referential integrity of state information may be guaranteed. | 08-23-2012 |
20120222030 | LAZY RESOURCE MANAGEMENT - The present application relates in general to a method for processing an application in general. The method comprises processing an application which uses at least one resource ( | 08-30-2012 |
20120222031 | METHOD AND DEVICE FOR OPTIMIZING EXECUTION OF SOFTWARE APPLICATIONS IN A MULTIPROCESSOR ARCHITECTURE COMPRISING SEVERAL INPUT/OUTPUT CONTROLLERS AND SECONDARY COMPUTING UNITS - The invention relates in particular to the optimisation of the execution of a software application in a system having multiprocessor architecture including a plurality of input/output controllers and secondary processing units. After determining ( | 08-30-2012 |
20120227043 | Optimization of Data Processing Parameters - Described are computer-based methods and apparatuses, including computer program products, for optimizing data processing parameters. A data set is received that represents a plurality of samples. The data set is processed using a data processing algorithm that includes one or more processing stages, each stage using a respective first set of data processing parameters to generate processed data. A design of experiment model is generated for the data processing algorithm based on the processed data and a set of response values. For each stage of the data processing algorithm, a second set of data processing parameters is calculated based on at least the design of experiment model. | 09-06-2012 |
20120227044 | AUTOMATED WORKFLOW MANAGER - The present application relates to a workflow system and method that automates workflow processes across an enterprise application by using a predefined routing rule-based workflow engine to automate the processes and a highly user-friendly execution platform to realize the workflow configuration. The workflow automation system of the present application is highly streamlined, scalable, and agile. The workflow automation system also easily integrates with existing application servers without requiring any re-deployment of the enterprise application as workflow patterns or flows change in real time. | 09-06-2012 |
20120227045 | METHOD, APPARATUS, AND SYSTEM FOR SPECULATIVE EXECUTION EVENT COUNTER CHECKPOINTING AND RESTORING - An apparatus, method, and system are described herein for providing programmable control of performance/event counters. An event counter is programmable to track different events, as well as to be checkpointed when speculative code regions are encountered. So when a speculative code region is aborted, the event counter is able to be restored to its pre-speculation value. Moreover, the difference between a cumulative event count of committed and uncommitted execution and the committed execution alone represents an event count/contribution for uncommitted execution. From information on the uncommitted execution, hardware/software may be tuned to enhance future execution to avoid wasted execution cycles. | 09-06-2012 |
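The checkpoint-and-restore behavior in the entry above, and the committed-versus-cumulative difference it uses to attribute events to uncommitted execution, can be shown with a toy counter. The class and method names are hypothetical; the patent describes hardware counters, not a Python object:

```python
class SpeculativeEventCounter:
    """Event counter that can be checkpointed at the start of a
    speculative region and restored if the region aborts.

    The difference between the cumulative count and the checkpointed
    (committed) count is the event contribution of the aborted,
    uncommitted execution.
    """
    def __init__(self):
        self.count = 0
        self.checkpoint_value = 0

    def increment(self, n=1):
        self.count += n

    def checkpoint(self):
        self.checkpoint_value = self.count

    def abort(self):
        wasted = self.count - self.checkpoint_value  # uncommitted events
        self.count = self.checkpoint_value           # restore pre-speculation value
        return wasted

ctr = SpeculativeEventCounter()
ctr.increment(10)       # committed execution
ctr.checkpoint()        # entering a speculative region
ctr.increment(4)        # events from speculative execution
wasted = ctr.abort()    # region aborted: counter rolls back
```

The returned `wasted` value is the kind of per-region statistic the abstract suggests could guide tuning to avoid wasted execution cycles.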
20120227046 | METHOD AND APPARATUS FOR MONITORING USAGE PATTERNS BY UTILIZING WINDOW TITLES - Disclosed are a method and apparatus for monitoring usage patterns by utilizing window titles. A method for monitoring a usage pattern by utilizing window titles according to an aspect of the invention, where the method is to be performed by a computing apparatus for monitoring a usage pattern of application programs, includes: acquiring a title of a window when an application program is executed to generate the window; recognizing window detail information associated with the title by referencing comparative data, and generating or renewing usage history information of the application program to correspond to the window detail information; and monitoring a usage pattern according to a set criterion by using the usage history information stored in correspondence to each application program. | 09-06-2012 |
20120233614 | MEASURING COUPLING BETWEEN COVERAGE TASKS AND USE THEREOF - Test coverage is enhanced by measuring various types of coupling between coverage tasks. The coupling measurements may be implicit coupling measurements, explicit coupling measurements, coding coupling measurements, performance coupling measurements, resource coupling measurements or the like. Based on the coupling measurements, different coverage tasks may be grouped together. For example, closely coupled coverage tasks may be grouped together. The groups may also be determined based on an initial distribution of groups, by combining groups having closely coupled member coverage tasks. The groups may be ordered and prioritized, such as based on the size of the groups and the number of uncovered tasks in each group. The groups may also be ordered based on a coupling score which aggregates the coupling measurements of the member coverage tasks. | 09-13-2012 |
20120233615 | AUTOMATICALLY PERFORMING AN ACTION UPON A LOGIN - Techniques for automatically performing one or more actions responsive to a successful login. In one embodiment, an action automatically performed responsive to the login uses content created prior to the login. | 09-13-2012 |
20120233616 | STREAM DATA PROCESSING METHOD AND STREAM PROCESSOR - A stream data processing method is provided, which includes the following steps: obtaining from the data a program pointer indicating the task to which the pointer belongs, and configuring a thread processing engine according to the program pointer; processing simultaneously the data of the different durations of the task, or the data of different tasks, by a plurality of thread engines; deciding whether there is data still not processed and, if yes, returning to the first step, or, if no, exiting the data processing. A processor for processing stream data is also provided. | 09-13-2012 |
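The dispatch loop in the entry above (read a program pointer from the data, configure an engine for that task, process, and repeat until no data remains) can be sketched sequentially. The dictionary-based "engine configuration" and the field names `pointer`/`payload` are illustrative assumptions:

```python
def process_stream(items, tasks):
    """Dispatch each stream item to the task named by its program pointer.

    Each item carries a 'pointer' identifying the task that should
    process its 'payload'; the loop repeats until no data remains.
    A real stream processor would run several thread engines in
    parallel; this sketch shows only the pointer-driven dispatch.
    """
    results = []
    pending = list(items)
    while pending:                                # data still unprocessed?
        item = pending.pop(0)
        handler = tasks[item["pointer"]]          # configure engine by pointer
        results.append(handler(item["payload"]))  # process this item's data
    return results

tasks = {"double": lambda x: 2 * x, "square": lambda x: x * x}
out = process_stream(
    [{"pointer": "double", "payload": 3},
     {"pointer": "square", "payload": 4}],
    tasks)
```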
20120233617 | METHOD AND SYSTEM FOR PROCESSING DIGITAL CONTENT ACCORDING TO A WORKFLOW - The invention relates to a method of processing content according to a workflow, where a digital content is processed on one of a plurality of processing devices according to a process definition associated with the content, the method comprising the steps, iterated at the processing device, of: | 09-13-2012 |
20120240119 | Method and device for file transfer protocol deadlock detection and self recovery - A method and a device for file transfer protocol (FTP) deadlock detection and self recovery are provided by the disclosure in order to solve the sudden deadlock problem in the FTP upload operation. The method includes: if a daemon determines, by a heartbeat detection mechanism, that a deadlock occurs in an FTP upload task, the socket resources used by the FTP upload operation are recorded at the storage location in a socket resource cycle queue, and an FTP upload task end process is started; determining whether the socket resource cycle queue is full; if it is not full, the socket resource information occupied by the current deadlock is put into the socket resource cycle queue; otherwise, the earliest socket resources in the socket resource cycle queue are released, and the socket resource information occupied by the current deadlock is put into the socket resource cycle queue. | 09-20-2012 |
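The cycle-queue bookkeeping in the entry above (record the deadlocked socket; when the queue is full, release the earliest entry first) can be sketched as follows. The class name and the string stand-ins for socket resource information are hypothetical:

```python
from collections import deque

class SocketCycleQueue:
    """Fixed-capacity cycle queue of socket info from deadlocked uploads.

    When the queue is full, the earliest recorded entry is released
    (here: appended to 'released' as a stand-in for freeing the socket)
    before the socket occupied by the current deadlock is recorded.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = deque()
        self.released = []

    def record(self, socket_info):
        if len(self.entries) >= self.capacity:           # queue full?
            self.released.append(self.entries.popleft())  # free earliest
        self.entries.append(socket_info)                  # record current

q = SocketCycleQueue(capacity=2)
for sock in ["sock_a", "sock_b", "sock_c"]:
    q.record(sock)
```

After three deadlocks against a capacity of two, the earliest socket (`sock_a`) has been released and the queue holds the two most recent entries.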
20120246648 | MANAGING A PORTAL APPLICATION - A method of managing a portal application includes, in a device comprising at least one processor that executes a portal application, establishing a trigger for preserving resources for the device; determining in the device that the trigger has occurred; and pausing operations of a portlet module within the portal application executed by the device. | 09-27-2012 |
20120246649 | Synchronizing Access To Resources In A Hybrid Computing Environment - Synchronizing access to resources in a hybrid computing environment that includes a host computer, a plurality of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, where synchronizing access to resources includes providing in a registry, to processes executing on the accelerators and the host computer, a key associated with a resource, the key having a value; attempting, by a process, to access the resource including determining whether a current value of the key represents an unlocked state for the resource; if the current value represents an unlocked state, attempting to lock access to the resource including setting the value to a unique identification of the process; determining whether the current value is the unique identification of the process; if the current value is the unique identification accessing the resource by the process. | 09-27-2012 |
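The lock protocol in the entry above (check the key is unlocked, write the process's unique ID, then re-read to confirm the lock was actually won) can be sketched with a plain dictionary standing in for the registry. This single-threaded sketch shows only the set-then-verify control flow; a real registry shared by host and accelerators would need atomic updates:

```python
UNLOCKED = 0

def try_lock(registry, key, process_id):
    """Attempt to lock the resource behind 'key' for 'process_id'.

    Returns True only if the key was unlocked and, after writing our
    unique ID, the key still reads back as our ID (i.e. we won any race).
    """
    if registry.get(key, UNLOCKED) != UNLOCKED:
        return False                     # current value is not unlocked
    registry[key] = process_id           # attempt to lock: set unique ID
    return registry[key] == process_id   # confirm the ID is still ours

registry = {"scratch_buffer": UNLOCKED}
first = try_lock(registry, "scratch_buffer", 42)   # acquires the lock
second = try_lock(registry, "scratch_buffer", 7)   # sees it held, fails
```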
20120246650 | METHOD FOR PROCESSING INFORMATION AND ACTIVITIES IN A CONTROL AND/OR REGULATING SYSTEM WITH THE AID OF A MULTI-CORE PROCESSOR - A method for processing information and activities in a control and/or regulating system in which the control and/or regulating tasks are performed by a microcontroller, the control/regulating system including different components and the microcontroller receiving information which is evaluated and processed thereby, and at least one output signal being output as the result of control/regulating calculations. In a method for processing information and activities in a control and/or regulating system which may be implemented cost-effectively and nevertheless permits high computing power, the control and regulating tasks of the system are divided into component-specific task complexes, a first component-specific task complex being processed by a first processor core of the microcontroller and a second component-specific task complex being processed by a second processor core of the microcontroller. | 09-27-2012 |
20120254869 | Computer-Based System Management by a Separate Managing System - In an embodiment, a method is presented for providing managerial access to a managed system. In this method, a definition of a procedure to be performed on the managed system is received into a managing system. A request to perform the procedure is received into the managing system from a user of the managing system. The procedure is performed in response to the request. The performing of the procedure includes initiating a plurality of functions resident in the managed system. Results indicative of the performing of the procedure are presented to the user of the managing system. | 10-04-2012 |
20120254870 | INFORMATION PROCESSING APPARATUS, WORKFLOW SETTING METHOD, AND PROGRAM THEREFOR - The present invention facilitates the setting of workflow parameter values by storing, into a memory unit as parameter information, a parameter value corresponding to a parameter used in processing in a process mode identified by definition information, in association with the parameter and information of a communication regulation; this is derived from setting information of a workflow that includes information of the communication regulation with a cooperation apparatus, information of the process mode of the workflow, the parameters used in the processing in the process mode, and the parameter values of those parameters. | 10-04-2012 |
20120254871 | COMBINATORIAL COMPUTING - A combinational computing apparatus and method. The combinational computing method includes the steps of: receiving a first setting relating to multiple groups of input data and a second setting relating to a combinatorial mode among multiple groups of input data, obtaining the data combination of the multiple groups of input data according to the first setting and the second setting, and performing a desired calculating operation on the data combination. | 10-04-2012 |
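The method in the entry above takes two settings, the input groups and a combinatorial mode, forms the data combinations, and applies a calculating operation to each. A minimal sketch, assuming a cross-product mode (the only mode shown here; the patent does not enumerate its modes):

```python
from itertools import product

def combine(groups, mode, op):
    """Combine multiple groups of input data and apply an operation.

    'groups' is the first setting (the groups of input data), 'mode'
    the second setting (only the cross-product mode is sketched), and
    'op' is the desired calculating operation applied per combination.
    """
    if mode == "product":
        combos = product(*groups)        # one element from each group
    else:
        raise ValueError(f"unsupported combinatorial mode: {mode}")
    return [op(combo) for combo in combos]

# Two groups, cross-product mode, summation as the operation.
sums = combine([[1, 2], [10, 20]], "product", sum)
```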
20120254872 | Content Items for Scientific Data Information Systems - Data information systems include memory for storing a first content item representing a predefined workflow. The first content item has a data format that complies with a unified data structure. The data information system further comprises a scientific instrument configured to acquire data in accordance with the predefined workflow, and a server in communication with the scientific instrument to obtain the data acquired by the scientific instrument. The server has a processor that executes program code to convert the acquired data into a second content item with a data format that complies with the unified data structure. The unified data structure can include an instance data element with fields that are specific to a type of content item, and a catalog data element having a copy of data extracted from the type-specific fields of the instance data element. | 10-04-2012 |
20120260251 | PREVENTION OF EVENT FLOODING - An apparatus and method for preventing event flooding in an event processing system, the apparatus comprising: responsive to receiving, by an analysis component, monitored activity data, an analysis component for analysing the monitored activity data, to determine a potential event; responsive to determining a potential event, an analysis component identifying a set of threshold values and determining whether the potential event has met a threshold value of the set of threshold values; responsive to a positive determination, an analysis component for determining if the met threshold value is an identical threshold value met by a previous potential event; and responsive to a second positive determination, a disregard component for disregarding the potential event. | 10-11-2012 |
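The flood-prevention rule in the entry above (raise an event only when a threshold is met, and disregard it when the previous potential event met the identical threshold) can be sketched as a filter. The function name and the numeric activity values are illustrative assumptions:

```python
def filter_events(activity_values, thresholds):
    """Keep a potential event only when it meets a new threshold.

    For each monitored activity value, find the highest threshold it
    meets; if that threshold is identical to the one met by the
    previous potential event, disregard it to prevent flooding.
    """
    kept = []
    last_threshold = None
    for value in activity_values:
        met = max((t for t in thresholds if value >= t), default=None)
        if met is None:
            continue                 # no threshold met: not an event
        if met == last_threshold:
            continue                 # identical threshold: disregard
        kept.append((value, met))
        last_threshold = met
    return kept

events = filter_events([5, 12, 13, 55, 8], thresholds=[10, 50])
```

Here the second crossing of the 10-threshold (value 13) is dropped, while the jump to the 50-threshold (value 55) is kept as a genuinely new event.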
20120272245 | WEB SERVICE MANAGEMENT - A web service management process includes receiving, by a job server, a request for a web service, sending a request to register a job corresponding to the web service to an administrative service application, and creating, via the administrative service application, a job proxy resource for the job. The job proxy resource is configured to monitor execution of the job. A uniform resource identifier of the job proxy resource is sent to the job server. The process also includes sending, by the job server, information about job lifecycle events, progress, and a request for a current state of administrator actions on a job proxy of the job to the administrative service application. The administrative service application modifies the current state of the job proxy via commands received from an administrative client. The process further includes transmitting the current state of the job proxy to the job server. | 10-25-2012 |
20120278808 | PEER TO PEER COMPONENT DISTRIBUTION - A method, apparatus, and system are provided for assigning tasks and/or providing resources in a distributed system. An indication of a task being available for processing is provided to one or more remote systems in a distributed system based on a distribution list. At least one response from one of the remote systems capable of performing the task is received in response to the indication. The response includes a request for a resource for performing the task. The resource for performing the task is provided to the remote systems. | 11-01-2012 |
20120291032 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DYNAMICALLY MEASURING PROPERTIES OF OBJECTS RENDERED AND/OR REFERENCED BY AN APPLICATION EXECUTING ON A COMPUTING DEVICE - A system, method and computer program product for dynamically measuring attributes of objects rendered and/or referenced by an executing software application without having to change and recompile the original application code. The system includes a staging environment that monitors the execution of the application and indexes items of graphical and/or audio information generated by the application into a first database. A second database is populated with one or more business rules, wherein each business rule is associated with one or more of the indexed objects. The system further includes a run-time environment that identifies items of graphics and/or audio information as they are generated by the application during run-time, uses the second database to determine if an identified item is associated with a business rule, and, responsive to a determination that an identified item is associated with a business rule, measures the object and its related attributes. | 11-15-2012 |
20120297385 | INTERACTIVE SERVICE MANAGEMENT - In one implementation, an interactive service management system includes a performance profile module and a performance evaluation module. The performance profile modules defines a performance measure of an interactive service based on a quality assessment associated with the interactive service. The performance evaluation module compares the performance measure with performance target associated with the interactive service, and modifies the performance target associated with the interactive service based on the comparison of the performance measure and the performance target. | 11-22-2012 |
20120297386 | Application Hibernation - Operating a data processing system comprises defining a plurality of profiles, each profile comprising a list of one or more applications; receiving a defined user input requesting a switch from a first profile to a second profile; hibernating the (or each) application listed in the first profile; and recalling from hibernation the (or each) application listed in the second profile. Preferably, a graphical user interface is adjusted to reflect a change in status of each application that has been hibernated or recalled from hibernation. | 11-22-2012 |
20120297387 | Application Hibernation - Operating a data processing system comprises defining a plurality of profiles, each profile comprising a list of one or more applications; receiving a defined user input requesting a switch from a first profile to a second profile; hibernating the (or each) application listed in the first profile; and recalling from hibernation the (or each) application listed in the second profile. Preferably, a graphical user interface is adjusted to reflect a change in status of each application that has been hibernated or recalled from hibernation. | 11-22-2012 |
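The profile switch described in the hibernation entries above (hibernate each application listed in the first profile, recall each listed in the second) can be sketched as a state update. The string statuses and profile names are hypothetical; a real system would also refresh the GUI for every application whose status changes:

```python
def switch_profile(state, profiles, old_profile, new_profile):
    """Hibernate the old profile's apps, recall the new profile's.

    'state' maps application name -> 'running' or 'hibernated';
    'profiles' maps profile name -> list of application names.
    An app in both profiles ends up running, since recall follows
    hibernation.
    """
    for app in profiles[old_profile]:
        state[app] = "hibernated"      # hibernate each listed app
    for app in profiles[new_profile]:
        state[app] = "running"         # recall each listed app
    return state

profiles = {"work": ["editor", "mail"], "play": ["game", "mail"]}
state = {"editor": "running", "mail": "running", "game": "hibernated"}
state = switch_profile(state, profiles, "work", "play")
```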
20120297388 | SUB-DISPATCHING APPLICATION SERVER - Multiple sub-dispatched application server threads are provided in a single local process, where the multiple sub-dispatched application server threads carry out their own task dispatching. The multiple sub-dispatched application server threads are linked in the single local process using a distributed programming model. | 11-22-2012 |
20120304177 | PROGRAMMATICALLY DETERMINING AN EXECUTION MODE FOR A REQUEST DISPATCH UTILIZING HISTORIC METRICS - A request dispatcher can automatically switch between processing request dispatches in a synchronous mode and an asynchronous mode. Each dispatch can be associated with a unique identification value such as a process ID or Uniform Resource Identifier (URI), historic metrics, and a ruleset. With each execution of the request dispatch, historic metrics can be collected. Metrics can include, but are not limited to, execution duration and/or execution frequency, processor load, memory usage, network input/output, number of dependent dispatches, and the like. Utilizing historic metrics, rules can be constructed for determining in which mode to execute the subsequent execution of the dispatch. As such, runtime optimization of Web applications can be further improved. | 11-29-2012 |
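One plausible rule of the kind the entry above describes is "dispatches with long historic execution durations run asynchronously, short ones synchronously". A minimal sketch, with the function name, the threshold, and the per-URI history all as assumptions:

```python
def choose_mode(durations_ms, threshold_ms=50):
    """Pick an execution mode for a dispatch from its historic metrics.

    If the average historic execution duration exceeds the threshold,
    run the next execution asynchronously so the caller is not blocked;
    otherwise run it synchronously.
    """
    avg = sum(durations_ms) / len(durations_ms) if durations_ms else 0
    return "async" if avg > threshold_ms else "sync"

# Historic execution durations keyed by URI (the unique identification).
history = {"/report": [120, 180, 90], "/ping": [2, 3, 1]}
modes = {uri: choose_mode(times) for uri, times in history.items()}
```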
20120311580 | BLOCKING FILE SYSTEM FOR ON-THE-FLY MIGRATION OF A CONTAINER WITH AN NFS MOUNT - This invention relates to a method, system and computer program product for performing on-the-fly migration of a virtual server from one network node to another node on the network. All active processes executing on a virtual server are frozen and the state of these processes, including virtual server network connectivity information, are saved into a dump file. The dump file is transferred to the destination network node. Using the information stored in the dump file, the execution state of all active processes and the state of network connections of the virtual server are restored at the destination node to the state existing immediately prior to on-the-fly migration. | 12-06-2012 |
20120311581 | ADAPTIVE PARALLEL DATA PROCESSING - Described herein are methods, systems, apparatuses and products for adaptive parallel data processing. An aspect provides providing a map phase in which at least one map function is applied in parallel on different partitions of input data at different mappers in a parallel data processing system; providing a communication channel between mappers using a distributed meta-data store, wherein said map phase comprises mapper data processing adapted responsive to communication with said distributed meta-data store; and providing data accessible by at least one reduce phase node in which at least one reduce function is applied. Other embodiments are disclosed. | 12-06-2012 |
20120311582 | NOTIFICATION BARRIER - The disclosed embodiments provide a system which implements a notification barrier. During operation, the system receives a call to the notification barrier installed on a sender object, wherein the call originates from a receiver object which receives notifications posted by the sender object. In response to the call, the system acquires a notification lock, wherein the notification lock is held whenever the sender is posting a notification. The system then releases the notification lock, wherein releasing the lock indicates to the receiver object that the sender object has no pending posted notifications. | 12-06-2012 |
20120311583 | GENERATING AND PROCESSING TASK ITEMS THAT REPRESENT TASKS TO PERFORM - Techniques for processing task items are provided. A task item is electronic data that represents a task to be performed, whether manually or automatically. A task item includes one or more details about its corresponding task, such as a description of the task and a location of the task. Specifically, techniques for generating task items, organizing task items, triggering notifications of task items, and consuming task items are described. In one approach, a task item is generated based on input from a user and context of the input. In another approach, different attributes of task items are used to organize the task items intelligently into multiple lists. In another approach, one or more criteria, such as location, are used to determine when to notify a user of a task. In another approach, actions other than generating notifications are enabled or automatically performed, actions such as emailing, calling, and searching. | 12-06-2012 |
20120311584 | PERFORMING ACTIONS ASSOCIATED WITH TASK ITEMS THAT REPRESENT TASKS TO PERFORM - Techniques for processing task items are provided. A task item is electronic data that represents a task to be performed, whether manually or automatically. A task item includes one or more details about its corresponding task, such as a description of the task and a location of the task. Specifically, techniques for generating task items, organizing task items, triggering notifications of task items, and consuming task items are described. In one approach, a task item is generated based on input from a user and context of the input. In another approach, different attributes of task items are used to organize the task items intelligently into multiple lists. In another approach, one or more criteria, such as location, are used to determine when to notify a user of a task. In another approach, actions other than generating notifications are enabled or automatically performed, actions such as emailing, calling, and searching. | 12-06-2012 |
20120311585 | ORGANIZING TASK ITEMS THAT REPRESENT TASKS TO PERFORM - Techniques for processing task items are provided. A task item is electronic data that represents a task to be performed, whether manually or automatically. A task item includes one or more details about its corresponding task, such as a description of the task and a location of the task. Specifically, techniques for generating task items, organizing task items, triggering notifications of task items, and consuming task items are described. In one approach, a task item is generated based on input from a user and context of the input. In another approach, different attributes of task items are used to organize the task items intelligently into multiple lists. In another approach, one or more criteria, such as location, are used to determine when to notify a user of a task. In another approach, actions other than generating notifications are enabled or automatically performed, actions such as emailing, calling, and searching. | 12-06-2012 |
20120311586 | APPARATUS AND METHOD FOR PREDICTING A PROCESSING TIME OF A COMPUTER - Load information and a first processing time are provided in association with each of a plurality of first time segments that each have a fixed duration time and are included in a first time period, where the load information indicates a load condition of a target computer that executed a target job during the each first time segment, and the first processing time indicates a running time of the target job during the each first time segment. One or more first time segments each having a predetermined analogous relationship with a second time segment in a second time period during which the target job is expected to be executed by the target computer are selected to predict a second processing time indicating a running time of the target job during the second time period based on the first processing times associated with the selected one or more first time segments. | 12-06-2012 |
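The prediction scheme in the entry above selects past fixed-duration segments that are "analogous" to the future segment and predicts from their processing times. A minimal sketch where "analogous" is approximated as having a similar load value, with the tolerance and the averaging both as assumptions:

```python
def predict_processing_time(segments, target_load, tolerance=0.1):
    """Predict the job's running time for a future time segment.

    'segments' is a list of (load, processing_time) pairs, one per
    past fixed-duration segment. Segments whose load is within
    'tolerance' of the expected future load are treated as analogous,
    and their processing times are averaged for the prediction.
    """
    analogous = [t for load, t in segments
                 if abs(load - target_load) <= tolerance]
    if not analogous:
        raise ValueError("no analogous segment found")
    return sum(analogous) / len(analogous)

# Past segments: (CPU load fraction, seconds the job ran that segment).
history = [(0.30, 12.0), (0.75, 30.0), (0.32, 14.0), (0.90, 45.0)]
eta = predict_processing_time(history, target_load=0.31)
```

Only the two low-load segments are analogous to the expected load of 0.31, so the prediction is their average.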
20120311587 | COMBINATORIAL COMPUTING - A combinational computing apparatus and method. The combinational computing method includes the steps of: receiving a first setting relating to multiple groups of input data and a second setting relating to a combinatorial mode among multiple groups of input data, obtaining the data combination of the multiple groups of input data according to the first setting and the second setting, and performing a desired calculating operation on the data combination. | 12-06-2012 |
20120317574 | METHOD FOR OPERATING AN AUTOMATION DEVICE - In a method for operating an automation device having an internal finite state machine, a mapping unit, an internal data interface operatively connected for flow of information between the internal finite state machine and the mapping unit, and the mapping unit operatively connected for flow of the same information between the internal data interface and an external data interface of a communication module, state information of the internal finite state machine is routed to the mapping unit via the internal data interface, separate state information is derived from the state information received by the mapping unit, and the mapping unit then provides the separate state information to a communication unit of the communication module. | 12-13-2012 |
20120324453 | EFFICIENT LOGICAL MERGING OVER PHYSICALLY DIVERGENT STREAMS - A logical merge module is described herein for producing an output stream which is logically compatible with two or more physically divergent input streams. Representative applications of the logical merge module are also set forth herein. | 12-20-2012 |
20120324454 | Control Flow Graph Driven Operating System - An operating system may be reconfigured during execution by adding new components to a control flow graph defining a system's executable flow. The operating system may use a control flow graph that defines executable elements and relationships between those elements. The operating system may traverse the control flow graph during execution to monitor execution flow and prepare executable elements for processing. By placing new components in memory then modifying the control flow graph, the operating system functionality may be updated or changed. In some embodiments, a lightweight version of an operating system may be deployed, then additional features or capabilities may be added. | 12-20-2012 |
20120331469 | GRACEFULLY SHUTTING DOWN A COMPUTER PROCESS - A “buoy” process is associated with another “real” (i.e., non-buoy) process or application. Priorities are arranged so that, in a resource crisis, the buoy process is preferentially killed before its parent. When, during a crisis, the buoy is killed, its death is a signal to the parent process that it may be time to shut down gracefully. In some embodiments, when the parent process starts up, it launches a buoy process as its child. When the buoy process dies, the operating system sends a signal to the parent process. This signal warns the parent of the resource crisis. In other embodiments, a separate “guardian” process notes the existence of a new “parent” process, launches a buoy process, and associates the buoy with the “parent” process. The operating system informs the guardian if the buoy process is killed, and the guardian in turn informs the associated parent. | 12-27-2012 |
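The buoy pattern in the entry above can be sketched as pure control flow. A real implementation would fork a low-priority child process and rely on the operating system's kill-order and child-death signal; here the crisis and the kill are simulated so the parent's reaction is visible (all names are hypothetical):

```python
class BuoyWatcher:
    """Sketch of the buoy pattern: a sacrificial child process whose
    death warns the parent of a resource crisis.

    Priorities are arranged so the OS kills the buoy before its
    parent; the buoy's death is the parent's cue to shut down
    gracefully while it still can.
    """
    def __init__(self):
        self.buoy_alive = True            # child launched at startup
        self.shutdown_requested = False

    def on_resource_crisis(self):
        # The OS reclaims memory by killing the lowest-priority
        # process first: the buoy.
        self.buoy_alive = False
        self.on_buoy_death()

    def on_buoy_death(self):
        # Stand-in for the OS signal delivered when the child dies.
        if not self.buoy_alive:
            self.shutdown_requested = True

parent = BuoyWatcher()
parent.on_resource_crisis()
```

On Linux, the same effect is typically achieved by giving the buoy a higher OOM-kill priority than its parent and handling the child-death signal in the parent.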
20130007746 | Working sets of sub-application programs of application programs currently running on computing system - A pattern corresponds to a task that a computing system can perform. The pattern at least indirectly identifies one or more sub-application programs of one or more application programs that the computing system can run and that are relevant to the task. Application of the pattern to sub-application programs of application programs currently running on the computing system identifies a working set of one or more sub-application programs of one or more application programs currently running on the computing system and that are relevant to the task. The computing system hides, within a graphical user interface that the computing system presents, the sub-application programs of the application programs currently running on the computing system that are not part of the working set, and the application programs currently running on the computing system that do not include any sub-application program that is part of the working set. | 01-03-2013 |
20130007747 | METHOD AND APPARATUS FOR MANAGING A WORKING TASK BASED ON A COMMUNICATION MESSAGE - A method for managing a working task based on a communication message. The method may include the steps of: in response to receiving a communication message, matching the communication message using a matching rule; determining an application managing a working task associated with the communication message according to the matching result; prompting the user to perform an operation on the application managing the working task. | 01-03-2013 |
20130007748 | TEN-LEVEL ENTERPRISE ARCHITECTURE SYSTEMS AND TOOLS - This disclosure describes sets of systems and tools that drive complex enterprise execution logic top to bottom, end to end and site to site through the discrete execution and control of ten levels of mission-critical enterprise structure: | 01-03-2013 |
20130007749 | METHOD AND APPARATUS FOR MANAGING A WORKING TASK BASED ON A COMMUNICATION MESSAGE - Disclosed is an apparatus for managing a working task based on a communication message. The apparatus may include a rule matching module configured to, in response to receiving a communication message, match the communication message using a matching rule. An application determining module is configured to determine an application managing a working task associated with the communication message according to the matching result. A prompting module is configured to prompt the user to perform an operation on the application managing the working task. | 01-03-2013 |
20130014112 | INFORMATION PROCESSING APPARATUS AND DATA MANAGEMENT SYSTEM - An information processing apparatus is connected to both plural data accumulation devices configured to accumulate job data in a predetermined memory area and plural electronic devices configured to execute the accumulated job data through a predetermined data transmission line. The data accumulation devices execute a deletion process based on a deletion control value included in management information and manage the accumulated job data. The apparatus includes a control unit configured to control a communication process of the job data performed between the electronic devices and the data accumulation devices. The control unit is configured to transmit a control value for extending an accumulation period of the accumulated job data to at least one of the data accumulation devices, and to update the deletion control value of the management information retained in the predetermined memory area in the at least one of the data accumulation devices. | 01-10-2013 |
20130014113 | MACHINE OPERATION PLAN CREATION DEVICE, MACHINE OPERATION PLAN CREATION METHOD AND MACHINE OPERATION PLAN CREATION PROGRAM - An operation plan of the minimum number of servers (processing nodes) required for satisfying a performance requirement is created, and servers that need not be operated are powered off to save power and computing resources. Specifically, a machine operation plan creation device compares a machine ID numbered in order starting from 1 with an hourly required machine count and, if the machine ID is not larger, determines that the machine needs to be operated. If each machine has plural hours, the plural hours are merged to reduce the number of hours. Here, if two hours are temporally continuous, the two hours are merged into a new hour. Alternatively, a start time of the first hour and an end time of the last hour are extracted from the plural hours, and the time period from the start time to the end time is set as a new hour. Moreover, a maximum required machine count is calculated from the hourly required machine count list. | 01-10-2013 |
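The plan-creation rule in the entry above (machine with 1-based ID n runs in every hour whose required count is at least n, and temporally continuous hours are merged into one interval) can be sketched directly. Hours are represented as half-open index intervals, an assumption for the sake of the sketch:

```python
def build_operation_plan(hourly_required):
    """Decide which machines must run each hour and merge runs.

    'hourly_required' gives the required machine count per hour.
    A machine with 1-based ID n needs to operate in every hour whose
    required count is not smaller than n; temporally continuous hours
    are merged into a single (start, end) interval.
    """
    plan = {}
    for machine_id in range(1, max(hourly_required) + 1):
        hours = [h for h, need in enumerate(hourly_required)
                 if machine_id <= need]
        intervals = []
        for h in hours:
            if intervals and intervals[-1][1] == h:        # continuous
                intervals[-1] = (intervals[-1][0], h + 1)  # extend run
            else:
                intervals.append((h, h + 1))               # new run
        plan[machine_id] = intervals
    return plan

# Required machine counts for four consecutive hours.
plan = build_operation_plan([1, 2, 2, 1])
```

Machine 1 is needed in all four hours (one merged interval); machine 2 only in the middle two, so it can stay powered off the rest of the time.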
20130019244 | PREVENTION OF EVENT FLOODING - Techniques for preventing event flooding in an event processing system, comprising: responsive to receiving monitored activity data, an analysis component analysing the monitored activity data to determine a potential event; responsive to determining a potential event, the analysis component identifying a set of threshold values and determining whether the potential event has met a threshold value of the set of threshold values; responsive to a positive determination, the analysis component determining whether the met threshold value is an identical threshold value met by a previous potential event; and responsive to a second positive determination, a disregard component disregarding the potential event. | 01-17-2013 |
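The disregard logic in the entry above (20130019244) — drop a potential event when it meets the same threshold already met by the previous potential event — can be sketched as below. The class and method names are illustrative, not from the patent:

```python
class FloodGuard:
    """Suppress repeated events that meet an identical threshold value."""

    def __init__(self, thresholds):
        self.thresholds = sorted(thresholds)  # the set of threshold values
        self.last_met = None                  # threshold met by the previous event

    def should_report(self, value):
        # Determine the highest threshold this potential event has met, if any.
        met = None
        for t in self.thresholds:
            if value >= t:
                met = t
        if met is None:
            return False        # no threshold met: not an event at all
        if met == self.last_met:
            return False        # identical threshold as before: disregard
        self.last_met = met
        return True
```

A guard with thresholds `[10, 100]` reports a value of 15, disregards a following 20 (the same threshold is met), and reports again at 150 once the higher threshold is crossed.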
20130031553 | HARDWARE ACCELERATION - Provided is a hardware accelerator, central processing unit, and computing device. A hardware accelerator includes a task accelerating unit configured to, in response to a request for a new task issued by a hardware thread, accelerate the processing of the new task and produce a processing result for the task; a task time prediction unit configured to predict the total waiting time of the new task for returning to a specified address associated with the hardware thread. One aspect of this disclosure makes the hardware thread aware of the time to be waited for before getting a processing result, facilitating its task planning accordingly. | 01-31-2013 |
20130031554 | HARDWARE ACCELERATION - Provided is a hardware accelerator and method, central processing unit, and computing device. A hardware accelerating method includes, in response to a request for a new task issued by a hardware thread, accelerating processing of the new task and producing a processing result for the task. A predicting step predicts total waiting time of the new task for returning to a specified address associated with the hardware thread. | 01-31-2013 |
20130036419 | DYNAMICALLY CONFIGURABLE COMMAND AND CONTROL SYSTEMS AND METHODS - A method and system in a Service Orchestration Architecture environment that provides rules engine-based service orchestration, task, and alert management for collaboration between one or more nodes of operation. The system provides multiple levels of configurability. In one aspect, the system includes a rules engine to define the command and control (C2) service orchestration. | 02-07-2013 |
20130036420 | IMAGE FORMING APPARATUS FOR DETERMINING THE AVAILABILITY OF APPLICATION PROGRAM INTERFACES - An image forming apparatus has a plurality of programs of which an interface is open to the public so that an application created according to the interface is executable. A mount condition providing part provides information indicating a mount condition of a group constituted by a plurality of the programs in accordance with a request made by the application. | 02-07-2013 |
20130042243 | INFORMATION PROCESSING APPARATUS - A procedure includes: receiving a request to monitor a target monitored item of the computer from a first information processing apparatus; inquiring the first information processing apparatus for a first monitoring condition; and determining a second information processing apparatus by referring to monitoring information. The monitoring information includes information indicating a specific monitored item and a monitoring condition for monitoring the specific monitored item in association with an identifier for identifying an information processing apparatus. The procedure further includes: instructing one of the first and second information processing apparatuses to monitor the target monitored item of the computer in accordance with the first monitoring condition for monitoring the target monitored item by the other one of the first and second information processing apparatuses; and instructing the other one of the first and second information processing apparatuses not to transmit a request to monitor the target monitored item. | 02-14-2013 |
20130042244 | METHOD AND SYSTEM FOR IMPLEMENTING INTERNET OF THINGS SERVICE - Disclosed in the present invention are a method and system for implementing an Internet of Things service. In the present invention: a service generation module generates a description script and a flow script according to a required service, sends the description script and the flow script to an application generation module and a control module, respectively; the application generation module generates an application according to the description script and sends the same to an access module; the access module receives an input of an Internet of Things terminal, processes the input of the Internet of Things terminal using the application, and sends the processed data to a control module; the control module runs the flow script and invokes an execution module to execute an operation according to the data sent by an access module; and the execution module executes an operation according to the invocation of the control module. | 02-14-2013 |
20130047161 | SELECTING PROCESSING TECHNIQUES FOR A DATA FLOW TASK - A method for data flow processing includes determining values for each of a set of parameters associated with a task within a data flow processing job, and applying a set of rules to determine one of a set of processing techniques that will be used to execute the task. The set of rules is determined through a set of benchmark tests for the task using each of the set of processing techniques while varying the set of parameters. | 02-21-2013 |
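The selection step in the entry above (20130047161) amounts to a first-match rule table over task parameter values, with the rules themselves derived offline from benchmark runs. A sketch under the assumption that the benchmark phase has already produced the rules; the parameter name `rows` and the technique labels are invented for illustration:

```python
# Rules derived from benchmark tests: each rule maps a predicate over the
# task's parameter values to the processing technique that benchmarked best.
RULES = [
    (lambda p: p["rows"] > 1_000_000, "partitioned-parallel"),
    (lambda p: p["rows"] > 10_000,    "hash-based"),
    (lambda p: True,                  "in-memory"),   # fallback rule
]


def select_technique(params, rules=RULES):
    """Return the technique of the first rule matching the task parameters."""
    for predicate, technique in rules:
        if predicate(params):
            return technique
```

Ordering the rules from most to least specific means the fallback only fires when no benchmark-derived rule applies.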
20130055264 | SYSTEM AND METHOD FOR PARAMETERIZING DOCUMENTS FOR AUTOMATIC WORKFLOW GENERATION - One embodiment of the present invention sets forth a method for generating a new workflow for an application. The method includes generating a parameter tree related to a current workflow, wherein the parameter tree includes a different node corresponding to each parameter included in one or more documents associated with the current workflow, modifying a value associated with a first node included in the parameter tree based on an input, wherein the first node corresponds to a first parameter included in a first document associated with the current workflow, evaluating a second document associated with the current workflow based on the modified value associated with the first node, and generating the new workflow based on the evaluated second document. | 02-28-2013 |
20130055265 | TECHNIQUES FOR WORKLOAD TOXIC MAPPING - Techniques for toxic workload mapping are provided. A state of a target workload is recorded along with a configuration and state of an environment that is processing the workload. Micro valuations are taken, via statistical sampling, for metrics associated with the workload and for different combinations of resources within the environment. The sampling is taken at microsecond intervals. The valuations are aggregated to form an index representing a toxic mapping for the workload within the environment. The toxic mapping is mined, in view of policy, to provide conditions and scenarios that may be deemed problematic within the workload and/or environment. | 02-28-2013 |
20130055266 | Cancellable Command Application Programming Interface (API) Framework - Embodiments are provided that include the use of a cancelable command application programming interface (API) framework that provides cooperative multitasking for synchronous and asynchronous operations based in part on a command timing sequence and a cancelable command API definition. A method of an embodiment enables a user or programmer to use a cancelable command API definition as part of implementing a responsive application interface using a command timing sequence to control execution of active tasks. A cancelable command API framework of an embodiment includes a command block including a command function, a task engine to monitor the command function, and a timer component to control execution of asynchronous and synchronous tasks based in part on first and second control timing intervals associated with a command timing sequence. Other embodiments are also disclosed. | 02-28-2013 |
20130055267 | INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM, AND METHOD FOR CONTROLLING INFORMATION - An information processing apparatus includes a processor that executes a plurality of application programs, a display that displays results of the execution of the plurality of application programs, and a storage that stores a first table in which the plurality of application programs and a plurality of pieces of operation information corresponding to the plurality of application programs are associated with each other and recorded, and a second table in which the plurality of application programs and order determined on the basis of power to be consumed by the processor to execute the plurality of application programs are associated with each other and recorded. | 02-28-2013 |
20130061228 | OPERATING SYSTEM IMAGE MANAGEMENT - Some embodiments of the invention enable multiple operating system images to be concurrently serviced. For example, a user may “mount” a group comprising multiple images to be serviced, alter the group of images in some fashion, and then re-seal each image in the group. Some embodiments of the invention provide a programmatic interface which may be employed to enhance image servicing functionality. For example, some embodiments provide an application programming interface (API) which exposes functionality that is call-able by external software components, enabling use of a custom-developed interface, code providing additional image servicing functionality, and/or any of numerous other types of image servicing-related functionality. | 03-07-2013 |
20130061229 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus makes each of a plurality of threads concurrently execute, a prescribed number of times, tasks stored in a task queue associated with that thread. The information processing apparatus includes a processor that executes the plurality of threads, each of which executes a procedure. The procedure includes generating a task from among a plurality of tasks into which a serial program processing corresponding to a processing request is divided, selecting the task queue associated with one of the plurality of threads, enqueuing the generated task to the selected task queue, dequeuing the enqueued task from the task queue associated with the thread, and executing the dequeued task. | 03-07-2013 |
20130067473 | Modes for Applications - Techniques for modes for applications are described. In one or more implementations, multiple operational modes are provided for an application. The operational modes can be associated with different resource access permissions, trust statuses, graphical user interfaces, and so on. An application can be launched in a particular one of the operational modes based on a context in which a request to launch the application is received. In one or more implementations, correlations between launch request contexts for an application and operational modes can be configured to enable different launch requests to cause an application to launch into different operational modes. | 03-14-2013 |
20130067474 | LANGUAGE INDEPENDENT APPLICATION OBJECT - Applications are managed on a computing device using a language independent application object. The computing device receives an indication that an application is to begin execution. Responsive to every indication that an application is to begin execution, a multi-thread aware singleton application object is instantiated within that application. The multi-thread aware singleton application object is configured to create a first application thread and a first application window for that application. The first application thread is associated with the first application window. The multi-thread aware singleton application object is configured to instantiate within an application regardless of a programming language or user interface framework utilized by that application. | 03-14-2013 |
20130067475 | MANAGING PROCESSES WITHIN SUSPEND STATES AND EXECUTION STATES - One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon a wake policy, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a wake policy. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state. | 03-14-2013 |
20130067476 | AUTOMATIC TRANSCODING AND SEMANTIC ADAPTATION BETWEEN SCRIPTING AND WORKFLOW SYSTEMS - A workflow scripting system is described herein that combines the features of workflows and scripts by automatically translating between the two models. Using the system, a script author can create workflows on the fly using familiar scripting language, and a workflow author can use scripting steps to perform actions. Workflows run in this manner can be setup to execute in their own process to improve robustness or efficiency. Operations in an enterprise environment frequently take a long time and are subject to interruptions. By adding reliability concepts of workflows to a shell environment, users of the system can write scripts to address common needs of large-scale computing environments. Thus, the workflow scripting system blends the available resources provided by workflow and scripting environments to provide a host of powerful, advanced capabilities to IT personnel. | 03-14-2013 |
20130067477 | COMPUTER SYSTEM AND CONTROL METHOD THEREOF - A computer system and a control method thereof are provided, wherein the computer system comprises an embedded controller (EC), a basic input/output system (BIOS), and an operating system (OS). In the method, when the computer system is rotated, the EC makes the BIOS identify a present rotation state of the computer system by an interrupt signal and an internal communication scheme. Then, the BIOS establishes a data structure in accordance with a virtual scan code and the rotation state, and then transmits the data structure to the OS. After that, the OS controls a program installed in the computer system to execute a related operation of the rotation state according to the data structure. | 03-14-2013 |
20130074074 | SYSTEM FOR SCALABLE CONFIGURATION AND CONTEXT - Instance properties are defined for instances of an application. During episodes of the instances, the values of the instance properties are populated. Other instances read the values of the instance properties without requiring the instance to run. If the value of an instance property is not populated, then a new episode of the instance is executed to populate the missing values. Instance properties may be grouped into property bags. An instance may populate the values of instance properties in a property bag atomically during one episode using a multi-set message. Other instances may read the values of the property bag instance properties using a multi-get request. | 03-21-2013 |
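The property-bag reads and writes in the entry above (20130074074) can be sketched as a multi-set that populates a whole bag during one episode and a multi-get that triggers a new episode only when a requested value is missing. The class, method, and callback names here are illustrative, not the patent's API:

```python
class Instance:
    """Instance whose properties are grouped into property bags.

    Readers never require the instance to run; a read of an unpopulated
    value executes a new 'episode' (the callable) to populate it.
    """

    def __init__(self, episode):
        self._episode = episode   # callable: missing names -> populated values
        self._bags = {}

    def multi_set(self, bag, values):
        # Populate a whole property bag in one step (one episode's output).
        self._bags.setdefault(bag, {}).update(values)

    def multi_get(self, bag, names):
        missing = [n for n in names if n not in self._bags.get(bag, {})]
        if missing:
            # Values not populated: run a new episode to fill them in.
            self.multi_set(bag, self._episode(missing))
        return {n: self._bags[bag][n] for n in names}
```

A fully populated bag is served straight from storage, so the common read path never executes the instance.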
20130074075 | Summary-Based Task Monitoring - A method, an apparatus and an article of manufacture for task monitoring. The method includes receiving at least one task action indication from a first user module, generating a summary of the at least one task action indication, and outputting the summary of the at least one task action indication to a second user module for monitoring. | 03-21-2013 |
20130074076 | AUTOMATIC TASK MANAGEMENT AND RESOLUTION SYSTEMS AND METHODS - An automatic task management and resolution system including semantic task analysis functionality operative to provide a semantic analysis of at least part of a textual description of a task and task-resolution functionality operative to employ the semantic analysis to recommend at least one resolution for the task. The task-resolution functionality is also operative to prompt an owner of the task to select one of at least one resolution for the task and to execute the one of at least one resolution for the task. | 03-21-2013 |
20130074077 | Methods and Apparatuses for Load Balancing Between Multiple Processing Units - Exemplary embodiments of methods and apparatuses to dynamically redistribute computational processes in a system that includes a plurality of processing units are described. The power consumption, the performance, and the power/performance value are determined for various computational processes between a plurality of subsystems where each of the subsystems is capable of performing the computational processes. The computational processes are exemplarily graphics rendering process, image processing process, signal processing process, Bayer decoding process, or video decoding process, which can be performed by a central processing unit, a graphics processing units or a digital signal processing unit. In one embodiment, the distribution of computational processes between capable subsystems is based on a power setting, a performance setting, a dynamic setting or a value setting. | 03-21-2013 |
20130074078 | CALL STACK AGGREGATION AND DISPLAY - A call stack aggregation mechanism aggregates call stacks from multiple threads of execution and displays the aggregated call stack to a user in a manner that visually distinguishes between the different call stacks in the aggregated call stack. The multiple threads of execution may be on the same computer system or on separate computer systems. | 03-21-2013 |
20130081017 | DYNAMICALLY REDIRECTING A FILE DESCRIPTOR - The method includes identifying a first executing process using a second executing process. The first executing process may include a file descriptor and the first executing process may be independent of the second executing process. The method includes disassociating the file descriptor from a first data stream using the second executing process without involvement of the first executing process. The method includes associating the file descriptor with a second data stream using the second executing process without involvement of the first executing process in response to disassociating the file descriptor from the first data stream. | 03-28-2013 |
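The disassociate/associate primitive behind the entry above (20130081017) is visible in POSIX `dup2`, which atomically repoints a file descriptor at a new stream while code holding the descriptor keeps using the same number. The patent applies this to another process's descriptor; the runnable sketch below stays within one process to illustrate the mechanism:

```python
import os
import tempfile

# Some code ("the first executing process" in the abstract; here this
# script itself) holds a file descriptor tied to a first data stream.
first = tempfile.TemporaryFile()
fd = first.fileno()
os.write(fd, b"to first stream\n")

# Redirect: disassociate fd from the first stream and associate it with a
# second stream. The code holding fd is not involved in the switch and
# keeps writing to the same descriptor number.
second = tempfile.TemporaryFile()
os.dup2(second.fileno(), fd)
os.write(fd, b"to second stream\n")

second.seek(0)
data = second.read()   # only the post-redirect write landed here
```

Doing this across process boundaries, as the abstract describes, requires operating-system support beyond plain `dup2` (for example, manipulating the target process's descriptor table from a debugger-like second process).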
20130081018 | Acquiring, presenting and transmitting tasks and subtasks to interface devices - A system includes an ignorant interface device subtask acquiring module configured to acquire one or more subtasks that are configured to be carried out by two or more discrete interface devices and which correspond to portions of one or more tasks, wherein at least one of the one or more tasks and a requestor of the one or more tasks are undisclosed to the two or more discrete interface devices, a corresponding subtask representation presentation module configured to present representations corresponding to the one or more subtasks, and a selected subtask data transmission module configured to transmit a subtask of the one or more subtasks corresponding to a selected representation. | 03-28-2013 |
20130081019 | Receiving subtask representations, and obtaining and communicating subtask result data - Computationally implemented methods and systems include receiving one or more representations of one or more subtasks that correspond to at least one portion of at least one task of acquiring data requested by a task requestor, wherein the one or more subtasks are configured to be carried out by at least two discrete interface devices, obtaining subtask result data in an absence of information regarding the at least one task and/or the task requestor, and communicating the result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081020 | Receiving discrete interface device subtask result data and acquiring task result data - Computationally implemented methods and systems include transmitting one or more subtasks corresponding to at least a portion of one or more tasks of acquiring data requested by a task requestor to a plurality of discrete interface devices, obtaining subtask result data corresponding to a result of the one or more subtasks carried out by two or more discrete interface devices of the plurality of discrete interface devices in an absence of information regarding the task of acquiring data and/or the task requestor, and acquiring task result data corresponding to a result of the task of acquiring data using the obtained subtask result data and information regarding the two or more discrete interface devices from which the subtask result data is obtained. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081021 | Acquiring and transmitting tasks and subtasks to interface devices, and obtaining results of executed subtasks - Computationally implemented methods and systems include receiving a request to carry out a task of acquiring data requested by a task requestor, acquiring one or more subtasks related to the task of acquiring data and configured to be carried out by discrete interface devices in an absence of information regarding the at least one task and/or the task requestor, and obtaining a result of one or more executed subtasks executed by at least two of the discrete interface devices in the absence of information regarding the at least one task and/or the task requestor. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081022 | CONFIGURING INTERFACE DEVICES WITH RESPECT TO TASKS AND SUBTASKS - Computationally implemented methods and systems include configuring a device to acquire one or more subtasks configured to be carried out by at least two discrete interface devices, said one or more subtasks corresponding to portions of one or more tasks of acquiring data requested by a task requestor, facilitating execution of the received one or more subtasks, and controlling access to at least one feature of the device unrelated to the execution of the one or more subtasks, based on successful execution of the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081023 | PROCESSING TECHNIQUES FOR SERVERS HANDLING CLIENT/SERVER TRAFFIC AND COMMUNICATIONS - A system for handling client/server traffic and communications pertaining to the delivery of hypertext information to a client. The system includes a central server which processes a request for a web page from a client. The system operates by receiving a request for a web page from a client. Relevant information is then processed by an annotator to generate additional relevant computer information that can be incorporated to create an annotated version of the requested web page which includes additional displayable hypertext information. The central server then relays the additional relevant computer information to the client so as to allow the annotated version of the requested web page to be displayed. The central server can also interact with different servers to collect and maintain statistical usage information. | 03-28-2013 |
20130081024 | COMPOSITE TASK FRAMEWORK - A primary task manager, which is a local task manager, can perform a distributed task on a local server. If the performing of the task with the local task manager succeeds, the distributed task can then be propagated to at least one secondary task manager, which is a remote task manager. The remote task manager is capable of performing the distributed task. If the performing of the task with the local task manager fails, an undo task that is associated with the distributed task can be performed. | 03-28-2013 |
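The local-first flow in the entry above (20130081024) — perform the distributed task with the primary (local) task manager, propagate to secondary (remote) managers on success, run the associated undo task on failure — can be sketched with plain callables standing in for task managers. The function name and signature are my own, not the framework's:

```python
def run_distributed_task(task, undo, local, remotes):
    """Local-first execution of a distributed task.

    - Perform the task with the primary (local) task manager.
    - On success, propagate the task to each secondary (remote) manager.
    - On failure, perform the undo task associated with the distributed task.
    Returns True if the task was propagated, False if it was undone.
    """
    try:
        local(task)
    except Exception:
        undo(task)       # local attempt failed: run the associated undo task
        return False
    for remote in remotes:
        remote(task)     # local attempt succeeded: propagate to secondaries
    return True
```

Propagating only after local success means a failing task never reaches the secondary managers, so the undo task only has local state to restore.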
20130086586 | Issuing Requests To A Fabric - In one embodiment, a method includes determining whether producer-consumer ordering rules have been met for a first transaction to be sent from a source agent to a target agent via a fabric, and if so a first request for the first transaction is sent from the source agent to the fabric in a first clock cycle. Then a second request can be sent from the source agent to the fabric for a second transaction in a pipelined manner. Other embodiments are described and claimed. | 04-04-2013 |
20130086587 | DYNAMIC EVOCATIONS FOR COMPUTER EVENT MANAGEMENT - According to an example implementation, a computer-readable storage medium, computer-implemented method and a system are provided to detect a plurality of computer events, determine an event severity for each event, select a set of the events having a highest severity of the plurality of events, determine an event category for each event in the set of events, display an event management console including an entry for each event of the set of events, each entry in the event management console including at least an event description and an event severity indicator that indicates event severity, and wherein the displayed event management console also includes one or more evocations for each event category of the set of events, each evocation providing a suggested course of action to address events of the event category. | 04-04-2013 |
20130091503 | DATA FUSION IN HIGH COMPUTATIONAL LOAD ENVIRONMENTS - An input handler receives a plurality of observations from a plurality of sensors, the plurality of observations corresponding to a plurality of targets observed by the sensors. A correlation engine correlates, using a data fusion algorithm, observations of the plurality of observations with individual targets of the plurality of targets. A load monitor detects that a computational load associated with the correlating exceeds a threshold, and a bypass manager continues the correlating including bypassing at least a portion of the data fusion algorithm, in response to the detecting. | 04-11-2013 |
20130097604 | INFORMATION INTEGRATION FLOW FRESHNESS COST - A computer implemented method and apparatus calculate a freshness cost for each of a plurality of information integration flow graphs and select one of the plurality of information integration flow graphs based upon the calculated freshness cost. | 04-18-2013 |
20130097605 | COMPUTER PROCESS WITH UTILIZATION REDUCTION - A system includes computer-readable storage media encoded with code defining a computer process. The computer process is configured to monitor its own resource utilization so that it can detect a resource-utilization condition. In response to a detection of the utilization condition, the computer process causes its own resource utilization to be reduced. | 04-18-2013 |
20130104130 | Method and Apparatus for Power Control - Embodiments of the present invention relate to limiting the maximum power dissipated in a processor. Accordingly, when an application that requires excessive amounts of power is being executed, the execution of the application may be prevented to reduce dissipated or consumed power. | 04-25-2013 |
20130111479 | Performance of Scheduled Tasks via Behavior Analysis and Dynamic Optimization | 05-02-2013 |
20130111481 | PROGRAMMATIC IDENTIFICATION OF ROOT METHOD | 05-02-2013 |
20130111482 | ESTABLISHING A GROUP OF ENDPOINTS IN A PARALLEL COMPUTER | 05-02-2013 |
20130117746 | Replacement of Virtual Functions - Techniques are described for replacement of virtual functions. In one or more implementations, a call to a virtual function is intercepted and redirected to a shim module associated with a replacement function. The shim module is configured to adjust a pointer (e.g., a “this” pointer) for the virtual function. In at least some embodiments, the pointer can be adjusted based on information retrieved from symbol data for the virtual function. The replacement function can utilize the adjusted pointer to access an object instance associated with the virtual function. For example, the replacement function can use the adjusted pointer to access data and/or functionalities of the object instance. | 05-09-2013 |
20130117747 | TRANSACTION LOAD REDUCTION FOR PROCESS COMPLETION - The present disclosure involves systems, software, and computer implemented methods for reducing transaction load for process instance completion. One process includes identifying an end event triggered by an initial token of a process instance, determining a type of the end event, performing a search for additional tokens associated with the process instance that are distinct from the initial token, and performing a termination action based on the type of end event and a number of additional tokens identified in the search. The end event type may be non-terminating or terminating, and the end event type can determine the termination action to be performed. If the end event is non-terminating, then the termination action includes joining each finalization action for each process instance variable to a completion transaction if no additional tokens are found and executing the completion transaction to terminate the process instance. | 05-09-2013 |
20130117748 | SCALABLE GROUP SYNTHESIS - An illustrative embodiment of a computer-implemented process for scalable group synthesis receives a group definition, applies a sub-set of conditions to the group definition to form a conditioned group definition, receives a set of entities and populates group membership using the received set of entities and the conditioned group definition, wherein each member responds in the affirmative to the sub-set of conditions. | 05-09-2013 |
20130132958 | WORK DISTRIBUTION AND MANAGEMENT IN HIGH AVAILABILITY CLUSTER ENVIRONMENT OF RESOURCE ADAPTERS - A high availability environment of resource adapters implements processes to manage and to distribute work among the adapters or adapter instances. An input resource, such as a file, is received and tasks are created to distribute the content to the different instances of the adapters that are configured in the cluster. A resource adapter instance switches to manage the creation of the task based on task-definitions of the adapter. The task-definitions are rules specified in the adapter on chunks of data. The tasks are created such that chunks of data are independently locked and processed without duplication. In order to distribute the work, the tasks are persisted into a table/xml on a persistent disk. The remaining instances interact with the table to access the tasks specified by the entries in the table, thus executing the tasks. | 05-23-2013 |
20130132959 | SYSTEM FOR GENERATING OR USING QUESTS - Example methods, apparatuses, or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to facilitate or otherwise support one or more processes or operations associated with a system for creating or using one or more quests. | 05-23-2013 |
20130139162 | SYSTEM AND METHOD OF PROVIDING SYSTEM JOBS WITHIN A COMPUTE ENVIRONMENT - The invention relates to systems, methods and computer-readable media for using system jobs for performing actions outside the constraints of batch compute jobs submitted to a compute environment such as a cluster or a grid. The method for modifying a compute environment from a system job comprises associating a system job to a queuable object, triggering the system job based on an event and performing arbitrary actions on resources outside of compute nodes in the compute environment. The queuable objects include objects such as batch compute jobs or job reservations. The events that trigger the system job may be time driven, such as ten minutes prior to completion of the batch compute job, or dependent on other actions associated with other system jobs. The system jobs may be utilized also to perform rolling maintenance on a node by node basis. | 05-30-2013 |
20130139163 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD FOR RECOMMENDING APPLICATION PROGRAMS - An information processing apparatus includes a communication section that acquires application programs from an external apparatus, a memory that stores an application program and information relevant to the application program, and an application execution section that executes the application program stored in the memory. The information processing apparatus also includes a control section that determines other application programs to be recommended, during execution of the application program by the application execution section. The other application programs to be recommended are determined based on the information relevant to the application program, which includes first relevant information and second relevant information. | 05-30-2013 |
20130145369 | Enhancing Performance in Multithreaded Systems - Systems and methods for enhancing performance in a multithreaded computing system are provided. The method comprises receiving a plurality of values associated with a performance characteristic common to a plurality of threads; clusterizing the plurality of threads based on the performance characteristic; analyzing inter-thread communication between the plurality of threads for identifying a plurality of threads adversely affecting the performance of different parts of the multithreaded program; and calculating a performance factor corresponding to the performance characteristic to determine a type of performance improvement activity to be performed on the plurality of threads. | 06-06-2013 |
20130145370 | TECHNIQUES TO AUTOMATICALLY CLASSIFY PROCESSES - Techniques for automatically classifying processes are presented. Processes executing on a multicore processor machine are evaluated to determine shared resources between the processes, excluding shared system resources. A determination is then made based on the evaluation to group the processes as a single managed resource within an operating system of the multicore processor machine. | 06-06-2013 |
20130152088 | Generating Filters Automatically From Data Processing Jobs - Methods of generating filters automatically from data processing jobs are described. In an embodiment, these filters are automatically generated from a compiled version of the data processing job using static analysis which is applied to a high-level representation of the job. The executable filter is arranged to suppress rows and/or columns within the data to which the job is applied and which do not affect the output of the job. The filters are generated by a filter generator and then stored and applied dynamically at a filtering proxy that may be co-located with the storage node that holds the data. In another embodiment, the filtered data may be cached close to a compute node which runs the job and data may be provided to the compute node from the local cache rather than from the filtering proxy. | 06-13-2013 |
20130152089 | JOB MANAGEMENT APPARATUS AND JOB MANAGEMENT METHOD - A job management apparatus that searches for an available node to which a job is allocatable in an n-dimensional mesh-connected or n-dimensional torus-connected computer network, includes: a one-dimensional search information generating unit that generates one-dimensional search information related to one dimension of n dimensions, which includes a plurality of bits and which indicates, using one-bit information, whether or not the job is allocatable for each of computation nodes belonging to the one dimension; a search information generating unit that generates a search mask pattern with as many bits as corresponds to the plurality of bits, which includes consecutive bits being set to a preset value and corresponding to a size required by the job in the one dimension; and an available node searching unit that searches for the available node by performing, for the one dimension, a preset logic operation with the one-dimensional search information and the search mask pattern. | 06-13-2013 |
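The mask-pattern search in the entry above is essentially a sliding bitwise AND. A minimal sketch, assuming the one-dimensional search information is packed into a single integer with bit i set when node i is available (the representation is an assumption, not taken from the patent):

```python
def find_available_offset(avail, n_nodes, job_size):
    """Slide a mask of `job_size` consecutive set bits along the
    one-dimensional availability bitmap; the first offset where
    (bitmap AND mask) equals the mask can host the job."""
    mask = (1 << job_size) - 1  # job_size consecutive bits set
    for off in range(n_nodes - job_size + 1):
        if (avail >> off) & mask == mask:
            return off  # nodes off .. off+job_size-1 are all free
    return None
```

For example, with availability `0b0111010` over seven nodes, a three-node job fits starting at node 3, while a four-node job does not fit anywhere.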
20130160015 | AUTOMATICALLY GENERATING COMPOUND COMMANDS IN A COMPUTER SYSTEM - A computer system provides a way to automatically generate compound commands that perform tasks made up of multiple simple commands. A compound command generation mechanism monitors consecutive user commands and compares the consecutive commands a user has taken to a command sequence identification policy. If the user's consecutive commands satisfy the command sequence identification policy, the user's consecutive commands become a command sequence. If the command sequence satisfies the compound command policy, the compound generation mechanism can generate a compound command for the command sequence automatically or prompt an administrator to allow the compound command to be generated. Generating a compound command can be done on a user-by-user basis or on a system-wide basis. The compound command can then be displayed to the user to execute so that the command sequence is performed by the user selecting the compound command for execution. | 06-20-2013 |
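The sequence-identification step above amounts to mining recurring n-grams from the monitored command history. A hedged sketch in which the window length and repetition threshold stand in for the patent's unspecified policy:

```python
from collections import Counter

def propose_compound_commands(history, seq_len=3, min_count=2):
    """Scan a command history for consecutive command sequences
    (n-grams) that recur often enough to satisfy a simple sequence
    identification policy, and return each as a candidate compound
    command. seq_len and min_count are illustrative policy knobs."""
    grams = Counter(
        tuple(history[i:i + seq_len])
        for i in range(len(history) - seq_len + 1)
    )
    return [list(g) for g, count in grams.items() if count >= min_count]
```

A recurring `open, edit, save` run would surface as one candidate compound command that the system could then offer to the user or an administrator.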
20130167150 | Application Management - This specification describes technologies relating to execution of applications and the management of an application's access to other applications. In general, a method can include loading a first application, designated to a first isolation environment, including first instructions using the first isolation environment provided by an application execution environment. A second application including second instructions is loaded using the first isolation environment despite the second application being designated to a second isolation environment provided by the application execution environment. The first application is prevented from modifying the second instructions of the second application. Data is processed using the first instructions of the first application and the second instructions of the second application, where the first instructions reference the second instructions. Information based on results of the processing is outputted. | 06-27-2013 |
20130174160 | Acquiring and transmitting tasks and subtasks to interface devices, and obtaining results of executed subtasks - Computationally implemented methods and systems include receiving a request to carry out a task of acquiring data requested by a task requestor, acquiring one or more subtasks related to the task of acquiring data and configured to be carried out by discrete interface devices in an absence of information regarding the at least one task and/or the task requestor, and obtaining a result of one or more executed subtasks executed by at least two of the discrete interface devices in the absence of information regarding the at least one task and/or the task requestor. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 07-04-2013 |
20130174161 | INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS - A hardware thread causes a SleepID register of a WAKEUP signal generation unit to store a SleepID that identifies the hardware thread when suspending a process due to waiting for a process by another CPU. The WAKEUP signal generation unit causes the WAKEUP data register of the WAKEUP signal generation unit to store a SleepID notified by a node when a process that the hardware thread waits ends. The WAKEUP signal generation unit outputs a WAKEUP signal that cancels the stop of the hardware thread to the hardware thread when the SleepIDs of the SleepID register and the WAKEUP data register agree with each other. | 07-04-2013 |
20130174162 | METHOD FOR CONSTRUCTING DATA STRUCTURES AND METHOD FOR DESCRIBING RUNNING STATES OF COMPUTER AND STATE TRANSITIONS THEREOF - A method for constructing data structures and a method for describing running states of a computer and state transitions thereof are provided. The method for constructing the data structure, which describes the execution processes of computer codes, includes: when the computer is running, constructing the data structure using the code segment wherein lies a calling instruction as a node and using the calling relationship between the code segment initiating the calling instruction and the called code segment, which are both constructed by the calling instruction, as a calling path. The data structure includes every node and the calling path between every calling and called nodes. When a certain calling instruction is executed, it is possible to describe the running state of the computer when the calling instruction is executed with the data structure consisting of all nodes and calling paths before the calling instruction by constructing the above data structure. | 07-04-2013 |
20130174163 | AVAILABILITY EVALUATION DEVICE AND AVAILABILITY EVALUATION METHOD - Workload is mitigated when customizing adding a state transition resulting from a datacenter operation procedure, to an availability evaluation model of a server infrastructure, provided as a standard library. This involves: storing in a state transition storage unit (STSU) definitions of a plurality of state transitions corresponding to system configurations; storing in an additional STSU definitions of state transitions used in system operation, which are different from the plurality of state transitions; receiving definitions of state transitions and registering the definitions in the additional STSU; analyzing system availability based on the definitions of the state transitions stored in the STSU and the definitions of the state transitions stored in the additional STSU; analyzing common state transition patterns in at least part of definitions of the plurality of state transitions used when operating the system, and stored in the additional STSU; and outputting analysis results on the common state transition patterns. | 07-04-2013 |
20130179886 | PROVIDING LOGICAL PARTITIONS WITH HARDWARE-THREAD SPECIFIC INFORMATION REFLECTIVE OF EXCLUSIVE USE OF A PROCESSOR CORE - Techniques for simulating exclusive use of a processor core amongst multiple logical partitions (LPARs) include providing hardware thread-dependent status information in response to access requests by the LPARs that is reflective of exclusive use of the processor by the LPAR accessing the hardware thread-dependent information. The information returned in response to the access requests is transformed if the requestor is a program executing at a privilege level lower than the hypervisor privilege level, so that each logical partition views the processor as though it has exclusive use of the processor. The techniques may be implemented by a logical circuit block within the processor core that transforms the hardware thread-specific information to a logical representation of the hardware thread-specific information or the transformation may be performed by program instructions of an interrupt handler that traps access to the physical register containing the information. | 07-11-2013 |
20130179887 | APPARATUS, SYSTEM, CONTROL METHOD AND PROGRAM FOR IMAGE PROCESSING - The present invention is intended for properly receiving set data in setting items of a series of processes. The present invention solves the problem by controlling to determine whether the setting item of the unique processing information and the setting item of the shared processing information are identical, to generate, when the setting items are determined to be identical and when a setting item for which set data is different in the unique and shared processing information is identified, template processing information including information indicating the identified setting item and the setting item of the shared processing information, and to store the template processing information, and by displaying a user interface receiving the set data of the identified setting item from among the setting items of the template processing information at the time of generating new unique processing information by using the template processing information. | 07-11-2013 |
20130185724 | NON-TRANSITORY RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - An example information processing apparatus includes a duration acquisition unit to acquire a duration of a dormant state, and a processing unit to change a state of a predetermined program to be advantageous to a user when the duration of the dormant state is longer than a first time length. For example, the predetermined program is a game program and when the duration of the dormant state is longer than the first time length, the processing unit changes a parameter of the game program to be advantageous to the user. | 07-18-2013 |
20130191831 | TRANSPARENT HIGH AVAILABILITY FOR STATEFUL SERVICES - One embodiment of the present invention provides a system. The system includes a high availability module and a data transformation module. During operation, the high availability module identifies a modified object belonging to an application in a second system. A modification to the modified object is associated with a transaction identifier. The high availability module also identifies a local object corresponding to the modified object associated with a standby application corresponding to the application in the second system. The data transformation module automatically transforms the value of the modified object to a value assignable to the local object, including pointer conversion to point to an equivalent object of the second system. The high availability module updates the current value of the local object with the transformed value. | 07-25-2013 |
20130212582 | DISCOVERING WORK-ITEM RELATIONS THROUGH FULL TEXT AND STANDARD METHOD ANALYSIS - Discovering work-item relations, in one aspect, may include identifying mappings of work-item elements to standardized specification elements, for instance, by analyzing a plurality of work-item elements and their relationships generated from a description of a collection of work-items, and a plurality of standardized specification elements and their relationships generated from a description of practice guidelines for completing the project. One or more missing relations may be discovered among the plurality of work-item elements based on the mappings. | 08-15-2013 |
20130212583 | RESILIENCY TRACKING FOR PROJECT TASK MANAGEMENT - Systems, devices, computer readable media and methods may be implemented to track the resiliency of an application. The systems and methods may include providing an interface for managing recovery information for an application. The systems and methods may also include requesting recovery information via a set of questions presented to a user at the interface. The systems and methods may further include receiving the recovery information from the user via the interface and associating the recovery information with the application. | 08-15-2013 |
20130219394 | SYSTEM AND METHOD FOR A MAP FLOW WORKER - Parallel data processing may include map and reduce processes. Map processes may include at least one input thread and at least one output thread. Input threads may apply map operations to produce key/value pairs from input data blocks. These pairs may be sent to internal shuffle units which distribute the pairs, sending specific pairs to particular output threads. Output threads may include multiblock accumulators to accumulate and/or multiblock combiners to combine values associated with common keys in the key/value pairs. Output threads can output intermediate pairs of keys and combined values. Likewise, reduce processes may access intermediate key/value pairs using multiple input threads. Reduce operations may be applied to the combined values associated with each key. Reduce processes may contain internal shuffle units that distribute key/value pairs and send specific pairs to particular output threads. These threads then produce the final output. | 08-22-2013 |
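The map-side flow in the entry above — map, hash-based shuffle routing to output threads, per-thread combining of values with common keys — can be modeled in a single process. A toy sketch with assumed names; a real implementation would run the shuffle units and output threads concurrently:

```python
def map_with_combiner(blocks, map_fn, combine_fn, n_output_threads=2):
    """Apply map_fn to each input block, route each emitted
    key/value pair to one simulated output thread by key hash,
    and combine values sharing a key within each output thread,
    yielding per-thread dicts of intermediate key/value pairs."""
    outputs = [dict() for _ in range(n_output_threads)]
    for block in blocks:
        for key, value in map_fn(block):
            slot = hash(key) % n_output_threads  # shuffle routing
            acc = outputs[slot]
            acc[key] = combine_fn(acc[key], value) if key in acc else value
    return outputs
```

With a word-count map function, each word's partial count ends up combined on exactly one output thread, which is what lets a later reduce phase process each key independently.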
20130227573 | MODEL-BASED DATA PIPELINE SYSTEM OPTIMIZATION - A computer-implemented method for optimizing a data pipeline system includes processing a data pipeline configuration manifest to generate a framework of the data pipeline system and a data flow logic package of the data pipeline system. The data pipeline configuration manifest includes an object-oriented metadata model of the data pipeline system. The computer-implemented method further includes monitoring performance of the data pipeline system during execution of the data flow logic package to obtain a performance metric for the data pipeline system, and modifying, with a processor, the framework of the data pipeline system based on the data pipeline configuration manifest and the performance metric. | 08-29-2013 |
20130227574 | INFORMATION PROCESSING DEVICE - An information processing device | 08-29-2013 |
20130232494 | AUTOMATING SEQUENTIAL CROSS-APPLICATION DATA TRANSFER OPERATIONS - Illustrative embodiments disclose performing a task between software components. A computer executed process identifies a first region of a source software component as a source location for the task. The computer also identifies a second region of a target software component as a target location for the task. The computer responsively identifies a set of data in the source location. The computer determines a set of actions to perform the task between the source and the target software components. The set of actions to perform the task includes at least a first action to select a portion of the set of data in the source location, a second action to perform on the selected portion of the set of data that generates new data, and a third action using the new data in the target location. The computer performs the set of actions for the task. | 09-05-2013 |
20130247049 | CONTROL APPARATUS AND METHOD OF STARTING CONTROL APPARATUS - A control apparatus may include a processor configured to execute one or more programs in order to control a control target, and an accepting unit configured to accept a user input. The processor may start a first program to cause the accepting unit to function at a time of starting the control apparatus, and thereafter start a second program for executing a function selected by the user input accepted by the accepting unit, amongst a plurality of functions executable by the control target, with preference over programs for executing other functions. | 09-19-2013 |
20130254770 | METHOD FOR SINGLETON PROCESS CONTROL - A method for singleton process control in a computer environment is provided. A process identification (PID) for a background process is stored in a first temporary file. The PID is stored by a parent process and subsequently accessed by the background process. The background process is exited if an active PID is determined to exist in a second, global temporary file. The PID from the first temporary file is stored into the second, global temporary file. A singleton code block is then executed. | 09-26-2013 |
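The PID-file check at the heart of the entry above can be sketched as follows, assuming a POSIX-style system where `kill(pid, 0)` probes whether a process exists. This collapses the patent's parent/background split and two-file scheme into a single file, and does nothing about races between competing starters, so it is an illustration only:

```python
import os

def run_singleton(lock_path, singleton_block):
    """If the lock file names a still-live PID, another instance is
    active and we exit without running; otherwise record our PID and
    execute the singleton code block. Returns True if the block ran."""
    if os.path.exists(lock_path):
        try:
            pid = int(open(lock_path).read().strip())
            os.kill(pid, 0)        # signal 0: existence probe only
            return False           # an active instance holds the lock
        except (ValueError, OSError):
            pass                   # stale or unreadable lock; take over
    with open(lock_path, "w") as f:
        f.write(str(os.getpid()))
    singleton_block()
    os.remove(lock_path)
    return True
```

A production version would use an atomic create (`O_CREAT | O_EXCL`) or file locking to close the window between the check and the write.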
20130263135 | CHARACTERIZATION OF REAL-TIME SOFTWARE BASE-STATION WORKLOADS AT FINE-GRAINED TIME-SCALES - Methods and arrangements for characterizing software base-station workloads. Input system parameters are mapped to work-determining parameters which act to determine computational requirements of a dynamic workload. A synthetic experiment is undertaken to measure the computational requirements determined by the work-determining parameters. | 10-03-2013 |
20130263136 | INFORMATION PROCESSING SYSTEM AND PROCESSING METHOD FOR USE THEREWITH - Information processing system including: a plurality of process control units storing process data including present value and time-series data or historical data, and a plurality of data collection units collecting and processing data from the process control units. The data collection units each include: a first dynamic management section managing access and exit of the data collection unit of interest to and from the information processing system and managing operating status of all data collection units currently accessing the information processing system; a second dynamic management section managing addition and removal of process control units to and from the information processing system; and a charge determination section determining process control units to be taken charge of by the data collection unit of interest based on first identification information allocated to each of the data collection units and on second identification information allocated to each of the process control units. | 10-03-2013 |
20130263137 | INFORMATION PROCESSING APPARATUS, APPLICATION ACTIVATION METHOD, AND PROGRAM - It is determined whether an instruction for initial activation of an application is issued by a user or an operating system (step S | 10-03-2013 |
20130268935 | ADAPTIVE ARCHITECTURE FOR A MOBILE APPLICATION BASED ON RICH APPLICATION, PROCESS, AND RESOURCE CONTEXTS AND DEPLOYED IN RESOURCE CONSTRAINED ENVIRONMENTS - A method for adapting execution of an application on a mobile device may be performed by a mobile device including a processor and a memory. The method may include receiving an application context, a process context, and one other context. The method also includes analyzing at least one of the application context or the process context together with the one other context. The method also includes dynamically adapting execution of the application on the mobile device based on the analysis. Adapting execution of the application may include transferring processing related to the application to a backend server for processing. | 10-10-2013 |
20130275981 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR MONITORING AN EXECUTION FLOW OF A FUNCTION - A system, method, and computer program product are provided for monitoring an execution flow of a function. In use, data associated with a function is identified within a call stack. Additionally, a call stack frame is determined from freed memory in the call stack. Further, an execution flow of the function is monitored, utilizing the call stack frame from the freed memory. | 10-17-2013 |
20130275982 | METHOD AND SYSTEM FOR DETECTING PROGRAM DEADLOCK - The present invention relates to a technology for deadlock detection in a program, and more particularly relates to a technology for detecting deadlock in a program through lock graph analysis. The present invention provides a method for detecting deadlock, comprising: obtaining lock information related to locking operation in a program; generating a first lock graph based on the obtained lock information, wherein each node in the first lock graph comprises a set of locks comprising at least one lock and a set of program locations comprising at least one lock location; extracting a strongly connected sub graph in the first lock graph; unfolding the strongly connected sub graph in the first lock graph to generate a second lock graph, wherein each node in the second lock graph comprises a single lock; and extracting a strongly connected sub graph in the second lock graph, the strongly connected sub graph in the second lock graph indicating a deadlock in the program. | 10-17-2013 |
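In the second lock graph of the entry above, each node holds a single lock, so a potential deadlock shows up as a strongly connected component of size greater than one — equivalently, a lock reachable from itself along held-while-acquiring edges. A small sketch with an assumed input format (per-thread lock-acquisition orders, not the patent's instrumented lock information):

```python
def find_deadlock_cycles(acquisitions):
    """Build a single-lock graph with an edge A -> B whenever lock B
    is acquired while A is held, then report every lock that lies on
    a cycle (i.e. is reachable from itself), indicating deadlock."""
    edges = {}
    for order in acquisitions:
        for i, held in enumerate(order):
            for later in order[i + 1:]:
                edges.setdefault(held, set()).add(later)

    def reachable(src, dst):
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            for nxt in edges.get(node, ()):
                if nxt == dst:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return sorted(lock for lock in edges if reachable(lock, lock))
```

Two threads taking locks A then B and B then A produce the classic A -> B -> A cycle; consistent ordering produces no cycle. Tarjan's algorithm would extract the components in linear time, but the reachability check keeps the sketch short.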
20130275983 | Methods for Supporting Users with Task Continuity and Completion Across Devices and Time - Concepts and technologies are described herein for providing task continuity and supporting task completion across devices and time. A task management application is configured to monitor one or more interactions between a user and a device. The interactions can include the use of the device, the use of one or more applications, and/or other tasks, subtasks, or other operations. Predictive models constructed from data or logical models can be used to predict the attention resources available or allocated to a task or subtask as well as the attention and affordances available within a context for addressing the task and these inferences can be used to mark or route the task for later reminding and display. In some embodiments, the task management application is configured to remind or execute a follow-up action when a session is resumed. Embodiments include providing users with easy to use gestures and mechanisms for providing input about desired follow up on the same or other devices. | 10-17-2013 |
20130283274 | METHOD AND SYSTEM FOR DISCOVERING AND ACTIVATING AN APPLICATION IN A COMPUTER DEVICE - The present invention discloses a method for discovering and activating an application in a computer device. The method comprises the steps of: defining at least one application based on its functionality, including at least one action which is enabled by the application, identifying a required action to be performed by the user, and searching for and loading a relevant application for the identified action, wherein the processes of defining and identifying are performed by at least one processor unit. | 10-24-2013 |
20130283275 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - A mobile terminal according to one embodiment includes a display unit configured to output a setting screen for setting an enabled or disabled state of an application, and a controller configured to convert the state of the application from the enabled state into the disabled state to prohibit a user's access to the application based on a control command for disabling the application, the control command being received through the setting screen, and configured to control the display unit to output a pop-up window for changing the disabled state of the application, in response to selection of a function executable by the disabled application. | 10-24-2013 |
20130290963 | Workflow-Enhancing Device, System and Method - Systems and methods for performing task execution in a workflow are described. The system comprises at least one modular device ( | 10-31-2013 |
20130290964 | INFORMATION PROCESSING APPARATUS - An information processing apparatus is provided for preventing an operator from erroneously rewriting data, by which a process can be performed only by connecting an external storage device to a CPU unit without checking whether a user program in the CPU unit is newer or older than that in the external storage device. | 10-31-2013 |
20130298127 | LOAD-STORE DEPENDENCY PREDICTOR CONTENT MANAGEMENT - Methods and apparatuses for managing load-store dependencies in an out-of-order processor. A load store dependency predictor may include a table for storing entries for load-store pairs that have been found to be dependent and execute out of order. Each entry in the table includes a counter to indicate a strength of the dependency prediction. If the counter is above a threshold, a dependency is enforced for the load-store pair. If the counter is below the threshold, the dependency is not enforced for the load-store pair. When a store is dispatched, the table is searched, and any matching entries in the table are armed. If a load is dispatched, matches on an armed entry, and the counter is above the threshold, then the load will wait to issue until the corresponding store issues. | 11-07-2013 |
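The counter-per-entry scheme in the entry above can be sketched as a small class. The threshold and saturation values are illustrative, and a real predictor indexes a fixed-size hardware table by hashed program counters rather than a dictionary:

```python
class LoadStoreDependencyPredictor:
    """Table of (load, store) pairs found to be dependent, each with
    a saturating counter indicating prediction strength; the
    dependency is enforced only while the counter exceeds the
    threshold, as in the abstract above."""

    def __init__(self, threshold=2, max_count=3):
        self.table = {}
        self.threshold = threshold
        self.max_count = max_count

    def train(self, load_pc, store_pc, ordering_violation):
        """Strengthen the entry on an observed memory-ordering
        violation; weaken it when the pair executed safely."""
        count = self.table.get((load_pc, store_pc), 0)
        count = (min(count + 1, self.max_count) if ordering_violation
                 else max(count - 1, 0))
        self.table[(load_pc, store_pc)] = count

    def must_wait(self, load_pc, store_pc):
        """True if the load should wait for the store to issue."""
        return self.table.get((load_pc, store_pc), 0) > self.threshold
```

Saturating rather than unbounded counters let the predictor forget a dependency whose behavior has changed after a handful of safe executions.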
20130298128 | MANAGED CONTROL OF PROCESSES INCLUDING PRIVILEGE ESCALATION - Determining execution rights for a process. A user selects a process for execution. A driver intercepts the execution and communicates with a service or its remote agent. Configuration data is accessed to determine an execution role specifying whether the process should be denied execution or should execute with particular rights to access or modify system resources. The execution role is provided to the driver, and the driver allows or denies execution of the process in accordance with the provided execution role. | 11-07-2013 |
20130305248 | Task Performance - A method including: identifying, for a current user input state, a plurality of available next user input states; defining a set of putative next user input states comprising one or more of the available next user input states; defining a set of advancing tasks, in anticipation of the current user input state becoming, next, any one of the one or more putative next user input states of the set of putative next user input states; redefining the set of putative next user input states, including one or more of the available next user input states, in response to a user movement signal that depends upon user movement; and redefining the set of advancing tasks, in anticipation of the current user input state becoming, next, any one of the one or more putative next user input states of the set of putative next user input states. | 11-14-2013 |
20130305249 | ELECTRONIC INFORMATION TERMINAL AND ELECTRONIC INFORMATION SYSTEM - A disclosed electronic information terminal includes a main control unit to control an overall function of the terminal, a storage unit to store various data and programs, a display unit to display a fixed format including input items, an input unit to input information into the corresponding input items, a communication control unit to establish a communication network between the input unit and a server, an input completion detector unit to detect completion of input operations in which the information corresponds to the input items of the fixed format, and a process execution unit to execute a process subsequent to the input operations. When the input completion detector unit detects the completion of the input operations, the process execution unit executes the process subsequent to the input operations. | 11-14-2013 |
20130311994 | SYSTEMS AND METHODS FOR SELF-ADAPTIVE EPISODE MINING UNDER THE THRESHOLD USING DELAY ESTIMATION AND TEMPORAL DIVISION - Embodiments relate to systems and methods for self-adaptive episode mining under time threshold using delay estimation and temporal division. An episode mining engine can analyze a set of episodes captured from a set of network resources to detect all sequences of user-specified frequency within a supplied runtime budget or time threshold. The engine can achieve desired levels of completeness in the results by mining the input log file in multiple stages or steps, each having successively longer lengths of event sequences. After completion of each stage, the engine calculates a remaining amount of runtime budget, and updates the amount of time to be allocated for each of the remaining stages up to a generated maximum stage (or sequence length). The engine thus corrects the estimated remaining time in the runtime budget (or threshold) after each stage, and continues to the next stage until the runtime budget is consumed. | 11-21-2013 |
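The budget-correction loop in the entry above — split the remaining runtime budget over the remaining stages, run a stage, subtract what was actually spent, and re-split — can be sketched as follows. The even split is an assumed policy; the abstract only requires that the allocation be updated after each stage:

```python
def allocate_stage_budgets(total_budget, stage_costs):
    """Simulate per-stage budget allocation for a multi-stage mining
    run: before each stage the remaining budget is divided evenly
    over the remaining stages; a stage that would overrun its
    allocation is cut off at it. Returns the allocations applied."""
    remaining = total_budget
    allocations = []
    n = len(stage_costs)
    for i, cost in enumerate(stage_costs):
        if remaining <= 0:
            break
        alloc = remaining / (n - i)          # re-split what is left
        allocations.append(alloc)
        remaining -= min(cost, alloc)        # stage stops at its allocation
    return allocations
```

Note how a cheap early stage (short sequence lengths) leaves surplus budget that automatically flows to the later, more expensive stages.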
20130318529 | SYSTEMS AND METHODS FOR AUGMENTING THE FUNCTIONALITY OF A MONITORING NODE WITHOUT RECOMPILING - Systems and methods are provided for augmenting functions of a computing device by a controlling computing device. The method comprises receiving a command and a data matrix from the controlling computing device. The data matrix contains data that when installed enables the subordinate computing device to accomplish additional functions. The method further comprises calling a first SEAM by the computing device to receive the command and the data matrix, calling a second SEAM by the computing device to create a SDS extension in its volatile memory, and populating the one or more volatile extensions with the data from the data matrix. | 11-28-2013 |
20130326520 | MULTIPLE TOP LEVEL USER INTERFACE DISPLAYS - When a program invokes a synchronous user interface display, it is determined whether an asynchronous user interface (UI) display needs to be generated. If so, the user interface thread invoked by the synchronous program is blocked and the asynchronous UI display is generated and displayed so that it covers the synchronous display on the UI display screen. When the processing corresponding to the synchronous user interface display is complete, processing returns to the synchronous user interface display and the user interface thread invoked by the synchronous program is unblocked. | 12-05-2013 |
20130326521 | METHOD OF ASSOCIATING MULTIPLE APPLICATIONS - An example information-processing device includes: an acquisition unit configured to acquire application-related information relating to a first application program; and a presentation unit configured, when a second application program, by which a search can be performed, is activated after activation of the first application program, to present to a user the application-related information acquired by the acquisition unit as a candidate for an item to be searched for. | 12-05-2013 |
20130332928 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE MEDIUM - There is provided an information processing system including circuitry that acquires application definition information that defines a module used by an application, rewrites the application definition information depending on a circumstance of a device in which a process of the application is executed, and provides the rewritten application definition information to the device in which the process is executed. | 12-12-2013 |
20130332929 | Workflow Decision Management With Workflow Administration Capacities - Methods, systems, and products are provided for workflow decision management. Embodiments include maintaining a device state history; identifying a plurality of device usage patterns in dependence upon the device state history; identifying a plurality of workflow scenarios in dependence upon the device usage patterns; determining a workflow administration capacity in dependence upon the plurality of workflow scenarios; identifying a plurality of workflows in dependence upon the workflow scenario; executing the plurality of workflows in dependence upon the workflow administration capacity. | 12-12-2013 |
20130339959 | DYNAMIC MANAGEMENT OF A TRANSACTION RETRY INDICATION - Embodiments relate to dynamic management of a transaction retry indication. One aspect is a system that includes a transactional facility configured to support transactions that effectively delay committing stores to memory or results to an architectural state until transaction completion, and a processor configured to identify a transaction abort reason associated with an aborted transaction of an initiating program. Transaction success and transaction abort history are tracked. Based on determining by the processor that the transaction abort reason was caused by the initiating program, a retry indication is assigned based on a static mapping of the transaction abort reason. Based on determining by the processor that the transaction abort reason was not caused by the initiating program, the retry indication is assigned based on a retry process using the transaction abort reason, the transaction abort history, and a current processor configuration. | 12-19-2013 |
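The two-path retry decision in the abstract above (static mapping for program-caused aborts, history-based retry process otherwise) can be sketched like this; the abort-reason labels and the three-abort cutoff are hypothetical, not taken from the patent:

```python
# Hypothetical abort-reason labels; the static mapping records whether a
# program-caused abort is ever worth retrying.
STATIC_RETRY = {
    "explicit_abort": False,         # program requested the abort: no retry
    "restricted_instruction": False,
    "footprint_overflow": True,      # may fit within limits on a retry
}

def retry_indication(reason, caused_by_program, abort_history):
    if caused_by_program:
        return STATIC_RETRY.get(reason, False)   # static mapping path
    # not caused by the program: decide from the tracked abort history
    return abort_history.count(reason) < 3       # give up after 3 aborts

print(retry_indication("explicit_abort", True, []))
print(retry_indication("conflict", False, ["conflict", "conflict"]))
```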
20130346979 | PROFILING APPLICATION CODE TO IDENTIFY CODE PORTIONS FOR FPGA IMPLEMENTATION - Application code is analyzed to determine if a hardware library could accelerate its execution. In particular, application code can be analyzed to identify calls to application programming interfaces (APIs) or other functions that have a hardware library implementation. The code can be analyzed to identify the frequency of such calls. Information from the hardware library can indicate characteristics of the library, such as its size, power consumption and FPGA resource usage. Information about the execution pattern of the application code also can be useful. This information, along with information about other concurrent processes using the FPGA resources, can be used to select a hardware library to implement functions called in the application code. | 12-26-2013 |
20130346980 | Migrated Application Performance Comparisons Using Log Mapping - Mechanisms are provided for comparing the performance of applications. An application log record associated with a first application is identified. Mappings between the application logs and the underlying log records of the environments are made for both the source and the target environments. Performance measurements are made based on the application logs in both the source and target environments and are compared to each other by way of the mappings. A result of the comparison is output to thereby compare performance of the first application in the source environment with performance of a second application in the target environment. | 12-26-2013 |
20130346981 | TASK MANAGEMENT APPLICATION FOR MOBILE DEVICES - A task management application allows a user to organize tasks and display tasks to be completed by the user. In particular, the task management application allows a user to create a new task from a first application separate from the task management application, and provide information regarding the task to be completed. In addition, reference content is selected from the first application and included as part of the created task. The task, along with reference content selected from the first application, is displayed to the user for review. | 12-26-2013 |
20130346982 | GENERATING A PROGRAM - There is provided a method and system for generating a program. The method includes detecting a number of steps for performing a task on a computing device and detecting an example relating to each of the steps, wherein the example includes input data and corresponding output data relating to the step. The method also includes, for each example, determining a rule that transforms the input data to the corresponding output data based on cues including textual features within the input data and the corresponding output data. The method further includes generating a program for performing the task based on the rules. | 12-26-2013 |
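The example-driven program generation described above can be illustrated with a toy programming-by-example sketch; the candidate rules and their search order are assumptions standing in for the cue-based rule inference in the abstract:

```python
def infer_rule(examples):
    """Return the first candidate transformation consistent with all
    input/output examples (a toy stand-in for cue-based rule search)."""
    candidates = {
        "upper": str.upper,
        "lower": str.lower,
        "strip": str.strip,
        "first_word": lambda s: s.split()[0],
    }
    for name, fn in candidates.items():
        if all(fn(i) == o for i, o in examples):
            return name, fn
    return None

def generate_program(step_examples):
    """One inferred rule per step; the program applies them in order."""
    rules = [infer_rule(examples)[1] for examples in step_examples]
    def program(x):
        for rule in rules:
            x = rule(x)
        return x
    return program

# Step 1 examples teach "strip"; step 2 examples teach "upper".
prog = generate_program([[("  ab ", "ab")], [("ab", "AB")]])
print(prog("  hello "))  # HELLO
```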
20130346983 | COMPUTER SYSTEM, CONTROL SYSTEM, CONTROL METHOD AND CONTROL PROGRAM - A control system comprises a property estimating means for estimating a property of a task or data on a computer system to be controlled on the basis of property estimation source data, one or more control executing means for controlling to stop/operate system components of the computer system, changing a task arrangement, changing a data arrangement and changing a data structure according to issued control commands, a control strategy determining means for determining, as a control strategy to be executed, control processing contents of one or a combination of the four controls on the basis of an operation situation of the computer system in the future derived from the estimated property of the task or data, and a control command issuing means for issuing control commands to the control executing means according to the control processing contents determined by the control strategy determining means. | 12-26-2013 |
20140007102 | AUTOMATED UPDATE OF TIME BASED SELECTION | 01-02-2014 |
20140007103 | CONCURRENT EXECUTION OF A COMPUTER SOFTWARE APPLICATION ALONG MULTIPLE DECISION PATHS | 01-02-2014 |
20140007104 | Auto Detecting Shared Libraries and Creating A Virtual Scope Repository | 01-02-2014 |
20140007105 | METHOD AND APPARATUS FOR A TASK BASED OPERATING FRAMEWORK | 01-02-2014 |
20140007106 | Display and Terminate Running Applications | 01-02-2014 |
20140007107 | CONCURRENT EXECUTION OF A COMPUTER SOFTWARE APPLICATION ALONG MULTIPLE DECISION PATHS | 01-02-2014 |
20140007108 | METHOD AND APPARATUS FOR CONTROLLING POWER CONSUMPTION OF TURBO DECODER | 01-02-2014 |
20140007109 | STORAGE OF APPLICATION SPECIFIC PROFILES CORRELATING TO DOCUMENT VERSIONS | 01-02-2014 |
20140019975 | SERVICE TO RECOMMEND OPENING AN INFORMATION OBJECT BASED ON TASK SIMILARITY - The present description is directed to a technique to store one or more tasks, each task including one or more knowledge actions (KAs) that includes an action of one of a plurality of KA types being performed, determine a degree of similarity between each of one or more of the stored tasks and a current task, identify one of the stored tasks that most closely matches the current task based on the degree of similarity for each of the one or more stored tasks, identify one or more information objects that were open for the identified stored task, identify one or more information objects that are currently open for the current task, determine an additional information object that was open for the identified stored task but is not currently open for the current task, and provide a recommendation to a user to open the additional information object. | 01-16-2014 |
20140019976 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes an inclusion relation memory, a correspondence relation memory, a data type identifying unit, a software application identifying unit, and a display controller. The inclusion relation memory stores inclusion relation between multiple data types. The correspondence relation memory stores correspondence relation between the data types and software applications used in input of data. The data type identifying unit analyzes acquired information to identify a data type corresponding to an input area of the acquired information. The software application identifying unit identifies a software application corresponding to each data type included in the identified data type in accordance with the inclusion relation and the correspondence relation. The display controller displays a display part in which the identified software application is used in a display. | 01-16-2014 |
20140026136 | ANALYSIS ENGINE CONTROL DEVICE - An analysis engine control device | 01-23-2014 |
20140033202 | MEDIA RESPONSE TO SOCIAL ACTIONS - A method includes enabling accessing of content via a first device. The access of the content may be suspended in response to receiving a suspending signal associated with a second device coupled to the first device in a communication session. The access of the content may be resumed via at least one of the first device or a third device coupled to the first device in the communication session. | 01-30-2014 |
20140033203 | COMPUTER ARCHITECTURE WITH A HARDWARE ACCUMULATOR RESET - A processor with an accumulator. An event is selected to produce one or more selected events. A reset signal to the accumulator is generated responsive to the selected event. Responsive to the reset signal, the accumulator is reset to zero or another initial value while avoiding breaking pipelined execution of the processor. | 01-30-2014 |
20140033204 | Background Services Launcher For Dynamic Service Provisioning - A background service launcher is disclosed that provides dynamic access to services required by clients. Clients access services through a single unified pathname space and interface environment. When a client tries to open a service, if the service is running it will receive the request immediately; when the service is not running, the background service launcher, having previously registered the associated paths, receives the client request, starts the background service, and then redirects the client to it. The ability to dynamically launch services enables resources such as cloud-based filesystems to be dynamically mounted and made accessible to clients in the operating system. | 01-30-2014 |
20140033205 | PROVIDING HIGH AVAILABILITY TO A HYBRID APPLICATION SERVER ENVIRONMENT CONTAINING NON-JAVA CONTAINERS - A method, system and computer program product for providing high availability to a hybrid application server environment containing non-Java® containers. Each hybrid application server in the cluster includes a Java® container and a non-Java® container hosting Java® and non-Java® applications, respectively. Upon detecting the non-Java® container becoming unavailable (failing), an object, such as an MBean, identifies and deactivates those Java® application(s) that are dependent on the non-Java® application(s) deployed in the unavailable non-Java® container using dependency information stored in an application framework. The deactivated Java® application(s) are marked as being unavailable. A routing agent continues to send requests to those Java® application(s) that are not marked as being unavailable within that hybrid application server containing the unavailable non-Java® container. As a result of not deactivating the entire hybrid application server containing the unavailable non-Java® container, unimpacted applications continue to service requests thereby optimally using the resources. | 01-30-2014 |
20140033206 | MONITORING THREAD STARVATION - The present disclosure includes methods and systems for monitoring thread starvation. A number of embodiments include determining an amount of time a thread is not runnable, determining an amount of CPU consumption time for the thread, and determining an amount of thread starvation time based on the amount of time the thread is not runnable and the amount of CPU consumption time for the thread. | 01-30-2014 |
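One plausible reading of the starvation measurement above is that starvation time is the wall-clock interval minus both the not-runnable (blocked) time and the CPU consumption time, i.e. the time the thread was runnable but never scheduled. A minimal sketch under that assumption:

```python
def starvation_time(elapsed, not_runnable, cpu_time):
    """Time the thread was runnable but not given the CPU: wall time
    minus blocked time minus time actually spent executing.
    (An assumed formula, not necessarily the patented one.)"""
    return max(0.0, elapsed - not_runnable - cpu_time)

# 10 s elapsed, 4 s blocked, 3.5 s on the CPU -> 2.5 s starved
print(starvation_time(10.0, 4.0, 3.5))
```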
20140033207 | System and Method for Managing P-States and C-States of a System - A current value of a load indicator of a system is determined, by an application entity, based on one or more of a central processor unit utilization measure of the system, a memory utilization measure of the system, a system-internal resources utilization measure, an input/output utilization measure of the system, and a secondary storage utilization measure of the system, wherein the system is associated with a plurality of P-states and a plurality of C-states. An operating mode of the system is determined, by the application entity, based on the current value of the load indicator, wherein the operating mode comprises a P-state selected from the plurality of P-states and a C-state selected from the plurality of C-states. The system is operated in accordance with the operating mode. A predictive load map associating respective time periods and respective operating modes may be generated and adaptively adjusted. | 01-30-2014 |
20140033208 | METHOD AND DEVICE FOR LOADING APPLICATION PROGRAM - Disclosed in the present disclosure is an application loading method, including: an M2M terminal module starts up an application manager after being powered up and initialized; the application manager receives a load application instruction and creates a load thread; and the load thread loads an application according to the load application instruction and ends the load thread after the execution of the application is completed. Also disclosed in the present disclosure is an application loading device. With the method and device of the present disclosure, compilation efficiency is improved, terminal maintenance is facilitated, and the service function is realized by executing an independent application. | 01-30-2014 |
20140040897 | Function Evaluation using Lightweight Process Snapshots - A debugger creates a lightweight process snapshot of a debuggee target process and performs in-process or function evaluation (func-eval) inspection against the copy. This allows most state in the debuggee process to stay intact because changes made by the func-eval are local to the process snapshot. Debugger operations that are too destructive to the original debuggee process can be performed on the process snapshot without threatening the real process. Process snapshots allow the debugger to perform a func-eval while isolating the debuggee process and not losing the actual state of the original debuggee process. A new process snapshot of the debuggee process is created when the current snapshot is corrupt due to a func-eval side effect. The debugger may also use a lightweight machine snapshot of the host debuggee machine and perform func-evals against that machine snapshot to further isolate kernel and other side effects. | 02-06-2014 |
20140047445 | SYSTEM AND METHOD FOR CHECKING THE CONFORMANCE OF THE BEHAVIOR OF A PROCESS - A method and apparatus for checking the conformance between the modeled behavior of a business process and the observed behavior of the system in terms of event logs. The method includes generating a behaviorally equivalent CSP description of the business process and a trace-equivalent CSP description of the event logs. Further, the generation of CSP processes for a business process includes segregating a business process model into a set of workflow patterns with connectivity between the workflow patterns, generating a CSP process corresponding to each workflow pattern, composing the CSP processes in parallel with connectivity between the CSP processes, and synchronizing the CSP processes on common activities of the CSP processes. Lastly, the generation of a CSP description of the event log is performed by constructing a CSP process for each trace in the event log and combining the CSP descriptions using the external choice operator. | 02-13-2014 |
20140047446 | TECHNIQUES FOR SWITCHING THREADS WITHIN ROUTINES - Various technologies and techniques are disclosed for switching threads within routines. A controller routine receives a request from an originating routine to execute a coroutine, and executes the coroutine on an initial thread. The controller routine receives a response back from the coroutine when the coroutine exits based upon a return statement. Upon return, the coroutine indicates a subsequent thread that the coroutine should be executed on when the coroutine is executed a subsequent time. The controller routine executes the coroutine the subsequent time on the subsequent thread. The coroutine picks up execution at a line of code following the return statement. Multiple return statements can be included in the coroutine, and the threads can be switched multiple times using this same approach. Graphical user interface logic and worker thread logic can be co-mingled into a single routine. | 02-13-2014 |
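The controller/coroutine hand-off above can be sketched with Python generators, where each `yield` names the thread the following code segment should run on; the controller shown here blocks with `join` for simplicity, and all names are illustrative rather than the patented mechanism:

```python
import threading

def controller(make_coro):
    results = []
    coro = make_coro(results)
    target = next(coro)            # first segment runs on the calling thread
    while target is not None:
        def step():
            nonlocal target
            try:
                target = coro.send(None)   # run next segment, get next thread
            except StopIteration:
                target = None
        t = threading.Thread(target=step, name=target)
        t.start()
        t.join()                   # real code would hand off, not block
    return results

def make_coro(results):
    # each yield names the thread the following segment should run on
    results.append(("seg1", threading.current_thread().name))
    yield "worker-A"
    results.append(("seg2", threading.current_thread().name))
    yield "worker-B"
    results.append(("seg3", threading.current_thread().name))

print(controller(make_coro))
```

Execution resumes at the line after each `yield` on the newly named thread, which is how GUI logic and worker-thread logic could be co-mingled in a single routine.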
20140053157 | ASYNCHRONOUS EXECUTION FLOW - Tasks can be developed and maintained with synchronous code while concurrently being asynchronously executed, e.g., during time consuming operations. The tasks need not include asynchronous flow callbacks within the task framework. The callbacks can be transparently incorporated within the execution flow utilizing a callback wrapper(s) which transparently maintains and manages the necessary callbacks for asynchronous execution of the tasks. Thus a generic solution can be easily and effectively implemented for, e.g., production/request work item processing, that can be applied to both backend services and/or client software. | 02-20-2014 |
20140053158 | COMPARING REDUNDANCY MODELS FOR DETERMINATION OF AN AVAILABILITY MANAGEMENT FRAMEWORK (AMF) CONFIGURATION AND RUNTIME ASSIGNMENT OF A HIGH AVAILABILITY SYSTEM - Redundancy models are compared to determine or assist in determining an Availability Management Framework (AMF) configuration of a highly available system based on quantified service availability of the system. Each redundancy model defines assignments of service-instances to service-units. An analysis model of the system is constructed to capture recovery behaviors of the system for each redundancy model. Service availability of the system is quantified based on the analysis model under one or more scenarios including failure scenarios and recovery scenarios. Based on a comparison of service availability levels provided by the redundancy models and subject to constraints of the HA system, one of the redundancy models is identified that provides a required level of service availability for the system. | 02-20-2014 |
20140059548 | PROCESSOR CLUSTER MIGRATION TECHNIQUES - Embodiments of the present technology provide for migrating processes executing on any one of a plurality of cores in a multi-core cluster to a core of a separate cluster without first having to transfer the processes to a predetermined core of the multi-core cluster. Similarly, the processes may be transferred from the core of the separate cluster to the given core of the multi-core cluster. | 02-27-2014 |
20140059549 | APPLICATION RECOGNITION SYSTEM AND METHOD - The disclosure provides an application recognition system and an application recognition method for an electronic device. The method includes determining whether or not there are one or more hidden running applications, where a hidden application is associated with an inactive window beneath the active window of the currently running application on a screen of the electronic device. If there are one or more hidden running applications, the system acquires a sound control instruction associated with each of the hidden running applications and controls the sound output unit to output the preset sound every preset time period. | 02-27-2014 |
20140059550 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - Temperature information of each of a plurality of memories of a WideIO memory device is acquired. In a case that the execution of a function is designated, a memory to be used by a function module corresponding to the function is determined based on the memory access amount of the function module corresponding to the function and the acquired temperature information of the plurality of memories. | 02-27-2014 |
20140068616 | DATE AND TIME FORMAT GENERATION METHOD, PROGRAM, AND SYSTEM - Computing a date and time format includes obtaining a UT value of a reference time; computing intermediate data including year, month, day, hour, minute, and second, from the UT value of the reference time; computing a difference between a conversion target UT value and the UT value of the reference time using a processor; computing values of hour, minute, and second, based on the difference between the UT values; and generating a character string format representing year, month, day, hour, minute, and second, by combining the intermediate data and the values of hour, minute, and second. | 03-06-2014 |
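The reference-time shortcut described above can be sketched for the same-day case: keep the precomputed year/month/day of the reference midnight and derive only hh:mm:ss from the UT difference. Function name and format string are assumptions for illustration:

```python
def format_datetime(ref_midnight_ut, year, month, day, target_ut):
    """Fast same-day formatting: reuse the precomputed date of the
    reference midnight and derive only hh:mm:ss from the UT difference."""
    diff = target_ut - ref_midnight_ut      # seconds since that midnight
    assert 0 <= diff < 86400, "fast path only holds within the same day"
    h, rem = divmod(diff, 3600)
    m, s = divmod(rem, 60)
    return f"{year:04d}-{month:02d}-{day:02d} {h:02d}:{m:02d}:{s:02d}"

# 45,296 s past the epoch midnight is 12:34:56 on 1970-01-01
print(format_datetime(0, 1970, 1, 1, 45_296))
```

The point of the technique is that the expensive UT-to-calendar conversion runs once for the reference time; each subsequent timestamp costs only a subtraction and two `divmod` calls.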
20140082622 | METHOD AND SYSTEM FOR EXECUTING APPLICATION, AND DEVICE AND RECORDING MEDIUM THEREOF - Provided are a method and system for executing an application installed in a device, the device and a recording medium thereof. The method includes: receiving a mapping request signal for requesting a mapping between application execution information and a short run indicator; acquiring the application execution information from an execution starting point of the application up to the receiving of the mapping request signal; generating mapping information comprising a mapping of the acquired application execution information with the short run indicator; and storing the mapping information in at least one of the device and an external device. | 03-20-2014 |
20140089926 | BUSINESS PROCESS MODEL ANALYZER AND RUNTIME SELECTOR - In a method for determining appropriate runtime environments for execution of a process model, a computer receives a process model. The process model includes a plurality of activities, wherein two activities are linked by a relationship. The computer determines that the two activities linked by a relationship match a process pattern. The computer determines one or more runtime environments for execution of the process model, wherein each of the one or more runtime environments is capable of executing the process pattern. | 03-27-2014 |
20140089927 | METHOD AND APPARATUS FOR WORKFLOW VALIDATION AND EXECUTION - A computer program product comprising: a non-transitory computer readable medium; and a description of a first block comprising: a definition of one or more output port groups each comprising one or more output ports; a definition of two or more input ports, the input ports receive object streams of identical length; one or more instructions for processing input data received in the input ports and for outputting processed data in the output port groups, wherein the instructions are operative to output a same number of output objects to each output port in a same output port group, whereby the output ports of the output port group are operative to output objects stream of identical length, and wherein the instructions are operative to receive a same number of input objects from each input port, whereby the input ports are operative to receive object streams of identical length; and an indication of whether there is a constant ratio between a number of items in input streams received by the first block and a number of items in output streams outputted by the first block; and wherein said description of a first block is stored on said non-transitory computer readable medium. | 03-27-2014 |
20140101659 | METHOD AND SYSTEM OF KNOWLEDGE TRANSFER BETWEEN USERS OF A SOFTWARE APPLICATION - Knowledge transfer between users of a software application. At least some of the example embodiments are methods including: tracking steps performed by a plurality of users of a software application, and the tracking creates tracked steps; identifying a first task as a first series of steps of the tracked steps, and identifying a second task as a second series of steps of the tracked steps, the second series of steps distinct from the first series of steps; and providing, on a display device associated with the software application, an indication of the first series of steps of the first task and the second series of steps of the second task, the providing to a later user interacting with the software application. | 04-10-2014 |
20140109094 | Automated Techniques to Deploy a Converged Infrastructure Having Unknown Initial Component Configurations - A technique to adaptively configure components of a converged infrastructure (CI). Component configuration information is collected from and representative of operating storage, compute, and network components of the CI. A pod descriptor is constructed from the collected information. The pod descriptor includes operating storage, compute, and network component configuration definitions for the CI based on the collected component configuration information. A package specification unit is generated based on the component configuration definitions of the pod descriptor. The package specification unit includes tasks that, when executed, automatically inventory, assess, and configure targeted ones of the CI components. The technique executes the tasks in the package specification unit to perform corresponding operations on targeted ones of the CI components. | 04-17-2014 |
20140123143 | TRANSACTION LOAD REDUCTION FOR PROCESS COMPLETION - The present disclosure involves systems, software, and computer implemented methods for reducing transaction load for process instance completion. One process includes identifying an end event triggered by an initial token of a process instance, determining a type of the end event, performing a search for additional tokens associated with the process instance that are distinct from the initial token, and performing a termination action based on the type of end event and a number of additional tokens identified in the search. The end event type may be non-terminating or terminating, and the end event type can determine the termination action to be performed. If the end event is non-terminating, then the termination action includes joining each finalization action for each process instance variable to a completion transaction if no additional tokens are found and executing the completion transaction to terminate the process instance. | 05-01-2014 |
20140130049 | MANAGING PROCESSES IN A REPOSITORY - A method of managing a plurality of processes in a repository of a computer system is disclosed. For example, the method includes forming a model associated with differences among the plurality of processes. The model associated with differences includes one or more features for expressing the differences. The method further includes forming a model of priority among the one or more features, and organizing the plurality of processes according to the model associated with differences and according to the model of priority. At least one of the one or more features is a semantic feature. One or more of the forming of the model associated with differences, the forming of the model of priority and the organizing of the plurality of processes are implemented on a processor device. | 05-08-2014 |
20140137119 | MULTI-CORE PROCESSING IN MEMORY - A memory device includes but is not limited to a substrate, a non-volatile memory array integrated on the substrate, and processing logic integrated with the non-volatile memory array on the substrate. The processing logic is operable to perform at least one general purpose processing function associated with the non-volatile memory array. | 05-15-2014 |
20140143779 | CONTEXTUAL ROUTING OF DATA ELEMENTS - A method for processing data includes receiving a data element in a first processing node, the data element including data, reading a first control word in the data element and performing a first processing task with the data with a processing portion of the first processing node, the first processing task associated with the first control word, adding a first sub-header associated with the first processing task to the data element, adding metadata associated with the first processing task to the data element, removing the first control word from the data element, determining whether a second processing task should be performed with the data, and adding a second control word to the data element responsive to determining that a second processing task should be performed with the data. | 05-22-2014 |
20140143780 | PRIORITY-ASSIGNMENT INTERFACE TO ENHANCE APPROXIMATE COMPUTING - A system and method are provided for enhancing approximate computing by a computer system. In one example, an interface is provided comprising a variable-identifier module and a bit-priority module. The variable-identifier module is configured to identify one or more variables of data that are to be processed by the computer system with approximate precision. Approximate precision is a precision level at which a hardware device does not guarantee full data-correctness for the one or more variables. The bit-priority module is configured to assign bit-priorities to the one or more variables. The bit-priorities include relative levels of importance among bits of each of the one or more variables. The relative levels of importance include at least high-priority bits and low-priority bits. | 05-22-2014 |
20140143781 | METHODS AND SYSTEMS TO IDENTIFY AND MIGRATE THREADS AMONG SYSTEM NODES BASED ON SYSTEM PERFORMANCE METRICS - Methods and systems to identify and migrate threads among system nodes based on system performance metrics. An example method disclosed herein includes sampling a performance metric of a computer program thread, the computer program thread executing on a home node of a computer system having multiple nodes, and determining whether the performance metric exceeds a threshold value. The method also includes identifying a remote node associated with a remote memory if the threshold value is exceeded, the remote memory being accessed by the computer program thread, and identifying the computer program thread as a candidate for migration from the home node to the remote node if the threshold value is exceeded. In this way, a computer program thread that frequently accesses a remote memory can be migrated from a home node to a remote node associated with the remote memory to reduce the latency associated with memory accesses performed by the computer program thread and thereby improve system performance. | 05-22-2014 |
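The threshold-and-migrate policy above can be sketched as follows; the sampled-metric structure, field names, and threshold semantics are assumptions chosen for illustration:

```python
def migration_candidates(threads, threshold):
    """threads: sampled per-thread metrics, including the fraction of
    memory accesses that hit remote nodes and a per-node access count.
    A thread whose remote-access rate exceeds the threshold becomes a
    candidate for migration to the node it accesses most."""
    candidates = []
    for t in threads:
        if t["remote_access_rate"] > threshold:
            target = max(t["accesses_by_node"], key=t["accesses_by_node"].get)
            if target != t["home_node"]:
                candidates.append((t["tid"], t["home_node"], target))
    return candidates

threads = [
    {"tid": 1, "home_node": 0, "remote_access_rate": 0.8,
     "accesses_by_node": {0: 100, 1: 900}},   # mostly hits node 1: migrate
    {"tid": 2, "home_node": 0, "remote_access_rate": 0.1,
     "accesses_by_node": {0: 950, 1: 50}},    # mostly local: stay put
]
print(migration_candidates(threads, 0.5))  # [(1, 0, 1)]
```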
20140143782 | COMPUTERIZED INFRASTRUCTURE MANAGEMENT SYSTEM AND METHOD - An automation framework that bridges the gap between completely manual work and complex, maintenance-hungry tools. This automation framework enables business-driven automated system administration capabilities and focuses on independent task management between business needs and system administrators in order to model automation in line with the requirements of IT operations. In some embodiments, this framework minimizes manual effort, delegates complex tasks to junior resources without exposing critical systems, and incorporates governance. | 05-22-2014 |
20140165064 | PROCESSING METHOD, PROCESSING APPARATUS, AND RECORDING MEDIUM - A processing method includes: collecting processing information indicating a processing state of an application executed by an information processing device and operational information indicating operational states of processing elements that are identified on the basis of configuration information stored in a storage unit and are involved in the execution of the application; determining whether or not there is a correlation between the processing state and an operational state of each of the processing elements on the basis of the processing information and the operational information when a delay of a process of the application is detected on the basis of the processing information; and extracting, from among the processing elements, a processing element of which an operational state has a correlation with the processing state on the basis of the determination. | 06-12-2014 |
20140165065 | PROCESS REQUESTING APPARATUS, METHOD OF CONTROLLING PROCESS REQUESTING APPARATUS AND RECORDING MEDIUM FOR PROCESS REQUESTING APPARATUS - A process-requesting apparatus for requesting a process-performing apparatus to perform a predefined process and querying a progress status of the predefined process includes a progress status obtaining unit for obtaining, as a response to the query about the progress status to the process-performing apparatus, the progress status from the process-performing apparatus; a completion determining unit for determining whether the predefined process has been completed based on the obtained progress status; a time interval determining unit for determining a time interval from the last time the process-requesting apparatus queried the progress status to the next time the process-requesting apparatus queries the progress status according to an elapsed time from the start of the predefined process; and a progress status querying unit for, in the case where the predefined process has not been completed, querying the process-performing apparatus about the progress status at the determined time interval. | 06-12-2014 |
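The adaptive query interval in the abstract above (a polling interval derived from the elapsed time since the process started) can be sketched as follows. The growth formula and function names are illustrative assumptions, not the patent's method.

```python
def next_poll_interval(elapsed_seconds, base=1.0, cap=60.0):
    """Grow the wait before the next progress query with elapsed time,
    so early progress is polled often and long-running work less often."""
    return min(base * (1 + elapsed_seconds / 10.0), cap)

def poll_until_done(get_progress, clock, sleep):
    """Query progress until completion, waiting the adaptive interval
    between queries (progress is a percentage, 100 means done)."""
    start = clock()
    while True:
        if get_progress() >= 100:
            return
        sleep(next_poll_interval(clock() - start))

# Simulated usage with a fake clock: progress advances as time is slept away.
state = {"now": 0.0, "progress": 0, "queries": 0}
def clock(): return state["now"]
def sleep(s): state["now"] += s; state["progress"] += 25
def get_progress(): state["queries"] += 1; return state["progress"]

poll_until_done(get_progress, clock, sleep)
```

The point of the design is to avoid flooding the process-performing apparatus with status queries once it is clear the work will take a while.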
20140165066 | OPTIMIZED DATACENTER MANAGEMENT BY CENTRALIZED TASK EXECUTION THROUGH DEPENDENCY INVERSION - A Datacenter Management Service (DMS) is provided as a platform designed to automate datacenter management tasks that are performed across multiple technology silos and datacenter servers or collections of servers. The infrastructure to perform the automation is provided by integrating heterogeneous task providers and implementations into a set of standardized adapters through dependency inversion. A platform automating datacenter management tasks may include three main components: integration of adapters into an interface allowing a common interface for datacenter task execution, an execution platform that works against the adapters, and implementation of the adapters for a given type of datacenter management task. | 06-12-2014 |
20140173602 | Matching Opportunity to Context - A task application for automatic task management based on content and context awareness is provided. As task items are inputted into the task application, the task items may be parsed for context data (e.g., time data, location data, people data, etc.) and associated with the task item. Additionally, context data may be input manually by a user. Task items may be stored in a “now,” “later,” “someday,” or “done” contextual task list. As context changes (e.g., time, location, activity, people, etc.), task items with correlating context data may be prioritized. A notification may be presented to the user to alert the user of an upcoming or present opportunity to achieve or complete a task item. Accordingly, a user may be provided with a list of task items that may be relevant to the user according to context. | 06-19-2014 |
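A minimal sketch of the context-matching idea above: task items carry parsed context data, and when the current context changes, items whose context correlates are surfaced first. The field names (`context`, `title`) and the counting-based score are illustrative assumptions.

```python
def matching_tasks(tasks, current):
    """Return task titles whose stored context data correlates with the
    current context, strongest matches first."""
    hits = []
    for t in tasks:
        # score: number of context fields that match the current context
        score = sum(1 for k, v in t["context"].items() if current.get(k) == v)
        if score:
            hits.append((score, t["title"]))
    hits.sort(key=lambda h: -h[0])   # prioritize stronger matches; stable for ties
    return [title for _, title in hits]

tasks = [
    {"title": "buy milk",   "context": {"location": "grocery"}},
    {"title": "call Alice", "context": {"location": "office", "time": "morning"}},
    {"title": "file taxes", "context": {"time": "evening"}},
]
```

A notification layer would then alert the user when the top of this list becomes non-empty.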
20140173603 | MULTIPLE STEP NON-DETERMINISTIC FINITE AUTOMATON MATCHING - Disclosed is a hardware NFA cell array used to find matches to regular expressions or other rules in an input symbol stream scans multiple symbols per clock cycle by comparing multiple symbol classes against multiple input symbols per cycle in parallel, signaling bundles of multiple transitions from parent cells to child cells and updating NFA state status by multiple steps. To retain high frequency operation, the cell array will not resolve transition chains from a first cell to a second cell to a third cell in a single cycle. When a chain is required, the cell array takes fewer steps in one cycle to break the chain into separate cycles. To detect multi-transition chains, each cell compares symbol classes to future symbols in advance and back-communicates future match positions to parent cells in the array as launch hazards. | 06-19-2014 |
20140173604 | CONDITIONALLY UPDATING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for conditionally updating shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a broadcast reduction operation header. The broadcast reduction operation header includes an SVD key and a first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD, in response to receiving the broadcast reduction operation header. Embodiments also include the runtime optimizer determining that the first SVD address does not match the second SVD address and updating the remote address cache with the first SVD address. | 06-19-2014 |
20140173605 | MANAGING RESOURCE POOLS FOR DEADLOCK AVOIDANCE - In an illustrative embodiment of a method for managing a resource pool for deadlock avoidance, a computer receives a request from a thread for a connection from the resource pool, and determines whether the thread currently has at least one connection from the resource pool. Responsive to a determination that the thread currently has at least one connection from the resource pool, a new concurrent connection is allocated from a reserved partition of the resource pool and the connection is returned to the thread. | 06-19-2014 |
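The reserved-partition idea above can be sketched as a tiny pool class, under the assumption that nested requests (a thread asking for a second connection while holding one) are served from a partition held back from first-time requests, so they cannot be starved into a deadlock. The class and field names are illustrative.

```python
class PartitionedPool:
    """Connection pool with a reserved partition for nested requests, so a
    thread that already holds a connection can always make progress."""

    def __init__(self, normal=3, reserved=1):
        self.normal = normal        # connections available to first requests
        self.reserved = reserved    # connections held back for nested requests
        self.held = {}              # thread id -> connections currently held

    def acquire(self, tid):
        if self.held.get(tid, 0) > 0:
            # thread already holds a connection: serve from the reserved partition
            if self.reserved == 0:
                raise RuntimeError("reserved partition exhausted")
            self.reserved -= 1
        else:
            if self.normal == 0:
                raise RuntimeError("normal partition exhausted")
            self.normal -= 1
        self.held[tid] = self.held.get(tid, 0) + 1
```

Without the reserved partition, N threads each holding one connection from an exhausted pool of size N would block forever waiting for a second one.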
20140181819 | VIRTUALIZATION DETECTION - A baseline set of graph information associated with a predefined execution environment of a program may be obtained, the baseline set associated with a baseline time interval. The predefined execution environment includes predefined environment objects, at least a portion of which are dynamically mutatable. A second set of graph information associated with the predefined execution environment may be obtained, the second set associated with a second time interval that is later in time than the baseline time interval. The baseline set and the second set may be compared to detect virtualization activity associated with a dynamic runtime. | 06-26-2014 |
20140181820 | FACILITATING CUSTOMIZATION OF A VIRTUAL SPACE BASED ON ACCESSIBLE VIRTUAL ITEMS - Virtual items may be unlocked in a virtual space responsive to physical token detection. A common virtual item repository may be provided in the virtual space. Once unlocked, a given virtual item may be accessible to multiple characters in the virtual space via the virtual item repository. Customization of a virtual space may be facilitated. The customization may be based on the virtual items accessible via the virtual item repository. | 06-26-2014 |
20140196043 | SYSTEM AND METHOD FOR RE-FACTORIZING A SQUARE MATRIX INTO LOWER AND UPPER TRIANGULAR MATRICES ON A PARALLEL PROCESSOR - A system and method for re-factorizing a square input matrix on a parallel processor. In one embodiment, the system includes: (1) a matrix generator operable to generate an intermediate matrix by embedding a permuted form of the input matrix in a zeroed-out sparsity pattern of a combination of lower and upper triangular matrices resulting from a prior LU factorization of a previous matrix having the same sparsity pattern, fill-in-minimizing reordering, and pivoting strategy as the input matrix and (2) a re-factorizer associated with the matrix generator and operable to use parallel threads to apply an incomplete-LU factorization with zero fill-in on the intermediate matrix. | 07-10-2014 |
20140201744 | COMPUTING REGRESSION MODELS - Provided are techniques for computing a task result. A processing data set of records is created, wherein each of the records contains data specific to a sub-task from a set of actual sub-tasks and contains a reference to data shared by the set of actual sub-tasks, and wherein a number of the records is equivalent to a number of the actual sub-tasks in the set of actual sub-tasks. With each mapper in a set of mappers, one of the records of the processing data set is received and an assigned sub-task is executed using the received one of the records to generate output. With a single reducer, the output from each mapper in the set of mappers is reduced to determine a task result. | 07-17-2014 |
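The record-per-sub-task pattern above can be sketched as a tiny map-reduce pipeline: each record carries sub-task-specific data plus a reference to data shared by the whole set of sub-tasks, one mapper runs per record, and a single reducer combines the outputs. The record fields and "model" names are illustrative assumptions.

```python
shared = {"scale": 2}   # data shared by the whole set of sub-tasks (one copy)

# one record per sub-task: sub-task-specific data plus a reference to the shared data
records = [
    {"sub_task": "model_a", "xs": [1, 2, 3], "shared": shared},
    {"sub_task": "model_b", "xs": [4, 5],    "shared": shared},
]

def mapper(record):
    # each mapper receives one record and executes its assigned sub-task
    return (record["sub_task"], sum(record["xs"]) * record["shared"]["scale"])

def reducer(mapped):
    # a single reducer combines every mapper's output into the task result
    return dict(mapped)

result = reducer(mapper(r) for r in records)
```

Sharing one copy of the common data by reference, rather than duplicating it into every record, is what keeps the number of records equal to the number of sub-tasks without blowing up record size.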
20140201745 | METHOD AND APPARATUS FOR EXECUTING APPLICATION PROGRAM IN ELECTRONIC DEVICE - An apparatus and a method for displaying application program information in an electronic device are provided. The method for displaying the application program information includes executing a first application program, determining at least one application program capable of being executed after the first application program, and displaying, on a display unit, information on the at least one application program. | 07-17-2014 |
20140201746 | PARALLEL RUNTIME EXECUTION ON MULTIPLE PROCESSORS - A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in another CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources. | 07-17-2014 |
20140208323 | PREVENTING UNSAFE SHARING THROUGH CONFINEMENT OF MUTABLE CAPTURED VARIABLES - The disclosed embodiments provide a system that facilitates the development and execution of a software program. During operation, the system provides a mechanism for restricting a variable to a runtime context in the software program. Next, the system identifies the runtime context during execution of the software program. Finally, the system uses the mechanism to prevent incorrect execution of the software program by ensuring that a closure capturing the variable executes within the identified runtime context. | 07-24-2014 |
20140215468 | THINNING OPERATING SYSTEMS - Thinning operating systems can include monitoring a number of functionalities of an operating system (OS), the number of functionalities of the operating system being provided by a number of computing components loaded thereon. Thinning operating systems can include automatically identifying an undesired functionality of the number of functionalities during runtime and removing from the operating system at least one of the number of computing components providing the undesired functionality as a result of the automatic identification to thin the OS. | 07-31-2014 |
20140215469 | METHOD AND A COMPUTER PROGRAM PRODUCT FOR CONTROLLING THE EXECUTION OF AT LEAST ONE APPLICATION ON OR FOR A MOBILE ELECTRONIC DEVICE, AND A COMPUTER - A method for controlling the execution of an application with a mobile electronic device ( | 07-31-2014 |
20140245305 | Systems and Methods for Multi-Tenancy Data Processing - Systems and methods are provided for rotating real-time execution of data models using an application instance. Input data are received for real-time execution of a plurality of data models. An application instance is assigned for executing the plurality of data models simultaneously. Resources of the application instance are automatically distributed based on a set of rotation factors. The plurality of data models are executed simultaneously using one or more data processors. Execution results for one or more of the plurality of data models are output. | 08-28-2014 |
20140245306 | Adaptive Observation of Behavioral Features On A Heterogeneous Platform - Methods, devices and systems for monitoring behaviors of a mobile computing device include observing in a non-master processing core a portion of a mobile device behavior that is relevant to the non-master processing core, generating a behavior signature that describes the observed portion of the mobile device behavior, and sending the generated behavior signature to a master processing core. The master processing core combines two or more behavior signatures received from the non-master processing cores to generate a global behavior vector, which may be used by an analyzer module to determine whether a distributed software application is not benign. | 08-28-2014 |
20140282550 | Meter Reading Data Validation - A meter data management (MDM) system processes imported blocks of utility data collected from a plurality of utility meters, sensors, and/or control devices by using independent parallel pipelines associated with differing processing requirements of the data blocks. The MDM system determines processing requirements for each of the imported data blocks, selects one of the pipelines that matches the determined processing requirements for each of the imported data blocks, and directs the data blocks to the selected one of the pipelines for processing. The pipelines may include a validation pipeline for validation processing, an estimation pipeline for estimation processing and a work item pipeline for work item processing. | 09-18-2014 |
20140282551 | NETWORK VIRTUALIZATION VIA I/O INTERFACE - Network virtualization can be provided via network I/O interfaces, which may be partially or fully aware of the virtualization. Network virtualization can be reflected in the use of a first header and an additional header(s) for a data frame. A partially-aware transmit example can gather together data frame components, including its additional header(s), via a work queue entry. A fully-aware transmit example can refer to a transmit-side table to gather its additional header(s) and can track the state of its additional header(s) stored in a cache. A partially-aware receive example can handle an additional header(s), e.g., by writing it to host-memory. A fully-aware receive example can determine values from multiple headers (including its additional header(s)) to further determine where to write a data payload to host-memory. The examples can relieve a host's hypervisor from performing all the network virtualization processing. The fully-aware examples can incorporate IOV techniques. | 09-18-2014 |
20140282552 | SOFTWARE INTERFACE FOR A SPECIALIZED HARDWARE DEVICE - Embodiments of the disclosure include methods, systems and computer program products for performing a data manipulation function. The method includes receiving, by a processor, a request from an application to perform the data manipulation function and, based on determining that a specialized hardware device configured to perform the data manipulation function is available, determining if executing the request on the specialized hardware device is viable. Based on determining that the request is viable to execute on the specialized hardware device, the method includes executing the request on the specialized hardware device. | 09-18-2014 |
20140282553 | META-APPLICATION MANAGEMENT IN A MULTITASKING ENVIRONMENT - Techniques are disclosed to identify concurrently used applications based on application state. Upon determining that usage of a plurality of applications, including a first state of a first application of the plurality of applications, satisfies a criterion for identifying concurrently used applications, the plurality of applications is designated as a first meta-application having a uniquely identifiable set of concurrently used applications. The first meta-application has an associated criterion for launching the first meta-application. Upon determining that the criterion for launching the first meta-application is satisfied, at least one of the plurality of applications is programmatically invoked. | 09-18-2014 |
20140282554 | COMMUNICATION APPARATUS AND COMMUNICATION METHOD - In a communication apparatus, a communication processor rebuilds, with switching of communication systems, a communication bearer to perform communication. An application processor outputs, when background communication occurs or a display unit is shifted from an off state to an on state while notification from the communication processor is stopped, a request signal to the communication processor. The application processor starts the background communication based on information of a latest communication bearer output from the communication processor in response to the request signal. | 09-18-2014 |
20140282555 | Ensuring Determinism During Programmatic Replay in a Virtual Machine - Aspects of an application program's execution which might be subject to non-determinism are performed in a deterministic manner while the application program's execution is being recorded in a virtual machine environment so that the application program's behavior, when played back in that virtual machine environment, will duplicate the behavior that the application program exhibited when originally executed and recorded. Techniques disclosed herein take advantage of the recognition that only minimal data needs to be recorded in relation to the execution of deterministic operations, which actually can be repeated “verbatim” during replay, and that more highly detailed data should be recorded only in relation to non-deterministic operations, so that those non-deterministic operations can be deterministically simulated (rather than attempting to re-execute those operations under circumstances where the outcome of the re-execution might differ) based on the detailed data during replay. | 09-18-2014 |
20140289731 | STARTING A PROCESS - According to an example, a method includes: before a first process exits, a kernel receives a connection holding request carrying a File Descriptor (FD) transmitted by the first process; with respect to the FD carried by the connection holding request, the kernel increases a reference count of a file object corresponding to the FD and puts the file object into a cache; the kernel returns cache position information to the first process, such that the first process puts a corresponding relationship between the cache position information and identifier information of a communication connection pointed to by the FD in a predefined storage area; when a second process starts, the kernel receives an FD obtaining request carrying the cache position information transmitted by the second process, reads the file object from the cache, assigns a new FD to the file object and returns the new FD to the second process. | 09-25-2014 |
20140298341 | CLUSTERING BASED PROCESS DEVIATION DETECTION - Systems and methods for data analysis include correlating event data to provide process instances. The process instances are clustered, using a processor, by representing the process instances as strings and determining distances between strings to form a plurality of clusters. One or more metrics are computed on the plurality of clusters to monitor deviation of the event data. | 10-02-2014 |
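The string-distance clustering above can be sketched as follows, under the assumption that each correlated process instance is encoded as a string of event codes. The patent does not specify the metric; this sketch uses plain edit distance with threshold-based single-link grouping.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j]: deletion, dp[j-1]: insertion, prev: substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def cluster(instances, threshold):
    """Greedy single-link clustering: join an instance to the first cluster
    containing a member within the distance threshold."""
    clusters = []
    for s in instances:
        for c in clusters:
            if any(edit_distance(s, t) <= threshold for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters
```

Deviation monitoring would then compute metrics per cluster, e.g. flagging instances that land in small, distant clusters as anomalous process executions.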
20140304706 | METHOD AND DEVICE FOR SETTING STATUS OF APPLICATION - A method for a device to set a status of an application, including: acquiring status setting permission information of the application; determining if the acquired status setting permission information indicates that it is permitted to set the status of the application; and setting the status of the application as an inactive status, if it is determined that the acquired status setting permission information indicates that it is permitted to set the status of the application. | 10-09-2014 |
20140304707 | METHOD AND APPARATUS FOR CREATING PARAMETER SET - A first set of parameters for describing additional items of information that are needed to process a data block of a data stream at a processing time are compiled using a processing unit. The method can be used in distribution services where a user wishes to access the data block at different times. | 10-09-2014 |
20140310711 | METHOD FOR DETERMINING PERFORMANCE OF PROCESS OF APPLICATION AND COMPUTER SYSTEM - A method and a computer system for determining performance of a process of an application are provided for application in the field of computer technology. When the process of the application is started, an apparatus for determining the performance of a startup procedure of the process acquires a process startup beginning time and determines the time when the process is ready or able to respond to user input. That time may be used as the process startup ending time. A startup period for the process of the application is determined. Performance of the startup procedure of the process of the application is determined based on the process startup beginning time and the process startup ending time. | 10-16-2014 |
20140331228 | LIVE APPLICATION MOBILITY FROM ONE OPERATING SYSTEM LEVEL TO AN UPDATED OPERATING SYSTEM LEVEL - Provided are techniques for comparing a first fileset associated with a first operating system (OS) with a second fileset associated with a second OS; determining, based upon the comparing, that the second OS is a more current version of the first OS; in response to the determining that the second OS is a more current version of the first OS, moving, in conjunction with live application mobility, a virtual machine (VM) workload partition (WPAR) on a first logical partition (LPAR) to a second LPAR, the moving comprising determining a set of overlays associated with the WPAR corresponding to the second OS; removing from the WPAR any overlays associated with the first OS; applying to the WPAR a set of overlays corresponding to the second OS; checkpointing processes associated with the WPAR; and copying live data associated with the WPAR from the first LPAR to the second LPAR. | 11-06-2014 |
20140331229 | Intent-Based Ontology For Grid Computing Using Autonomous Mobile Agents - A Grid application framework uses semantic languages to describe the tasks and resources used to complete them. A Grid application execution framework comprises a plurality of mobile agents operable to execute one or more tasks described in an intent-based task specification language, input/output circuitry operable to receive input that describes a task in the task specification language, an analysis engine for generating a solution to the described task, and an intent knowledge base operable to store information contained within tasks of the plurality of mobile agents. | 11-06-2014 |
20140351816 | METHOD FOR PERFORMING MULTI-TASKING USING EXTERNAL DISPLAY DEVICE AND ELECTRONIC DEVICE THEREOF - A method and apparatus for performing multi-tasking using an external display device in an electronic device are provided. A method for performing a multi-tasking work using an external display device in an electronic device includes the operations of executing at least one application, determining whether to output an application screen to the external display device, in response to determining to output the application screen, sending an emulator execution request to the external display device, and, after sending the emulator execution request to the external display device, determining an application identifier in a screen of an application whose screen is determined to be outputted to the external display device, and transmitting a signal corresponding to the screen and the application identifier to the external display device. | 11-27-2014 |
20140359624 | DETERMINING A COMPLETION TIME OF A JOB IN A DISTRIBUTED NETWORK ENVIRONMENT - Determining a completion time of a job in a distributed network environment, the method includes determining completion times of a map task and a reduce task of a job and executing at least one test to collect a training dataset that characterizes the completion times of the map task and the reduce task. | 12-04-2014 |
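One simple way to turn per-task measurements like those above into a job completion estimate is a wave-based model: tasks run in waves across a fixed number of slots, so the job time is the number of waves times the average task time per phase. This is an illustrative model under stated assumptions, not the patent's formula.

```python
import math

def estimate_completion(n_map, n_reduce, avg_map_s, avg_reduce_s,
                        map_slots, reduce_slots):
    """Estimate job completion time (seconds) assuming map tasks run in
    waves over map_slots, then reduce tasks in waves over reduce_slots,
    using average task durations collected from test runs."""
    map_waves = math.ceil(n_map / map_slots)
    reduce_waves = math.ceil(n_reduce / reduce_slots)
    return map_waves * avg_map_s + reduce_waves * avg_reduce_s

# 100 map tasks on 10 slots -> 10 waves; 8 reduce tasks on 4 slots -> 2 waves
print(estimate_completion(100, 8, 30.0, 60.0, 10, 4))  # 420.0
```

The training dataset mentioned in the abstract would supply the `avg_map_s` and `avg_reduce_s` values, possibly as functions of input size rather than constants.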
20140359625 | DETECTION AND CORRECTION OF RACE CONDITIONS IN WORKFLOWS - A race condition in a workflow representation is detected and corrected. First and second contracts are retrieved for respective first and second analytics of the workflow representation, wherein the contracts specify input types and output types of their analytics. Both contracts include information required to execute their respective analytics by a workflow executor. It is determined that the output type of the first analytic matches the input type of the second analytic based on a comparison of the first contract and the second contract, and that the workflow representation does not include a directed edge connecting the first analytic to the second analytic. The inclusion of a directed edge in the workflow representation connecting the first analytic to the second analytic will correct the race condition in the workflow representation. | 12-04-2014 |
20140366030 | SHARED CACHE DATA MOVEMENT IN THREAD MIGRATION - Technologies are generally described for methods, systems and processors effective to migrate a thread. The thread may be migrated from the first core to the second core. The first and the second core may be configured in communication with a first cache. The first core may generate a request for a first data block from the first cache. In response to a cache miss in the first cache for the first data block, the first core may generate a request for the first data block from a memory. The first core may coordinate with a second cache to store the first data block in the second cache. The thread may be migrated from the second core to a third core. The second core and third core may be configured in communication with the second cache. | 12-11-2014 |
20140373015 | AUTOMATION OF MLOAD AND TPUMP CONVERSION - Embodiments of the invention are directed to systems, methods and computer program products for converting MLOAD and TPUMP operations. In some embodiments, a system is configured to: receive an input production parameter, wherein the input production parameter is associated with a load utility and defines a library of parameters, wherein the library of parameters defines a first syntax; convert the first syntax of the library of parameters to a second syntax, wherein the second syntax is associated with the load utility; validate the second syntax of the library of parameters; and write an output parameter to a memory location based on positive validation of the second syntax of the library of parameters. | 12-18-2014 |
20140380317 | SINGLE-PASS PARALLEL PREFIX SCAN WITH DYNAMIC LOOK BACK - One embodiment of the present invention performs a parallel prefix scan in a single pass that incorporates variable look-back. A parallel processing unit (PPU) subdivides a list of inputs into sequentially-ordered segments and assigns each segment to a streaming multiprocessor (SM) included in the PPU. Notably, the SMs may operate in parallel. Each SM executes write operations on a segment descriptor that includes the status, aggregate, and inclusive-prefix associated with the assigned segment. Further, each SM may execute read operations on segment descriptors associated with other segments. In operation, each SM may perform reduction operations to determine a segment-wide aggregate, may perform look-back operations across multiple preceding segments to determine an exclusive-prefix, and may perform a scan seeded with the exclusive prefix to generate output data. Advantageously, the PPU performs one read operation per input, thereby reducing the time required to execute the prefix scan relative to prior-art parallel implementations. | 12-25-2014 |
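The look-back mechanism above can be illustrated with a sequential simulation: each segment publishes a descriptor holding its status, segment-wide aggregate, and inclusive prefix, and a processor scanning segment k looks back across predecessors, summing aggregates until it reaches one whose inclusive prefix is ready. This Python sketch is a single-threaded model of the idea, with assumed descriptor fields; the real design runs the segments on parallel streaming multiprocessors.

```python
def segmented_scan(values, seg_size):
    """Inclusive prefix sum computed segment by segment, each segment
    deriving its exclusive prefix by looking back over predecessor
    descriptors: (status, aggregate, inclusive_prefix)."""
    descriptors = []
    out = []
    for start in range(0, len(values), seg_size):
        seg = values[start:start + seg_size]
        aggregate = sum(seg)                      # segment-wide reduction
        # look back over preceding segments for the exclusive prefix
        exclusive = 0
        for status, agg, inc in reversed(descriptors):
            if status == "prefix-ready":
                exclusive += inc                  # stop: full prefix known here
                break
            exclusive += agg                      # aggregate-only predecessor
        running = exclusive
        for v in seg:                             # scan seeded with the prefix
            running += v
            out.append(running)
        descriptors.append(("prefix-ready", aggregate, exclusive + aggregate))
    return out
```

In the parallel version a predecessor may still be in an aggregate-only state when inspected, which is exactly when the variable-depth look-back across multiple preceding segments pays off; each input element is still read only once.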
20140380318 | VIRTUALIZED COMPONENTS IN COMPUTING SYSTEMS - The subject disclosure is directed towards virtual components, e.g., comprising software components such as virtual components of a distributed computing system. Virtual components are available for use by distributed computing system applications, yet managed by the distributed computing system runtime transparent to the application with respect to automatic activation and deactivation on runtime-selected distributed computing system servers. Virtualization of virtual components is based upon mapping virtual components to their physical instantiations that are currently running, such as maintained in a global data store. | 12-25-2014 |
20150020074 | SPECIFYING LEVELS OF ACCESS TO A THREAD ENTITY IN A MULTITHREADED ENVIRONMENT - Techniques are disclosed for providing thread specific protection levels in a multithreaded processing environment. An associated method includes generating a group of threads in a process, one of the group of threads opening a thread entity, and that one of the group of threads specifying one or more levels of access to the thread entity for the other threads. In one embodiment, when a first of the threads attempts to perform a specified operation on the thread entity, the method includes determining whether that first thread is the one of the group of threads that opened the thread entity. When the first thread is not that one of the group of threads, the first thread is allowed to perform the specified operation if and only if such operation is permitted by the specified one or more levels of access. | 01-15-2015 |
20150026684 | HIDDEN AUTOMATED DATA MIRRORING FOR NATIVE INTERFACES IN DISTRIBUTED VIRTUAL MACHINES - An initial request for a reference to a data container is sent to a distributed enhanced virtual machine native interface component of a distributed virtual machine in response to receiving, from a remote execution container, the initial request for the reference to the data container at a distributed enhanced remote execution container native interface component of the distributed virtual machine. A data mirror data structure including immutable data and the reference to the data container is received. The received data mirror data structure including the immutable data and the reference to the data container is stored within a local memory storage area. A reference to the locally-stored data mirror data structure is returned to the remote execution container in response to the initial request for the reference to the data container. | 01-22-2015 |
20150033231 | ELECTRONIC DEVICE AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE VIA FINGERPRINT RECOGNITION - An electronic device includes a fingerprint sensor used for controlling the electronic device to perform predetermined functions. A plurality of reference fingerprints and a plurality of functions corresponding to the reference fingerprints are set, where each reference fingerprint corresponds to a function. When a fingerprint that matches one of the reference fingerprints is input upon the condition that the electronic device is in the locked state, the electronic device performs a function corresponding to the reference fingerprint that matches the input fingerprint. A plurality of predetermined operation objects corresponding to the reference fingerprints are set, where each of the reference fingerprints corresponds to a respective predetermined operation object. When an input fingerprint that matches one of the reference fingerprints is detected upon the condition that the electronic device has been unlocked, an operation object corresponding to the reference fingerprint that matches the input fingerprint is activated. | 01-29-2015 |
20150067685 | Systems and Methods for Multiple Sensor Noise Predictive Filtering - The present invention is related to systems and methods for branch metric calculation based on multiple data streams in a data processing circuit. | 03-05-2015 |
20150067686 | Auto-Cloudifying Applications Via Runtime Modifications - An approach is provided in which a distributed runtime environment executes a software application that includes isolated runtime constructs corresponding to an isolated runtime environment. During the execution, the distributed runtime environment identifies the isolated runtime constructs included in the software application and selects distributed runtime constructs corresponding to the isolated runtime constructs. In turn, the distributed runtime environment executes the distributed runtime constructs in lieu of executing the isolated runtime constructs. | 03-05-2015 |
20150074666 | SUPPORT SYSTEM FOR CREATING OPERATION TASK PROCESS OF COMPUTER SYSTEM AND FIRST MANAGEMENT COMPUTER FOR SUPPORTING CREATION OF OPERATION TASK PROCESS - A second management computer (a management server) acquires either all or a portion of a plurality of task components from a first management computer (a content management server), creates an operation task process based on the acquired plurality of task components, and executes an operation task of the computer system in accordance with the created operation task process. The second management computer manages the execution result of the operation task process, and supplies the execution result to the first management computer. The first management computer acquires, from the second management computer, the configuration information and the execution result of the operation task process, retrieves a task component candidate on the basis of a request from the second management computer, presents the task component candidate to the second management computer, and provides a selected task component to the second management computer. | 03-12-2015 |
20150074667 | SYSTEM, APPARATUS, AND INFORMATION PROCESSING METHOD - An apparatus includes an operating unit. The operating unit includes one or more processors each configured to transmit an execution request for executing a process based on a user's operation, and a delivery unit configured to receive event information indicating an event generated in the apparatus, and to deliver the received event information to the processors. A connection for performing communications between the apparatus and the operating unit is established each time one of the processors transmits the execution request, while a permanently established connection between the apparatus and the operating unit is used when the delivery unit receives the event information. | 03-12-2015 |
20150095911 | METHOD AND SYSTEM FOR DEDICATING PROCESSORS FOR DESIRED TASKS - Techniques for improving the performance of multitasking processors are provided. For example, a subset of M processors within a system of N processors is dedicated to a desired task. The M (where M>0) of the N processors are dedicated to the task, leaving N−M (N minus M) processors for running the normal operating system (OS). The processors dedicated to the task may have their interrupt mechanisms disabled to avoid interrupt-handler switching overhead. Therefore, these processors run in an independent context and can communicate and cooperate with the normal OS to achieve higher performance. | 04-02-2015 |
20150121378 | METHODS, SYSTEMS, AND APPARATUS FOR PERSONALIZING A WEB EXPERIENCE USING ANT ROUTING THEORY - Methods, systems, and apparatus for using ant theory to personalize a workflow are described. One or more path segments in a data structure that represents a workflow may be identified, the one or more path segments corresponding to a path traversed by a user. A weight associated with each of the identified one or more path segments in the data structure may be increased. One or more weights in the data structure associated with a plurality of path segments may be decreased based on a temporal decay rate. A guidance activity that directs a user to a more heavily weighted path at a workflow decision point may be established. | 04-30-2015 |
20150121379 | INFORMATION PROCESSING METHOD AND ELECTRONIC APPARATUS - The present invention discloses an information processing method and an electronic apparatus, the electronic apparatus having a display unit able to present an interactive interface for displaying N applications running in the electronic apparatus, where N is a positive integer. The information processing method includes acquiring a first operation on a first application of the N applications at a moment T1; controlling the first application to be in a first status in response to the first operation; acquiring a second operation at a moment T2 after the moment T1; and controlling the electronic apparatus to terminate the process of a second application in response to the second operation. | 04-30-2015 |
20150135181 | INFORMATION PROCESSING DEVICE AND METHOD FOR PROTECTING DATA IN A CALL STACK - An information processing device comprises a control unit, a hash unit, and a comparison unit. The control unit is arranged to run a program and to store at least flow control information of the program in a call stack. The hash unit is arranged to generate a first hash value by applying a hash function to selected data in response to a first context change of the program, the selected data comprising at least one or more selected items of the call stack, the first context change comprising a termination or interruption of a first process or thread of the program. The control unit is further arranged to start or resume a second process or thread of the program only when the hash unit has generated the first hash value. The hash unit is further arranged to generate a second hash value by re-applying the hash function to the selected data in response to a second context change, the second context change comprising a termination or interruption of the second process or thread. The comparison unit is arranged to determine whether the first hash value and the second hash value are identical. | 05-14-2015 |
20150301857 | Activity Interruption Management - In response to determining that an activity has been postponed (e.g., interrupted or deferred), a computer system stores a record indicating that the activity is postponed. In response to determining that another activity has become active, the computer system stores a record indicating that the other activity is active. The computer system reminds a user to return to the postponed activity in response to determining that a reminder condition associated with the postponed activity has been satisfied. For example, the computer system may remind the user to return to the postponed activity in response to determining that the other activity has been completed. | 10-22-2015 |
20150309809 | ELECTRONIC DEVICE AND METHOD OF LINKING A TASK THEREOF - A method of linking a task of an electronic device and the electronic device are provided. The method includes determining whether generation of an event satisfying a predetermined condition is detected; selecting another electronic device that is linkable to the electronic device when the generation of the event satisfying the predetermined condition is detected; and generating task environment information of an application and transmitting the task environment information to the other selected electronic device. | 10-29-2015 |
20150309836 | AVOIDING TRANSACTION ROLLBACK - A method and apparatus for avoiding a transaction rollback is provided. The method includes determining, by a service status check unit, whether at least one available path exists through a logic flow during an execution of a transaction. The service status check unit forecasts a successful logic flow completion upon determining that at least one available path exists through the logic flow. The transaction is terminated when no available path exists through the logic flow. | 10-29-2015 |
20150324242 | General Purpose Distributed Data Parallel Computing Using A High Level Language - General-purpose distributed data-parallel computing using a high-level language is disclosed. Data parallel portions of a sequential program that is written by a developer in a high-level language are automatically translated into a distributed execution plan. The distributed execution plan is then executed on large compute clusters. Thus, the developer is allowed to write the program using familiar programming constructs in the high level language. Moreover, developers without experience with distributed compute systems are able to take advantage of such systems. | 11-12-2015 |
20150339210 | Method And System For Resource Monitoring Of Large-Scale, Orchestrated, Multi Process Job Execution Environments - A system and method for monitoring the process resource consumption of massive parallel job executions is disclosed. The described system uses byte code instrumentation to place sensors in methods that receive job execution requests. Those sensors detect the start and end of job executions by the process they are deployed to and extract identification data from detected job execution requests that allows the job request to be identified. This job identification data is used to tag resource utilization measures, which allows measured resource consumption to be assigned to specific job executions. The job identification data is also used to tag transaction tracing data describing transaction executions performed during a specific job execution with job identification data that identifies the job execution that triggered the transaction. The generated job-specific measures and transaction traces may be used to identify resource-intensive job executions and to identify the root cause of the resource consumption. | 11-26-2015 |
20150347195 | Multi-Core Processor System for Information Processing - This multi-core processor system for processing information, of the kind including a data exchange engine ( | 12-03-2015 |
20150347260 | SYSTEM AND METHOD FOR RECORDING THE BEGINNING AND ENDING OF JOB LEVEL ACTIVITY IN A MAINFRAME COMPUTING ENVIRONMENT - A system writes to a replicated direct access storage device (DASD) a record of each step within a job as each step begins and as each step completes. The records are maintained on the replicated DASD for a predetermined period of time. The predetermined period of time is, for example, the greatest amount of lag in replication of all storage systems operating within the system. The records are stored, for example, in an open jobs and datasets (OJD) file, where the file itself is a dataset. The dataset is written to by an online task (e.g., OJDSTC) which gathers input from two sources. Upon job completion, the records are stored, for example, in an OJD journal and removed from the OJD file. | 12-03-2015 |
20150355912 | Integrated Systems and Methods Providing Situational Awareness of Operations In An Organization - A system which comprises a series of native applications, suited to run on mobile devices, and a series of web-based applications for which functionality and processing are optimized. The native applications and the web-based applications are coordinated to optimize processes of acquiring, storing and disseminating data for speed, integrity and security. | 12-10-2015 |
20150355935 | MANAGEMENT SYSTEM, MANAGEMENT PROGRAM, AND MANAGEMENT METHOD - A plurality of process content is retained, said process content including identifiers of a plurality of part content included in each process and information which denotes dependencies among the plurality of part content. When information is inputted which designates a first process and the part content of a problem portion which is included in the first process, a process similar to the first process is retrieved. On the basis of whether there is a change in any of the plurality of part content which is included in the retrieved process, an evaluation value of the retrieved process is either incremented or decremented, and the information relating to the plurality of processes is outputted on the basis of the evaluation value. | 12-10-2015 |
20150363228 | INTEGRATING SOFTWARE SOLUTIONS TO EXECUTE BUSINESS APPLICATIONS - Various embodiments of systems and methods to integrate software solutions to execute business applications are described herein. A request is received at a first software solution to execute a business application. In one aspect, the request is forwarded to a second software solution when a resource required to execute the business application is associated with the second software solution. A response is received from the second software solution corresponding to the execution of the business application. In another aspect, the business application is executed at the first software solution when the resource required to execute the business application is associated with the first software solution. The response corresponding to the execution of the business application is rendered on a computer generated UI associated with the first software solution. | 12-17-2015 |
20150370577 | REMOTELY EXECUTING OPERATIONS OF AN APPLICATION USING A SCHEMA THAT PROVIDES FOR EXECUTABLE SCRIPTS IN A NODAL HIERARCHY - A schema is provided that logically represents a nodal hierarchy relating to execution of an application. The hierarchy includes multiple nodes, including one or more category nodes and one or more content nodes. An executable script is provided with the schema. The script may be associated with at least one node of the hierarchy. Each of multiple user inputs from the computing device are processed using the schema. The individual user inputs may be selective of nodes of the hierarchy. In response to processing each of multiple user inputs, user interface content is provided to the computing device. The user interface content for each user input corresponds to one of (i) one or more nodes, or (ii) a script content, generated as an output of an executed script that is associated with a selected node. | 12-24-2015 |
20150370598 | COMMON SYSTEM SERVICES FOR MANAGING CONFIGURATION AND OTHER RUNTIME SETTINGS OF APPLICATIONS - Managing settings of applications is provided. A request from an application to store runtime settings, currently being used by the application, is identified by a processor executing program instructions for managing settings of applications. In response to identifying the request, the runtime settings are then stored in a repository of runtime settings. In one or more examples, the application is running on an operating system on a computer system, and the request is communicated through a common system service of the operating system. | 12-24-2015 |
20150378773 | COMMUNICATION SYSTEM, PROGRAMMABLE INDICATOR, INFORMATION PROCESSING DEVICE, OPERATION CONTROL METHOD, INFORMATION PROCESSING METHOD, AND PROGRAM - One aspect of the present invention provides a PLC system that can cause a programmable indicator to acquire a log in desired timing of a user without directly performing setting operation on the programmable indicator. | 12-31-2015 |
20160004541 | METHOD, APPARATUS, AND SYSTEM FOR RUNNING AN APPLICATION - According to an example, a computer creates an application entry in a microblog page, receives a triggering operation command associated with the application entry, generates, based on the triggering operation command, a floating layer at a predetermined position on the microblog page, receives application data at the floating layer, and runs an application in the floating layer based on the application data. | 01-07-2016 |
20160004560 | METHOD FOR SINGLETON PROCESS CONTROL - A method for singleton process control in a computer environment is provided. A process identification (PID) for a background process is stored in a first temporary file. A determination operation is performed for determining if the parent process is alive for a predetermined number of tries. The PID of the background process is written from the first temporary file into a first PID variable when the parent process ends. A determination operation is performed for determining whether a second, global temporary file is empty. The background process is exited if an active PID is determined to exist in a second, global temporary file. The PID from the first temporary file is stored into the second, global temporary file. A singleton code block is then executed. | 01-07-2016 |
20160004566 | EXECUTION TIME ESTIMATION DEVICE AND EXECUTION TIME ESTIMATION METHOD - A device includes: a memory configured to store a condition of exclusive execution for a plurality of processes, and execution time ranges of each of one or more modules, the execution time ranges indicating a range from a shortest estimated time to a longest estimated time; and a processor configured to estimate an entire execution time by executing estimation processing so as to cause simulation of the estimation processing to progress, the estimation processing including: generating one or more cases, for each of the plurality of processes, in order of the one or more modules based on the execution time ranges, determining whether there is a possibility that exclusion waiting occurs based on the condition, for each of the one or more cases, and setting the exclusion waiting for a certain case in which it is determined that there is the possibility, from among the one or more cases. | 01-07-2016 |
20160011916 | COMPUTER, ASSOCIATION CALCULATION METHOD, AND STORAGE MEDIUM | 01-14-2016 |
20160011917 | METHOD FOR PERFORMING TASK ON UNIFIED INFORMATION UNITS IN A PERSONAL WORKSPACE | 01-14-2016 |
20160041833 | SYSTEM AND METHOD FOR FULLY CONFIGURABLE REAL TIME PROCESSING - Provided are systems, methods, and architectures for a neutral input/output (NIO) platform that includes a core that supports one or more services. The core may be thought of as an application engine that runs task specific applications called services. The services are constructed using defined templates that are recognized by the core, although the templates can be customized. The core is designed to manage and support the services, and the services in turn manage blocks that provide processing functionality to their respective service. Due to the structure and flexibility provided by the NIO platform's core, services, and blocks, the platform can be configured to asynchronously process any input signals from one or more sources and produce output signals in real time. | 02-11-2016 |
20160041843 | INVOCATION OF WEB SERVICES BASED ON A POLICY FILE INCLUDING PROCESSES OF WORKFLOW ASSOCIATED WITH USER ROLES - Techniques for orchestrating workflows are disclosed herein. In an embodiment, a method of orchestrating a workflow is disclosed. In an embodiment, data is stored in a policy file which associates attributes with processes. User input is received. A process associated with an attribute is selected, where the attribute is based on the user input. The selected process is performed as part of the workflow. Also, processes may be added dynamically as part of any category inside the policy file without having to recompile or redesign the logic of the BPEL project. | 02-11-2016 |
20160062773 | METHOD, TERMINAL AND HEAD UNIT FOR AUTOMATICALLY PROVIDING APPLICATION SERVICES USING TEMPLATES - The present invention relates to a method for automatically providing an application service by an interaction with a head unit at a terminal. The method includes steps of: (a) the terminal receiving a request for running of a specific application, if the specific application is selected by a user of the head unit from a list including information on one or more runnable applications, which are installed in the terminal and can interact with the head unit; and (b) the terminal running the specific application by interacting with a template application run by the head unit. | 03-03-2016 |
20160062793 | METHOD AND APPARATUS FOR MANAGING BACKGROUND APPLICATION - A method for managing a background application is provided. The method includes determining whether an operating feature of the background application satisfies a preset condition, and when it is determined that the operating feature of the background application satisfies the preset condition, displaying an operating interface in a foreground interface of a mobile device for a user to close the background application. | 03-03-2016 |
20160077849 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - There is provided an information processing device including a receiving unit for receiving a command to be input to a first operating system and a command to be input to a second operating system different from the first operating system, a storage unit for storing a table in which given information included in the given command received by the receiving unit and information for identifying an application are related to each other, a generation unit for generating an application selection command for selectively executing the application based on the given command received by the receiving unit and the table stored in the storage unit, and an execution unit for executing the application selection command generated by the generation unit to selectively execute the application. | 03-17-2016 |
20160103687 | DISPLAY CONTROL DEVICE, AND DISPLAY CONTROL METHOD - A display control device for controlling a display unit in a vehicle, including a dedicated middleware that executes a dedicated application program on a vehicle side, a general purpose middleware that executes a general purpose application program from an external of the vehicle, and an interface that exchanges necessary information between the dedicated middleware and the general purpose middleware, includes: an activation device that activates the dedicated middleware first, and activates the general purpose middleware after the dedicated middleware; and a dedicated display control device that displays, before an activation of the general-purpose middleware is completed, a dedicated menu screen for activating the dedicated application program on the display unit via the dedicated middleware when the dedicated application program on the dedicated middleware is available. | 04-14-2016 |
20160110192 | APPARATUS AND METHODS FOR COGNITIVE CONTAINERS TO OPTIMIZE MANAGED COMPUTATIONS AND COMPUTING RESOURCES - A cognitive container includes a set of managers for monitoring and controlling a computational element based on context, constraints and computing resources available to that computational element. Collectively, the set of managers may be regarded as a service regulator that specifies the algorithm context, constraints, connections, communication abstractions and control commands which are used to monitor and control the algorithm execution at run-time. The computational element is the algorithm executable module that can be loaded and run. The managers may communicate with external agents using a signaling channel that is separate from a data path used by the computational element for inputs and outputs, thereby providing external agents the ability to influence the computation in progress. | 04-21-2016 |
20160139938 | ENFORCING SOFTWARE COMPLIANCE - An apparatus for enforcing a compliance requirement for a software application in execution in a virtualised computing environment, the apparatus comprising: an identifier component operable to identify a resource instantiated for execution of the application; a retriever component operable to retrieve a compliance characteristic for the application, the compliance characteristic being retrieved based on the identified resource and having associated a compliance criterion based on a formal parameter, the compliance criterion defining a set of compliant resource states; a first selector component operable to select a software component for providing an actual parameter corresponding to the formal parameter, the actual parameter being based on data concerning the resource; an evaluator component operable to evaluate the compliance criterion using the actual parameter; an application modifier component operable to, in response to a determination that the resource is outside the set of compliant resource states, the determination being based on the evaluation of the compliance criterion, modify the software application to a modified software application having associated a resource with a state belonging to the set of compliant resource states; and a detector component operable to detect a change to one or more of the resources, wherein the identifier component, selector component and evaluator component are operable in response to a determination by the detector component that one or more resources is changed, and wherein the selector selects the software component based on an identification of one or more data items that the software component is operable to provide. | 05-19-2016 |
20160139951 | SERVICE CLEAN-UP - Versions of a service not reachable by a set of service requestors that use the service are removed. Multiple, different versions of a service are stored, along with metadata associated with the multiple, different versions of the service. The metadata is examined to determine one or more of the multiple, different versions of the service that are not reachable by the set of service requestors that use the service. Those versions are deleted. | 05-19-2016 |
20160147631 | WORKLOAD SELECTION AND CACHE CAPACITY PLANNING FOR A VIRTUAL STORAGE AREA NETWORK - Exemplary methods, apparatuses, and systems receive a first input/output (I/O) trace including storage addresses that were subject to a plurality of I/O requests from a first workload during a first period of time. The first I/O trace is run through a cache simulation using a plurality of simulated cache sizes. A first state of the cache simulation is stored upon completing the first I/O trace simulation. The first I/O trace is deleted in response to storing the first state. A second I/O trace including storage addresses that were subject to a plurality of I/O requests from the first workload during a second period of time is received. A cumulative miss ratio curve for the first workload is generated by loading the stored first state as a starting point for simulating the second I/O trace and running the second I/O trace through the cache simulation. | 05-26-2016 |
20160154673 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING A MINIMALLY COMPLETE OPERATING ENVIRONMENT | 06-02-2016 |
20160162301 | Maintaining state information in a multi-component, event-driven state machine - A method, apparatus and computer program product that allow for maintaining correct states of all sub-components in a state machine, even as sub-components leave the state machine and later rejoin in some previous state. Preferably, this is achieved without requiring the system to remember the states of all sub-components or a log of every event that was fed into the state machine. Thus, the technique does not require any knowledge of the previous state of the sub-components nor the need to preserve a complete log of events that were fed into the state machine. The state machine may be used to enhance the operation of a technological process, such as a workload management environment. | 06-09-2016 |
20170235597 | DETERMINING LIFE-CYCLE OF TASK FLOW PERFORMANCE FOR TELECOMMUNICATION SERVICE ORDER | 08-17-2017 |
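Several of the abstracts above describe mechanisms concretely enough to sketch in code. For instance, the ant-routing workflow personalization of 20150121378 reinforces the weight of each path segment a user traverses, applies a temporal decay to all weights, and guides users toward the heaviest segment at a decision point. A minimal sketch of that idea follows; the class and method names, the exponential decay formula, and the default parameters are illustrative assumptions, not details taken from the patent:

```python
# Illustrative sketch of ant-routing workflow weighting (per 20150121378).
# Traversed segments gain "pheromone" weight; all weights decay over time.
import math

class WorkflowGraph:
    def __init__(self, decay_rate=0.1):
        self.weights = {}            # (from_node, to_node) -> segment weight
        self.decay_rate = decay_rate # assumed exponential decay constant

    def reinforce(self, path, deposit=1.0):
        """Increase the weight of each segment along a traversed path."""
        for segment in zip(path, path[1:]):
            self.weights[segment] = self.weights.get(segment, 0.0) + deposit

    def decay(self, elapsed=1.0):
        """Apply exponential temporal decay to every stored weight."""
        factor = math.exp(-self.decay_rate * elapsed)
        for segment in self.weights:
            self.weights[segment] *= factor

    def suggest(self, node, choices):
        """At a decision point, guide toward the heaviest outgoing segment."""
        return max(choices, key=lambda nxt: self.weights.get((node, nxt), 0.0))
```

Under these assumptions, two users taking the path `start → a → done` against one taking `start → b → done` would leave segment `("start", "a")` heavier, so `suggest("start", ["a", "b"])` steers the next user toward `"a"` until decay erodes that preference.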